# Kinemon: inductively shunted transmon artificial atom

Daria Kalacheva, Gleb Fedorov, Julia Zotova, Shamil Kadyrmetov, Alexey Kirkovskii, Aleksei Dmitriev, Oleg Astafiev

arXiv:2306.05830v1 (2023-06-09), http://arxiv.org/abs/2306.05830v1
###### Abstract
We experimentally investigate inductively shunted transmon-type artificial atoms as an alternative to address the challenges of low anharmonicity and the need for strong charge dispersion in superconducting quantum systems. We characterize several devices with varying geometries and parameters (Josephson energies and capacitances), and find a good agreement with calculations. Our approach allows us to retain the benefits of transmon qubit engineering and fabrication technology and high coherence, while potentially increasing anharmonicity. The approach offers an alternative platform for the development of scalable multi-qubit systems in quantum computing.
Superconducting artificial quantum systems, such as capacitively shunted charge qubits (transmons and X-mons), are now commonly used to build prototypes of quantum processors because of their simple design and low decoherence rates [1; 2; 3; 4]. However, scaling up quantum registers composed of low-anharmonicity physical qubits faces challenges due to uncontrolled transitions to upper states and limitations in the speed of quantum operations [5; 6; 7; 8; 9; 10]. Additionally, the non-negligible charge dispersion of the higher energy levels complicates the use of such artificial atoms as qudits [11]. These problems drive the search for alternative physical qubits and materials [12; 13; 14; 15].
While retaining simplicity in fabrication and operation, together with charge noise insensitivity, one can increase the non-linearity of a transmon by decreasing its shunting capacitance and, at the same time, shunting it by a linear inductance. Strictly speaking, this modification produces a flux qubit [16; 17; 18; 19; 20], more specifically, an rf-SQUID or a fluxonium [21]; however, its parameters can be chosen so that the resulting eigenstates are transmon-like, living in a single-well potential, not a two-well one. The latter helps to avoid the exponential sensitivity of the transition frequencies to the Josephson energy variations, which has so far limited the applications of flux qubits in multi-qubit devices. Also, one can expect that the inductive shunt will remove charge dispersion for arbitrarily high energy states.
In this study, we explore a new hybrid design combining the transmon circuit with a compact kinetic inductor - a kinemon (kinetic-inductance-shunted transmon) artificial atom. We design and investigate experimentally a family of such systems with various combinations of Josephson energy \(E_{J}\), inductive energy \(E_{L}\) and charging energy \(E_{C}\)[3]. We also show that the inductive element can be placed inside an \(\alpha\)-SQUID [22] which, for a correct ratio of resulting loop areas, opens a way to modulate the effective Josephson energy, while keeping the parabolic potential contribution fixed. Importantly, we find that, confirming our previous tests of coplanar resonators [23], the kinetic inductor based on aluminum ultra-thin-film does not cause any noticeable deterioration of the coherence times. Finally, as this kind of inductor exhibits relatively good reproducibility in fabrication, we find it a promising component for future quantum circuits.
Without the inductive shunt, a Josephson junction is characterized by a periodic potential \(U_{J}=-E_{J}\cos\varphi\), where \(\varphi\) is the phase across the junction [Fig. 1(a)]. Adding a small parallel capacitance to the circuit results in the formation of energy bands of Bloch waves in the periodic potential [24], which can be represented as \(\psi^{\prime}(\varphi)=e^{i\frac{q^{\prime}}{2e}\varphi}u(\varphi)\), so that \(\psi^{\prime}(\varphi)\neq\psi^{\prime}(\varphi+2\pi)\) and only \(u(\varphi)=u(\varphi+2\pi)\). Here, \(q^{\prime}\) represents the quasicharge, being the analogue of the crystal momentum in solids, and the energies inside the band are periodic functions of \(q^{\prime}\) [condensed matter book]. We note that for a charge qubit with a discrete number of Cooper pairs allowed on the island, we can apply the rotor analogy [1; 25], so that the states after a full rotation are indistinguishable. Then, the wave function is \(2\pi\)-periodic in the \(\varphi\) representation, and the bands seem to disappear. However, as there is a mathematical correspondence between the quasicharge \(q^{\prime}\) and the induced charge \(n_{g}\): \(n_{g}=q^{\prime}/2e\), the energy configuration of the system still exhibits the same oscillatory behavior as a function of \(n_{g}\). While a larger capacitance localizes the lower energy states in distinct potential wells and significantly reduces the widths of the lowest bands, higher-lying bands remain open, as shown in Fig. 1(a), and thus are still sensitive to the induced charge. Moreover, increasing the capacitance always comes at the cost of reducing the anharmonicity \(\alpha\) [1].
To completely prevent the formation of energy bands (i.e. to remove any energy dependence on the induced charge \(n_{g}\)), it is necessary to disrupt the periodicity. This can be achieved by implementing a shunting inductance, which adds a parabolic potential \(U_{L}=E_{L}\varphi^{2}/2\) to \(U_{J}\), as exemplified in Fig. 1(a). As a result, the wavefunctions of the lowest states become localized in the central well. Note that in the case of a small \(E_{L}\) (\(\leq E_{J}\)), the anharmonicity and the energy structure are predominantly determined by \(E_{J}\) and \(E_{C}\). This is because the Josephson energy \(E_{J}\) governs the energy landscape of the system, with the smaller inductive energy \(E_{L}\) providing only a minor perturbation. A larger \(E_{L}/E_{J}\) results in less anharmonicity, but this can be compensated by increasing \(E_{C}\). In the present work, though, we study inductively shunted artificial atoms fabricated using the standard transmon technology (\(E_{C}\ll E_{J}\lesssim E_{L}\)) and aim to verify their properties and coherence.
A schematic equivalent circuit of the kinemon artificial atom is depicted in Fig. 1(b), illustrating the two different configurations of qubits investigated in this work. The qubits consist of two main parts: Al/AlO\({}_{x}\)/Al Josephson junctions and a kinetic inductance wire of inductance \(L_{k}\) and energy \(E_{L}=\Phi_{0}^{2}/4\pi^{2}L_{k}\), which together form a SQUID loop. The first circuit scheme has an asymmetric topology, utilizing a small single loop formed by an aluminum wire interrupted by a single Josephson junction (Group A in Fig. 1(b)). A key characteristic of this group is the variation of shunting capacitance values (\(E_{C}=2e^{2}/C\)) to examine the coherence time. The second circuit scheme employs a symmetric topology that merges the benefits of both rf-SQUID and transmon designs (Group B in Fig. 1(b)). This approach aims to enhance qubit performance by incorporating a double-loop architecture connected by a shared kinetic inductance wire.
The common Hamiltonian, which covers both of the considered geometries (single- and double-loop circuits), is expressed as follows:
\[\begin{split}\hat{\mathcal{H}}=-E_{C}\frac{\partial^{2}}{ \partial\varphi^{2}}&+\frac{1}{2}E_{L}\varphi^{2}-E_{J1}\cos( \varphi+\kappa\varphi_{\mathrm{e}})\\ &-E_{J2}\cos\left(\varphi-(1-\kappa)\varphi_{\mathrm{e}}\right), \end{split} \tag{1}\]
where \(\varphi_{\mathrm{e}}\) is the total flux phase induced by an external magnetic field, \(\kappa\) is the coefficient describing how \(\varphi_{\mathrm{e}}\) is distributed between the two loops, and \(E_{J1}\) and \(E_{J2}\) are the Josephson energies of the junctions in each loop, according to Fig. 1(b). The Hamiltonian for Group A is obtained by setting \(E_{J2}=0\).
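To make the level structure concrete, the following minimal sketch diagonalizes Eq. (1) on a truncated \(\varphi\) grid using a simple second-order finite-difference Laplacian (the authors use a higher-order scheme; see App. D). The grid size, cutoff and the helper name `kinemon_levels` are our own choices; the parameter values are taken from Table 1 for kinemon I (Group A, so \(E_{J2}=0\)), and the printed \(f_{01}\) and \(\alpha/h\) can be compared with the fitted values there.

```python
import numpy as np

def kinemon_levels(EC, EL, EJ1, EJ2, kappa, phi_e, n_levels=4,
                   phi_max=6 * np.pi, n_grid=2001):
    """Lowest eigenenergies of the kinemon Hamiltonian, Eq. (1), in GHz (E/h).

    Second-order finite differences with Dirichlet boundaries; the parabolic
    E_L term confines the low-lying states well inside [-phi_max, phi_max].
    """
    phi = np.linspace(-phi_max, phi_max, n_grid)
    d = phi[1] - phi[0]

    # Potential: inductive parabola plus the two flux-dependent junction terms
    U = (0.5 * EL * phi**2
         - EJ1 * np.cos(phi + kappa * phi_e)
         - EJ2 * np.cos(phi - (1.0 - kappa) * phi_e))

    # Kinetic term -E_C d^2/dphi^2 as a tridiagonal matrix
    main = 2.0 * EC / d**2 + U
    off = -EC / d**2 * np.ones(n_grid - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)[:n_levels]
    return E - E[0]                     # energies relative to the ground state

# Kinemon I parameters from Table 1 (Group A: single junction, E_J2 = 0;
# kappa is irrelevant at phi_e = 0)
levels = kinemon_levels(EC=0.90, EL=8.59, EJ1=5.38, EJ2=0.0, kappa=1.0, phi_e=0.0)
f01, f02 = levels[1], levels[2]
print(f"f01 = {f01:.3f} GHz, alpha/h = {2 * (f02 / 2 - f01) * 1e3:.0f} MHz")
```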
An optical image of the sample is presented in Fig. 1(c), depicting the microfabricated superconducting circuit containing eight kinemon artificial atoms. The SQUIDs are small loops of superconducting wire and Josephson junctions connected to other circuit elements, such as capacitors and resonators. The SQUIDs are highlighted in green and violet false colors depending on their architecture; see Fig. 1(d, e).
Figure 1: **(a)** The potential (solid black line) and the energy levels (gray lines) for the conventional transmon (top) and for the inductively shunted one (bottom) for \(\varphi_{\mathrm{e}}=0\). **(b)** Circuit model of the sample. Each SQUID consists of a Josephson junction and a kinetic wire with the energies \(E_{J}\) and \(E_{L}\), respectively, and is shunted by a large capacitance with an energy \(E_{C}\), forming a kinemon atom. Group A (green color) represents asymmetric kinemons with different energy ratios (Kinemons I - VI). Group B (violet color) represents symmetric kinemons with two JJs and a kinetic inductance wire between them (Kinemons VII - VIII). **(c)** Overview optical image of the fabricated sample. The microfabrication design includes two groups of artificial atoms with different topologies and variations of \(E_{C}\), \(E_{L}\) and \(E_{J}\). **(d, e)** Enlarged optical images of the two different kinemon modifications.

The fabrication of the kinetic inductance wire is done as follows. A silicon substrate is cooled by liquid nitrogen during metal deposition to obtain uniform films up to 3.5 nm thick. It is known that aluminum films deposited at room temperature are negatively impacted by the formation of granules. Cold film deposition allows us to fabricate long ultrathin wires with a high degree of homogeneity [23]. The detailed fabrication process is described in App. B. To achieve the necessary inductive energy \(E_{L}\), 200 nm wide and 8 nm thick aluminum wires with varying lengths are integrated into the circuit. Since the kinetic inductance per square of such a film is about 0.03 nH/\(\square\), the wire length in the device ranges from 80 to 240 \(\mu\)m, depending on the desired \(E_{L}\).
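As a rough consistency check on these numbers, the short sketch below converts the quoted sheet kinetic inductance and wire geometry into \(E_{L}/h\) using \(E_{L}=\Phi_{0}^{2}/4\pi^{2}L_{k}\); the three wire lengths are illustrative points within the quoted 80-240 \(\mu\)m range, and since the 0.03 nH/\(\square\) figure is approximate, the results only roughly bracket the fitted \(E_{L}/h\) values in Table 1.

```python
import numpy as np

h = 6.62607015e-34          # Planck constant, J s
Phi0 = 2.067833848e-15      # magnetic flux quantum, Wb
L_sq = 0.03e-9              # kinetic inductance per square, H (0.03 nH/sq)
width = 200e-9              # wire width, m

for length_um in (80, 160, 240):                 # illustrative wire lengths
    n_squares = (length_um * 1e-6) / width
    L_k = L_sq * n_squares                       # total kinetic inductance, H
    E_L_over_h = Phi0**2 / (4 * np.pi**2 * L_k) / h
    print(f"{length_um:3d} um wire: L_k = {L_k * 1e9:5.1f} nH, "
          f"E_L/h = {E_L_over_h / 1e9:4.1f} GHz")
```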
In Fig. 2(a) top, we display the data of transmission spectroscopy of the sample via the feedline, showing the microwave response of readout resonator I as a function of the flux bias \(\varphi_{\mathrm{e}}\) through the SQUID of kinemon I (Group A). See the spectra of all resonators in Fig. 6 in App. D. The pattern is a combination of a smooth dependence, formed by the kinemon first excited state located below the resonator frequency, and an avoided-crossing pattern with the second excited state [26]. Direct observation of these transitions is enabled by cross-Kerr dispersive spectroscopy, for which the data are displayed in Fig. 2(a) bottom. The minimum frequency (one-half flux quantum) and the maximum frequency (zero flux) are considered the flux 'sweet spots'. Fig. 2(a) bottom also presents the fits to the experimental transition frequencies based on the circuit Hamiltonian (Eq. 1). The fits align well with the experimental transition frequencies near the sweet spots \(\varphi_{\mathrm{e}}=0\) and \(\varphi_{\mathrm{e}}=\pi/2\). The single-photon transition between the ground \(\left|0\right\rangle\) and excited \(\left|1\right\rangle\) states occurs at \(f_{01}=4.947\,\mathrm{GHz}\) at \(\varphi_{\mathrm{e}}=0\). Furthermore, the bottom sweet spot of the transition from \(\left|0\right\rangle\) to \(\left|2\right\rangle\) is at \(f_{02}=5.6\,\mathrm{GHz}\); it is not directly visible at \(\varphi_{\mathrm{e}}=\pi\) and corresponds to anti-crossings in the transmission spectroscopy. However, the two-photon transition can be observed and is associated with the spectroscopic line at \(f_{02}/2=4.8\,\mathrm{GHz}\). Also, at \(\varphi_{\mathrm{e}}\approx 0.75\,\pi\) and \(\varphi_{\mathrm{e}}\approx 1.25\,\pi\) there are regimes where the first three energy levels are equidistant; these are the pivot points at which the anharmonicity changes sign. An atom with zero anharmonicity for the first two transitions [27] (\(\left|0\right\rangle\rightarrow\left|1\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|2\right\rangle\)), which is intermediate between a two-level system (TLS) and a harmonic oscillator, may find interesting applications in the generation of non-classical light [28]. All parameters extracted from the fits are summarised in Table 1. The extracted parameters are also used to make predictions and test the validity of the theoretical model employed in the analysis. The experimental setup and measurement equipment are described in App. C.
Figure 2: **(a)** Kinemon I: **(Top)** Transmission spectroscopy of the coupled resonator showing the feedline frequency response depending on the external flux bias \(\varphi_{\mathrm{e}}\). **(Bottom)** Experimental two-tone spectroscopy, displaying the magnitude of the readout signal at a properly chosen readout frequency as a function of flux bias \(\varphi_{\mathrm{e}}\) and kinemon excitation frequency. For Group A, the anharmonicity sign changes at \(\varphi_{\mathrm{e}}=0.75\,\pi\) and changes back at \(\varphi_{\mathrm{e}}=1.25\,\pi\). Numerical simulation of the spectrum reproducing the experimental results with labeled transitions. **(b)** Kinemon VII: **(Top)** Transmission spectroscopy of the coupled resonator. **(Bottom)** Experimental two-tone spectroscopy, with insets (middle and right) showing magnifications at the bottom and top flux sweet spots. For Group B, the regimes of a harmonic oscillator are noticeable near \(\varphi_{\mathrm{e}}=\pi+2\pi k,\ k\in Z\) (left and middle inset), while at \(\varphi_{\mathrm{e}}\approx 1.15\,\pi\) only the three lowest levels are equidistant, as shown in the left inset.

The symmetric kinemons (Group B) are also characterized using spectroscopic measurements. The kinemon VII transmission and two-tone spectra are presented in Fig. 2(b), where in the insets we highlight several distinct features at and near the sweet spots. In the current scheme, we operate in the regime where \(E_{J1}=E_{J2}\), leading to zero anharmonicity at the bottom sweet spots (\(\alpha=0\) for \(\varphi_{\rm e}=\pi+2\pi k,\ k\in Z\)). In other words, the Josephson energy is cancelled when half a flux quantum \(\Phi_{0}/2\) penetrates the SQUID, causing a transition into the harmonic regime. The modulation of the frequency at the top sweet spots corresponds to the non-identical areas of the SQUIDs. By fitting the spectrum, we evaluate \(\kappa\) to be 0.35 and 0.37 for kinemons VII and VIII, respectively, which is in good agreement with the design areas. These values determine the locations of the non-periodic spots of three-level equidistance (similar to Group A) at \(\varphi_{\rm e}\approx 1.15\,\pi\) and \(\varphi_{\rm e}\approx 4.9\,\pi\), respectively.
To better illustrate the behavior of the spectra, we plot the anharmonicity of kinemons I, V and VII vs. the magnetic flux in Fig. 3. For all devices, we calculate the anharmonicity as \(\alpha/h=2\times(f_{02}/2-f_{01})\). For kinemon I this yields -86 MHz and 219 MHz for the top and bottom sweet spots, respectively. Kinemon V shows a lower peak anharmonicity, but overall demonstrates a flatter dependence on \(\varphi_{\rm e}\) due to an increased \(E_{C}\) value. For kinemon VII, the zero anharmonicity at \(\varphi_{\rm e}=\pi+2\pi k,\ k\in Z\) is associated with a transition into a harmonic oscillator; the additional feature at \(\varphi_{\rm e}\approx 1.15\,\pi\) mirrors the effect observed in kinemons I and V, corresponding to the three-level equidistance. Interestingly, for this qubit the anharmonicity is mostly negative and is higher in absolute value in the top sweet spot near \(\varphi_{\rm e}=2\,\pi\) than at \(\varphi_{\rm e}=0\). Another peculiarity is that, due to the non-trivial \(\kappa\)-dependence of the potential, one can find a sign change of the anharmonicity at \(\varphi_{\rm e}=\pi+2\pi k,\ k\in Z\). However, if the flux is simultaneously near one of the other set of special points \(\varphi_{\rm e}/\pi=\frac{1+2k^{\prime}}{2\kappa-1},\ k^{\prime}\in Z\) (for our case, this is near \(\varphi_{\rm e}\approx 3\,\pi,\ k^{\prime}=-1\)), the sign of \(\alpha\) remains the same.
Finally, we analyse the energy relaxation time \(T_{1}\), the Ramsey coherence time \(T_{2}\), and the echo coherence time \(T_{2}^{E}\) (see Table 1). The experimental results for kinemon VI are presented in Fig. 4, which provides a comparative analysis of relaxation times under two distinct flux bias conditions, specifically at \(\varphi_{\rm e}=0\) (navy color) and \(\varphi_{\rm e}=\pi/2\) (red color). The corresponding measurement protocols are presented on each subplot. We also measure the coherence outside the sweet spot and find the characteristic times to be about 600 ns and 200 ns for \(T_{2}^{E}\) and \(T_{2}\), respectively; the reduction is probably caused by flux noise due to insufficient filtering. The relaxation time remains essentially unchanged with the flux value. The experimental data for \(T_{1}\) are presented in the top left subplot. At the top sweet spot, we measure 20.39 \(\pm\) 0.93 \(\mu\)s. The values at \(\varphi_{\rm e}=\pi/2\) (triangles on the red line) are slightly higher, 23.74 \(\pm\) 0.85 \(\mu\)s, indicating a slower relaxation rate there. The echo coherence time is plotted in the bottom left subplot, with measurement results of \(18.25\pm 1.06\,\mu\)s and 19.28 \(\pm\) 0.50 \(\mu\)s at flux biases of \(\varphi_{\rm e}=0\) and \(\varphi_{\rm e}=\pi/2\), respectively. The Ramsey coherence times are presented in the remaining subplot.
Figure 3: Anharmonicity as a function of the external flux bias \(\varphi_{\rm e}\) for kinemons I, V and VII. For kinemons I and V, a zero anharmonicity implies that the first three energy levels are equidistant. As for kinemon VII, the anharmonicity becomes exactly zero at \(\varphi_{\rm e}=\pi\) as it transitions into the harmonic regime, while at \(\varphi_{\rm e}\approx 1.15\,\pi\) the behavior is similar to Group A.
\begin{table}
\begin{tabular}{c c c c c c c c c} & I & II & III & IV & V & VI & VII & VIII \\ \hline \(E_{J}/h\), GHz & 5.38 & 6.00 & 4.00 & 2.92 & 2.44 & 5.90 & 8.61 & 14.00 \\ \(E_{C}/h\), GHz & 0.90 & 1.10 & 1.50 & 1.95 & 1.80 & 0.7 & 0.47 & 0.32 \\ \(E_{L}/h\), GHz & 8.59 & 8.75 & 7.40 & 8.40 & 9.07 & 14.65 & 8.11 & 12.2 \\ \(\omega_{01}^{(t)}/2\pi\), GHz & 4.947 & 5.596 & 5.719 & 6.508 & 6.359 & 5.312 & 4.769 & 5.008 \\ \(\omega_{r}/2\pi\), GHz & 7.185 & 7.284 & 7.341 & 7.433 & 7.495 & 7.608 & 7.688 & 7.779 \\ \(g_{s}/2\pi\), MHz & 64 & 44 & 34 & 35 & 34 & 90 & 83 & 68 \\ \(\alpha^{(t)}/h\), MHz & \(-86\) & \(-118\) & \(-131\) & \(-116\) & \(-87\) & \(-49\) & \(-84\) & \(-80\) \\ \(\alpha^{(b)}/h\), MHz & 219 & 301 & 257 & 182 & 124 & 96 & \(-\) & \(-\) \\ \(T_{1}\), \(\mu s\) & \(17.92\pm 0.95\) & \(17.56\pm 0.86\) & \(19.45\pm 1.62\) & \(8.95\pm 0.33\) & \(8.61\pm 0.29\) & \(20.39\pm 0.93\) & \(19.18\pm 0.78\) & \(14.83\pm 0.87\) \\ \(T_{2}\), \(\mu s\) & \(11.45\pm 0.95\) & \(17.30\pm 1.65\) & \(11.63\pm 1.20\) & \(7.80\pm 0.85\) & \(9.69\pm 0.42\) & \(13.98\pm 0.75\) & \(6.59\pm 0.30\) & \(12.28\pm 0.65\) \\ \(T_{2E}\), \(\mu s\) & \(7.92\pm 2.35\) & \(20.84\pm 1.11\) & \(-\) & \(8.05\pm 0.83\) & \(14.87\pm 0.56\) & \(18.25\pm 1.06\) & \(12.73\pm 0.50\) & \(13.32\pm 1.02\) \\ \end{tabular}
\end{table}
Table 1: Kinemon parameters extracted by fitting. Coherence times are given at \(\varphi_{\rm e}=0\).
They yield 13.98 \(\pm\) 0.75 \(\mu\)s and 14.53 \(\pm\) 0.35 \(\mu\)s at the two sweet spots, respectively. Also, additional measurements of conventional transmons fabricated with the same technological process give about 14 \(\mu\)s, 8 \(\mu\)s and 9 \(\mu\)s for \(T_{1}\), \(T_{2}\), \(T_{2}^{E}\), respectively; however, the observed performance improvement of the kinemons could be caused by differences between the measurement setups.
The use of a low-loss material, such as ultra-thin aluminum film inductors, not only improves device scalability but also enhances performance compared to other materials [20, 29, 30]. These observations indicate that the shunting capacitance may be further reduced to make the kinemon design even more compact than the transmon. For example, the typical value of \(E_{C}\) = 1.5 GHz for transmons [31, 32] could be increased up to 5-6 GHz for kinemons, which would scale down the capacitors and is promising for the scalability of multi-qubit systems. Additionally, the advantage of having a sign-changing anharmonicity attracts interest in waveguide quantum optics [33], for example, allowing the emission of pairs of correlated photons [34] and the observation of nonlinear intermodulation processes [35]; it could also help to optimise gate errors caused by a parasitic partial CPHASE operation induced by high-order coupling [36, 37], or be useful for new regimes of Bose-Hubbard model simulators [38, 39].
In conclusion, this study demonstrates that inductively shunted transmon qubits, utilizing ultra-thin aluminum film inductors, provide a promising platform for scalable quantum computing applications. To increase the anharmonicity, a reduction in capacitance should be considered; the accompanying charge-noise sensitivity can then be suppressed by introducing a non-zero kinetic inductance component, striking a balance between high anharmonicity and low charge noise sensitivity. While this study represents a significant step towards realizing large-scale, practical quantum computing systems, further research is required to fully explore and validate the potential of this approach in the field of quantum computing.
## Data availability
The raw data that support the findings of this study are available on a reasonable request from the corresponding author.
Figure 4: Energy relaxation and coherence times for kinemon VI measured at different flux bias. The analysis considers two flux bias conditions, \(\varphi_{e}\) = 0 (navy circles) and \(\varphi_{e}\) = \(\pi/2\) (red triangles). The top left subplot reveals the \(T_{1}\) time at the top sweet spot, demonstrating stability with a value of 20.39 \(\pm\) 0.93 \(\mu\)s. Triangles on the red line represent higher \(T_{1}\) values, 23.74 \(\pm\) 0.85 \(\mu\)s, suggesting slower relaxation at \(\varphi_{e}\) = \(\pi/2\). The echo coherence time (\(T_{2}^{E}\)) is depicted in the bottom left subplot. Measurements display times of 18.25 \(\pm\) 1.06 \(\mu\)s and 19.28 \(\pm\) 0.50 \(\mu\)s for \(\varphi_{e}\) = 0 and \(\varphi_{e}\) = \(\pi/2\) respectively. Lastly, the remaining subplot illustrates Ramsey coherence times (\(T_{2}\)), yielding values of 13.98 \(\pm\) 0.75 \(\mu\)s and 14.53 \(\pm\) 0.35 \(\mu\)s at the respective sweet spots.
## Acknowledgments
The authors are grateful to Russian Science Foundation Project Grant No. 21-42-00025 for financial support. The sample was fabricated using equipment of MIPT Shared Facilities Center.
## Appendix A Design of the qubit samples
A layer-by-layer design was generated using the Klayout-python library [40], which automates the design of superconducting quantum circuits. This library utilizes the KLayout layout design program API and enables the execution of arbitrary Python code through an embedded interpreter. The library specializes in designing microwave and superconducting qubit planar designs, including drawing patterns, simulation, and domain-specific design rule checkers.
## Appendix B Device fabrication
The device fabrication consists of five main stages: ground plane construction, nanofabrication of the Josephson junctions, fabrication of the kinetic wire, bandage deposition, and airbridge construction.
We start with silicon substrate treatment, which includes piranha etching and BHF dipping [41, 42]. The substrate is then immediately placed in the Plassys e-beam evaporation system, and a 100 nm 99.999% aluminum film is evaporated. The metallized substrate is spin-coated with optical resist AZ1517. The coplanar waveguide feedline, resonators, qubit capacitors, and ground plane hole array are patterned using a laser maskless optical lithography system, followed by dry etching of the optical resist mask structure in BCl3/Cl2 inductively coupled plasma. Residual resist is then removed in N-methyl-2-pyrrolidone (NMP) and cleaned in O2 plasma.
The next step includes hard-mask preparation [43]. The substrate is spin-coated with polymer resist PMGI SF9. Then, a 30 nm tungsten nanolayer is deposited in a Torr magnetron sputtering system, followed by ARP-04 resist coating. Josephson junctions are patterned by electron lithography and evaporated using the Dolan bridge technique [44], followed by lift-off in NMP. To form the tunnel barrier, the first 25 nm aluminum junction electrode is oxidized at 40 mBar. Then, a 45 nm electrode is evaporated and preventively oxidized at 10 mBar. Residual resist is removed in NMP and cleaned in O2 plasma.
The kinetic part is fabricated during an additional cycle of e-beam lithography. We use a single layer of ARP-04 e-beam resist to construct the pattern. After development, an 8 nm aluminum film is evaporated at the Plassys stage temperature of 170 K at a normal angle. Residual resist is then removed in NMP and cleaned in O2 plasma.
Good galvanic contact between the layers is obtained by aluminum bandages [45]. We use a similar process of single layer mask as for the kinetic part above, but without cooling. A 150 nm aluminum film is evaporated with in-situ Ar ion milling. Residual resist is removed during the lift-off process in NMP and cleaned in O2 plasma.
Due to the presence of coplanar lines on the ground plane, we need to achieve a uniform electrical potential; therefore, the final stage of sample fabrication is the implementation of aluminum free-standing air-bridges [46]. A 7 \(\mu\)m layer of SPR220 photoresist is spin-coated, and the base layer is patterned using a laser maskless optical lithography system. After development, the substrate is heated to create a height gradient on the resist edges, followed by a 600 nm aluminum evaporation with in-situ Ar ion etching. A second layer of SPR220 photoresist is used to form a bridge structure. Finally, the excess metal is dry-etched in BCl3/Cl2 inductively coupled plasma. Residual resist is then removed in NMP and cleaned in O2 plasma.
Figure 5: Schematic of the experimental setup used to measure the sample, depicting the room-temperature equipment and the line configuration inside the BlueFors dilution refrigerator, with a base temperature of 10 mK.
## Appendix C Microwave experimental setup
The device under investigation is measured in a dilution refrigerator at 10 mK (Fig. 5). Signals are generated using an arbitrary waveform generator (Keysight M3202A) and RF-synthesizers (SignalCore 5502A), followed by upconversion in IQ-mixers (Marki IQ4509 and IQ0307). Excitation and dispersive readout signals are combined using a directional coupler and sent to the fridge, where they are attenuated by 60 dB to reduce the thermal noise reaching the sample. The response from the readout resonators located on the sample is amplified using HEMT and room-temperature amplifiers. Finally, the readout signal is downconverted and digitized at a 100 MHz IF frequency using a Spectrum Instruments m4x PXI card. Using this scheme, qubit spectra are obtained under continuous excitation and readout. The coherence times of the artificial atoms are characterised using conventional time-domain techniques [3; 47].
## Appendix D Cavity coupling
To plot the model curves describing the dependence of the resonator frequency on the magnetic flux (the upper panels of Fig. 2), we solve the following Hamiltonian in the \(\varphi\)-basis for the kinemon and in the Fock basis for the cavity:
\[\hat{\mathcal{H}}_{\text{cQED}}=\hat{\mathcal{H}}+\hbar\omega_{r}(\hat{a}^{ \dagger}\hat{a}+1/2)+\hbar g(\hat{a}^{\dagger}-\hat{a})\otimes\frac{\partial }{\partial\varphi},\]
Figure 6: Transmission spectroscopy for all resonators coupled to kinemons with varying energy ratios. \(|S_{21}|\) includes the attenuation and amplification in the measurement chain. Avoided crossings occur when a qubit’s \(|0\rangle\rightarrow|2\rangle\) transition (or even higher levels for kinemons VII and VIII) intersects with its readout cavity frequency. Some very sharp avoided crossings are just barely resolved in the simulations. Also, some of the predicted features are smeared in experiment due to the power broadening effects (located around 1.1 mA for VII and 1.4 mA for VIII).
Figure 7: Simulated spectra for the kinemon I, obtained from the master equation solution. Color shows the expectation value of the cavity field \(|\langle\hat{a}\rangle\,|\) (top) and the depopulation of the ground state, \(1-P(|g\rangle)\), (bottom). Simulation parameters: \(\Omega/2\pi=200\) MHz for the kinemon spectrum (\(=0.1\) MHz for the cavity spectrum), \(\omega_{r}/2\pi=7.1851\) GHz, \(g/2\pi=64\) MHz, \(\kappa=2.5\)\(\mu\)s\({}^{-1}\), \(\gamma=10\,\mu\)s\({}^{-1}\).
where \(\hat{\mathcal{H}}\) is defined by Eq. (1), \(\hat{a}\) is the bosonic annihilation operator, and the capacitive coupling term is derived using the canonical quantization expression for the \(\hat{n}\) operator, \([\hat{n},\hat{\varphi}]=i\). The corresponding coupling coefficient \(g\) is defined by fitting the model to the data.
For the numerical solution of the stationary Schrödinger equation for \(\hat{\mathcal{H}}_{\text{cQED}}\), we use separate finite difference formulas of the \(6^{\text{th}}\) order to approximate the first and second derivatives. This allows us to achieve convergence of the necessary low-lying eigenenergies on a coarse \(\varphi\) grid of 30 to 50 nodes, which then facilitates execution of the computationally demanding fitting algorithms.
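For illustration, the sketch below assembles the standard 6th-order central-difference matrices for \(d/d\varphi\) and \(d^{2}/d\varphi^{2}\) on a uniform grid and checks their accuracy on a smooth test function, restricted to interior points to sidestep boundary closures. It is not the authors' fitting code; the grid of 41 nodes simply mimics the coarse 30-50 node grids mentioned above.

```python
import numpy as np

def fd_matrices_6th(n, d):
    """Dense 6th-order central-difference matrices D1 ~ d/dphi, D2 ~ d^2/dphi^2."""
    c1 = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60]) / d
    c2 = np.array([1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90]) / d**2
    D1 = np.zeros((n, n))
    D2 = np.zeros((n, n))
    for k, (a1, a2) in enumerate(zip(c1, c2)):
        D1 += a1 * np.eye(n, k=k - 3)            # stencil offsets -3 ... +3
        D2 += a2 * np.eye(n, k=k - 3)
    return D1, D2

n, L = 41, 10.0
phi = np.linspace(-L / 2, L / 2, n)
d = phi[1] - phi[0]
D1, D2 = fd_matrices_6th(n, d)

f = np.exp(-phi**2)                              # smooth test function
interior = slice(3, -3)                          # rows unaffected by the boundary
err1 = np.max(np.abs(D1 @ f - (-2 * phi) * f)[interior])
err2 = np.max(np.abs(D2 @ f - (4 * phi**2 - 2) * f)[interior])
print(f"max interior error: d/dphi {err1:.1e}, d2/dphi2 {err2:.1e}")
```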
To reproduce the multiphoton transitions which can be observed in the lower panels of Fig. 2, we also perform a time-domain simulation by solving the GKSL equation based on the \(\hat{\mathcal{H}}_{\text{cQED}}\)-defined unitary evolution and dissipative energy relaxation dynamics characterized by the collapse operators \(\sqrt{\kappa}\hat{a}\) and \(\sqrt{\gamma}\hat{b}\), \(\kappa,\gamma\) being the corresponding decay rates. Here, the kinemon lowering operator \(\hat{b}\) for each value of \(\varphi_{e}\) is constructed from the lowest eigenstates \(\ket{E_{n}}\),
\[\hat{b}=\hat{1}_{\text{r}}\otimes\sum_{n}\ket{E_{n}}\bra{E_{n+1}}.\]
Then, at a given \(\varphi_{e}\), the master equation is solved with an addition of a resonator driving term to \(\hat{\mathcal{H}}_{\text{cQED}}\) of the form \(\hbar\Omega(\hat{a}^{\dagger}+\hat{a})\cos\omega_{\text{d}}t\). The driving frequency \(\omega_{\text{d}}/2\pi\) is scanned through the same range that is used in the spectroscopy, i.e., from 2.5 to 5.5 GHz, and the resulting steady-state density matrices are saved for further calculation of the observables.
We show the results for both the transmission spectroscopy and the two-tone spectroscopy in Fig. 7. We note overall a good agreement with the experimental data, but find an extra spectral line in Fig. 2(a), having its minimum at around 4.7 GHz, which is not reproduced in the simulation. We attribute it to a sideband two-photon process taking one thermal photon from the cavity and exciting the kinemon to its 4th excited state, located around 12 GHz above the ground state at the lower sweet spot (in the modeling, the cavity is at zero temperature, so such a process cannot be observed). The upside-down-looking spectral line having a maximum frequency of 4.5 GHz is the reverse process, depopulating the kinemon from \(\ket{e}\) to \(\ket{g}\) and exciting the cavity. It is observable in the simulation as \(\ket{e}\) is slightly populated for \(\varphi\in[-1.2,-0.8]\) due to numerical errors.
# Squeeze cementing of micro-annuli: a visco-plastic invasion flow

Mahdi Izadi, Emad Chaparian, Elizabeth Trudel, Ian Frigaard

arXiv:2303.11043v1 (2023-03-20), http://arxiv.org/abs/2303.11043v1
###### Abstract
Squeeze cementing is a process used to repair leaking oil and gas wells, in which a cement slurry is driven under pressure to fill an uneven leakage channel. This results in a Hele-Shaw type flow problem involving a yield stress fluid. We solve the flow problem using an augmented Lagrangian approach and advect forward the fluid concentrations until the flow stops. A planar invasion and a radial (perforation hole) invasion flow are studied. The characteristics of the flow penetration are linked to the channel thickness profile. The distribution of streamlines, flowing and non-flowing zones, evolves during the invasion flow. An interesting aspect of the results is the extreme variability in penetration metrics computed. These depend not only on the stochastic nature of the microannulus thickness, which has significant natural variation in both azimuthal and axial directions, but also on the "luck" of where the perforation hole is, relative to the larger/smaller microannulus gaps. This may explain the unreliability of the process.
keywords: Hele-Shaw flows; yield-stress fluid; squeeze cementing
## 1 Introduction
This paper concerns the modelling of the invasion of a viscoplastic fluid (a microfine cement slurry) into an uneven narrow channel (microannulus). This _squeeze_ cementing operation occurs in the repair of leaking oil and gas wells, which may either be emitting greenhouse gases (GHG) or allowing oil seepage to surface, both flows occurring along the channel to be filled.
When oil and gas wells are constructed, a critical part of the completion process is an operation called _primary_ cementing, whereby cement is pumped into the narrow annular space between the borehole and a steel casing (through which the produced hydrocarbons will flow); see Nelson and Guillot [1]. This cement sheath is vital for mechanical support, to protect the steel from corrosive formation brine, and to seal hydraulically by adhering to both the steel casing and the borehole wall; together these functions are referred to as well integrity. In a typical well, a number of casings will be cemented concentrically inside one another as the well extends deeper.
Wellbore leakage occurs fairly frequently and is an indication of failure to provide a hydraulic seal around the cemented annulus. A recent study found that 13.9 % of all wells in British Columbia (BC), Canada, registered an instance of surface casing vent flow (SCVF) at some point throughout their operating life, meaning that leaking fluids are released through the surface casing assembly. For wells drilled between 2010 and 2018, 28.5 % reported an instance of SCVF [2]. Leakage pathways associated with SCVF include microannuli, cracks within the primary cement sheath, debonding between the annular cement sheath and the interface, and in rare cases, the cement sheath itself may provide a pathway to leakage if its permeability has been compromised during the primary cementing operation (e.g. from excessive contamination); see Fig. 1. It is widely acknowledged that some form of coherent microannular gap is needed for significant leakage to surface, along the casing. The microannulus may arise from cement shrinkage or might result from debonding following pressurization of the well casing (e.g. during a hydraulic fracturing operation). For our purposes, we simply assume that such a gap occurs.
Microannuli and their role in well leakage has become of interest recently not only for GHG emission concerns, but also as part of the design process for potential leakage of \(CO_{2}\) in carbon capture and storage (CCS) operations. Thus, from the CCS direction there are a number of simple models that assign an effective permeability to a well, to represent the microannulus leakage [3; 4; 5]. In well decommissioning on the other hand, the microannulus is generally assigned a thickness and interpreted as a uniform narrow gap. With a view to developing probabilistic approaches to assessing leakage risk, researchers have developed models in which cement permeability, microannulus thickness and crack dimensions are all considered uncertain inputs and sampled from a distribution [6; 7; 8; 9]. Uniform microannulus thicknesses of \(5-300\mu m\) have been used.
In addition to the works discussed above, a number of experimental studies have attempted to measure either flow through a microannulus, to determine its effective permeability, or microannulus size/geometry directly. These experiments are typically conducted on scaled-down sections of wells and aim to recreate downhole conditions such as temperature and pressure cycling. Computer tomography (CT) has been used to determine the size of leakage pathways [10; 11; 12; 13], showing that microannulus geometries are highly variable in nature. A complex network of pathways with non-uniform thickness was observed in their samples, with connectivity of pathways in the axial direction. However, the technique is limited with regard to microannulus size by the resolution of the CT scanner, as discussed by Vralstad, Skorpa and co-authors [10; 11; 14]. Similar non-destructive visualization of microannuli was conducted by Yang _et al._[15], with improved resolution. Both micro-CT and electron microscopy (ESEM) were used, and resolutions of \(11.92\mu m\) for the micro-CT scanner and \(0.05\mu m\) with ESEM were achieved. This study determined the presence of leakage pathways at the cement/formation interface and a varied gap size according to formation properties. Garcia Fernandez _et al._[16] recreated microannuli experimentally under various conditions and characterised the distribution of thickness values (as right-skewed distributions such as gamma and lognormal distributions). Thicknesses ranging from 0 to \(50\mu m\) were found for thermally debonded annuli and 0 to \(1000\mu m\) for larger microannuli, where a residual layer was found between the cement and the casing/formation. Their results showed highly varied microannulus aperture size in the azimuthal direction.
Figure 1: Schematic of typical unconventional gas wells in British Columbia, Canada. Inset shows microannuli and cracked cement, compromising well integrity.
This conclusion is also supported by Storm _et al._[17], who observed hydraulic apertures in the range of 0 to \(118\mu m\). Results from experimental studies thus point to microannuli being highly varied in the azimuthal direction but presenting some continuity in the well axis direction, with microannulus thicknesses ranging from 0 to upwards of \(200\mu m\). Taking these basic features, Trudel and Frigaard [18] derived a probabilistic model for microannulus thickness along sections of well. The base model parameters were calibrated against the distribution of well leakage data collected from over 3000 wells. Later, we use the same model to generate representative microannuli for our study.
This paper targets the repair of microannular well leakage, through an operation called squeeze cementing. For a leaking well, having identified the leaking zone, the steel casing and cement sheath are perforated using shaped charges; Fig. 2. After some washing of debris from the holes, a section of the well is isolated longitudinally and a cement slurry is pumped into the well under pressure. The cement enters into the (large diameter) perforations and any induced fractures/crevices in the surrounding formation, but the pump pressure is generally kept low enough so as not to hydraulically fracture. In the larger channels the cement slurry slows and begins to pack, back towards the well. The arrest of the slurry happens via a combination of rheological and filtration mechanisms, not fully understood [19]. As the resistance downstream builds, the cement slurry is also forced laterally along the well into the narrower cavities of the microannuli and any damaged zones near the perforation; see Fig. 3. It is of interest to understand this invasion process and eventually be able to predict features such as the invasion depth of the slurry, as a function of modelled geometry, at least in a probabilistic sense. This explains the main motivation and direction of this paper.
Whereas much of the squeeze cementing flow will occur in the near wellbore region, from the leakage perspective the microannulus flow is of key interest. This consists of the invasion of a viscoplastic slurry into a narrow uneven channel. Squeeze cementing shares features with at least two other industrial processes. In terms of the geometric uncertainty away from the wellbore, grouting processes are analogous [20; 21]. Regarding the microannulus flow, this is quite similar to the initial primary cementing operation performed on the well, which is typically modelled as a Hele-Shaw flow along a narrow eccentric annulus [22; 23]. There are three significant differences. First, the invasion of the cement into the microannulus is not displacing a rheologically complex fluid, such as the drilling mud in primary cementing.
Figure 2: a) Schematic of typical perforation gun pattern on the inside of the casing. b) Unwrapped irregular microannulus with perforation holes through casing and into formation.
Typically, water is used to clean the perforations from debris in squeeze cementing and may therefore be in the microannulus, but no other preflush is used. Secondly, the geometry in primary cementing is a uniform eccentric annulus, whereas the microannulus geometry is likely more irregular. Thirdly, given the median size of cement particles and the typical size of the microannular gap, it is clear that for squeeze cementing we are often close to the limit where the cement slurry may be regarded as a single-phase continuum, as opposed to a suspension of particles, as in [19]. However, depending on the operator and service company involved, the use of microfine cements, which have particle sizes \(\sim 5-20\mu m\), is generally recommended.
Flows of a viscoplastic fluid past uneven cavities [24; 25], and along uneven fracture geometries [26], have been studied in depth. Hele-Shaw approaches remain valid in the limits of long-thin geometries. More recently, there have been a number of studies of displacement flows through uneven wellbore geometries [27; 28; 29; 30; 31; 32; 33; 34], which have used both Hele-Shaw and Navier-Stokes formulations.
An outline of the paper is as follows.
## 2 Model derivation
In this paper we look at the pressure-driven invasion of a visco-plastic (cement) slurry into an uneven narrow cavity (the microannulus), which is itself filled with another miscible fluid (e.g. water). In general, the setup involves a micro-annulus domain that is bounded by surfaces of at least 2 types: injection/invasion surface(s) and far-field/outflow surfaces. The typical micro-annulus gap has a length-scale \(2\hat{H}_{0}\sim 100\)\(\mu\)m and the lateral dimensions of the micro-annulus domain are \(O(\hat{L})\sim 1\)m. The disparity of length-scales (\(\delta=\hat{H}_{0}/\hat{L}\sim 10^{-4}\ll 1\)), suggests that curvature may be neglected and we may effectively unwrap the micro-annulus domain into a two-dimensional domain with lateral dimensions \((\hat{x},\hat{y})\) and transverse coordinate \(\hat{z}\in[-\hat{H}(\hat{x},\hat{y}),\hat{H}(\hat{x},\hat{y})]\).
In the above, as discussed in §1, the situation is rather similar to the primary cementing flows of [22; 23] in terms of the underlying model. However, primary cementing flows are strongly influenced by buoyancy effects, which become significant when the buoyancy number \(b\) satisfies:
\[b=\frac{\Delta\hat{\rho}\hat{g}\hat{H}_{0}^{2}}{\hat{\mu}_{s}\hat{V}_{0}} \gtrsim 10. \tag{1}\]
Here \(\Delta\hat{\rho}\) represents a density difference between the fluids, \(\hat{g}\) is the gravitational acceleration, and \(\hat{\mu}_{s}\) and \(\hat{V}_{0}\) are a representative viscosity and velocity of the slurry, respectively. In general for squeeze cementing, due to the reduced \(\hat{H}_{0}\) compared to primary cementing, we will have \(b\lesssim 1\). While there may still be extreme situations in which buoyancy between the cement and the in-situ fluid plays a role, we ignore it initially. This also simplifies the model significantly, as the direction of gravity, relative to \((\hat{x},\hat{y})\), would otherwise change around the annulus.
Figure 3: Schematic of squeeze cementing of a single perforation: a) directly after perforation; b) at end of squeeze cementing, indicating penetration radius.
Under the above assumptions regarding buoyancy, together with \(\delta\ll 1\) and \(\delta Re\ll 1\), the leading order1 terms in the Navier-Stokes equations are as follows:
Footnote 1: Here \(Re\) is a Reynolds number based on \(\hat{H}_{0}\), \(\hat{\mu}_{s}\), \(\hat{V}_{0}\) and a representative density. We will not use it further.
\[0 =-\frac{\partial\hat{p}}{\partial\hat{x}}+\frac{\partial\hat{ \tau}_{xz}}{\partial\hat{z}},\] \[0 =-\frac{\partial\hat{p}}{\partial\hat{y}}+\frac{\partial\hat{ \tau}_{yz}}{\partial\hat{z}}, \tag{2}\] \[0 =-\frac{\partial\hat{p}}{\partial\hat{z}}\] \[0 =\hat{\nabla}\cdot\hat{\mathbf{u}}. \tag{3}\]
The constitutive laws are also simplified, being based on the leading order shear flow components of the deviatoric stress and strain rate tensors:
\[\left\{\begin{array}{ll}\hat{\mathbf{\tau}}=\left(\hat{\kappa}\|\hat{\mathbf{\gamma }}\|^{n-1}+\frac{\hat{\tau}_{y}}{\|\hat{\mathbf{\gamma}}\|}\right)\hat{\mathbf{\gamma} }&\mbox{iff}\quad\|\hat{\mathbf{\tau}}\|>\hat{\tau}_{y},\\ \hat{\mathbf{\gamma}}=0&\mbox{iff}\quad\|\hat{\mathbf{\tau}}\|\leqslant\hat{\tau}_{y},\end{array}\right. \tag{4}\]
\[\|\hat{\gamma}\|=\sqrt{\left(\frac{\partial\hat{u}}{\partial\hat{z}}\right)^{ 2}+\left(\frac{\partial\hat{v}}{\partial\hat{z}}\right)^{2}}. \tag{5}\]
The parameters above are the yield stress (\(\hat{\tau}_{y}\)), consistency (\(\hat{\kappa}\)), and power law index (\(n\)), in the usual Herschel-Bulkley law.
The micro-annulus might initially be filled with gas, brine or oil, depending on the type of leakage. Typically, a water-based preflush is pumped ahead of the slurry to wash residual fluids out as well as debris from the perforation. In specialised cases a more rheologically complex preflush may be used [35]. Our assumption is that whatever the in situ fluid is, it is of much smaller viscosity than the invading cement slurry. Thus, although we have a Hele-Shaw type problem we do not expect fingering type instabilities. Due to the large viscosity ratio we assume that the fluid distribution across the gap is uniform, i.e. non-dispersive. We model the cement-preflush flow as a miscible displacement, characterised by the volume fraction \(c(\hat{x},\hat{y},\hat{t})\) of the slurry.
\[\frac{\partial c}{\partial\hat{t}}+\hat{\nabla}\cdot[c\hat{\mathbf{u}}]=\hat{ \nabla}\cdot[\hat{D}_{d}\hat{\nabla}c]. \tag{6}\]
Here \(\hat{D}_{d}\) represents the effects of diffusive transport which can be a combination of molecular diffusion and dispersion effects. We will assume that \(\hat{D}_{d}/(\hat{V}_{0}\hat{L})\ll 1\) (large Peclet number limit), so that the main transport method is advective. On averaging across the micro-annulus gap, and neglecting the diffusive terms, the gap-averaged concentration \(\bar{c}\) satisfies the simplified equation:
\[\frac{\partial}{\partial\hat{t}}[\hat{H}\bar{c}]+\hat{\nabla}\cdot[\hat{H} \bar{c}(\hat{\bar{u}},\hat{\bar{v}})]=0, \tag{7}\]
where now \(\hat{\nabla}\) operates only in the \((\hat{x},\hat{y})\)-plane.
The concentration is used, together with the properties of both fluids, to define _mixture_ rheological properties. The mixture laws we use are based on simple linear interpolation of stresses, using the slurry properties as the main scale as this provides the most resistance to the pressure applied. To clarify, a velocity scale \(\hat{V}_{0}\) is based on a viscous balance, i.e.
\[\frac{\hat{\kappa}_{s}\hat{V}_{0}^{n_{s}}}{\hat{H}_{0}^{n_{s}}}=\frac{\hat{p }_{0}}{\hat{L}}\hat{H}_{0},\]
with \(\hat{\kappa}_{s}\) and \(n_{s}\) the slurry consistency and power law index. The viscosity scale \(\hat{\mu}_{s}\) used above is \(\hat{\mu}_{s}=\hat{\kappa}_{s}\hat{V}_{0}^{n_{s}-1}/\hat{H}_{0}^{n_{s}-1}\). The mixture power law index is: \(n=cn_{s}+(1-c)n_{p}\), with the \(p\) subscript denoting preflush properties. The mixture yield stress and consistency are defined as:
\[\hat{\tau}_{y}=c\hat{\tau}_{y,s}+(1-c)\hat{\tau}_{y,p},\qquad\hat{\kappa}\frac{ \hat{V}_{0}^{n}}{\hat{H}_{0}^{n}}=c\hat{\kappa}_{s}\frac{\hat{V}_{0}^{n_{s}}}{ \hat{H}_{0}^{n_{s}}}+(1-c)\hat{\kappa}_{p}\frac{\hat{V}_{0}^{n_{p}}}{\hat{H}_{ 0}^{n_{p}}}. \tag{8}\]
Often the preflush will be Newtonian, in which case \(\hat{\tau}_{y,p}=0\), \(n_{p}=1\) and \(\hat{\kappa}_{p}\) is the viscosity. Of course, (8) is one of many possible mixture laws and there is no particular justification for it. For the fluid pairs we shall consider, with a much less viscous preflush, we effectively assume that
\[\hat{\kappa}_{s}\frac{\hat{V}_{0}^{n_{s}}}{\hat{H}_{0}^{n_{s}}}\gg\hat{ \kappa}_{p}\frac{\hat{V}_{0}^{n_{p}}}{\hat{H}_{0}^{n_{p}}}\quad\text{ and }\quad\hat{\tau}_{y,s}\gg\hat{\tau}_{y,p}.\]
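The mixture closure can be written as a small helper function; the sketch below (our own naming, illustrative numbers) makes explicit how the mixture consistency in (8) depends on the scales \(\hat{V}_{0}\) and \(\hat{H}_{0}\) whenever \(n_{s}\neq n_{p}\).

```python
def mixture_properties(c, tau_ys, kappa_s, n_s, tau_yp, kappa_p, n_p, V0, H0):
    """Gap-averaged mixture rheology from the slurry volume fraction c, Eq. (8)."""
    n = c * n_s + (1.0 - c) * n_p                  # linear rule for the index
    tau_y = c * tau_ys + (1.0 - c) * tau_yp        # linear rule for the yield stress
    # Consistencies are mixed through the characteristic shear stresses:
    # kappa*(V0/H0)^n = c*kappa_s*(V0/H0)^n_s + (1-c)*kappa_p*(V0/H0)^n_p
    g = V0 / H0                                    # characteristic shear rate
    kappa = (c * kappa_s * g**n_s + (1.0 - c) * kappa_p * g**n_p) / g**n
    return tau_y, kappa, n

# Illustrative values: yield-stress slurry displacing a water-like preflush
tau_y, kappa, n = mixture_properties(c=0.5, tau_ys=5.0, kappa_s=0.2, n_s=0.6,
                                     tau_yp=0.0, kappa_p=1e-3, n_p=1.0,
                                     V0=0.01, H0=50e-6)
print(tau_y, kappa, n)
```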
### Stream-function formulation
From the above reduced model, we can follow the usual Hele-Shaw derivation for a visco-plastic fluid [22; 23], given briefly here. First, we see that the areal flow rates are divergence-free in the \((\hat{x},\hat{y})\)-plane, which leads to the definition of the stream function \(\hat{\psi}\):
\[\left(\frac{\partial\hat{\psi}}{\partial\hat{y}},-\frac{\partial\hat{\psi}}{ \partial\hat{x}}\right)=\int_{0}^{\hat{H}}\left(\hat{u},\hat{v}\right)\; \mathrm{d}\hat{z}=\hat{H}\left(\hat{\bar{u}},\hat{\bar{v}}\right). \tag{9}\]
Here we have assumed symmetry of the flow about \(\hat{z}=0\). From (2) and (4) it follows, upon rearranging and integrating twice, that \(\left(\hat{\bar{u}},\hat{\bar{v}}\right)\) is parallel to the pressure gradient \(\hat{\nabla}\hat{p}\), flowing down the gradient.
Now by orienting the coordinates in the direction of the gap-averaged flow, say locally along \(\mathbf{e}_{s}\), there will be only a single non-zero component of the velocity and pressure gradient. The velocity profile is easily found:
\[\hat{u}_{s}=\left\{\begin{array}{ll}-\frac{n}{1+n}\frac{1}{|\hat{\nabla}\hat{p}|^{2}\hat{\kappa}^{\frac{1}{n}}}\left[\left(|\hat{\nabla}\hat{p}|\hat{H}-\hat{\tau}_{y}\right)^{1+\frac{1}{n}}-\left(|\hat{\nabla}\hat{p}||\hat{z}|-\hat{\tau}_{y}\right)^{1+\frac{1}{n}}\right]\frac{\partial\hat{p}}{\partial\hat{s}}&\text{iff}\quad|\hat{z}|>\frac{\hat{\tau}_{y}}{|\hat{\nabla}\hat{p}|},\\ -\frac{n}{1+n}\frac{1}{|\hat{\nabla}\hat{p}|^{2}\hat{\kappa}^{\frac{1}{n}}}\left(|\hat{\nabla}\hat{p}|\hat{H}-\hat{\tau}_{y}\right)^{1+\frac{1}{n}}\frac{\partial\hat{p}}{\partial\hat{s}}&\text{iff}\quad|\hat{z}|\leqslant\frac{\hat{\tau}_{y}}{|\hat{\nabla}\hat{p}|},\end{array}\right. \tag{10}\]
which is the plane Poiseuille flow for a Herschel-Bulkley fluid. On integrating across the half-gap:
\[|\hat{\nabla}\hat{\psi}|=\hat{H}\hat{\bar{u}}_{s}=\int_{0}^{\hat{H}}\hat{u}_{ s}\;\mathrm{d}\hat{z}=\frac{n\left(|\hat{\nabla}\hat{p}|\hat{H}-\hat{\tau}_{y} \right)^{1+\frac{1}{n}}_{+}\left((n+1)|\hat{\nabla}\hat{p}|\hat{H}+n\hat{\tau} _{y}\right)}{(n+1)(2n+1)\hat{\kappa}^{1/n}|\hat{\nabla}\hat{p}|^{2}}. \tag{11}\]
Here \((\cdot)_{+}\) denotes the positive part of the bracketed expression and note that \(|\hat{\nabla}\hat{p}|=-\frac{\partial\hat{p}}{\partial\hat{s}}\). The combination \(|\hat{\nabla}\hat{p}|\hat{H}\) that appears above is simply the wall shear stress \(\hat{\tau}_{w}\). We observe that the flow stops (locally), provided that \(|\hat{\nabla}\hat{p}|\hat{H}\leq\hat{\tau}_{y}\), showing that the flow is of limiting pressure gradient type.
While (11) gives the algebraic relationship between the areal flow rate and the pressure gradient, we also want to use the inverse of this relation, which we write as
\[|\hat{\nabla}\hat{p}|=\hat{S}(|\hat{\nabla}\hat{\psi}|).\]
The function \(\hat{S}(|\hat{\nabla}\hat{\psi}|)\) can be found numerically, provided that \(|\hat{\nabla}\hat{\psi}|>0\), i.e. by inverting the explicit function (11). The value of \(\hat{S}\) as \(|\hat{\nabla}\hat{\psi}|\to 0\), is the limiting pressure gradient: \(\hat{\tau}_{y}/\hat{H}\). Subtracting off this limiting value, we can write:
\[\hat{S}(|\hat{\nabla}\hat{\psi}|)=\frac{\hat{\tau}_{y}}{\hat{H}}+\hat{\chi}(| \hat{\nabla}\hat{\psi}|)=\frac{\hat{\tau}_{y}}{\hat{H}}+\frac{\hat{\tau}_{w}(| \hat{\nabla}\hat{\psi}|)-\hat{\tau}_{y}}{\hat{H}}. \tag{12}\]
The first expression follows [22], whereas the second (equivalent) expression favours a description in terms of the wall shear stress [23]. The properties of \(\chi(|\hat{\nabla}\hat{\psi}|)\) are analysed in [36], wherein various results on the existence and uniqueness of solutions are also developed.
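Both directions of the closure are easy to evaluate numerically. The sketch below implements (11) and inverts it with a bracketing root finder (SciPy's `brentq`); the function names and numerical values are ours, and the dimensional form is used for concreteness.

```python
import numpy as np
from scipy.optimize import brentq

def flux_from_pressure_gradient(G, H, tau_y, kappa, n):
    """|grad psi| for a given |grad p| = G: Herschel-Bulkley plane Poiseuille flow, Eq. (11)."""
    tau_w = G * H                            # wall shear stress
    if tau_w <= tau_y:                       # below the limiting pressure gradient: no flow
        return 0.0
    return (n * (tau_w - tau_y)**(1.0 + 1.0 / n) * ((n + 1.0) * tau_w + n * tau_y)
            / ((n + 1.0) * (2.0 * n + 1.0) * kappa**(1.0 / n) * G**2))

def pressure_gradient_from_flux(q, H, tau_y, kappa, n):
    """S(|grad psi|): invert Eq. (11) by bracketing and root finding, cf. Eq. (12)."""
    G_min = tau_y / H                        # limiting pressure gradient
    if q <= 0.0:
        return G_min
    G_max = 2.0 * G_min + 1.0
    while flux_from_pressure_gradient(G_max, H, tau_y, kappa, n) < q:
        G_max *= 2.0                         # expand the bracket until it contains the root
    return brentq(lambda G: flux_from_pressure_gradient(G, H, tau_y, kappa, n) - q,
                  G_min * (1.0 + 1e-12), G_max)

# Round-trip check with illustrative values
H, tau_y, kappa, n = 50e-6, 5.0, 0.2, 0.6
G = 3.0 * tau_y / H
q = flux_from_pressure_gradient(G, H, tau_y, kappa, n)
print(G, pressure_gradient_from_flux(q, H, tau_y, kappa, n))
```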
Once \(\hat{S}(|\hat{\nabla}\hat{\psi}|)\) has been computed, we may eliminate the pressure as follows:
\[0=\hat{\nabla}\cdot\left(\frac{\partial\hat{p}}{\partial\hat{y}},-\frac{ \partial\hat{p}}{\partial\hat{x}}\right)=\hat{\nabla}\cdot\left(\hat{S}(|\hat {\nabla}\hat{\psi}|)\frac{\hat{\nabla}\hat{\psi}}{|\hat{\nabla}\hat{\psi}|} \right). \tag{13}\]
### Scaled problem
For the scaled problem, we first scale the coordinates:
\[(x,y)=\frac{(\hat{x},\hat{y})}{\hat{L}}\quad\text{and}\quad z=\frac{\hat{z}}{ \hat{H}_{0}}.\]
We suppose that the far-field pressure is 0 and at the inflow \(\hat{p}=\hat{p}_{0}\) is imposed, used to scale the pressure. The velocity scale \(\hat{V}_{0}\) is used for velocities in the \((\hat{x},\hat{y})\) plane and \(\hat{H}_{0}\hat{V}_{0}\) for the stream function. A timescale for the filling: \(\hat{t}_{0}=\hat{L}/\hat{V}_{0}\), is used for time, which only appears in the concentration equation.
Dimensionless variables use the same symbols as their dimensional equivalents, but without the \(\hat{\cdot}\) symbol. Equation (11) becomes:
\[|\nabla\psi|=\frac{n\left(H|\nabla p|-Y\right)_{+}^{1+\frac{1}{n}}\left((n+1)H|\nabla p|+nY\right)}{(n+1)(2n+1)\kappa^{1/n}|\nabla p|^{2}}, \tag{14}\]
where
\[\kappa=c+(1-c)\mu_{p}:\qquad\mu_{p}=\frac{\hat{\kappa}_{p}}{\hat{\kappa}_{s}} \frac{\hat{V}_{0}^{n_{p}-n_{s}}}{\hat{H}_{0}^{n_{p}-n_{s}}}\ll 1. \tag{15}\]
The wall shear stress is \(H|\nabla p|\) and \(Y=\hat{\tau}_{y}\hat{L}/(\hat{p}_{0}\hat{H}_{0})\). Note the limiting pressure gradient is now \(Y/H\) and the scaled pressure gradient term is \(|\nabla p|=S(|\nabla\psi|)\). The dimensionless viscous part of the pressure gradient is \(\chi(|\nabla\psi|)=S(|\nabla\psi|)-Y/H\), which is obtained from (14):
\[|\nabla\psi|=\frac{n\left(H\chi\right)^{1+\frac{1}{n}}\left((n+1)H\chi+(2n+1)Y\right)}{(n+1)(2n+1)\kappa^{1/n}(\chi+Y/H)^{2}}. \tag{16}\]
After scaling, the stream function satisfies:
\[0=\nabla\cdot\mathbf{S}:\qquad\mathbf{S}=\left[\chi(|\nabla\psi|)+\frac{Y}{H }\right]\frac{\nabla\psi}{|\nabla\psi|}. \tag{17}\]
The time advance is via the fluid concentration, which satisfies:
\[\frac{\partial}{\partial t}[H\bar{c}]+\nabla\cdot(H\bar{c}\bar{u},H\bar{c} \bar{v})=0. \tag{18}\]
## 3 Model parameters and scope of study
For our dimensionless model we essentially explore invasion (displacement) flows driven by a unit pressure drop, where the in situ fluid is a low-viscosity Newtonian fluid. The main resistance comes from the cement slurry, and the limiting pressure gradient activates wherever the wall shear stress falls below the yield stress. This balance is captured in the scaled yield stress parameter \(Y\). We expect that for large enough \(Y\) the penetrating fluid will be unable to reach the boundaries of the domain. The other parameters, \(\kappa\) and \(n\), mostly influence the speed of the flow via the effective viscosity, i.e. how fast the steady flow is achieved, but cannot stop the flow. The other key parameter is the microannulus geometry.
Our study is mostly focused on establishing a viable method for assessing the effectiveness of squeeze cementing operations, which depends on the degree of blockage of the microannulus after the cement slurry has been pumped. To do this we select the microannulus thickness using a stochastic model based on [18]; see §3.2. We use this geometry to study two types of invasion problem in this paper: a planar invasion and a radial (perforation squeeze) invasion. Since generally the slurry is more viscous than the in situ fluid, we do not expect fingering-type instabilities. With a uniform gap width, both problems would likely have a stable uniform invasion. However, the irregular microannulus geometry ensures that the penetration is far from uniform.
### Planar invasion
Here we have a micro-annulus domain \(\Omega=(x,y)\in(0,1)\times(-1/2,1/2)\), within which (17) & (18) are satisfied. The inflow/invasion occurs along \(\Gamma_{i}\), at \(x=0\), and the outflow/far-field is \(\Gamma_{o}\), along \(x=1\). The boundaries at \(y=\mp 1/2\) are denoted \(\Gamma_{1}\) & \(\Gamma_{2}\). No flow is allowed through \(\Gamma_{1}\) & \(\Gamma_{2}\). Boundary conditions imposed are as follows; see Fig. 4a).
* Along \(\Gamma_{o}\), we set the far-field pressure to \(p=0\), which requires that \[\frac{\partial\psi}{\partial x}(1,y)=0,\ \ \ \ y\in[-1/2,1/2].\] (19) Recall that the gradients of pressure and stream function are orthogonal and (19) ensures constant \(p\) along \(\Gamma_{o}\).
* Along \(\Gamma_{i}\), we would like to set the pressure \(p=1\), meaning again that: \[\frac{\partial\psi}{\partial x}(0,y)=0,\ \ \ \ y\in[-1/2,1/2],\] (20) to ensure a constant pressure along \(\Gamma_{i}\).
* Although (19) and (20) ensure a constant pressure on each boundary and consequently a constant pressure drop, there is no means of assuring that \(p(0,y)-p(1,y)=1\). This is achieved via the upper and lower wall boundary conditions. Along \(\Gamma_{1}\) & \(\Gamma_{2}\), we impose: \[\psi(x,-1/2)=0,\ \ \ \ \ \psi(x,1/2)=Q\ \ \ \ x\in[0,1].\] (21) These conditions ensure a fixed flow rate \(Q\) between the upper and lower boundaries, and that there is no flow through the boundaries.
* We iterate to find \(Q\), such that \[\int_{\Gamma_{1}}\frac{\partial p}{\partial x}dx=1,\ \ \ \mbox{or}\ \ \ \int_{\Gamma_{2}}\frac{\partial p}{\partial x}dx=1.\] (22) Note that the flow rate monotonically increases with the pressure drop, enabling the solution to be found; a minimal bisection sketch of this outer iteration is given after this list.
* In advancing the concentration, the micro-annulus initially has \(\bar{c}=0\) everywhere and \(\bar{c}=1\) is imposed along \(\Gamma_{i}\). An outflow condition is used on the other boundaries when needed.
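As an illustration of the outer iteration for (22), the sketch below adjusts \(Q\) by bisection until the computed pressure drop equals one. The wrapper `pressure_drop(Q)` is an assumed placeholder for a solve of (17) at fixed \(Q\); only its monotone increase with \(Q\) is relied upon, and `pressure_drop(0) < 1` is assumed.

```python
# A minimal sketch of the outer iteration for (22): adjust Q until the computed pressure
# drop equals one. pressure_drop(Q) is an assumed wrapper around a solve of (17) at fixed
# Q, returning p(0, y) - p(1, y).
def find_Q(pressure_drop, target=1.0, tol=1e-6, max_iter=100):
    lo, hi = 0.0, 1.0
    while pressure_drop(hi) < target:     # expand the bracket upwards
        hi *= 2.0
    for _ in range(max_iter):             # bisection on the monotone map Q -> pressure drop
        mid = 0.5 * (lo + hi)
        if pressure_drop(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```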
#### 3.1.1 Perforation squeeze
For a perforation squeeze, we solve the equations (17) & (18) in a rectangular domain with a perforation hole in the centre. More generally this could be over a stencil of perforations, according to whatever shot pattern is implemented in the perforation gun. Here we are concerned mostly with feasibility. Thus, we consider a single perforation only and simplify to a half-domain, \(\Omega=(x,y)\in(0,1/2)\times(-1/2,1/2)\), with a semicircular inflow centred at \((0,0)\), with radius \(r=1/30\). We impose a symmetry condition along \(x=0\). The inflow/invasion occurs radially outwards from \(\Gamma_{i}\), where \(\sqrt{x^{2}+y^{2}}=r\), and the outflow/far-field boundaries are \(\Gamma_{o,1}\), \(\Gamma_{o,2}\), & \(\Gamma_{o,3}\), defined as \(y=1/2\), \(x=1/2\), and \(y=-1/2\), respectively; see Fig. 4b).
The micro-annulus domain initially has \(c=0\) and for \(t>0\), we impose \(c=1\) along \(\Gamma_{i}\) and outflow conditions elsewhere. Boundary conditions for (17) are as follows.
* Along \(\Gamma_{o,1}\), \(\Gamma_{o,2}\) and \(\Gamma_{o,3}\), we have the far-field pressure \(p=0\), which requires that \[\begin{split}&\frac{\partial\psi}{\partial x}(1/2,y)=0,\ \ \ \ y\in[-1/2,1/2],\\ &\frac{\partial\psi}{\partial y}(x,\pm 1/2)=0,\ \ \ \ x\in[0,1/2].\end{split}\] (23) The conditions in (23) imply a constant pressure along \(\Gamma_{o,1}\), \(\Gamma_{o,2}\) and \(\Gamma_{o,3}\), and as these boundaries are connected, the pressure is equal on all of them.
* Along \(\Gamma_{i}\), we would like to set the pressure \(p=1\). As in the planar flow we implement a constant pressure on this boundary, via the Neumann condition: \[\frac{\partial\psi}{\partial n}=0\ \ \ \ \ \sqrt{x^{2}+y^{2}}=r,\] (24) where \(\mathbf{n}\) is the unit normal to \(\Gamma_{i}\).
* As previously, we need to control the pressure drop between \(\Gamma_{i}\) and the outflow. We do this via imposing a flow rate condition on \(\Gamma_{1}\) and \(\Gamma_{2}\): \[\psi(0,y)=0,\ \ \ \ y\in[-1/2,-r],\ \ \ \ \ \ \ \ \psi(0,y)=Q,\ \ \ \ y\in[r,1/2].\] (25) Here \(Q\) represents the volumetric flowrate into the microannulus domain. Note that since \(\psi\) is constant along \(\Gamma_{1}\) and \(\Gamma_{2}\), there is no flow across these boundaries (in agreement with the imposed symmetry).
Figure 4: Schematic of domains and boundary conditions for: a) planar invasion problem; b) perforation squeeze problem.
We now vary \(Q\) in order to iteratively satisfy the pressure condition \(p=1\), i.e. we satisfy,
\[\int_{r}^{1/2}\,\frac{\partial p(x,0)}{\partial x}\ dx=1. \tag{26}\]
### Micro-annulus model
Finally, we summarize the construction of a representative microannulus thickness using a stochastic model. As discussed in §1, characterization of microannulus geometry has been studied by a range of authors. Most studies point to highly varied microannulus thickness in the azimuthal direction, while a certain continuity is noted in the well-axis direction. The magnitude of these variations is not well known. Microannulus thicknesses typically range from 0 to 200 \(\mu m\). While microannulus thicknesses of up to 1000 \(\mu m\) have been measured, these are relatively uncommon. The distribution of microannulus thickness one might encounter is best described using right-skewed distributions, as detailed by [16]. Based on these observations, in [18] we developed a stochastic model for microannulus thickness variation and calibrated the model against BC wellbore leakage data. We describe this model below.
We construct \(\hat{h}(\hat{x},\hat{y})\) over a rectangular domain. The coordinate \(\hat{y}\) represents the axial direction along the well and the coordinate \(\hat{x}\) is oriented in the azimuthal direction. A nominal length \(\hat{L}_{c}\), set to the length of a stand of casing used in industry, is used for the axial extent. Values of \(\hat{h}(\hat{x},\hat{y})\) are assigned on a mesh \((\hat{x}_{i},\hat{y}_{j})\) and used via interpolation. The microannulus thickness at each end of this section, at \(\hat{y}=0\), \(\hat{L}_{c}\), is set by random sampling of a lognormal distribution with parameters \(\mu=4.1,\ \sigma=1.15\) (with microannulus thickness measured in \(\mu m\)). These end values are then used to populate the microannulus thickness at interior points, with \(\hat{h}(\hat{x},\hat{y})\) made periodic in \(\hat{x}\). The interior points satisfy the following anisotropic averaging (diffusion) law:
\[\hat{h}_{i,j}=\frac{1}{2}m_{y}(\hat{h}_{i,j+1}+\hat{h}_{i,j-1})+\frac{1}{2}m_ {x}(\hat{h}_{i+1,j}+\hat{h}_{i-1,j})+\frac{\hat{\varepsilon}_{i,j}}{\sqrt{2}}, \tag{27}\]
where \(\hat{\varepsilon}_{i,j}\) is a random variable sampled from a normal distribution with zero mean and standard deviation \(a\delta\hat{y}\). Here \(a\) controls the dimensionless amplitude of the stochastic perturbation to the averaging operator and \(\delta\hat{y}\) the mesh size used in \(\hat{y}\). Provided the mesh sizes in \((\hat{x},\hat{y})\) directions have a fixed ratio, including \(\delta\hat{y}\) in the amplitude of \(\hat{\varepsilon}_{i,j}\) controls the mesh dependency. The parameters \(m_{x}\) and \(m_{y}\) satisfy \(m_{x}+m_{y}=1\). The observed anisotropy in \(\hat{h}(\hat{x},\hat{y})\), i.e. stronger variations azimuthally than axially, is controlled by selecting \(m_{x}\ll m_{y}\). In [18] each microannulus geometry is used in a leakage model based on the typical parameters of a BC well, resulting in computation of a leakage rate; an example is given in Fig. 5.
Here we are interested in the squeeze cementing flows that penetrate microannular spaces, rather than gas flow along a microannulus. Although we use the above model for microannulus thickness, the length-scale \(\hat{L}_{c}\) is too long. In squeeze cementing the perforation holes are made in a pattern that involves spacing azimuthally and along the well, typically at distances \(\sim 10-40\) cm. Thus, for our purposes we calculate \(\hat{h}(\hat{x},\hat{y})\) as above, sample a rectangle of appropriate size for the geometry we consider, and average \(\hat{h}\) to compute the mean value \(2\hat{H}_{0}\), which is used to normalize and define \(H(x,y)\). A selection of normalised \(H(x,y)\) is illustrated in Fig. 6, showing the directional anisotropy and the typical variations.
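To illustrate the gap-width construction, the sketch below generates a normalised \(H(x,y)\) following (27): lognormal end values in the axial direction, periodicity azimuthally, and relaxation sweeps of the anisotropic averaging law with a fixed random forcing. The mesh sizes, parameter values and number of sweeps are illustrative assumptions rather than the calibrated values of [18].

```python
# A minimal sketch of the stochastic gap model (27). Dirichlet ends in y (axial),
# periodicity in x (azimuthal); the parameter values below are illustrative only.
import numpy as np

def microannulus_thickness(nx=64, ny=64, mx=0.05, a=2.0, dy=1.0, sweeps=2000, seed=0):
    rng = np.random.default_rng(seed)
    my = 1.0 - mx                                      # m_x + m_y = 1, with m_x << m_y
    h = np.zeros((nx, ny))
    h[:, 0] = rng.lognormal(mean=4.1, sigma=1.15, size=nx)   # end values, in microns
    h[:, -1] = rng.lognormal(mean=4.1, sigma=1.15, size=nx)
    h[:, 1:-1] = 0.5 * (h[:, 0:1] + h[:, -1:])         # crude initial guess for the interior
    eps = rng.normal(0.0, a * dy, size=(nx, ny)) / np.sqrt(2.0)   # fixed random forcing
    for _ in range(sweeps):                            # Jacobi-type relaxation of (27)
        h[:, 1:-1] = (0.5 * my * (h[:, 2:] + h[:, :-2])
                      + 0.5 * mx * (np.roll(h, -1, axis=0)[:, 1:-1]
                                    + np.roll(h, 1, axis=0)[:, 1:-1])
                      + eps[:, 1:-1])
    h = np.maximum(h, 0.0)                             # guard against (rare) negative values
    return h / h.mean()                                # normalise so the mean gap is one
```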
## 4 Numerical method
The numerical challenge is only to compute the velocity field (stream function) from (17), as computation of the concentration equation is more standard. Our method is based on the variational formulation of (17) and its equivalent minimization, as introduced by [37]. Here we outline the method focusing on the planar invasion flow; the perforation flow is similar. We formally define two spaces for the solution:
\[\mathcal{V} = \left\{\phi\in W^{1,1+n}(\Omega):\ \phi=0\ \text{on}\ \Gamma_{1};\ \ \phi=Q\ \text{on}\ \Gamma_{2}\right\} \tag{28}\] \[\mathcal{V}_{0} = \left\{\tilde{\phi}\in W^{1,1+n}(\Omega):\ \tilde{\phi}=0\ \text{on}\ \Gamma_{1};\ \ \tilde{\phi}=0\ \text{on}\ \Gamma_{2}\right\} \tag{29}\]
The Sobolev space \(W^{1,1+n}(\Omega)\) is the general solution space; see [37]. The space \(\mathcal{V}\) includes the essential boundary conditions. For the variational setting and minimization we need a closed space; hence \(\mathcal{V}_{0}\). Evidently \(\mathcal{V}\) is not empty as, for example, \(\psi_{0}=Q(y+1/2)\in\mathcal{V}\). For a given \(\psi_{0}\) and any \(\phi\in\mathcal{V}\), we have \(\tilde{\phi}=\phi-\psi_{0}\in\mathcal{V}_{0}\). Therefore, the optimization can be formally carried out over \(\mathcal{V}_{0}\) and combined with \(\psi_{0}\) for the solution in \(\mathcal{V}\).
We denote the solution by \(\tilde{\psi}\in\mathcal{V}_{0}\) and \(\tilde{\phi}\) is any other test function in \(\mathcal{V}_{0}\). From (17), on using the divergence theorem:
\[0=\int_{\Gamma_{i}\cup\Gamma_{1}\cup\Gamma_{2}\cup\Gamma_{o}}(\tilde{\phi}-\tilde{\psi})\mathbf{S}\cdot\mathbf{n}\ d\Gamma-\int_{\Omega}\mathbf{S}\cdot\nabla(\tilde{\phi}-\tilde{\psi})\ d\Omega. \tag{30}\]
The conditions on \(\mathcal{V}_{0}\) ensure that the integrals along \(\Gamma_{1}\) & \(\Gamma_{2}\) vanish. The integrals along \(\Gamma_{i}\) & \(\Gamma_{o}\) vanish only provided \(S_{x}=0\) for the solution. This is satisfied since \(p\) is constant along these boundaries and \(S_{x}=\frac{\partial p}{\partial y}=0\). With some further manipulations we find that (30) becomes
\[\int_{\Omega}\frac{\chi(|\nabla\psi|)}{|\nabla\psi|}\nabla\psi\cdot\nabla( \tilde{\phi}-\tilde{\psi})+\frac{Y}{H}(|\nabla(\psi_{0}+\tilde{\phi})|-|\nabla (\psi_{0}+\tilde{\psi})|)\ d\Omega\geq 0,\ \ \forall\tilde{\phi}\in\mathcal{V}_{0}. \tag{31}\]
The elliptic variational inequality above is equivalent to the following minimization problem:
\[\min_{\tilde{\phi}\in\mathcal{V}_{0}}J(\nabla\tilde{\phi}),\ \ \ \ \ J(\nabla \tilde{\phi})=\int_{\Omega}\left(\int_{0}^{|\nabla(\psi_{0}+\tilde{\phi})|} \chi(a)\ da+\frac{Y}{H}|\nabla(\psi_{0}+\tilde{\phi})|\right)\ d\Omega. \tag{32}\]
The minimization, equation (32), is of a standardised form. The non-differentiability of the second term in \(J\) is dealt with either by regularization or by an augmented Lagrangian method (which we adopt here). This amounts to solving the following saddle point problem [23; 36; 37]:
\[\min_{\tilde{\phi}\in\mathcal{V}_{0},\ \tilde{\mathbf{q}}\in[L^{1+n}(\Omega)]^ {2}}\max_{\mu\in[L^{1+n}(\Omega)]^{2}}\left\{J(\tilde{\mathbf{q}})+\int_{ \Omega}\mu\cdot(\nabla\tilde{\phi}-\tilde{\mathbf{q}})\ d\Omega+\frac{r}{2} \int_{\Omega}|\nabla\tilde{\phi}-\tilde{\mathbf{q}}|^{2}\ d\Omega\right\}. \tag{33}\]
Figure 5: Microannulus geometry that results in a leakage rate of \(2.00m^{3}/day\), using the leakage model in [18].
Here for the solution we generally have \(\tilde{\phi}\to\tilde{\psi}\), and \(\tilde{\mathbf{q}}\to\nabla\tilde{\psi}\). The variable \(\mu\) is the Lagrange multiplier for the constraint \(\nabla\tilde{\phi}=\tilde{\mathbf{q}}\), and is found to converge to \(\mathbf{S}\). We discuss the solution iteration steps below.
Note first that while the above saddle point finds \(\psi\) for a given \(Q\), the pressure constraint is not satisfied. This constraint can be satisfied either in an outer iteration, or as part of the iteration for solving (33).
### Uzawa iteration
The saddle point problem is solved by sequentially minimizing for each variable. We assume at the \(k\)-iterate that we have \(\tilde{\psi}^{k}\), \(\tilde{\mathbf{q}}^{k}\) and \(\mu^{k}\). Then \(\tilde{\psi}^{k+1}\) satisfies:
\[0=\int_{\Omega}\left(-r\nabla^{2}\tilde{\psi}^{k+1}+r\nabla\cdot\tilde{ \mathbf{q}}^{k}-\nabla\cdot\mu^{k}\right)\tilde{\phi}\ d\Omega\qquad\forall \tilde{\phi}\in\mathcal{V}_{0}. \tag{34}\]
This is a linear Poisson equation at each step. Note that the non-essential conditions on \(\Gamma_{i}\) and \(\Gamma_{o}\) can be enforced by ensuring that \(\tilde{q}^{k}_{x}=0\) and \(\mu^{k}_{x}=0\) on these boundaries.
Note that \(Q\) does not explicitly appear in this equation, but defines \(\psi_{0}\). Generally (34) requires a matrix system to be solved for each new value of \(r\nabla\cdot\tilde{\mathbf{q}}^{k}-\nabla\cdot\mu^{k}\), but the same linear system is solved for successive iterates, which can allow significant speed-up. Having computed \(\tilde{\psi}^{k+1}\) one can calculate the pressure drop due to \(\tilde{\psi}^{k+1}\) and adjust \(Q\) directly to produce the desired pressure drop, since the pressure drop is proportional to \(Q\).
To find \(\tilde{\mathbf{q}}^{k+1}\in[L^{1+n}(\Omega)]^{2}\), we need only minimize locally, e.g. on each discrete element. It has more physical meaning to add \(\nabla\psi_{0}\) to both \(\tilde{\mathbf{q}}^{k+1}\) and \(\nabla\tilde{\psi}^{k+1}\). We then find \(\mathbf{q}^{k+1}=\nabla\psi_{0}+\tilde{\mathbf{q}}^{k+1}\) as the minimizer
Figure 6: Examples of stochastically generated gap widths \(H(x,y)\), normalized to have an average equal to one.
of:
\[\min_{\mathbf{q}}\left\{\int_{0}^{q}\chi(a)\ da+\frac{Y}{H}q+\frac{r}{2}q^{2}-( \mu^{k}+\nabla\psi^{k+1})\cdot\mathbf{q}\right\}. \tag{35}\]
Here we have written \(q=|\mathbf{q}|\). The minimizer must be parallel to \(\mathbf{m}=(\mu^{k}+\nabla\psi^{k+1})\), so we may write \(\mathbf{q}=q\mathbf{m}/m\). Equation (35) then becomes a minimization over the single variable \(q\):
\[\min_{q}\left\{\int_{0}^{q}\chi(a)\ da+\frac{r}{2}q^{2}+\left(\frac{Y}{H}-m \right)q\right\}. \tag{36}\]
Here \(m\) plays the role of the pressure gradient. We see that if \(Y\geq Hm\) then the solution is \(q=0\). Otherwise, supposing that \(Y<Hm\) we differentiate (36) to find \(q\) from:
\[\chi(q)+rq=m-\frac{Y}{H}>0, \tag{37}\]
which has a single solution since \(\chi(q)\) is strictly increasing. To find this solution, the computation can be accelerated by using the algebraic relation (16), which is the implicit definition of \(\chi\). In this way, we still find the root of a monotone algebraic equation but do not need to evaluate \(\chi\) explicitly at each step. More clearly, instead of (36) we solve (38) for \(\chi\):
\[\chi+r\frac{n\left(H\chi\right)^{1+\frac{1}{n}}\left((n+1)H\chi+(n+2)Y\right) }{(n+1)(2n+1)(\chi+Y/H)^{2}}=m-\frac{Y}{H}>0. \tag{38}\]
We then also have:
\[q=\frac{n\left(H\chi\right)^{1+\frac{1}{n}}((n+1)H\chi+(n+2)Y)}{(n+1)(2n+1)( \chi+Y/H)^{2}}\ \text{and}\ \mathbf{q}^{k+1}=\mathbf{q}=q\frac{\mathbf{m}}{m}. \tag{39}\]
Observe that since \(\mathbf{m}=(\mu^{k}+\nabla\psi^{k+1})\), the flow rate \(Q\) is explicitly included via \(\psi_{0}\). As the iteration proceeds, \(\mu^{k}\rightarrow\mathbf{S}\), and the condition \(Y\geq Hm\), becomes the limiting pressure gradient condition (as also \(\nabla\psi^{k+1}\to 0\) in these regions).
Lastly, for \(\mu^{k+1}\) we use a projection method:
\[\mu^{k+1}=\mu^{k}+\varrho[\nabla\psi^{k+1}-\mathbf{q}^{k+1}]. \tag{40}\]
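At a single quadrature point, the local steps (36)-(40) thus reduce to the solution of a scalar monotone equation followed by a projection. A minimal sketch is given below; the function and variable names are assumptions, and `flux_from_chi` simply transcribes (16). In practice this update is performed on the P1-DC degrees of freedom, element by element.

```python
# A minimal sketch of the local update (36)-(40) at one quadrature point; flux_from_chi
# transcribes equation (16). Names and tolerances are assumptions.
import numpy as np
from scipy.optimize import brentq

def flux_from_chi(chi, H, Y, n):
    num = n * (H * chi) ** (1.0 + 1.0 / n) * ((n + 1.0) * H * chi + (n + 2.0) * Y)
    return num / ((n + 1.0) * (2.0 * n + 1.0) * (chi + Y / H) ** 2)

def local_q_update(mu_k, grad_psi_k1, H, Y, n, r):
    """q^{k+1} at one point, from (37)-(39)."""
    m_vec = mu_k + grad_psi_k1
    m = np.linalg.norm(m_vec)
    if Y >= H * m:                                # unyielded: limiting pressure gradient
        return np.zeros_like(m_vec)
    rhs = m - Y / H
    f = lambda chi: chi + r * flux_from_chi(chi, H, Y, n) - rhs    # equation (38)
    chi = brentq(f, 0.0, rhs)                     # f(0) < 0 <= f(rhs): bracketed root
    return flux_from_chi(chi, H, Y, n) * m_vec / m                 # equation (39)

def multiplier_update(mu_k, grad_psi_k1, q_k1, rho):
    """Projection step (40)."""
    return mu_k + rho * (grad_psi_k1 - q_k1)
```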
### Discretization overview
For the implementation we adopt a finite element formulation to discretize the equations. We use a piecewise quadratic continuous element (P2) for the stream function, \(\psi\), and a piecewise linear discontinuous element (P1-DC) for the other fields, i.e. \(\mathbf{q},\ \mu,\ c\), and \(H\). The sequential iteration of the solution of equations (34), (36) & (40) proceeds until
\[||\tilde{\psi}^{n+1}-\tilde{\psi}^{n}||<tol=10^{-4}Q.\]
Note that including \(Q\) preserves the accuracy as the flow stops. This usually takes a few hundred iterations to converge. This is repeated for every time advance.
We use the dual-P1-DC formulation [38] for the concentration equation. This takes the following form:
\[\int_{\Omega}\left(\frac{H(c^{n+1}-c^{n})}{\delta t}+\bar{\mathbf{u}}\cdot\nabla c\right)\omega\ d\Omega+\int_{E}\left(0.5(|\mathbf{n}\cdot\bar{\mathbf{u}}|-\mathbf{n}\cdot\bar{\mathbf{u}})\right)[c]\omega\ ds=0\ \ \ \ \forall\omega, \tag{41}\]
where \(E\) is the set of internal edges, and \([c]\) denotes the jump of \(c\) across an edge.
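For readers who prefer a structured-grid picture, the sketch below gives a first-order finite-volume analogue of the concentration advance (18), with upwinded face values; this is a simplified stand-in and not the dual-P1-DC scheme actually used. The face fluxes of \(H\bar{u}\), \(H\bar{v}\) are assumed given, and an explicit, CFL-limited time step is assumed.

```python
# A simplified structured-grid analogue (not the dual-P1-DC scheme used here) of the
# concentration advance (18): first-order upwinding of the conserved variable H*c, with
# face fluxes Fx = H*u and Fy = H*v assumed given on cell faces.
import numpy as np

def advance_concentration(c, H, Fx, Fy, dt, dx, dy, c_in=1.0):
    """c, H: (nx, ny) cell values; Fx: (nx+1, ny) and Fy: (nx, ny+1) face fluxes."""
    c_pad = np.pad(c, 1, mode="edge")            # zero-gradient (outflow) ghost cells
    c_pad[0, :] = c_in                           # inflow value imposed on the left boundary
    cx = np.where(Fx > 0.0, c_pad[:-1, 1:-1], c_pad[1:, 1:-1])   # upwind values, x-faces
    cy = np.where(Fy > 0.0, c_pad[1:-1, :-1], c_pad[1:-1, 1:])   # upwind values, y-faces
    div = (Fx[1:, :] * cx[1:, :] - Fx[:-1, :] * cx[:-1, :]) / dx \
        + (Fy[:, 1:] * cy[:, 1:] - Fy[:, :-1] * cy[:, :-1]) / dy
    return c - dt * div / H                      # d/dt (H c) = -div, with H fixed in time
```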
### Benchmark flow problems
To benchmark, we investigate two single-phase flow problems. Firstly, we model the flow of a viscoplastic fluid, with \(Y=0.5\) and \(n=1\), along a Hele-Shaw cell with a domain \(\Omega=(x,y)\in(0,1)\times(-0.5,0.5)\) under a constant pressure drop equal to one. The Hele-Shaw cell has a thickness \(H(y)\), periodically varying in \(y\). We adopt a sinusoidal variation as:
\[H(y)=\frac{\sin(7\pi y)+1}{2}. \tag{42}\]
The fluid flows in one direction only, along the Hele-Shaw cell. The flow velocity at each \(y\) can be determined via the solution of a single nonlinear algebraic equation, i.e. equation (16). Figure 7 shows the velocity magnitude in this domain. We compared the results with the analytical solution, computed by evaluating (14) for a unit pressure gradient. Figure 8 shows a comparison of the analytical solution (solid black line) and the numerical results (red dots).
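The analytical profile of Fig. 8 can be generated directly from (14). A short sketch is given below, assuming \(\kappa=1\) for the single-phase slurry and taking the gap-averaged speed as \(|\nabla\psi|/H\).

```python
# A minimal sketch of the analytical benchmark of Fig. 8: evaluate (14) for a unit
# pressure gradient with the sinusoidal gap (42); kappa = 1 is assumed.
import numpy as np

def areal_flux(gradp, H, Y, n, kappa=1.0):
    """Equation (14): areal flow rate |grad psi| for a given |grad p|."""
    tau_w = H * gradp
    excess = np.maximum(tau_w - Y, 0.0)                   # (.)_+ : zero flow below yield
    num = n * excess ** (1.0 + 1.0 / n) * ((n + 1.0) * tau_w + n * Y)
    return num / ((n + 1.0) * (2.0 * n + 1.0) * kappa ** 2 * gradp ** 2)

y = np.linspace(-0.5, 0.5, 401)
H = 0.5 * (np.sin(7.0 * np.pi * y) + 1.0)                 # gap profile (42)
q = areal_flux(1.0, H, Y=0.5, n=1.0)                      # unit pressure gradient
with np.errstate(divide="ignore", invalid="ignore"):
    u_mean = np.where(H > 0.0, q / H, 0.0)                # gap-averaged speed (zero as H -> 0)
```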
The second problem considered is a single-phase flow through a randomized \(H\). Figure 9(b) shows the variation of \(H\) and Fig. 9(a) shows the velocity magnitude in the domain, corresponding to a fixed unit pressure drop between inflow and outflow. The white dashed lines separate the unyielded areas from the flow domain. The black lines represent the streamlines. As can be seen, the flow finds a preferential path through areas with higher \(H\). The domain is similar to Fig. 4b), i.e. \(\Omega=(x,y)\in(0,0.5)\times(-0.5,0.5)\), and a constant pressure drop equal to one is applied between the injection hole and the outer boundaries.
Figure 7: The velocity magnitude for the flow of a viscoplastic fluid, \(Y=0.5\) and \(n=1\), under a constant pressure drop equal to one, between the entrance (left-hand side) and the outflow (right-hand side) boundaries. The domain is similar to Fig. 4a), i.e. \(\Omega=(x,y)\in(0,1)\times(-0.5,0.5)\).
## 5 Results: invasion problems
We now turn to invasion problems. Our simulations are run until a final _stoppage time_, at which the \(\ell^{2}\)-norm of the difference of concentration between two consecutive time steps falls below a set tolerance:
\[||c^{n+1}-c^{n}||<10^{-4}.\]
In reality the pumping operation continues until the pump pressure rises sharply, at which point the flow rate is reduced in order to prevent any fracturing (called a low-pressure squeeze).
### Planar invasion
We first look at a benchmark planar invasion problem where a viscoplastic fluid, \(Y=0.5\) and \(n=1\), displaces a Newtonian fluid in the same corrugated geometrical setup as the example of Fig. 7; see Fig. 10. We expect the viscoplastic fluid to follow preferential pathways where \(H\) is largest. This is indeed the case and the main
Figure 8: Comparison of velocity magnitude from our numerical results (red dots) versus the analytical solution (black line). This is the velocity profile at the outflow shown in Figure 7.
Figure 9: Example single-phase flow through a perforation hole: (a) Logarithm of the velocity magnitude computed in the randomized geometry. The white dashed lines separate the yielded and un-yielded zones, and the black lines represent the streamlines. Parameters are \(Y=0.5\) and \(n=1\); (b) Randomized \(H\) for \(\Omega=(x,y)\in(0,0.5)\times(-0.5,0.5)\).
observation is that the large disparity in velocity leads to some dispersion of the intermediate values of \(c\). Secondly, due to the relatively large \(Y\) value, a part of the channel remains immobile where \(H\) is small: the pressure gradient is \(\approx-1\) and therefore, when \(H|\nabla p|\approx H<Y\), there should be zero flow.
We next investigate planar invasion in a randomized geometry, shown at four successive times during the run in Fig. 11. By looking at the white streamlines, it can be seen that the preferential paths may change as the invasion advances. The yield stress of the displacing fluid stops the flow locally wherever the pressure gradient is too small. In addition, as the flow progresses, a larger portion of any pathway becomes filled with cement, so the resistance against flow increases, making the unfilled pathways easier to flow along.
Figure 10: The concentration of the displacing fluid, \(Y=0.5,\ n=1\), in four snapshots, at times equal to \(1/4\), \(2/4\), \(3/4\) and \(4/4\) of the convergence time (\(\approx 213.8\)). The domain is similar to Fig. 4a), i.e. \(\Omega=(x,y)\in(0,1)\times(-0.5,0.5)\), and a constant pressure drop equal to one is applied between entrance and outflow.
### Invasion from a perforation
We now consider radial displacement with the setup of Fig. 4b). The aim is to explore the potential of the computational simulation for calculating different metrics that may be of help in deciding whether the squeeze flow is successful or not. We start with three examples taken from three different random geometries (Figs. 12-14). In each we compute the invasion flow, displacing the Newtonian fluid with a viscoplastic fluid: \(n=1\) and three yield numbers, \(Y=0.5,\ 1,\ 1.5\). In each figure, panel (a) shows the microannulus thickness \(H\). Panels (b)-(d) show the simulation results for the three yield numbers. In each of these, the colourmap shows the concentration at the final stoppage time. The dashed lines mark \(\bar{c}=0.5\) at the stoppage time as well as at 1/3 & 2/3 of the stoppage
Figure 11: The concentration of displacing fluid, \(Y=0.5,\ n=1\), at four consecutive times, \(t=1/4,\ 2/4,\ 3/4\), and \(4/4\) of the convergence time (\(\approx 21.7\)). The white lines show the streamlines. The domain is similar to Fig. 4a), i.e. \(\Omega=(x,y)\in(0,1)\times(-0.5,0.5)\), and a constant pressure drop equal to one is applied between entrance and outflow.
time.
This example shows the attenuation and omission of one of the paths exiting from the lower boundary.
The last example (Fig. 14) reflects the effect of partial blockage of the injection hole. It can be seen that the upper half of the domain remains largely outside the flowing region.
It can be seen that the initial inflow is fast: the pressure drop is concentrated mostly over a thin layer of invading fluid, driving it away from the perforation hole. However, later the invasion slows and is strongly influenced by the geometry. Figure 15 plots the evolution of the filling percentage, i.e. the fraction of the area occupied by \(\bar{c}\geq 0.5\). We consistently see that lower \(Y\) values approach their stoppage time faster and have a higher filling percentage. Partly this is a viscous effect, in that the lower \(Y\) value fluids move faster. It is not clear that the actual stoppage times have practical relevance.
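The filling percentage itself is a simple post-processing quantity; a minimal helper (assuming uniform cell areas) is sketched below.

```python
# A small helper for the filling percentage of Fig. 15: the fraction of the domain
# area occupied by concentration c >= 0.5 (uniform cell areas are assumed here).
import numpy as np

def filling_percentage(c, threshold=0.5):
    return 100.0 * np.mean(np.asarray(c) >= threshold)
```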
For the three microannulus geometries considered we have performed simulations over a wider range of \(Y\) values. We present a panoramic view of the invasion at the final stoppage time in Fig. 16, with columns a, b and c referring to the geometries of Figs. 12, 13 and 14, respectively. To these, we have added four further randomised geometries (columns d-f). The yield numbers for each row are given on the left-hand side:
Figure 12: The front evolution during displacement of a Newtonian fluid by a Bingham fluid: (b) \(Y=0.5\); (c) \(Y=1\); (d) \(Y=1.5\); all flows through the randomized geometry shown in panel (a). The blue, green, and black dashed lines show the displacing front, where \(\bar{c}=0.5\), at 1/3, 2/3 and 1 of the stoppage time.
\(Y=0.5,0.7,0.9,\ldots,1.9\), with each column referring to a different geometry. Operationally, increasing \(Y\) may be interpreted as either using a cement slurry of higher yield stress, or applying a lower pressure drop.
Note that these geometries each have a microannulus gap thickness normalised to 1, so that each row is comparable. Operationally, one has no _a priori_ way of knowing the precise microannulus geometry, and these geometries have been generated following the stochastic model of [18], which is believed to reasonably represent the variation in microannulus thickness. Therefore, the marked differences seen in Fig. 16 when comparing between geometries at the same \(Y\) reflect a significant effect of heterogeneity and _luck_, in terms of the position of the perforation hole relative to the smallest gaps. For example, in geometry b for larger \(Y\) it is clearly hard to penetrate much beyond the perforation vicinity, where Fig. 13a) shows that \(H(x,y)\) is very small.
In terms of metrics that might be useful in representing the effectiveness of the squeeze job, we first make comparisons via three normalised parameters: \(R_{n,min}\), \(R_{n,max}\) and \(R_{n,eq}\). \(R_{n,min}\) and \(R_{n,max}\) are the minimum and maximum values of the filling radius. For the filling radius we adopt a polar coordinate system \((r,\theta)\) fixed at the centre of the perforation hole. We take the length of the line connecting the filling
Figure 13: The front evolution during displacement of a Newtonian fluid by a Bingham fluid: (b) \(Y=0.5\); (c) \(Y=1\); (d) \(Y=1.5\); all flows through the randomized geometry shown in panel (a). The blue, green, and black dashed lines show the displacing front, where \(\bar{c}=0.5\), at 1/3, 2/3 and 1 of the stoppage time.
front to the perforation hole, and divide by the length of the line (at the same \(\theta\)) that intersects the border of the computational domain, i.e. this length has a value \(0.5\leq R\leq 0.5\sqrt{2}\). The radius \(R_{n,eq}\) is the equivalent radius, computed as \(\sqrt{\mbox{filled area}/\pi}\), divided by the maximum value \(0.5/\sqrt{\pi}\).
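A sketch of how these radii can be post-processed from the concentration field is given below. The concentration is assumed to be available through an interpolant `c_interp(x, y)`, and the sampling resolutions are arbitrary choices.

```python
# A minimal post-processing sketch of the normalised radii; c_interp(x, y) is an assumed
# interpolant of the concentration, and the sampling resolutions are illustrative.
import numpy as np

def border_distance(theta):
    """Distance from the hole at (0,0) to the border of the half-domain
    x in [0, 1/2], y in [-1/2, 1/2], along direction theta in (-pi/2, pi/2)."""
    dx, dy = np.cos(theta), np.sin(theta)
    cands = [0.5 / dx]                       # x = 1/2 side (dx > 0 in this range)
    if dy != 0.0:
        cands.append(0.5 / abs(dy))          # y = +/- 1/2 sides
    return min(cands)

def normalised_radii(c_interp, r_hole=1.0 / 30.0, ntheta=360, nr=400):
    ratios = []
    for theta in np.linspace(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3, ntheta):
        Rb = border_distance(theta)
        rs = np.linspace(r_hole, Rb, nr)
        cvals = np.array([c_interp(r * np.cos(theta), r * np.sin(theta)) for r in rs])
        beyond = np.where(cvals < 0.5)[0]    # first sample beyond the c = 0.5 front
        r_front = Rb if beyond.size == 0 else rs[beyond[0]]
        ratios.append(r_front / Rb)
    return min(ratios), max(ratios)          # R_{n,min}, R_{n,max}

def equivalent_radius(filled_area):
    """R_{n,eq}: equivalent radius of the filled area, normalised by 0.5/sqrt(pi)."""
    return np.sqrt(filled_area / np.pi) / (0.5 / np.sqrt(np.pi))
```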
These radii are shown in Fig. 17, plotted against \(Y\) for the three sample geometries. The trends are intuitive: decreasing with \(Y\) in all cases. In terms of practicality, with a view to the wide variability we have observed, one could envisage a probabilistic approach where, e.g. given assumed characteristics of the microannulus, one would like to guarantee, say, \(R_{n,min}>0.25\) (or its dimensional equivalent) with 95% confidence, for given fluid properties. The normalization used is mainly for comparison but also serves to map to a value of 1 where the invading fluid has reached the boundary of the computational domain. With reference to Fig. 2, operationally there would be a stencil of perforation holes, and \(R_{n,max}\geq 1\) could be interpreted as the limit at which the invasion from one perforation meets that from another. To apply this more practically, one would need to fit the computational domain to the perforation stencil. Again, to apply this metric in a probabilistic way would require extensive sampling of microannulus geometries.
Figure 14: The front evolution during displacement of a Newtonian fluid by a Bingham fluid: (b) \(Y=0.5\); (c) \(Y=1\); (d) \(Y=1.5\); all flows through the randomized geometry shown in panel (a). The blue, green, and black dashed lines show the displacing front, where \(\bar{c}=0.5\), at 1/3, 2/3 and 1 of the stoppage time.
The variable \(R_{n,eq}\) is clearly a measure of the filled area. If we consider the overall objective of the squeeze cementing operation, to restore the well integrity, we might instead look at the volume fraction of the microannulus that remains unfilled by cement:
\[\bar{H}_{1,void}=\frac{\int_{\Omega}H(1-c)\ dA}{\int_{\Omega}H\ dA},\]
i.e. this represents the volume remaining for the gas to flow through. A variation on this theme would be to compute \(\bar{H}_{3,void}\):
\[\bar{H}_{3,void}=\frac{\int_{\Omega}H^{3}(1-c)\ dA}{\int_{\Omega}H^{3}\ dA}.\]
The point of the cubic power is that the leakage flow rates are generally low and non-inertial, hence proportional to \(H^{3}\). The metric \(\bar{H}_{3,void}\) indicates the fraction of large leakage pathways remaining open, i.e. we might think of interpreting \(\bar{H}_{3,void}\) as an indicator of leakage reduction. These metrics are plotted in Fig. 18.
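Both void fractions are straightforward to evaluate from the cell values of \(c\) and \(H\); a minimal sketch (assuming uniform cell areas, so that the areas cancel) follows.

```python
# A minimal sketch of the void-fraction metrics: the unfilled volume fraction H_{1,void}
# and its cube-weighted variant H_{3,void}; uniform cell areas are assumed.
import numpy as np

def void_fractions(c, H):
    c, H = np.asarray(c), np.asarray(H)
    h1_void = np.sum(H * (1.0 - c)) / np.sum(H)
    h3_void = np.sum(H ** 3 * (1.0 - c)) / np.sum(H ** 3)
    return h1_void, h3_void
```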
Lastly, one might also attempt to supplement the above calculated metrics with a simpler "well integrity score" (WIS). Again referring back to the perforation stencil of Fig. 2, the main point of the squeeze operation might be to achieve integrity between adjacent perforation holes. Instead of computing a (very large) multi-perforation flow, one could interpret the single hole simulations in this context. For example, \(\text{WIS}=0\) if the edge of the domain is not reached; \(\text{WIS}=k\) if the cement reaches \(k\) edges of the domain. Evidently, a large WIS would suggest that either the cement meets between the holes or, at the least, that the pathway the gas must follow through the perforated/squeezed section of well has become more tortuous and constrained. Table 1 shows the WIS values computed from Fig. 16.
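A simple way to compute the WIS from the final concentration field is sketched below, assuming a cell array with the first index along \(x\) and the second along \(y\); the \(x=0\) symmetry line is not counted as an edge.

```python
# A simple WIS computation: count the outer edges of the half-domain that the cement
# (c >= 0.5) has reached; the array layout is an assumption.
import numpy as np

def well_integrity_score(c, threshold=0.5):
    c = np.asarray(c)
    edges = [c[-1, :], c[:, 0], c[:, -1]]    # x = 1/2, y = -1/2 and y = +1/2 edges
    return sum(int(np.any(edge >= threshold)) for edge in edges)
```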
## 6 Summary
In this paper we have introduced a type of invasion problem that has relevance to the remediation of oil and gas well leakage. At its heart, this involves the flow of a yield stress fluid along an uneven channel, typically displacing a less viscous preflush, i.e. water. Apart from the remediation application, a similar
Figure 15: The filling percentage by a Bingham fluid, shown by blue, red and yellow lines representing \(Y=0.5\), \(Y=1\), and \(Y=1.5\), respectively. Figures a), b), c) show the filling % in the randomized geometries of Figs. 12, 13 and 14, respectively.
Figure 16: Columns a, b and c refer to invasion problems computed in the randomized geometries of Figs. 12, 13 and 14, respectively. Four other randomized geometries are considered in columns d-f. Each row denotes a different \(Y\) value, as indicated.
process is undertaken for sealing around wells close to CCS reservoirs, and there is also a wide variety of similar flows that occur in grouting. As long as the microannulus thickness varies slowly with distance, a Hele-Shaw type modelling approach is appropriate. It is perhaps ironic that the same approach is used for modelling the initial cementing flow as is used here for repairing the defective cement seal. The flows here are however simpler, in that the positive viscosity ratio reduces dispersion and the smaller gap size renders
Figure 17: Different invasion radii computed. Figures a), b) and c) refer to invasion problems computed in the randomized geometries of Figs. 12, 13 and 14, respectively. The horizontal axis denotes \(Y\).
Figure 18: Different measures of unfilled gap width computed. Figures a), b) and c) refer to invasion problems computed in the randomized geometries of Figs. 12, 13 and 14, respectively. The horizontal axis denotes \(Y\).
buoyancy effects unimportant.
We have developed a displacement flow code that has been used to study two specific invasion geometries: planar and radial invasion. In the context of squeeze cementing, it might be asked where the planar invasion is relevant. With reference to Fig. 2, the perforation gun pattern generally distributes the shots azimuthally but the axial spacing can be relatively tight (e.g. 4 shots per foot), although there must be intact metal between perforations. Therefore, having a long row of perforation holes in a helical arrangement is common. If the cement around each perforation is damaged and washed out, the result is a connected line of invasion holes, analogous to the planar front studied.
Nevertheless, our main focus has been on the radial invasion. Here we have taken a number of randomized microannulus geometries and studied the effects of the invasion parameters. In dimensionless terms, the key parameter is \(Y\), denoting the effect of the fluid yield stress divided by the imposed pressure drop. For each geometry one can say that the results are quite intuitive. As \(Y\) increases the penetration is reduced and the repair (filling) of the microannulus is worse. Eventually, for large enough \(Y\), the cement slurry stops before it attains the boundary of the computational domain. Perhaps less intuitive is the degree of variability of the penetration behaviour with the specific microannulus geometry, captured visually well in Fig. 16, i.e. each microannulus here has a thickness normalised to \(\bar{H}=1\) and each is constructed using the same sampling process from [18]. It is not hard to imagine that this same variability can account for the observed unreliability of the squeeze cementing operation.
We have post-processed our results to give practical and meaningful metrics that could be used as the basis of a probabilistic risk-based design process, e.g. being able to specify that the cement has penetrated a minimum distance \(\hat{R}_{min}\) around each perforation hole, with 95% confidence. Further post-processing has targeted metrics such as \(\bar{H}_{3,void}\) that are relevant to the leakage flow rate. These are mainly examples, and a more refined analysis might try either to compute a full pattern of perforation holes (Fig. 2), or to scale and orient the rectangular domain to represent a helical strip of microannulus, e.g. with periodic boundary conditions.
One concern with pursuing a risk-based procedure, as above, is that the computations we have carried out are expensive. The augmented Lagrangian method outlined and adopted is reliable in converging and in effectively resolving those parts of the annulus that are stationary, but is slow to converge. Here that procedure is repeated on each timestep. In a Monte-Carlo approach one would need many hundreds of microannulus geometries in order to generate suitable distributions. Therefore, a faster computational method is needed in order to develop this practically. One approach might be to directly calculate the final stopping distribution of the fluids, without recourse to the actual invasion transient.
## Acknowledgements
The authors gratefully acknowledge funding from the project: Plug and Abandon Strategies for Canada's Oil and Gas Wells, jointly supported by the AUPRF Program of PTAC (grant number 17-WARI-02) and
| \(Y\) | a | b | c |
| :---: | :---: | :---: | :---: |
| 0.5 | 3 | 2 | 3 |
| 0.7 | 3 | 2 | 1 |
| 0.9 | 3 | 2 | 1 |
| 1.0 | 3 | 2 | 1 |
| 1.1 | 2 | 0 | 1 |
| 1.3 | 1 | 0 | 1 |
| 1.5 | 0 | 0 | 0 |
| 1.7 | 0 | 0 | 0 |
| 1.9 | 0 | 0 | 0 |

Table 1: WIS index for different \(Y\) numbers (first column). Columns a, b and c refer to invasion problems computed in the randomized geometries of Figs. 12, 13 and 14, respectively.
by the Collaborative Research and Development Program of NSERC (grant number CRDPJ 516022-17). We also acknowledge the support from NSERC via scholarship number PGSD3 519200-2018 (ET).
|
2305.03911 | Exploring the link between coffee matrix microstructure and flow
properties using combined X-ray microtomography and smoothed particle
hydrodynamics simulations | Coffee extraction involves many complex physical and transport processes
extremely difficult to model. Among the many factors that will affect the final
quality of coffee, the microstructure of the coffee matrix is one of the most
critical ones. In this article, we use the X-ray micro-computed tomography (microCT) technique
to capture the microscopic details of coffee matrices at particle-level and
perform fluid dynamics simulation based on the smoothed particle hydrodynamics
method (SPH) with the 3D reconstructured data. Information like flow
permeability and tortuosity of the matrices can be therefore obtained from our
simulation. We found that inertial effects can be quite significant at the
normal pressure gradient conditions typical for espresso brewing, and can
provide an explanation for the inconsistency of permeability measurements seen
in the literature. Several types of coffee powder are further examined,
revealing their distinct microscopic details and resulting flow features. By
comparing the microCT images of pre- and post-extraction coffee matrices, it is
found that a decreasing porosity profile (from the bottom-outlet to the
top-inlet) always develops after extraction. This counterintuitive phenomenon
can be explained using a pressure-dependent erosion model proposed in our prior
work. Our results show that microCT scan can provide useful microscopic details
for coffee extraction modelling and establish the basis for a data-driven
numerical framework to explore the link between coffee powder microstructure
and extraction dynamics. The latter is the prerequisite to study the time
evolution of both volatile and non-volatile organic compounds and then the
flavour profile of coffee brews. | Chaojie Mo, Richard Johnston, Luciano Navarini, Furio Suggi Liverani, Marco Ellero | 2023-05-06T03:18:41Z | http://arxiv.org/abs/2305.03911v2 | Exploring the link between coffee matrix microstructure and flow properties using combined X-ray microtomography and smoothed particle hydrodynamics simulations
###### Abstract
Coffee extraction involves many complex physical and transport processes extremely difficult to model. Among the many factors that will affect the final quality of coffee, the microstructure of the coffee matrix is one of the most critical ones. In this article, we use X-ray micro-computed (microCT) technique to capture the microscopic details of coffee matrices at particle-level and perform fluid dynamics simulation based on the smoothed particle hydrodynamics method (SPH) with the 3D reconstructed data. Information like flow permeability and tortuosity of the matrices can be therefore obtained from our simulation. We found that inertial effects can be quite significant at the normal pressure gradient conditions typical for espresso brewing, and can provide an explanation for the inconsistency of permeability measurements seen in the literature. Several types of coffee powder are further examined, revealing their distinct microscopic details and resulting flow features. By comparing the microCT images of pre- and post-extraction coffee matrices, it is found that a decreasing porosity profile (from the bottom-outlet to the top-inlet) always develops after extraction. This counterintuitive phenomenon can be explained using a pressure-dependent erosion model proposed in our prior work. Our results show that microCT scan can provide useful microscopic details for coffee extraction modelling and establish the basis for a data-driven numerical framework to explore the link between coffee powder microstructure and extraction dynamics. The latter is the prerequisite to study the time evolution of both volatile and non-volatile organic compounds and then the flavour profile of coffee brews.
## 1 Introduction
A roasted and ground coffee matrix is a highly complex porous medium characterized by multiscale features. A typical coffee matrix used for double-shot espresso brew is about 15 ml in volume and consists of millions of coffee particles. These coffee particles are produced by grinding roasted coffee beans, and have an approximately bimodal size distribution [1]. The first peak of the distribution profile is always at around \(30\sim 40\)\(\mu\)m, representing the so-called "fines", i.e., inner walls cellular fragments1. The second peak represents the so-called "coarses" and their mean size depends on the grinder, with values ranging between \(200\sim 1000\)\(\mu\)m. Moreover, the coffee particles are not entirely solid but porous material themselves, characterized by many cell-pockets of dimensions \(30\sim 60\)\(\mu\)m inside the particles [1]. Finally, the cell-walls are also porous at nanoscale level.
Footnote 1: To be very rigorous, it is necessary to underline that this first peak is preceded by a small submicron fraction.
During coffee extraction, as water flows through this complex porous structure, many chemical compounds including |
2301.00737 | Rotational Abstractions for Verification of Quantum Fourier Transform
Circuits | With the race to build large-scale quantum computers and efforts to exploit
quantum algorithms for efficient problem solving in science and engineering
disciplines, the requirement to have efficient and scalable verification
methods are of vital importance. We propose a novel formal verification method
that is targeted at Quantum Fourier Transform (QFT) circuits. QFT is a
fundamental quantum algorithm that forms the basis of many quantum computing
applications. The verification method employs abstractions of quantum gates
used in QFT that leads to a reduction of the verification problem from Hilbert
space to the quantifier free logic of bit-vectors. Very efficient decision
procedures are available to reason about bit-vectors. Therefore, our method is
able to scale up to the verification of QFT circuits with 10,000 qubits and 50
million quantum gates, providing a meteoric advance in the size of QFT circuits
thus far verified using formal verification methods. | Arun Govindankutty, Sudarshan K. Srinivasan, Nimish Mathure | 2023-01-02T16:13:39Z | http://arxiv.org/abs/2301.00737v1 | # Rotational Abstractions for Verification of Quantum Fourier Transform Circuits
1st Arun Govindankutty _Department of Electrical and Computer Engineering_
_North Dakota State University_
Fargo, USA
[email protected]
2nd Sudarshan K. Srinivasan _Department of Electrical and Computer Engineering_
_North Dakota State University_
Fargo, USA
[email protected]
3rd Nimish Mathure _Department of Electrical and Computer Engineering_
_North Dakota State University_
Fargo, USA
[email protected]
###### Abstract
With the race to build large-scale quantum computers and efforts to exploit quantum algorithms for efficient problem solving in science and engineering disciplines, the requirement to have efficient and scalable verification methods are of vital importance. We propose a novel formal verification method that is targeted at Quantum Fourier Transform (QFT) circuits. QFT is a fundamental quantum algorithm that forms the basis of many quantum computing applications. The verification method employs abstractions of quantum gates used in QFT that leads to a reduction of the verification problem from Hilbert space to the quantifier free logic of bit-vectors. Very efficient decision procedures are available to reason about bit-vectors. Therefore, our method is able to scale up to the verification of QFT circuits with 10,000 qubits and 50 million quantum gates, providing a meteoric advance in the size of QFT circuits thus far verified using formal verification methods.
Formal verification, Quantum algorithms, Quantum computing, Quantum Fourier transform, Quantum circuit verification.
## I Introduction
The race to build large-scale quantum computers with 1,000 qubits and beyond is in full swing [1][2]. The IBM Condor quantum computer with 1,000 qubits is expected to be released in 2023 [3]. After Condor, IBM plans to use chip-to-chip couplers to build even larger quantum computing systems [4], with a goal of building a system with 1 million qubits [5]. Google's road map is to build a quantum computer with 1 million qubits as well in the near future [6]. There are numerous other quantum computers being developed by corporations such as Xanadu, Rigetti, IonQ, and D-Wave, to name a few. The development of cryogenic control circuits needed for quantum computing has also accelerated, as demonstrated by Intel's Horse Ridge chip [7], which realizes quantum computing and communication applications [8].
The Quantum Algorithm Zoo website tracks algorithms in this domain and currently lists 430 citations of various Quantum algorithms [9].
The 80/20 design rule is well known in computing, i.e., 20% of the design cycle time is spent on the actual design, while 80% is spent on validation and verification. Without verification technologies that can scale, the useful deployment of these large-scale quantum systems will be significantly hampered. It is therefore imperative to develop verification methods for quantum circuits, which is the focus of this work. Formal verification has become a standard in the semiconductor industry with its ability to provide correctness guarantees and flag hard-to-find corner-case bugs. There are various formal verification methods proposed for quantum circuits [10].
However, for example, the largest Quantum Fourier Transform (QFT) circuit verified as reported in literature has only 31 qubits [11]. Scalable verification methods are thus the need of the hour.
_Contributions:_ One of the approaches to achieve scalability in formal verification is to develop domain-specific methods. In this work, we target one of the fundamental quantum algorithms, the Quantum Fourier Transform (QFT). In computing and engineering, transformations play a vital role in problem solving and analysis. Quantum computing uses QFT to tackle various problems. QFT is an integral part of numerous quantum algorithms including Shor's factoring algorithm, quantum phase estimation algorithm, and computing discrete logarithm to name a few [12][13]. The real-world applications where QFT is employed include portfolio optimization in computational finance [14], Monte Carlo pricing of financial derivatives [15], quantum meteorology for building interferometers [16], materials examination and analysis [17], analysis of image data [18] in medical applications, and risk analysis [19] among others.
_We have developed a formal verification method that can be
used to efficiently verify Quantum Fourier Transform (QFT) circuits for up to 10,000 qubits and 50 million gates._ Our specific contributions are as follows:
1. Abstractions of the Hadamard (H) gate and the control rotation gate (\(R_{n}\)) that exploits the rotational impact of these gates on the incoming qubit.
2. A correctness framework that exploits these abstractions and allows the verification problem to be reduced from Hilbert space (complex vector space) to the quantifier free logic of bit-vectors (QF_BV).
3. Theorems with proofs to show that the abstractions are sound, i.e., if the abstract QFT circuit is verified to be correct, then the correctness of the QFT circuit under verification is guaranteed
While we have developed our approach with QFT as the target, the key ideas used in the abstractions can be applied to a much larger class of quantum circuits, which is what we plan to do for future work.
The rest of the paper is organized as follows. Section II covers background on quantum circuits and QFT circuits. Section III overviews the related work on formal methods for verification of quantum circuits. Section IV describes the key contributions of the proposed work, including the gate abstractions and the correctness framework. Section V addresses the correctness of the abstractions and the overall approach. Experimental results are provided and discussed in Section VI. Conclusions and future work are outlined in section VII.
## II Background
In this section, we review background on qubits, quantum gates, and QFT circuits. A detailed description of these topics can be found in [12]. Information in the quantum computing domain is represented by qubits. A qubit is the basic unit of information analogous to a bit in classical computing. In general, qubits are represented by a linear combination of ortho-normal (orthogonal and normalized) vectors \(|0\rangle\) and \(|1\rangle\). The vectors are linearly independent i.e., we cannot express one as the linear combination of the other. The independent vectors are shown below.
\[|0\rangle=\begin{bmatrix}1\\ 0\end{bmatrix},and\ \ |1\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\]
The above ortho-normal vectors can be used to represent any vectors in the vector space by using vector addition and scaling (linear combination), and thus they are called the _basis vectors_. A standard representation of a qubit \(|q\rangle\) is shown below where, \(\alpha\) and \(\beta\) are complex numbers such that \(\alpha^{2}+\beta^{2}=1\).
\[|q\rangle=\alpha|0\rangle+\beta|1\rangle\]
Quantum gates are unitary operators that act on qubits and produce a required output. A quantum algorithm is a step by step process that utilizes quantum mechanical properties to solve a particular problem. Quantum algorithms are run on computation models for quantum computing and this work is based on the quantum circuit model, which is the most widely used method [20].
QFT is analogous to the Discrete Fourier Transform (DFT) in the classical domain and efficiently performs the quantum mechanical model's Fourier transform. The QFT operates on the input qubit states (ortho-normal basis vectors \(|0\rangle,.....,|N-1\rangle\)) and transforms them to the corresponding output states. The transformation is shown below [12].
\[|j\rangle\rightarrow\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}e^{2\pi ijk/N}|k\rangle\]
In the above, \(|j\rangle,N,i\), and \(k\) represents the input qubit, the number of QFT points, imaginary number (\(\sqrt{-1}\)), and the iteration variable, respectively. Here \(N=2^{n}\), where \(n\) is the number of qubits in the QFT.
In the transformed domain, this resultant state (transformed \(|j\rangle\)) can be represented as a sum of individual components whose frequencies are integer multiples of \(\frac{2\pi}{N}\). The same equation can be re-organized to obtain the equivalent transformation happening in each qubit independently, which we exploit in this work.
Implementation of QFT as a circuit can be achieved by a series of cascaded Hadamard (H) gates and controlled rotation (\(R_{n}\)) gates. The H gates and R\({}_{n}\) gates are defined below.
\[\text{H}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&e^{\pi i}\end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\]
\[\text{R}_{n}=\begin{bmatrix}1&0\\ 0&e^{2\pi i/2^{n}}\end{bmatrix}\]
The H gate introduces equal superposition of the input basis vectors for the qubit. The \(R_{n}\) gates are responsible for the frequency harmonics. QFT circuits are constructed by first applying the H gate to all qubits. Qubit 1 of a QFT circuit with m qubits should have gates R\({}_{2}\),..., R\({}_{m}\) acting on it, with control inputs qubit 2,..., qubit m taken before the H gate is applied to the control qubits, respectively. Qubit 2 should have gates R\({}_{2}\),..., R\({}_{m-1}\) acting on it with control inputs qubit 3,..., qubit m taken before the H gate is applied, respectively, and so on. Figure 1(a) shows the transformations happening while QFT is performed on a 3 qubit system.
## III Related Work
Formal verification of quantum algorithms and circuits has been an active area of research. In this section, we overview these related works and how they contrast with our approach. The main takeaway is that the approaches have not demonstrated the efficiency and scalability that we have been able to achieve. In this sense, our approach is a meteoric advance in the size of quantum circuits thus far verified.
Yamashita and Markov [22] have proposed an equivalence checking approach for quantum circuits. In equivalence checking, the circuit to be verified is compared with a reference
circuit. There are two prominent contrasts with our approach. The first contrast is related to equivalence checking in general, where a golden (already verified, trusted) circuit is required as the reference circuit. For example, to verify a QFT circuit with 10,000 qubits and 50 million gates, a trusted QFT circuit of the same size is required. Therefore, to enable equivalence checking, methods that can verify the functional correctness of a given circuit are mandatory. This is the gap that we address. Equivalence checking is useful in synthesis optimizations. Our approach is property based and does not require a reference circuit of the same size for verification. If a QFT circuit with 10,000 qubits and 50 million gates satisfies our proposed correctness property, it is guaranteed to be correct. The second contrast is that if they are not able to reduce the problem to a Boolean space, then a hybrid approach is used [23], where the verification problem is solved in the Hilbert space. We use rotational abstractions to reduce the problem fully to a Boolean space, solvers for which are orders of magnitude more efficient and scalable. We also exploit the fact that our approach is domain-specific to QFT circuit verification to enable this. The largest circuits they verified have 5,000 gates and required about 59 seconds. In contrast, we are able to verify circuits with 8,000 gates in 0.04 seconds, 5 million gates in about 60 seconds, and 50 million gates in 2,380 seconds.
Amy [11] use complex path-sums to model quantum gates for verification. They perform reductions on the resulting circuit, which are implemented using rewrite rules. The reductions are performed using the Haskell theorem prover. The rewrite rules are guaranteed to reduce the circuit to a normal form, which is then used to check correctness. They verify a 16-qubit and a 31-qubit QFT, which required 1.250 seconds and 16.020 seconds for circuits without errors, respectively. In contrast, our approach required 0.02 seconds and 0.03 seconds for 16-qubit and 32-qubit QFT circuits, respectively. They employ a dyadic arithmetic technique, the current implementation of which causes an integer overflow for QFT circuits larger than 31 qubits. Therefore, with this current implementation, they are unable to handle QFT circuits larger than 31 qubits. We are able to handle upto 10,000 qubits.
Liu et al. [24] formalize quantum hoare logic in the Isabelle/HOL theorem prover and use it to prove the correctness of Grover's search algorithm for infinite size input. They report that the proof required 5 person months of effort. They do not describe how this proof can be used to verify a given quantum circuit that implements Grover's algorithm. In contrast, our approach is fully automated for verification of any QFT circuit. They have not addressed QFT verification.
Feng et al. [25] have developed a model checking algorithm that can check the Quantum CTL (QCTL) properties on quantum Markov chains. The method is used to check the correctness of the BB84 protocol when n=1, the corresponding circuit for which has 8 qubits and 24 quantum gates. They have not addressed QFT verification either.
## IV Rotational Abstractions
There are three key ideas in developing the abstractions for the Hadamard (H) gate and the controlled rotation (R\({}_{n}\)) gate.
The first key idea is with regard to the basis vectors. If a QFT circuit works correctly when the input qubits are the basis vectors \(|0\rangle\) or \(|1\rangle\), then the circuit is guaranteed to work correctly for any qubit inputs [26]. Therefore, for verification purposes, we only consider the cases where the input qubits are \(|0\rangle\) or \(|1\rangle\).
The second key idea is with regard to quantum gates and is as follows. If the input qubits are limited to basis vectors, then both the H gate and the R\({}_{n}\) gate can be modelled as gates causing rotation on the basis vectors. The H gate has only one input. We call this the control input \(q_{c}\), as shown in Figure 1(b), because if the input is \(|1\rangle\), then the H gate function can be represented as a rotation on \(|1\rangle\). If this control input is \(|0\rangle\), then no rotation is performed. The R\({}_{n}\) gate has two inputs (control and data) and one output; we call the control input \(q_{c}\), the data input \(q_{d}\), and the output \(q_{o}\) (as shown in Figure 1(c)). If \(q_{c}\) is \(|1\rangle\), then R\({}_{n}\) performs a rotation on \(q_{d}\). Otherwise, if \(q_{c}\) is \(|0\rangle\), then no rotation is performed.
Fig. 1: (a) 3-qubit QFT circuit [21]. (b) Abstract Hadamard gate. (c) Abstract rotation gate. (d) 3-qubit QFT abstract circuit representation.
The third key idea is with regard to the amount of rotation performed by the quantum gates on data input qubits and the resulting output qubit states, and is as follows. The H gate induces a \(\pi\) (2\(\pi\)/2) rotation on \(|1\rangle\) and does not rotate \(|0\rangle\). The R\({}_{n}\) gate induces a 2\(\pi\)/2\({}^{n}\) (\(\pi\)/2\({}^{n-1}\)) rotation on \(|1\rangle\) and does not rotate \(|0\rangle\). For example, R\({}_{4}\) induces a rotation of \(\pi\)/8. Thus, the rotations performed by the gates on \(|1\rangle\) are negative powers of 2 with reference to 2\(\pi\).
The QFT circuit structure is such that the control inputs to the quantum gates are always initial qubit states and are used only to make the decision, whether to rotate or not.
Thus, we can abstract the basis vector input values \(|0\rangle\) and \(|1\rangle\) using Boolean values 0 and 1.
The qubits, once transformed by these rotations, are input to the next quantum gate and finally form the output state of the circuit.
If the 2\(\pi i\) term is factored out of the exponent, the final output state of each qubit (after transformation) can be abstractly represented using fractional bit-vectors that essentially capture the amount of rotation on \(|1\rangle\). The fractional bit-vector \(\langle.b_{1}b_{2}b_{3}\rangle\) corresponds to rotation value 2\(\pi\)\(\ast\)\((b_{1}\)\(\ast\)\(2^{-1}+b_{2}\)\(\ast\)\(2^{-2}+b_{3}\)\(\ast\)\(2^{-3}\)). For example, the bit-vector \(\langle.101\rangle\) corresponds to rotation value of 2\(\pi\)(1/2+0+1/8). Abstractions of the H gate and the R\({}_{n}\) gate can be obtained by defining their rotational impact on the fractional bit-vectors, and an abstracted QFT circuit can be obtained by using these abstracted gates. In a QFT circuit with \(m\) qubits, the smallest amount of rotation will be 2\(\pi\)/2\({}^{m}\). Therefore, the fractional bit-vectors used to represent qubits in the abstracted QFT circuit will have to have \(m\) bits.
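As a concrete illustration of this encoding, the short Python sketch below (illustrative helper code written for this exposition, not part of the verification tool) stores an \(m\)-bit fractional bit-vector as an integer and shows that composing rotations corresponds to fixed-point addition modulo 2\({}^{m}\).

```python
# Toy encoding of fractional bit-vectors as integers: <.b1 b2 ... bm> is stored as
# b1*2^(m-1) + ... + bm, so a rotation of 2*pi/2^n corresponds to the integer 1 << (m - n).
m = 3

def rotation_of(bits):
    """Fraction of a full 2*pi turn encoded by an m-bit fractional bit-vector."""
    return sum(((bits >> (m - 1 - i)) & 1) * 2.0 ** (-(i + 1)) for i in range(m))

r = 0b101                          # <.101>
print(rotation_of(r))              # 0.625 = 1/2 + 1/8, i.e. a rotation of 2*pi*(1/2 + 1/8)

# Composing rotations is addition modulo 2^m: a full 2*pi turn wraps around to zero.
r2 = (r + 0b100) % 2 ** m          # add a pi rotation, i.e. <.100>
print(bin(r2), rotation_of(r2))    # 0b1 0.125
```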
The abstract H gate is defined below and has one input qubit \(q_{c}\), which is Boolean type. Output qubit \(q_{o}\) is a bit-vector of size equal to \(m\), the number of qubits.
**Definition 1**.: _(Abstract Hadamard Gate) If \(q_{c}\)=1, then \(q_{o}\)\(\leftarrow\)\(\langle.100...0\rangle^{m}\), else \(q_{o}\)\(\leftarrow\)\(\langle.000...0\rangle^{m}\)._
The abstract R\({}_{n}\) gate is defined below and has two qubit inputs \(q_{c}\) and \(q_{d}\). The control input \(q_{c}\) is type Boolean, the data input \(q_{d}\) and the output qubit \(q_{o}\) are both fractional bit-vectors of size \(m\), the number of qubits.
**Definition 2**.: _(Abstract R\({}_{n}\) Gate) If \(q_{c}\)=1, then \(q_{o}\leftarrow q_{d}+_{m}\langle.00..01_{m-n}0...0\rangle^{m}\), else \(q_{o}\leftarrow q_{d}\)._
In the above, \(+_{m}\) represents fixed-point modulo addition w.r.t. \(m\). The abstracted QFT circuit is obtained by replacing the H gates and R\({}_{n}\) gates of the original circuit with the abstracted gates. Input qubits are declared as type Boolean and all other qubits are declared as type bit-vector of size \(m\). The abstracted QFT circuit with 3 qubits is shown in Figure 1(d). When the abstract H gate is applied, the qubits at the output of the H gates of the QFT circuit in Figure 1(d) will have the following values:
\(q_{1}^{1}\)\(\leftarrow\)\(\langle.b_{1}00\rangle\)
\(q_{2}^{1}\)\(\leftarrow\)\(\langle.b_{2}00\rangle\)
\(q_{3}^{1}\)\(\leftarrow\)\(\langle.b_{3}00\rangle\)
The QFT correctness property is given next. Let QFT-Abs\({}_{i}\)(\(b_{1}\), \(b_{2}\),..., \(b_{m}\)) denote the output state of the \(i^{th}\) qubit of an abstracted version of a QFT circuit, where \(m\) is the number of qubits and \(b_{1}\), \(b_{2}\),..., \(b_{m}\) are Boolean variables.
**Property 1**.: _(QFT Correctness Property) A QFT circuit is functionally correct if, for all \(1\leq i\leq m\), \(i\) is an integer, QFT-Abs\({}_{i}\)(\(b_{1}\), \(b_{2}\),..., \(b_{m}\)) = \(\langle.b_{i}b_{i+1}...b_{m}0...0\rangle^{m}\)._
Based on the correctness property above, for the QFT circuit from Figure 1(a) to be correct, the state of qubits at the output should be as follows:
\(q_{1}^{3}\)\(=\)\(\langle.b_{1}b_{2}b_{3}\rangle\)
\(q_{2}^{3}\)\(=\)\(\langle.b_{2}b_{3}0\rangle\)
\(q_{3}^{3}\)\(=\)\(\langle.b_{3}00\rangle\)
The abstracted gates, abstracted QFT circuit, and Property 1 are expressible in the Quantifier-Free logic of Bit Vectors (QF_BV). A number of SMT solvers exist that can very efficiently check properties in this logic. Therefore, verification of a given QFT circuit can be performed by encoding the abstracted circuit and correctness property in this logic (using the SMT-LIB language). An SMT solver will check the property automatically and indicate if the property is satisfied or not. If the property is satisfied, then the QFT circuit is guaranteed to be correct (as will be established in the next section). If the property is not satisfied, the tool will generate a counter example, which can be used to trace the error(s) in the circuit.
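For illustration, such a check can be sketched for the 3-qubit circuit of Figure 1(d) using the z3 Python bindings (this sketch is written for this description and is not the tool's actual SMT-LIB encoding; the helper names H, R and expected are ours):

```python
from z3 import And, Bools, BitVecVal, If, Not, Solver, unsat

m = 3
b1, b2, b3 = Bools("b1 b2 b3")

def H(qc):
    # Abstract Hadamard gate (Definition 1): <.100...0> if the control is 1, else <.000...0>
    return If(qc, BitVecVal(1 << (m - 1), m), BitVecVal(0, m))

def R(n, qc, qd):
    # Abstract R_n gate (Definition 2): add a 2*pi/2^n rotation modulo 2^m if the control is 1
    return If(qc, qd + BitVecVal(1 << (m - n), m), qd)

def expected(*bs):
    # Expected output <.b_i b_{i+1} ... b_m 0...0> from Property 1, as an m-bit value
    acc = BitVecVal(0, m)
    for i, b in enumerate(bs):
        acc = acc + If(b, BitVecVal(1 << (m - 1 - i), m), BitVecVal(0, m))
    return acc

# Abstract 3-qubit QFT circuit of Figure 1(d)
q1 = R(3, b3, R(2, b2, H(b1)))
q2 = R(2, b3, H(b2))
q3 = H(b3)

s = Solver()
s.add(Not(And(q1 == expected(b1, b2, b3),
              q2 == expected(b2, b3),
              q3 == expected(b3))))
if s.check() == unsat:
    print("Property 1 holds for all basis-vector inputs")
else:
    print("counterexample:", s.model())
```

Removing a gate or feeding the wrong control in this sketch would instead make the solver return a satisfying assignment to \(b_{1},b_{2},b_{3}\), i.e., a counterexample.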
## V Abstraction Correctness
In this section, we provide a proof of correctness of our verification approach. The overall approach is that we enumerate through all possible classes of errors in QFT circuits and show how the verification approach will flag each error class. The error classes are depicted in Figure 2. We also refer to bit-vector values as data values.
Fig. 2: QFT circuit showing error scenarios.
**Lemma 1**.: _If a QFT circuit has an error, where an incorrect input is fed to an H gate, verification of the abstracted version of the QFT circuit will either generate a type error or will not satisfy Property 1._
If the input to the abstract H gate is a bit-vector input, this will be flagged as a type error as the H gate expects a Boolean input. If Boolean input qubit \(b_{j}\) is expected whereas \(b_{k}\) is fed for qubit \(q_{j}\), then the LHS of Property 1 for \(q_{j}\) will be \(\langle.b_{k}...\rangle\) and RHS will be \(\langle.b_{j}...\rangle\). Therefore, Property 1 will not be satisfied.
**Lemma 2**.: _If a QFT circuit has an error, where an incorrect input is fed to an \(\text{R}_{n}\) gate, verification of the abstracted version of the QFT circuit will either generate a type error or will not satisfy Property 1._
If a control value is fed to the data input of an \(\text{R}_{n}\) gate or if a data value is fed to the control input of an \(\text{R}_{n}\) gate, a type error will be generated. If \(b_{j}\) is expected whereas \(b_{k}\) is fed for the control input of an \(\text{R}_{n}\) gate acting on qubit \(q_{j}\), then the LHS of Property 1 for \(q_{j}\) will be \(\langle....b_{k}....\rangle\) and RHS will be \(\langle....b_{j}...\rangle\). Therefore, Property 1 will not be satisfied. If an incorrect data value is fed to an \(\text{R}_{n}\) gate, this will result in a missing \(\text{R}_{n}\) gate on a qubit and this case is dealt with subsequently.
The error above is shown in Figure 2. \(\text{R}_{3}\) gate with input \(q_{1}^{2}\) should have \(b_{3}\) as its control input. Instead \(b_{2}\) is erroneously fed as the control input.
**Lemma 3**.: _If a QFT circuit has an error, where an H gate is missing on a qubit or there is more than one H gate acting on a qubit, verification of the abstracted version of the QFT circuit will generate a type error._
In the abstracted version of a QFT circuit, the input of an H gate is a control value and the output is a data value. Thus, if there is more than one H gate acting on a qubit, the H gates after the first one will receive data inputs and this will result in a type error. If there are no H gates acting on a qubit, the subsequent \(\text{R}_{n}\) gates will not get a data value at their data inputs and this will again result in a type error.
An example of a missing H gate error is shown in Figure 2. The H gate on \(q_{2}\) is missing.
**Lemma 4**.: _If a QFT circuit has an error where an incorrect set of \(\text{R}_{n}\) gates are acting on a qubit, i.e., required \(\text{R}_{n}\) gates are missing or additional \(\text{R}_{n}\) gates are present or both, verification of the abstract version of the QFT circuit will not satisfy Property 1._
Qubit 1 of a QFT circuit with \(m\) qubits should have gates \(\text{R}_{2}\),..., \(\text{R}_{m}\) acting on it. Qubit 2 should have gates \(\text{R}_{2}\),..., \(\text{R}_{m-1}\) acting on it and so on. Thus, there is only one \(\text{R}_{n}\) gate of a certain \(n\) value required to act on each qubit. If a required \(\text{R}_{n}\) gate is missing, then its rotational impact on the fractional bit-vector value abstracting the qubit will not be observed in Property 1. If a qubit has additional erroneous \(\text{R}_{n}\) gates acting on it, then the required rotation of the qubit will be incorrect and this will be reflected in the final fractional bit-vector value of the qubit. In both the above cases, Property 1 will not be satisfied. Note that an \(\text{R}_{n}\) gate can be replaced with two \(\text{R}_{n+1}\) gates with the same control inputs, since two rotations of 2\(\pi\)/2\({}^{n+1}\) compose to a single rotation of 2\(\pi\)/2\({}^{n}\). For example, \(\text{R}_{2}\) can be substituted with two \(\text{R}_{3}\) gates. If the total rotational impact of a sequence of \(\text{R}_{n}\) gates is what is expected, even though it does not conform with the \(\text{R}_{n}\) gate sequence described above, Property 1 will be satisfied because the fractional bit-vector abstraction accurately captures the rotations.
An example of an incorrect \(\text{R}_{n}\) gate is shown in Figure 2, where the gate on \(q_{1}^{2}\) should be \(\text{R}_{2}\) instead of \(\text{R}_{3}\).
**Lemma 5**.: _If a QFT circuit has a combination of errors from the error classes described in Lemmas 1-4, verification of the abstracted version of the QFT circuit will generate a type error or will not satisfy Property 1._
As can be seen from Lemmas 1-4, the effect that flags each error class is disjoint, i.e., there is no overlap in these effects for type errors or Property 1. Thus a combination of errors will also be flagged as a type error or will not satisfy Property 1.
**Theorem 1**.: _(QFT-Rotational Abstraction Correctness) If a QFT circuit has an error, verification of the abstracted version of the QFT circuit will generate a type error or will not satisfy Property 1._
A QFT circuit has only two types of gates, the H gate and the \(\text{R}_{n}\) gate. Based on this, there are only four classes of
errors possible: incorrect input to an H gate, incorrect input to an R\({}_{n}\) gate, missing or additional H gates in the circuit, and an incorrect set of R\({}_{n}\) gates acting on a qubit. The fifth case of an erroneous QFT circuit is any combination of the above. From Lemmas 1-5, we see that in all the above cases, verification of the abstracted version of the QFT circuit will generate a type error or will not satisfy Property 1.
## VI Results and Discussions
Table I gives the verification results. The verification benchmarks were generated by varying the number of qubits in the QFT circuit from 16 qubits to 10,000 qubits. The table gives the number of quantum gates in each of the QFT benchmark circuits as well (column 2: Gates). The verification experiments were conducted on an Intel(R) Core(TM) i9-12900K CPU @ 3.2 GHz with 32 GB RAM and Ubuntu 64-bit operating system. The z3 version 4.8.12 SMT solver [27] was used to check Property 1 for all benchmarks.
In the table, "T(s)" indicates verification time in seconds, which is the z3 run time. "M(MB)" gives the z3 run time memory consumption in megabytes. "Correct Circuit" gives the verification statistics for the QFT circuits without errors. For these circuits Property 1 is proved to be satisfied. Property 1 allows each qubit output to be verified independently. Therefore, the verification of all the qubit outputs in the circuit was done in parallel, and the memory and time reported correspond to the worst case.
"Incorrect Gate Error" are circuits with gates errors and is described as follows. The Gate-2 error here indicates that the R\({}_{3}\) gate is incorrectly acting on qubit 1 instead of R\({}_{2}\). The Gate-n error here indicates that the R\({}_{n-1}\) gate is incorrectly acting on qubit 1 instead of R\({}_{n}\). "Incorrect Control Error" are circuits with incorrect control input to an R\({}_{n}\) gate. The Gate-2 error here indicates that the R\({}_{2}\) gate in qubit 1 is incorrectly controlled by qubit 3 instead of qubit 2. The Gate-n error here indicates that the R\({}_{n}\) gate in qubit 1 is incorrectly controlled by qubit n-1 instead of qubit n. For the circuits with errors, verification of Property 1 generates a counterexample. The time and memory reported corresponds to the verification of the first qubit output that caused a counterexample to be generated.
Figures 3 and 4 plot the verification time and memory from Table I versus the number of quantum gates, respectively. In these graphs, both the x-axis and y-axis use a log scale. As can be seen from these graphs, as the number of gates increases, both memory and verification time increase linearly for both correct circuits and circuits with errors. The most complex circuit with 10,000 qubits and 50 million gates is verified in only about 37 minutes. This demonstrates the high efficiency and scalability of our approach. The time taken to verify circuits with errors is less than that of correct circuits. However, there is not the order-of-magnitude reduction that is often observed in formal verification.
Figure 5 shows both verification time and memory as the position of the gate error is moved from qubit 1 to qubit 10,000 on the QFT circuit with 10,000 qubits. The x-scale increases linearly, whereas the y-scale is logarithmic. The graph indicates the variation of time and memory with the vertical location of errors. We see that as the error moves from qubit 1 to qubit 10,000, both time and memory reduce exponentially.
The rotations performed by the QFT gates on basis-vector inputs are negative powers of 2 and can therefore be encoded as fractional bit-vectors, thus reducing the verification obligations from Hilbert space to Boolean space. For future work, our goal is to extend these ideas to other quantum algorithms to advance efficiency and scalability of formal verification so as to cope with the size and complexity of quantum hardware roadmaps of the near future.
|
2304.05091 | Actually Sparse Variational Gaussian Processes | Gaussian processes (GPs) are typically criticised for their unfavourable
scaling in both computational and memory requirements. For large datasets,
sparse GPs reduce these demands by conditioning on a small set of inducing
variables designed to summarise the data. In practice however, for large
datasets requiring many inducing variables, such as low-lengthscale spatial
data, even sparse GPs can become computationally expensive, limited by the
number of inducing variables one can use. In this work, we propose a new class
of inter-domain variational GP, constructed by projecting a GP onto a set of
compactly supported B-spline basis functions. The key benefit of our approach
is that the compact support of the B-spline basis functions admits the use of
sparse linear algebra to significantly speed up matrix operations and
drastically reduce the memory footprint. This allows us to very efficiently
model fast-varying spatial phenomena with tens of thousands of inducing
variables, where previous approaches failed. | Harry Jake Cunningham, Daniel Augusto de Souza, So Takao, Mark van der Wilk, Marc Peter Deisenroth | 2023-04-11T09:38:58Z | http://arxiv.org/abs/2304.05091v1 | # Actually Sparse Variational Gaussian Processes
###### Abstract
Gaussian processes (GPs) are typically criticised for their unfavourable scaling in both computational and memory requirements. For large datasets, sparse GPs reduce these demands by conditioning on a small set of inducing variables designed to summarise the data. In practice however, for large datasets requiring many inducing variables, such as low-lengthscale spatial data, even sparse GPs can become computationally expensive, limited by the number of inducing variables one can use. In this work, we propose a new class of inter-domain variational GP, constructed by projecting a GP onto a set of compactly supported B-spline basis functions. The key benefit of our approach is that the compact support of the B-spline basis functions admits the use of sparse linear algebra to significantly speed up matrix operations and drastically reduce the memory footprint. This allows us to very efficiently model fast-varying spatial phenomena with tens of thousands of inducing variables, where previous approaches failed.
## 1 Introduction
Gaussian processes (GPs) (Rasmussen and Williams, 2006) provide a rich prior over functions. Their non-parametric form, gold-standard uncertainty estimates and robustness to overfitting have made them commonplace in geostatistics (Oliver and Webster, 1990), epidemiology (Bhatt et al., 2017), spatio-temporal modelling (Blangiardo et al., 2013; Wikle et al., 2019), robotics and control (Deisenroth and Rasmussen, 2011) and Bayesian optimisation (Osborne et al., 2009). However, GPs scale infamously as \(\mathcal{O}(N^{3})\) in computational complexity and \(\mathcal{O}(N^{2})\) in memory, where \(N\) is the size of the training dataset, making them unfeasible for use with large datasets. To overcome this limitation, there exist a number of different approximate inference techniques, including sparse approximations (Snelson and Ghahramani, 2006; Quinonero-Candela and Rasmussen, 2005; Titsias, 2009), state-space methods (Hartikainen and Sarkka, 2010; Sarkka et al., 2013; Hamelijnck et al., 2021) and local-expert models (Tresp, 2000a,b; Rasmussen and Ghahramani, 2001; Deisenroth and Ng, 2015; Cohen et al., 2020). In particular, sparse GP approximations have been developed to reduce the cubic complexity of inference by introducing a set of inducing variables. Sparse approaches summarise the training data by a set of \(M\ll N\) pseudo-data, effectively reducing the rank of the covariance matrix. Amongst these methods, variational approximations have proved popular in improving GPs for regression (Titsias, 2009), classification (Hensman et al., 2015), stochastic optimisation (Hensman et al., 2013), inference with non-conjugate likelihoods (Hensman et al., 2015, 2015) and hierarchical non-parametric modelling (Damianou and Lawrence, 2013; Salimbeni and Deisenroth, 2017).
Introduced by Titsias (2009), Sparse Variational Gaussian processes (SVGPs) approximate the true GP posterior with an approximate one, conditioned on a set of \(M\) inducing variables. The approximate posterior is then learnt by minimising the Kullback-Leibler (KL) divergence between the approximate and true posterior, allowing us to learn the variational parameters and hyperparameters jointly via gradient descent. The resulting approximation scales as \(\mathcal{O}(NM^{2}+M^{3})\) in computational complexity and \(\mathcal{O}(NM)\) in memory. However, these low-rank approximations are practically limited to \(\approx 10,000\) inducing points, which can be insufficient for complex datasets where a large number of inducing points are required to cover the input space. This limitation is especially apparent in long time-series or spatial datasets with intrinsically low lengthscales, where traditional low-rank approximations based on small sets of localised pseudo-datapoints fail to capture fast variations in the data (Pleiss et al., 2020; Wu et al., 2022).
Figure 1: Illustration of the sparse matrix structures induced by our proposed method for 1D regression with a Matérn-3/2 kernel. By constructing inter-domain inducing variables \(\mathbf{u}\) as RKHS projections of the GP onto a set of compactly supported B-splines, both the inducing point covariance matrix \(\mathbf{K_{uu}}\) and the covariance matrix between the GP \(f\) and the inducing variables \(\mathbf{K_{uf}}\) become sparse. This admits sparse linear algebra to precompute the sparse matrix product \(\mathbf{K_{uf}}\mathbf{K_{fu}}\), which is used to compute the ELBO.
To alleviate some of these problems, inter-domain GPs (Lazaro-Gredilla and Figueiras-Vidal, 2009; van der Wilk et al., 2020) generalise the idea of inducing variables by transforming the GP to a different domain by means of a linear operator, which admits more expressive features and/or computationally efficient linear algebra. Variational Fourier Features (VFF) (Hensman et al., 2017) constructs inter-domain inducing variables by projecting the GP onto a Fourier basis. This results in inducing variables that span the width of the domain and therefore describe global variations in the data. By the orthogonality of the Fourier basis, the inducing variables are also almost independent, producing computationally efficient block-diagonal covariance matrices. In one dimension, this can be exploited to reduce the computational complexity to \(\mathcal{O}(M^{3})\) after an initial one-off pre-computation of \(\mathcal{O}(NM^{2})\). However, since the Fourier basis functions are global, whilst computationally efficient, they are inefficient at modelling low-lengthscale data. Indeed, VFF typically requires more inducing variables for an equivalent accuracy than standard sparse GP regression for \(d\geq 2\) (Hensman et al., 2017).
Variational Inducing Spherical Harmonics (VISH) by Dutordoir et al. (2020) remedied some of the problems faced by VFF by first projecting the data onto a \(D\)-dimensional unit hypersphere and then using a basis of spherical harmonics as inter-domain inducing features. As the basis functions are orthogonal, VISH reduces the cost of matrix inversion to \(\mathcal{O}(M)\) and the total cost of inference to \(\mathcal{O}(NM^{2})\). However, by projecting data onto the hypersphere and performing sparse GP regression on the transformed space, VISH is unable to use covariance functions which use the Euclidean distance between data points. This makes VISH sub-optimal for naturally Euclidean spatial data.
In this work, we propose a new inter-domain approach that scales GPs to complex datasets that require a very large number of inducing variables. Specifically, we define a new inter-domain approximation by projecting the GP onto a basis of compactly supported B-splines. Due to the local support of the B-spline basis functions, the covariance between inducing variables yields sparse band-diagonal covariance matrices, admitting highly efficient sparse linear algebra at a complexity that scales linearly with the number of inducing variables. In contrast to both VFF and VISH, which use basis functions with global support, our choice of basis also incites sparse structure in the covariance between inducing variables and the GP itself. Our results show that our method is particularly well suited to spatial data with high-frequency variations, which necessitate a large number of inducing variables. By using computationally cheap, locally supported inducing variables, we can cover the domain with many basis functions that are able to successfully capture local variations.
## 2 Background
A Gaussian process is a collection of random variables, any finite number of which is jointly Gaussian distributed. A GP is fully characterised by its mean \(\mu(\cdot)\) and covariance function \(k(\cdot,\cdot)\)(Rasmussen and Williams, 2006). Given a training dataset \(\mathcal{D}=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\) of \(N\) noisy observations \(y_{n}\in\mathbb{R}\) and corresponding inputs \(\mathbf{x}_{n}\in\mathbb{R}^{D}\), and observation model \(y_{n}=f(\mathbf{x}_{n})+\epsilon,\ \epsilon\sim\mathcal{N}(0,\sigma^{2})\), we construct a GP regression problem by placing a zero-mean GP prior on the latent function \(f\sim\mathcal{GP}(0,k(\cdot,\cdot))\). The posterior distribution \(p(f|\mathbf{y})\sim\mathcal{GP}(\mu(\cdot),\Sigma(\cdot,\cdot))\) is a GP with
\[\begin{split}\mu(\cdot)&=\mathbf{k}_{\mathbf{f}}^{T}( \cdot)\mathbf{K}_{\mathbf{y}\mathbf{y}}^{-1}\mathbf{y},\\ \Sigma(\cdot,\cdot)&=k(\cdot,\cdot)-\mathbf{k}_{ \mathbf{f}}^{T}(\cdot)\mathbf{K}_{\mathbf{y}\mathbf{y}}^{-1}\mathbf{k}_{ \mathbf{f}}(\cdot),\end{split} \tag{1}\]
where \(\mathbf{k_{f}}(\cdot)=[k(\mathbf{x}_{n},\cdot)]_{n=1}^{N}\), \(\mathbf{K_{yy}}=\mathbf{K_{ff}}+\sigma^{2}\mathbf{I}\) and \(\mathbf{K_{ff}}=[k(\mathbf{x}_{i},\mathbf{x}_{j})]_{i,j=1}^{N}\).
Figure 2: (a) 1st-order B-spline basis (b) 2nd-order B-spline basis (c) 3rd-order B-spline basis. For the same set of knots, the support of the B-splines increases in width with increasing order. This has the effect that each B-spline basis function has intersecting support with an increasing number of basis functions as the order increases.
To train the GP we maximise the log-marginal likelihood \(\log p(\mathbf{y})=\log\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f})\mathrm{d} \mathbf{f}\). In the case of a Gaussian likelihood, this takes the explicit form
\[\log p(\mathbf{y})=-\frac{1}{2}\mathbf{y}^{\top}\mathbf{K}_{\mathbf{yy}}^{-1} \mathbf{y}-\frac{1}{2}\log|\mathbf{K_{yy}}|-\frac{n}{2}\log 2\pi. \tag{2}\]
Training the GP scales in \(\mathcal{O}(N^{3})\) due to computing the matrix inverse and determinant in (2). Moreover, when using gradient-based optimisation to tune the hyperparameters, (2) must be computed at every iteration. Predictions using (1) require \(\mathcal{O}(N^{2})\) computations, assuming \(\mathbf{K}_{\mathbf{yy}}^{-1}\) (or its Cholesky factorisation) has been cached, e.g., after the training procedure. In terms of memory, GP predictions require \(\mathcal{O}(N^{2})\) to store the Cholesky factor of \(\mathbf{K_{yy}}\). The computational and memory demands therefore make GPs prohibitively expensive for datasets with more than \(\approx 10,000\) datapoints.
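As a point of reference for the costs quoted above, a minimal NumPy sketch of exact GP regression with a Matern-3/2 kernel on synthetic 1D toy data (illustrative code, not the authors' implementation) is:

```python
import numpy as np

def matern32(x1, x2, lengthscale=1.0, variance=1.0):
    # Matern-3/2 kernel k(x, x') for 1D inputs
    r = np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
sigma2 = 0.1 ** 2
x_star = np.linspace(0.0, 10.0, 50)

Kyy = matern32(x, x) + sigma2 * np.eye(x.size)      # O(N^2) memory
L = np.linalg.cholesky(Kyy)                         # O(N^3) computation
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
Kfs = matern32(x, x_star)                           # N x N* cross-covariance

mean = Kfs.T @ alpha                                # posterior mean, eq. (1)
v = np.linalg.solve(L, Kfs)
var = np.diag(matern32(x_star, x_star)) - np.sum(v * v, axis=0)  # posterior variance, eq. (1)
```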
### Sparse Variational Gaussian Processes
Variational inference provides an elegant method to approximate the true posterior \(p(f|\mathbf{y})\) of a GP with a variational distribution \(q(f)\), rather than approximating the model itself. Sparse variational Gaussian processes (SVGPs) introduced by Titsias (2009) leverage inducing points coupled with variational inference to construct a low-rank approximation to the posterior. SVGP consists of introducing a (small) set of inducing variables \(\mathbf{u}=\{f(\mathbf{z}_{m})\}_{m=1}^{M}\) defined at a set of inducing point locations \(Z=\{\mathbf{z}_{m}\}_{m=1}^{M}\). Placing a Gaussian distribution over the inducing variables \(q(\mathbf{u})=\mathcal{N}(\mathbf{m},\mathbf{S})\), the approximate posterior
\[q(f)=\int p(f|\mathbf{u})q(\mathbf{u})\mathrm{d}\mathbf{u}=\mathcal{GP}(\mu( \cdot),\Sigma(\cdot,\cdot)) \tag{3}\]
is obtained by marginalising out the inducing variables. The approximate posterior (3) is defined in terms of the variational parameters \(\mathbf{m}\in\mathbb{R}^{M}\) and \(\mathbf{S}\in\mathbb{R}^{M\times M}\), where, due to the conjugacy between \(p(f|\mathbf{u})\) and \(q(\mathbf{u})\),
\[\mu(\cdot) =\mathbf{k}_{\mathbf{u}}^{T}(\cdot)\mathbf{K}_{\mathbf{uu}}^{-1} \mathbf{m}, \tag{4}\] \[\Sigma(\cdot,\cdot) =k(\cdot,\cdot)+\mathbf{k}_{\mathbf{u}}^{T}(\cdot)\mathbf{K}_{ \mathbf{uu}}^{-1}(\mathbf{S}-\mathbf{K_{uu}})\mathbf{K}_{\mathbf{uu}}^{-1} \mathbf{k}_{\mathbf{u}}(\cdot). \tag{5}\]
Here \(\mathbf{k}_{\mathbf{u}}(\cdot)=[\mathrm{cov}(u_{m},f(\cdot))]_{m=1}^{M}=[k( \mathbf{z}_{m},\cdot)]_{m=1}^{M}\) and \(\mathbf{K_{uu}}=[\mathrm{cov}(u_{i},u_{j})]_{i,j=1}^{M}=[k(\mathbf{z}_{i}, \mathbf{z}_{j})]_{i,j=1}^{M}\).
The variational parameters \(\mathbf{m}\) and \(\mathbf{S}\) are optimised by minimising the KL divergence between the true and approximate posterior \(\mathrm{KL}\left[q(f)\,\|\,p(f|\mathbf{y})\right]\). In practice, this is made tractable by maximising the evidence lower bound (ELBO)
\[\mathcal{L}_{\mathrm{ELBO}}=\sum_{n=1}^{N}\mathbb{E}_{q(f_{n})}[\log p(y_{n}|f_ {n})]-\mathrm{KL}\left[q(\mathbf{u})\,\|\,p(\mathbf{u})\right], \tag{6}\]
which provides a lower bound to the log-marginal likelihood \(\log p(\mathbf{y})\geq\mathcal{L}_{\mathrm{ELBO}}\), and whose gap is precisely the KL divergence that we are minimising. Normally, the hyperparameters of the model are optimised jointly with the variational parameters, by maximising the ELBO.
For a Gaussian likelihood, the moments of the optimal distribution \(\hat{q}(\mathbf{u})=\mathcal{N}(\hat{\mathbf{m}},\hat{\mathbf{\Sigma}})\) can be computed exactly as
\[\hat{\mathbf{m}} =\sigma^{-2}\hat{\mathbf{\Sigma}}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{K_{uf}}\mathbf{y}, \tag{7}\] \[\hat{\mathbf{\Sigma}} =\mathbf{K_{uu}}\left[\mathbf{K_{uu}}+\sigma^{-2}\mathbf{K_{uf}}\mathbf{K_{fu}}\right]^{-1}\mathbf{K_{uu}}. \tag{8}\]
The corresponding optimal ELBO is given by
\[\mathcal{L}_{\mathrm{ELBO}} =\log\mathcal{N}\left(\mathbf{y}|\mathbf{0},\mathbf{K_{fu}} \mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{K_{uf}}+\sigma_{n}^{2}\mathbf{I}\right) \tag{9}\] \[\quad-\frac{1}{2}\sigma_{n}^{-2}\text{tr}\left(\mathbf{K_{ff}}- \mathbf{K_{fu}}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{K_{uf}}\right),\]
where \(\mathbf{K_{uf}}=[k(\mathbf{z}_{m},\mathbf{x}_{n})]_{m,n=1}^{M,N}\). SVGPs thus reduce the computational cost of training to \(\mathcal{O}(NM^{2}+M^{3})\) per evaluation of the ELBO. Hensman et al. (2013) showed that the ELBO in (6) is also amenable to stochastic optimisation, further reducing the computational complexity to \(\mathcal{O}(N_{b}M^{2}+M^{3})\) per iteration by using minibatches. SVGPs require \(\mathcal{O}(N_{b}M+M^{2})\) memory to store \(\mathbf{K_{fu}}\) and the dense Cholesky factor of \(\mathbf{K_{uu}}\).
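For concreteness, a dense NumPy sketch of the closed-form quantities in eqs. (7)-(9) might look as follows (a reference implementation for exposition only, with \(\mathbf{K_{uu}}\), \(\mathbf{K_{uf}}\) and the diagonal of \(\mathbf{K_{ff}}\) assumed given; it deliberately ignores the structure that the rest of the paper exploits):

```python
import numpy as np

def sgpr_optimal_q_and_elbo(Kuu, Kuf, Kff_diag, y, sigma2):
    """Dense reference sketch of eqs. (7)-(9); not the efficient computation."""
    M, N = Kuf.shape
    A = Kuu + Kuf @ Kuf.T / sigma2                    # K_uu + sigma^{-2} K_uf K_fu
    A_inv = np.linalg.inv(A)
    Sigma_hat = Kuu @ A_inv @ Kuu                     # eq. (8)
    m_hat = Kuu @ A_inv @ (Kuf @ y) / sigma2          # eq. (7)

    # Collapsed ELBO, eq. (9): log N(y | 0, Q_ff + sigma^2 I) minus the trace correction
    Qff = Kuf.T @ np.linalg.solve(Kuu, Kuf)
    L = np.linalg.cholesky(Qff + sigma2 * np.eye(N))
    alpha = np.linalg.solve(L, y)
    log_marg = -0.5 * (N * np.log(2.0 * np.pi)
                       + 2.0 * np.sum(np.log(np.diag(L)))
                       + alpha @ alpha)
    elbo = log_marg - 0.5 / sigma2 * (np.sum(Kff_diag) - np.trace(Qff))
    return m_hat, Sigma_hat, elbo
```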
The use of a low-rank approximation does have certain trade-offs, however. Whilst small \(M\) speeds up computation, the choice of \(M\) is also essential to ensuring a certain quality of approximation (Burt et al., 2019). Using a small number of inducing points becomes particularly troublesome for data with inherently short lengthscales, which commonly occurs when working with spatial data. In this case, the SVGP will collapse quickly to the prior mean and variance when not in the immediate vicinity of an inducing input.
### Variational Fourier Features (VFF)
Inter-domain GPs (Alvarez and Lawrence, 2008; Lazaro-Gredilla and Figueiras-Vidal, 2009; van der Wilk et al., 2020) generalise the idea of inducing variables by instead conditioning on a linear transformation \(\mathcal{L}_{m}\) of the GP \(\mathbf{u}=[\mathcal{L}_{m}f(\cdot)]_{m=1}^{M}\). By choosing \(\mathcal{L}_{m}\) to be a convolution of \(f(\cdot)\) with respect to a Dirac delta function centred at the inducing points \(\mathbf{z}_{m}\), we can recover the standard inducing point approximation. However, by choosing different linear operators, such as projections (Hensman et al., 2017; Dutordoir et al., 2020) or general convolutions (van der Wilk et al., 2017), we can construct more informative features, without changing the sparse variational inference scheme.
VFF (Hensman et al., 2017) is an inter-domain variational GP approximation that constructs inducing features as a Matern RKHS projection of the GP onto a set of Fourier basis functions \(u_{m}=\langle f,\phi_{m}\rangle_{\mathcal{H}},m=1,\ldots,M,\) where \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) denotes the Matern RKHS inner product, and \(\phi_{0}(x)=1\), \(\phi_{2i-1}(x)=\cos(\omega_{i}x)\), \(\phi_{2i}=\sin(\omega_{i}x)\) are the Fourier basis functions. This results in the matrices
\[\mathbf{K_{uu}}=[\langle\phi_{i},\phi_{j}\rangle_{\mathcal{H}}]_{i,j=1}^{M}, \quad\mathbf{K_{uf}}=[\phi_{m}(x_{n})]_{m,n=1}^{M,N} \tag{10}\]
where, due to the reproducing property, the cross-covariance matrix \(\mathbf{K_{uf}}\), which is equivalent to evaluating the Fourier basis, is independent of kernel hyperparameters. This leads to several computational benefits: (i) we can precompute \(\mathbf{K_{uf}}\), as it remains constant throughout hyper-parameter training via the ELBO (9), (ii) due to the orthogonality of the Fourier basis, \(\mathbf{K_{uu}}\) is the sum of a block-diagonal matrix plus low-rank matrices, e.g., in the case of a 1D Matern-1/2 kernel,
\[\mathbf{K_{uu}}=\mathrm{diag}(\boldsymbol{\alpha})+\boldsymbol{\beta} \boldsymbol{\beta}^{\top} \tag{11}\]
for some \(\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathbb{R}^{M}\), where the vector \(\boldsymbol{\beta}\) is sparse. This structure can be exploited to significantly reduce the computational complexity for training and prediction when compared to standard sparse GP methods. However, VFF has two main flaws:
* VFF generalises poorly to higher dimensions due to the use of a Kronecker product basis. This construction of a high-dimensional basis not only scales exponentially in the number of dimensions, it is also inefficient in terms of captured variance (Dutordoir et al., 2020): Multiplying together basis functions of increasing frequency causes the prior variance to decay rapidly, resulting in large numbers of redundant features and the down-weighting of important low-frequency ones. Thus for \(D\geq 2\), VFF typically requires more inducing variables than SGPR, making it memory inefficient.
* Whilst \(\mathbf{K_{uu}}\) has a computationally efficient structure, \(\mathbf{K_{uf}}\) is still a dense matrix. In the special case when the likelihood is Gaussian, we still require to compute a dense Cholesky factor of the \(M\times M\) matrix \(\mathbf{K_{uu}}+\sigma^{-2}\mathbf{K_{uf}}\mathbf{K_{fu}}\) (see (8)), which costs \(\mathcal{O}(M^{3})\). The same problem persists for VISH.
In order to address these issues, in the next section we will consider defining inter-domain inducing variables as the projection of the GP onto a set of compactly supported basis functions, drastically reducing memory requirements and improving computational efficiency, enabling us to use large numbers of inducing points.
## 3 B-Spline Inducing Features
In this section, we introduce B-spline inducing features and propose Actually Sparse Variational Gaussian Processes (AS-VGPs). The core idea is to use the concept of RKHS projections as in VFF, except to project a GP onto a set of compactly supported _B-spline basis functions_ instead of the Fourier basis functions. Unlike in VFF, the resulting inducing features \(\{u_{m}\}_{m=1}^{M}\) are localised by the nature of their compact support, see Figure 2, such that \(\mathbf{K_{uu}}\), \(\mathbf{K_{uf}}\) and \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) are _all_ sparse matrices (see Figure 1). These sparse covariance structures allow us to gain substantial computational benefits.
### B-Spline Inducing Features
B-spline basis functions of order \(k\) are a set of compactly supported piece-wise polynomial functions of degree \(k\). Their shape is controlled by an increasing sequence of knots \(V=\{v_{m}\}_{m=0}^{M}\in\mathbb{R}\) that partition the domain into \(M\) sub-intervals. We denote the \(m\)-th B-spline basis function of order \(k\) by \(B_{m,k}(x)\) (See Appendix E for expressions). Since a \(k\)-th order B-spline has compact support over only \(k+1\) sub-intervals, it has intersecting support with at most \(k+1\) other B-spline basis functions (see Figure 2).
We define the _B-spline inducing features_ as the RKHS projection \(u_{m}=\langle f,\phi_{m}(\cdot)\rangle_{\mathcal{H}}\) onto the B-spline basis, where \(\phi_{m}(x)=B_{m,k}(x)\). Under this choice, the covariance between the inducing features \(u_{m}\) and the GP \(f\) is given by
\[[\mathbf{K_{uf}}]_{m,n} =\mathrm{Cov}[u_{m},f(x_{n})]=\langle k(x_{n},\cdot),\phi_{m}( \cdot)\rangle_{\mathcal{H}} \tag{12}\] \[=\phi_{m}(x_{n})=B_{m,k}(x_{n}) \tag{13}\]
and reduces to a simple evaluation of the B-spline basis at the training inputs. Note that \(B_{m,k}(x)\neq 0\) if and only if \(x\in[v_{m},v_{m+k+1}]\) and therefore \(\mathbf{K_{uf}}\) is sparse with at most \(N(k+1)\) non-zero entries. As with VFF, \(\mathbf{K_{uf}}\) is also independent of the kernel hyperparameters, meaning it remains constant throughout training and can be precomputed. Next, the covariance between the inducing features is given by
\[[\mathbf{K_{uu}}]_{m,m^{\prime}}=\mathrm{Cov}[u_{m},u_{m^{\prime}}]=\langle \phi_{m},\phi_{m^{\prime}}\rangle_{\mathcal{H}}, \tag{14}\]
which is only non-zero when \(\phi_{m}\) and \(\phi_{m^{\prime}}\) have intersecting support. This produces sparse band-diagonal \(\mathbf{K_{uu}}\) matrices with bandwidth equal to \(k+1\). Since the B-spline basis functions are piecewise polynomials, we can evaluate the inner product in closed form, allowing for efficient computation during training and testing.
_Remark 1_.: Strictly speaking, the notation \(\langle f,\phi_{m}(\cdot)\rangle_{\mathcal{H}}\) is ill-defined, as samples of \(f\) are almost surely not elements of \(\mathcal{H}\) Kanagawa et al. (2018). In order to make rigorous sense of this, we use the machinery of generalised Gaussian fields, which we discuss in Appendix D.
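To make the sparsity of \(\mathbf{K_{uf}}\) concrete, the following NumPy sketch (our own illustrative code using the standard Cox-de Boor recursion on a uniform knot sequence, not the authors' implementation) evaluates a B-spline basis at random inputs and confirms that each column of \(\mathbf{K_{uf}}\) has at most \(k+1\) non-zero entries:

```python
import numpy as np

def bspline_basis(m, k, knots, x):
    """Cox-de Boor recursion for the m-th B-spline basis function of order k
    (assumes strictly increasing knots, so no zero denominators)."""
    if k == 0:
        return np.where((knots[m] <= x) & (x < knots[m + 1]), 1.0, 0.0)
    left = (x - knots[m]) / (knots[m + k] - knots[m]) \
        * bspline_basis(m, k - 1, knots, x)
    right = (knots[m + k + 1] - x) / (knots[m + k + 1] - knots[m + 1]) \
        * bspline_basis(m + 1, k - 1, knots, x)
    return left + right

k, M, N = 2, 50, 1000                        # 2nd-order splines, as used for Matern-3/2
knots = np.arange(M + k + 1, dtype=float)    # uniform knots; M basis functions
rng = np.random.default_rng(0)
x = rng.uniform(knots[k], knots[M], size=N)  # inputs inside the spline domain

Kuf = np.stack([bspline_basis(m, k, knots, x) for m in range(M)])  # eq. (13)
print(np.count_nonzero(Kuf, axis=0).max())   # <= k + 1 non-zeros per data point
```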
### Sparse Linear Algebra
In this section, we will initially restrict our analysis to GPs with one-dimensional inputs and extend this later in Section 3.4 to higher dimensions. Using our proposed spline inducing features, we have the following desirable properties that we can leverage in the key computations (8)-(9):
**Property 1:** For the Matern-\(\nu/2\) class of kernels, \(\mathbf{K_{uu}}\) is a band-diagonal matrix with bandwidth equal to _at least_\(\nu/2+3/2\).
This is due to the fact that, in order to be a valid projection, the B-spline basis functions must belong to the same Matern RKHS. As stated by Kanagawa et al. (2018), the RKHS generated by the Matern-\(\nu/2\) kernel \(k(\cdot,\cdot)\) is norm-equivalent to
the Sobolev space \(\mathcal{H}^{\nu/2+1/2}\). Given their polynomial form, we can check that B-splines of order \(k\) are \(C^{k-1}\)-smooth and moreover \(k\)-times weakly differentiable (see Appendix E). Since the B-splines are compactly supported, so are their (weak) derivatives; therefore, the (weak) derivatives are all square-integrable. Thus, they belong to the Sobolev space \(\mathcal{H}^{k}\). As a result, for the Matern-\(\nu/2\) kernel, we choose to project onto B-splines of order \(k=\nu/2+1/2\), giving us a \(\mathbf{K_{uu}}\) matrix with bandwidth \(k+1=\nu/2+3/2\).
**Property 2:** The matrix product \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) is a band-diagonal matrix with bandwidth at most equal to that of \(\mathbf{K_{uu}}\).
To see this, from (13) we have
\[[\mathbf{K_{uf}}\mathbf{K_{fu}}]_{ij} =\sum_{n=1}^{N}[\mathbf{K_{uf}}]_{in}[\mathbf{K_{uf}}]_{jn} \tag{15}\] \[=\sum_{n=1}^{N}B_{i,k}(x_{n})B_{j,k}(x_{n}). \tag{16}\]
By the properties of B-splines, \(B_{i,k}(x_{n})B_{j,k}(x_{n})\neq 0\) if and only if \(x_{n}\in\mathcal{I}_{ij}\), where \(\mathcal{I}_{ij}=[v_{i},v_{i+k+1}]\cap[v_{j},v_{j+k+1}]\) is the intersection of the supports of the two B-splines \(B_{i,k}\) and \(B_{j,k}\). However, we know that the supports are intersecting if and only if \(|i-j|<k+1\). Hence, when \(|i-j|\geq k+1\), no data point can be contained in \(\mathcal{I}_{ij}\) since it is the empty set, giving us \(B_{i,k}(x_{n})B_{j,k}(x_{n})=0\) for all \(n=1,\ldots,N\) and therefore \([\mathbf{K_{uf}}\mathbf{K_{fu}}]_{ij}=0\) from (16). This implies that the matrix \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) has bandwidth at most equal to \(k+1\).
Using these two properties, we can construct an inter-domain variational method that can leverage sparse linear algebra to speed up inference and significantly save on memory footprint. We discuss this next.
### Actually Sparse Variational Gaussian Processes
We propose Actually Sparse Variational Gaussian Processes (AS-VGP) as inter-domain variational GPs that use B-Spline inducing variables. For one-dimensional GPs, our method has several computational advantages:
* \(\mathbf{K_{uf}}\) is very sparse with typically 1% of its entries being non-zero. This allows us to store it as a sparse tensor, resulting in 2 orders of magnitude memory saving.
* By Properties (1)-(2), the sum \(\mathbf{K_{uu}}+\sigma^{-2}\mathbf{K_{uf}}\mathbf{K_{fu}}\) in (8) is band-diagonal and its inverse can be computed at a cost of \(\mathcal{O}(M(k+1)^{2})\); its memory footprint is \(\mathcal{O}(M(k+1))\).
* Using the banded operators from Durrande et al. (2019), we compute \(\mathrm{tr}(\mathbf{K_{fu}}\mathbf{K_{uu}^{-1}}\mathbf{K_{uf}})=\mathrm{tr}( \mathbf{K_{uu}^{-1}}\mathbf{K_{uf}}\mathbf{K_{fu}})\) in (9) without having to instantiate a dense matrix, reducing the memory footprint to \(\mathcal{O}(M(k+1))\).
Overall, this reduces the pre-computation cost for computing the sparse matrix multiplication \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) to _linear_ in the number of training datapoints. The resulting matrix can be cached for later use with a memory footprint of \(\mathcal{O}((k+1)M)\), owing to its banded structure (Property 2). Further, the per-iteration computational cost and memory footprint of computing the ELBO (9) and its gradients is also _linear_ in the number of inducing variables, required to take the (sparse) Cholesky decomposition of a banded matrix (Durrande et al., 2019).
Further, using the banded operators introduced by Durrande et al. (2019), given a banded Cholesky factor of \(\mathbf{K_{uu}}\), we can also compute only the band elements of its inverse at a cost of \(\mathcal{O}(M(k+1)^{2})\). Given that \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) is a banded matrix (Property 2), we compute the trace term in (9) by computing only the bands of the matrix product \(\mathbf{K_{uu}^{-1}}\mathbf{K_{uf}}\mathbf{K_{fu}}\), with the computational cost \(\mathcal{O}(M(k+1)^{2})\), thereby avoiding _ever_ instantiating a dense matrix.
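The banded computation can be sketched with standard SciPy routines as follows (a toy stand-in: the banded matrix below is synthetic and merely plays the role of \(\mathbf{K_{uu}}+\sigma^{-2}\mathbf{K_{uf}}\mathbf{K_{fu}}\); the paper itself relies on the banded operators of Durrande et al. (2019) rather than SciPy):

```python
import numpy as np
from scipy.linalg import cholesky_banded, solveh_banded

M, k = 20_000, 2          # many inducing variables, 2nd-order splines
p = k                     # number of superdiagonals; the band holds k + 1 diagonals

# Synthetic SPD banded matrix in upper banded storage: ab[p + i - j, j] = A[i, j].
ab = np.zeros((p + 1, M))
ab[p, :] = 4.0                               # main diagonal
for d in range(1, p + 1):
    ab[p - d, d:] = 0.5 ** d                 # d-th superdiagonal

chol = cholesky_banded(ab)                   # banded Cholesky: O(M (k+1)^2), not O(M^3)
sol = solveh_banded(ab, np.ones(M))          # banded solve with O(M (k+1)) memory
```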
We compare the compute and memory costs of various sparse GP inference algorithms in Table 1. This highlights the linear scaling in both memory and computational complexity with inducing points of the proposed AS-VGP. Compared to both VFF and VISH, AS-VGP is the only method that scales linearly in both computational complexity and storage, enabling it to be used with tens or hundreds of thousands of inducing variables, significantly more than both VFF and VISH.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & Pre- & Computational & Storage \\ & computation & complexity & \\ \hline SGPR (Titsias, 2009) & ✗ & \(\mathcal{O}(NM^{2}+M^{3})\) & \(\mathcal{O}(NM)\) \\ SVGP (Hensman et al., 2013) & ✗ & \(\mathcal{O}(N_{b}M^{2}+M^{3})\) & \(\mathcal{O}(M^{2}+N_{b}M)\) \\ VFF (Hensman et al., 2017) & \(\mathcal{O}(NM^{2})\) & \(\mathcal{O}(M^{3})\) & \(\mathcal{O}(NM+M^{2})\) \\ VISH (Dutordoir et al., 2020) & \(\mathcal{O}(NM^{2})\) & \(\mathcal{O}(M^{3})\) & \(\mathcal{O}(NM+M^{2})\) \\
**AS-VGP (Ours)** & \(\mathcal{O}(\boldsymbol{N})\) & \(\mathcal{O}((\boldsymbol{k+1})^{2}\boldsymbol{M})\) & \(\mathcal{O}(\boldsymbol{N+(k+1)M})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Complexity of sparse variational GPs for evaluating the ELBO (9) in 1D regression settings with a Gaussian likelihood. \(N\): number of datapoints; \(M\): number of inducing points; \(k\): bandwidth of the covariance matrix; \(N_{b}\): size of the mini-batch in stochastic variational inference. For both VFF and VISH we quote the complexity required for exact SGPR.
### Extensions to Higher Dimensions
To extend AS-VGP to higher dimensions, we employ a similar strategy to VFF, by constructing either the additive or separable kernel. In the separable case, we have
\[k(\mathbf{x},\mathbf{x}^{\prime})=\prod_{d=1}^{D}k_{d}(x_{d},x_{d}^{\prime}), \tag{17}\]
where \(k_{d}(\cdot,\cdot)\) for \(d=1,\ldots,D\) are one-dimensional kernels. By choosing the basis functions to be a tensor product of \(M\) one-dimensional B-Splines, that is,
\[\mathbf{\phi}(\mathbf{x})=\bigotimes_{d=1}^{D}[B_{m,k}^{(d)}(x_{d})]_{m=1}^{M}\in \mathbb{R}^{M^{D}}, \tag{18}\]
we get the matrices
\[\mathbf{K_{uf}}=[\mathbf{\phi}(\mathbf{x}_{n})]_{n=1}^{N},\quad\mathbf{K_{uu}}= \bigotimes_{d=1}^{D}\mathbf{K_{uu}^{(d)}}, \tag{19}\]
computed using (13)-(14), where \(\mathbf{K_{uu}^{(d)}}\) for \(d=1,\ldots,D\) denotes the matrix (14) corresponding to the 1D case and \(\{\mathbf{x}_{n}\}_{n=1}^{N}\) are the training inputs. Note that some of the structures present for one-dimensional inputs are also present in the Kronecker formulation, namely: (i) \(\mathbf{K_{uu}}\) is a block-banded matrix with bandwidth \(\approx kM^{D-1}\) whose Cholesky factorisation can be computed in \(\mathcal{O}(kM^{D-1})\), (ii) \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) is also a band-diagonal matrix with bandwidth \(\approx kM^{D-1}\). For low-dimensional problems, the large number of basis functions (18) provides a rich covering of the input space. However, this is unsuitable for large \(D\) due to the exponential scaling in the number of input dimensions.
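A small NumPy illustration of the Kronecker construction in eqs. (18)-(19) for \(D=2\) (the per-dimension quantities below are made-up stand-ins for the 1D matrices of eqs. (13)-(14)):

```python
import numpy as np

M = 4                                        # basis functions per dimension
Kuu_1d = np.eye(M) + 0.3 * (np.eye(M, k=1) + np.eye(M, k=-1))  # banded 1D Gram matrix
phi_x1 = np.array([0.2, 0.8, 0.0, 0.0])      # 1D basis evaluated at x_1 (compact support)
phi_x2 = np.array([0.0, 0.5, 0.5, 0.0])      # 1D basis evaluated at x_2

Kuu = np.kron(Kuu_1d, Kuu_1d)                # M^D x M^D block-banded matrix, eq. (19)
phi_x = np.kron(phi_x1, phi_x2)              # length-M^D feature vector, eq. (18)

print(Kuu.shape, np.count_nonzero(phi_x))    # (16, 16); the feature vector stays sparse
```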
For the additive case, we construct \(D\)-dimensional kernels as the sum of \(D\) one-dimensional kernels, i.e.,
\[k(\mathbf{x},\mathbf{x}^{\prime})=\sum_{d=1}^{D}k_{d}(x_{d},x_{d}^{\prime}). \tag{20}\]
This results in a band-diagonal \(\mathbf{K_{uu}}\) matrix with bandwidth equal to the one-dimensional equivalent. However, the product \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) is no longer sparse and hence inference using a Gaussian likelihood and pre-computation requires an \(\mathcal{O}(DM^{3})\) Cholesky factorisation.
## 4 Experiments
In the following, we evaluate AS-VGP on a number of regression tasks. We highlight the following properties of our method: 1) AS-VGP significantly reduces the memory requirements of sparse variational GPs without sacrificing on performance. 2) AS-VGP is extremely fast and scalable (training on 2 million 1D data points and 1000 inducing points in under 6 seconds). 3) AS-VGP is able to perform closed-form optimal variational inference when other methods have to use stochastic optimisation instead. 4) AS-VGP is not limited to low-dimensional problems and improves upon VFF when using an additive structure. 5) AS-VGP is particularly suited to modelling fast-varying spatial datasets.
### One-Dimensional Regression
**Regression Benchmarks.** The purpose of this experiment is to assess the empirical performance and computational benefits of AS-VGP in comparison with SVGP and VFF on medium-sized datasets. We use three UCI benchmarks and a synthetic dataset to compare the predictive performance of AS-VGP with SVGP and VFF. For the synthetic dataset, we generated a periodic function to compare our locally supported B-Spline basis with VFF, which uses a naturally periodic basis.
For each dataset, we randomly sample 90\(\%\) of the data for training and 10\(\%\) for testing, repeating this five times to calculate the mean and standard deviation of the predictive performance (MSE) and uncertainty quantification (NLPD). When using AS-VGP, we normalise the inputs to be between \([0,M]\), where \(M\) is the number of inducing points, to ensure the spacing between knots is equal to \(1\) and to avoid numerical issues caused by large gradients when computing the inner product between basis functions. All models are trained using the L-BFGS optimiser; for VFF and AS-VGP, we precompute the matrix product \(\mathbf{K_{uf}}\mathbf{K_{fu}}\). We use the Matern-3/2 kernel for each experiment.
The results in Table 2 demonstrate that AS-VGP is comparable in performance to VFF on every dataset, whilst being less memory intensive.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{MSE (\(\times 10^{-1}\))} & \multicolumn{3}{c}{NLPD} \\ \cline{4-9} Dataset & \(N\) & \(M\) & SGPR & VFF & AS-VGP & SGPR & VFF & AS-VGP \\ \hline Air Quality & 9k & 500 & 6.43 \(\pm\) 0.04 & 6.64 \(\pm\) 0.04 & 6.68 \(\pm\) 0.04 & 1.24 \(\pm\) 0.00 & 1.25 \(\pm\) 0.00 & 1.25 \(\pm\) 0.00 \\ Synthetic & 10k & 50 & 0.40 \(\pm\) 0.00 & 0.39 \(\pm\) 0.00 & 0.39 \(\pm\) 0.00 & -0.16 \(\pm\) 0.00 & -0.15 \(\pm\) 0.00 & -0.15 \(\pm\) 0.00 \\ Rainfall & 43k & 700 & 0.48 \(\pm\) 0.00 & 0.83 \(\pm\) 0.00 & 0.84 \(\pm\) 0.00 & 0.10 \(\pm\) 0.00 & 0.25 \(\pm\) 0.00 & 0.29 \(\pm\) 0.00 \\ Traffic & 48k & 300 & 9.96 \(\pm\) 0.01 & 10.01 \(\pm\) 0.01 & 10.02 \(\pm\) 0.01 & 1.42 \(\pm\) 0.00 & 1.42 \(\pm\) 0.00 & 1.42 \(\pm\) 0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Predictive mean squared errors (MSEs) and negative log predictive densities (NLPDs) with one standard deviation based on 5 random splits for a number of UCI regression datasets. All models use a Matern-3/2 kernel and L-BFGS optimiser.
This highlights how our locally supported basis functions offer benefits in both complexity and memory over their globally supported counterparts, while retaining comparable performance. We note that SGPR performs slightly better than both VFF and AS-VGP, but at a higher computational complexity and without the ability for pre-computation.
**Large-Scale Regression.** In this example, we illustrate the scalability of our method both in the number of data points and in the number of inducing points, using the household electric power consumption dataset, where \(N=2,049,279\). We opt to use the entire dataset, which uses a one-minute sampling rate over a period of four years, as an example of data with a very low lengthscale. This necessitates a large number of inducing variables and tests the model's ability to scale accordingly. We repeat each experiment five times by randomly sampling 95% of the data for training and use the remaining 5% for evaluation. For each experiment, we use the Matern-3/2 kernel.
Results are displayed in Table 3. We were unable to use either VFF or VISH in this experiment as we were unable to precompute \(\mathbf{K_{uf}K_{fu}}\) due to \(\mathbf{K_{uf}}\) not fitting on GPU memory. For SVGP, we used minibatching to reduce the reliance on memory. However, we also ran into issues when using \(M\geq 1000\), requiring us to use a very small batch size (\(N_{b}=100\)) given the size of the dataset, and the model became computationally unfeasible for \(M\geq 5,000\). In contrast, for AS-VGP, memory was not an issue, and we were able to efficiently scale the number of inducing points. As highlighted in Table 3, our method was more than two orders of magnitude faster than SVGP, fitting a GP with \(1,000\) inducing points and over \(2\) million datapoints in under \(6\) seconds. We also observe that the time taken for each AS-VGP experiment follows a linear trend, shown in Figure 3, as predicted (see Table 1). AS-VGP was also more accurate than SVGP both in predictive performance (MSE) and uncertainty quantification (NLPD), and showed an increase in performance as more inducing variables were added. Firstly, this is indicative of optimal closed-form variational inference being a better approximation to the true posterior than stochastic variational inference. Secondly, this emphasises how the B-spline basis is able to accurately represent local variance in the data and motivates using a large number of inducing points when the lengthscale is very small.
### Additive Regression
In this experiment we show that AS-VGP is not limited to low-dimensional problems, but can scale to high dimensions using an additive structure.
The airline dataset is a common GP benchmark, consisting of flight details for every commercial flight in the USA from 2008. The task is to predict the amount of delay \(y\) given eight different covariates (route distance, airtime, aircraft age, etc.). We follow the exact same setup as Hensman et al. (2013) using an additive Matern-3/2 GP and evaluate the performance on four datasets of size \(10K\), \(100K\), \(1,000K\) and \(5,929,413\) (complete dataset) by subsampling the original data. For each dataset, we perform 10 splits, using two thirds of the data for training and a third for testing. We report the mean and standard deviation of the MSE and NLPD in Table 4. For AS-VGP, we normalise the inputs to be between \([0,M]\), where \(M\) is the number of inducing points to ensure the spacing between knots is equal to \(1\).
We compare our method using both \(M=30\) (\(240\) in total) basis functions and \(M=200\) (\(1600\) in total) basis functions per dimension. Table 4 shows that by adding more basis functions, we can improve upon VFF and VISH in terms of MSE on the larger datasets.
Figure 3: Illustration of the linear scaling in computation of the ELBO and independence on computing the product \(\mathbf{K_{uf}K_{fu}}\) w.r.t. the number of inducing points.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & \(M=1000\) & \(M=5000\) & \(M=10,000\) & \(M=20,000\) & \(M=30,000\) \\ \hline AS-VGP (MSE \(\times 10^{-1}\)) & **8.65\(\pm\) 0.00** & **6.55\(\pm\) 0.01** & **4.53 \(\pm\) 0.00** & **3.41\(\pm\) 0.01** & **2.90\(\pm\) 0.01** \\ SVGP (MSE \(\times 10^{-1}\)) & 9.00 \(\pm\) 0.01 & / & / & / & / \\ \hline AS-VGP (NLPD) & **1.34 \(\pm\) 0.00** & **1.20 \(\pm\) 0.00** & **1.01 \(\pm\) 0.00** & **0.86 \(\pm\) 0.00** & **0.77 \(\pm\) 0.00** \\ SVGP (NLPD) & 1.37 \(\pm\) 0.00 & / & / & / & / \\ \hline AS-VGP (Time in s) & **5.51 \(\pm\) 0.10** & **14.4 \(\pm\) 0.23** & **24.5 \(\pm\) 0.35** & **46.3 \(\pm\) 0.35** & **75.0 \(\pm\) 1.70** \\ SVGP (Time in s) & 188 \(\pm\) 1.18 & / & / & / & / \\ \hline \hline \end{tabular}
\end{table}
Table 3: Predictive mean squared errors (MSEs), negative log predictive densities (NLPDs) and wall-clock time in seconds with one standard deviation based on \(5\) random splits of the household electric power consumption dataset containing \(2,049,279\) data points. The number of inducing variables used is given by \(M\).
We can also scale to larger numbers of inducing points. While VFF uses 240 inducing points in total, we use 1600 in our largest experiment, while remaining computationally efficient since pre-computation is independent of the number of inducing points (and linear in the number of training datapoints; see Table 1).
### Synthetic Spatial Data
In the following, we demonstrate the effectiveness of AS-VGP on synthetic spatial data with an inherently low lengthscale. To simulate high-fidelity spatial data with fast variations, we sample from a 2D GP with a kernel constructed as the product of two 1D Matern-3/2 kernels. We generate data by sampling the GP five times each for three different lengthscales: \(0.1\), \(0.05\) and \(0.03\). Fixing the lengthscales of AS-VGP and VFF to match the generated data, we then compute the NLPD for different numbers of inducing points.
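One way to generate such samples efficiently (a sketch under the assumption of a regular grid; the exact generation procedure used for the experiment is not specified here) exploits the product structure of the kernel: with \(\mathbf{K}=\mathbf{K}_{1}\otimes\mathbf{K}_{2}\), a draw is \(\mathbf{L}_{1}\mathbf{Z}\mathbf{L}_{2}^{\top}\) for \(\mathbf{Z}\) with i.i.d. standard normal entries:

```python
import numpy as np

def matern32(x, lengthscale):
    r = np.abs(x[:, None] - x[None, :]) / lengthscale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

n, lengthscale = 200, 0.05
grid = np.linspace(0.0, 1.0, n)
K1 = matern32(grid, lengthscale) + 1e-8 * np.eye(n)   # jitter for a stable Cholesky
L1 = np.linalg.cholesky(K1)

rng = np.random.default_rng(1)
Z = rng.standard_normal((n, n))
F = L1 @ Z @ L1.T             # F[i, j] is the sampled field at (grid[i], grid[j])
```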
Figure 4 shows that AS-VGP captures the variance in the data better than VFF as the lengthscale is reduced. This is partly due to the product basis in VFF, which produces features with very small variance, an effect that becomes more pronounced for higher-frequency features (Dutordoir et al., 2020). However, this also motivates the use of compactly supported basis functions which, unlike the Fourier basis that describes the process across the entire domain, act locally and therefore are more effective at modelling local variations in the data.
### Real-World Spatial Data
In this experiment, we test AS-VGP on a very large spatial regression problem. For this we use the eNATL60 ocean model of sea surface height (SSH) over the North Atlantic at \(1/60^{\circ}\) grid resolution as a real-world example of an extremely large low-lengthscale spatial regression. We perform a typical regridding problem by interpolating the model data defined on a curvilinear grid onto a regular latitude-longitude grid. We restrict the domain to a \(45^{\circ}\times 30^{\circ}\) region and randomly select 2 million data points from the model as training observations and \(100,000\) points for testing. We then evaluate the trained AS-VGP model on a regular grid at \(1/12^{\circ}\) resolution, equivalent to a \(540\times 360\) grid.
We fit AS-VGP on this data using 100 basis functions per dimension (10,000 in total) in 109 seconds, 41 seconds for pre-computation and 68 seconds for optimisation, achieving an MSE of \(9.3\times 10^{-4}\) and NLPD of \(2.1\) on the test set. Similar to the 1D large-scale regression experiment, we could not use the equivalent VFF or SGPR model as storing \(\mathbf{K_{uf}}\) (a \(2,000,000\times 10,000\) matrix) requires 149 GB of memory, which cannot be stored on GPU or CPU memory. In contrast, for AS-VGP, storing \(\mathbf{K_{uf}}\) only requires \(216\) MB of memory. Consequently, AS-VGP can handle both large numbers of datapoints and inducing points without requiring stochastic optimisation, which is unachievable using both SGPR and VFF.
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{\(N=10,000\)} & \multicolumn{2}{c}{\(N=100,000\)} & \multicolumn{2}{c}{\(N=1,000,000\)} & \multicolumn{2}{c}{\(N=5,929,413\)} \\ \cline{2-11} Model & M & MSE & NLPD & MSE & NLPD & MSE & NLPD & MSE & NLPD \\ \hline VISH & 610 & 0.90\(\pm\)0.16 & 1.33\(\pm\)0.09 & 0.81\(\pm\)0.05 & 1.27\(\pm\)0.03 & 0.83\(\pm\)0.03 & 1.28\(\pm\)0.01 & 0.83\(\pm\)0.06 & 1.27\(\pm\)0.00 \\ VFF & 30/dim & 0.89\(\pm\)0.15 & 1.36\(\pm\)0.09 & 0.82\(\pm\)0.05 & 1.32\(\pm\)0.03 & 0.83\(\pm\)0.01 & 1.34\(\pm\)0.01 & 0.83\(\pm\)0.00 & 1.32\(\pm\)0.00 \\ AS-VGP & 30/dim & 0.95\(\pm\)0.17 & 1.39\(\pm\)0.09 & 0.84\(\pm\)0.05 & 1.33\(\pm\)0.03 & 0.84\(\pm\)0.01 & 1.33\(\pm\)0.01 & 0.83\(\pm\)0.00 & 1.33\(\pm\)0.00 \\ AS-VGP & 200/dim & 0.91\(\pm\)0.16 & 1.37\(\pm\)0.09 & 0.82\(\pm\)0.05 & 1.32\(\pm\)0.03 & 0.83\(\pm\)0.01 & 1.32\(\pm\)0.01 & 0.82\(\pm\)0.00 & 1.32\(\pm\)0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Predictive mean squared errors (MSEs) and negative log predictive densities (NLPDs) with one standard deviation based on 5 random splits for a number of UCI regression datasets. All models use a Matern-3/2 kernel and the L-BFGS optimiser for training. All models show comparable performance.
Figure 4: Mean NLPD for AS-VGP and VFF for increasing numbers of inducing points. The data is obtained by sampling a GP with Matern-3/2 kernel with decreasing lengthscale. The mean NLPD for each model is computed with known parameters and by averaging over five separate samples. The error bars show one standard deviation.
taking 36 seconds. Figure 5 shows that AS-VGP is able to model the small structures present in the SSH quickly and efficiently, whilst still performing closed-form optimal variational inference where other methods cannot.
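The memory figures quoted above can be checked with a quick back-of-the-envelope calculation (our own sketch, assuming float64 storage):

```python
n_data, n_inducing = 2_000_000, 10_000

dense_bytes = n_data * n_inducing * 8     # dense K_uf in float64
print(dense_bytes / 2**30)                # ~149 GiB, as quoted above

# With compactly supported B-spline features, only a handful of basis
# functions are non-zero for each datapoint, so a sparse K_uf stores only
# a few values (plus indices) per column instead of 10,000.
```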
## 5 Discussion
For one-dimensional inputs with a Gaussian likelihood, AS-VGP is extremely fast, scalable and lightweight, exploiting the band-diagonal structure of both \(\mathbf{K_{uu}}\) and \(\mathbf{K_{uf}}\mathbf{K_{fu}}\) to perform a pre-computation that scales linearly in the number of datapoints and evaluations of the ELBO that scale linearly in the number of inducing variables. We also show that our method is not limited to one-dimensional inputs, but can scale to higher dimensions using an additive or Kronecker structure. In particular, our method is strong at representing processes with small lengthscales, making it amenable to modelling spatio-temporal data or long time series.
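To illustrate why the band-diagonal structure matters computationally, here is a minimal sketch (ours, not the authors' code) of solving a symmetric positive-definite banded system with SciPy; the size and bandwidth are placeholders. A banded solve costs \(O(Mb^{2})\) for bandwidth \(b\), rather than the \(O(M^{3})\) of a dense solve.

```python
import numpy as np
from scipy.linalg import solveh_banded

# SPD band matrix (two super-diagonals) in upper banded storage:
# row 0 = second super-diagonal, row 1 = first super-diagonal, row 2 = main diagonal.
m = 10_000
ab = np.zeros((3, m))
ab[0, 2:] = 0.1      # second super-diagonal
ab[1, 1:] = 0.5      # first super-diagonal
ab[2, :] = 4.0       # dominant main diagonal, so the matrix is SPD
b = np.random.default_rng(0).normal(size=m)

x = solveh_banded(ab, b)   # cost grows linearly in m for fixed bandwidth
```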
Stochastic variational inference also enables the scaling of GPs to large datasets via mini-batching. In practice, however, when using a Gaussian likelihood and if compute permits, SGPR produces a better approximation to the true posterior than SVGP. Whilst in regular SGPR the cost to compute the ELBO is dependent on \(N\), VFF made it possible to remove this by performing a one-off pre-computation, effectively scaling SGPR to millions of data points. Our work extends VFF even further by reducing the complexity of the pre-computation with respect to the number of datapoints and decoupling it from the number of inducing variables, enabling us to scale to larger \(N\) and larger \(M\).
Our method can also be compared to Structured Kernel Interpolation (SKI) by Wilson and Nickisch (2015), albeit from a variational perspective. Both SKI and AS-VGP construct inducing variables on dense grids. However, whereas SKI performs explicit interpolation between inducing points, AS-VGP performs interpolation implicitly, by constructing inducing variables that are equivalent to evaluating a set of B-spline basis functions.
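The local support of the B-spline basis is what makes the relevant matrices sparse: a degree-\(k\) B-spline basis has at most \(k+1\) functions that are non-zero at any input. A small sketch of this (ours, assuming a recent SciPy that provides BSpline.design_matrix):

```python
import numpy as np
from scipy.interpolate import BSpline

k, M = 3, 100                     # cubic B-splines, M basis functions
# Open uniform knot vector on [0, 1] with len(t) = M + k + 1.
t = np.r_[np.zeros(k), np.linspace(0.0, 1.0, M - k + 1), np.ones(k)]
x = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, size=10_000))

Phi = BSpline.design_matrix(x, t, k)          # sparse CSR matrix of basis evaluations
print(Phi.shape, Phi.nnz / Phi.shape[0])      # at most k + 1 = 4 non-zeros per row
```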
While we show good performance in low-dimensional problems, ideally we would not have to impose a Kronecker structure that scales so poorly with dimension, but would instead project directly onto a set of 2D basis functions. Taking inspiration from the connections with SKI, a better choice of basis might be one defined on the simplex, which offers linear scaling in \(D\) when generalising to higher dimensions (Kapoor et al., 2021).
**Limitations.** The main limitation of our approach is the scaling to high dimensions. Unlike VISH, we inherit many of the shortcomings of VFF, including a reliance on tensor products, which requires an exponential increase in the number of basis functions with increasing dimension. However, by decoupling the pre-computation from the number of inducing variables, our method is less affected by this exponential scaling than VFF. For low numbers of inducing features \(M\), our method performs worse than VFF; we can mitigate this shortcoming by using more inducing features, owing to the linear scaling in \(M\) versus cubic for VFF (see Table 1). Finally, like VFF, our method currently only supports the Matern class of kernels. A future research direction would be to expand the class of kernels that can be decomposed using B-splines, e.g., non-stationary kernels, which could help improve spatial modelling.
In practice, we propose to use our method on low-dimensional problems (\(D\leq 4\)), such as spatial or spatio-temporal data, where it has been shown to be computationally and memory efficient while capturing high-frequency variations.
## 6 Conclusion
We introduced a novel inter-domain GP model wherein the inducing features are defined as RKHS projections of the GP onto compactly supported B-spline basis functions. This results in sparse covariance matrices, allowing us to draw entirely on techniques from sparse linear algebra for GP training and inference, and thereby opening the door to GPs with tens of thousands of inducing variables. Our experiments demonstrate significant computational speed-ups and memory savings without sacrificing accuracy.
Figure 5: Real-world data from the eNATL60 ocean model over the Gulfstream at \(1/60^{\circ}\) grid resolution. (a) Ground truth; (b) Predictive mean and (c) predictive standard deviation for AS-VGP at a regular grid with \(1/12^{\circ}\) resolution. The predictive mean of the AS-VGP and the ground truth are nearly identical while the predictive uncertainty essentially vanishes.
## Acknowledgements
HJC is supported by the UCL Department of Computer Science DTP scholarship.
|
2310.01368 | Co-orientable taut foliations in Dehn fillings of pseudo-Anosov mapping
tori with co-orientation-reversing monodromy | Let $\Sigma$ be a compact orientable surface with nonempty boundary, let
$\varphi: \Sigma \to \Sigma$ be an orientation-preserving pseudo-Anosov
homeomorphism, and let $M = \Sigma \times I / \stackrel{\varphi}{\sim}$ be the
mapping torus of $\Sigma$ over $\varphi$. Let $\mathcal{F}^{s}$ denote the
stable foliation of $\varphi$ in $\Sigma$. Let $T_1, \ldots, T_k$ denote the
boundary components of $M$. With respect to a canonical choice of meridian and
longitude on each $T_i$, the degeneracy locus of the suspension flow of
$\varphi$ on $T_i$ can be identified with a pair of integers $(p_i; q_i)$ such
that $p_i > 0$ and $-\frac{1}{2}p_i < q_i \leqslant \frac{1}{2}p_i$. Let $c_i$
denote the number of components of $T_i \cap (\Sigma \times \{0\})$. Assume
that $\mathcal{F}^{s}$ is co-orientable and $\varphi$ reverses the
co-orientation on $\mathcal{F}^{s}$. We show that the Dehn filling of $M$ along
$\partial M$ with any multislope in $J_1 \times \ldots \times J_k$ admits a
co-orientable taut foliation, where $J_i$ is one of the two open intervals in
$\mathbb{R} \cup \{\infty\} \cong \mathbb{R}P^{1}$ between $\frac{p_i}{q_i +
c_i}, \frac{p_i}{q_i - c_i}$ which doesn't contain $\frac{p_i}{q_i}$.
For some hyperbolic fibered knot manifolds, the slopes given above contain
all slopes that yield non-L-space Dehn fillings. The examples include (1) the
exterior of the $(-2,3,2q+1)$-pretzel knot in $S^{3}$ for each $q \in
\mathbb{Z}_{\geqslant 3}$ (see [Kri] for a previous proof), (2)
the exteriors of many L-space knots in lens spaces. | Bojun Zhao | 2023-10-02T17:31:24Z | http://arxiv.org/abs/2310.01368v1 | Co-orientable taut foliations in Dehn fillings of pseudo-Anosov mapping tori with co-orientation-reversing monodromy
###### Abstract.
Let \(\Sigma\) be a compact orientable surface with nonempty boundary, let \(\varphi:\Sigma\to\Sigma\) be an orientation-preserving pseudo-Anosov homeomorphism, and let \(M=\Sigma\times I/\stackrel{\varphi}{\sim}\) be the mapping torus of \(\Sigma\) over \(\varphi\). Let \(\mathcal{F}^{s}\) denote the stable foliation of \(\varphi\) in \(\Sigma\). Let \(T_{1},\ldots,T_{k}\) denote the boundary components of \(M\). With respect to a canonical choice of meridian and longitude on each \(T_{i}\), the degeneracy locus of the suspension flow of \(\varphi\) on \(T_{i}\) can be identified with a pair of integers \((p_{i};q_{i})\) such that \(p_{i}>0\) and \(-\frac{1}{2}p_{i}<q_{i}\leqslant\frac{1}{2}p_{i}\). Let \(c_{i}\) denote the number of components of \(T_{i}\cap(\Sigma\times\{0\})\). Assume that \(\mathcal{F}^{s}\) is co-orientable and \(\varphi\) reverses the co-orientation on \(\mathcal{F}^{s}\). We show that the Dehn filling of \(M\) along \(\partial M\) with any multislope in \(J_{1}\times\ldots\times J_{k}\) admits a co-orientable taut foliation, where \(J_{i}\) is one of the two open intervals in \(\mathbb{R}\cup\{\infty\}\cong\mathbb{R}P^{1}\) between \(\frac{p_{i}}{q_{i}+c_{i}},\frac{p_{i}}{q_{i}-c_{i}}\) which doesn't contain \(\frac{p_{i}}{q_{i}}\).
For some hyperbolic fibered knot manifolds, the slopes given above contain all slopes that yield non-L-space Dehn fillings. The examples include (1) the exterior of the \((-2,3,2q+1)\)-pretzel knot in \(S^{3}\) for each \(q\in\mathbb{Z}_{\geqslant 3}\) (see [Kri] for a previous proof), (2) the exteriors of many L-space knots in lens spaces.
## 1. Introduction
Throughout this paper, all 3-manifolds are connected, orientable and irreducible.
The L-space conjecture ([1], [2]) predicts that the following statements are equivalent for a closed orientable irreducible 3-manifold \(M\):
(1) \(M\) is a non-L-space.
(2) \(\pi_{1}(M)\) is left orderable.
(3) \(M\) admits a co-orientable taut foliation.
The implication (3) \(\Longrightarrow\) (1) is confirmed in [1] (see also [1], [11]). In the case that \(M\) has positive first Betti number, (2) is proved in [1], and (3) is proved in [1]. The L-space conjecture has been verified when \(M\) is a graph manifold ([1], [1], [12]).
There are two useful ways to construct closed orientable 3-manifolds: Dehn surgeries on knots or links in \(S^{3}\), and Dehn fillings of mapping tori of compact orientable surfaces. Both of these two operations can produce all closed orientable 3-manifolds ([1], [1], [2]). One practical approach to the L-space conjecture is to find and identify the (multi)slopes of those Dehn surgeries or fillings that yield manifolds satisfying (1), (2), (3) listed above. A slope on a knot or a knot manifold is called an _NLS_ or _LO_ or _CTF surgery/filling slope_ if it yields a manifold satisfying (1) or (2) or (3), respectively. Also a slope on a knot or a knot manifold is called an _L-space surgery/filling slope_ if it yields an L-space. And we use the similar terminologies _NLS, LO, CTF, L-space surgery/filling multislopes_ for links and link manifolds (see Subection 2.1 for the convention of multislopes).
In this paper we focus on the problem of finding CTF filling (multi)slopes on pseudo-Anosov mapping tori of compact orientable surfaces. Let \(\Sigma\) be a compact orientable surface with nonempty boundary and let \(\varphi:\Sigma\to\Sigma\) be an orientation-preserving pseudo-Anosov homeomorphism. Let \(M=\Sigma\times I/\stackrel{{\sim}}{{\sim}}\) be the mapping torus of \(\Sigma\) over \(\varphi\). We fix an orientation on \(M\).
We choose a canonical meridian/longitude coordinate system on each component \(T\) of \(\partial M\), following [1] (see Convention 2.2 for details). If \(M\) is the exterior of a fibered knot \(K\) in \(S^{3}\), this
coordinate system coincides with the standard meridian/longitude coordinate system except a special case (see Remark 2.4 for details). With respect to the canonical coordinate system, a slope on each component \(T\) of \(\partial M\) will be identified with an element of \(\mathbb{Q}\cup\{\infty\}\) as usual.
We introduce some notations related to \(M\) and \(\varphi\), as explained in [GO], [Ga3], [HKM], [KazR1]:
**Notation 1.1**.: (a) Let \(\Psi\) denote the suspension flow of \(\varphi\) in \(M\). Then all closed orbits of \(\Psi\) contained in the same boundary component of \(M\) are parallel essential simple closed curves. Let \(T\) be a component of \(\partial M\). The _degeneracy slope_ of \(T\) (denoted \(\delta_{T}\)) is the slope of the closed orbits of \(\Psi\) contained in \(T\). Note that there are \(2n\) closed orbits of \(\Psi\) on \(T\) for some \(n\in\mathbb{N}_{+}\), and \(n\) of them contain \((t,0)\) for some \(t\in\partial\Sigma\) which is a singular point of the stable foliation of \(\varphi\). The _degeneracy locus_ of \(\Psi\) on \(T\) ([Ga3], see also [Ro2]) is the union of these \(n\) closed orbits, and we denote it by \(d(T)=(nu;nv)\) for \(u,v\in\mathbb{Z}\) such that \(\frac{u}{v}=\delta_{T}\), \(u>0\), and \(\gcd(u,v)=1\) (where \(u=1,v=0\) if \(\delta_{T}=\infty\)). And we call \(n\) the _multiplicity_ of \(d(T)\).
(b) If \(\varphi(C)=C\) for some component \(C\) of \(\partial\Sigma\) and \(T\) is the component \(C\times I\) of \(\partial M\), the _fractional Dehn twist coefficient_ of \(C\) (denoted \(f_{C}(\varphi)\)) is defined to be \(\frac{1}{\delta_{T}}\) (where \(f_{C}(\varphi)=0\) if \(\delta_{T}=\infty\)). In the case that \(\varphi\) takes each component of \(\partial\Sigma\) to itself, \(\varphi\) is called _right-veering_ (resp. _left-veering_) if \(f_{C}(\varphi)>0\) (resp. \(f_{C}(\varphi)<0\)) for each component \(C\) of \(\partial\Sigma\).
We note that under the canonical coordinate system on \(\partial M\), for each component \(T\) of \(\partial M\), \(\delta_{T}\in(\mathbb{Q}\cup\{\infty\})-[-2,2)\). Moreover, if \(d(T)=(p;q)\), then \(-\frac{1}{2}p<q\leqslant\frac{1}{2}p\), \(\frac{p}{q}=\delta_{T}\), and \(\gcd(p,q)\) is the multiplicity of \(d(T)\).
And we adopt the following conventions for Dehn fillings of \(M\) and slopes on \(\partial M\):
**Convention 1.2**.: (a) Let \(k\) denote the number of boundary components of \(M\). For any multislope \(\mathbf{s}\in(\mathbb{Q}\cup\{\infty\})^{k}\) of \(\partial M\), \(M(\mathbf{s})\) denotes the Dehn filling of \(M\) along \(\partial M\) with the multislope \(\mathbf{s}\).
(b) Throughout this paper, for a slope \(\frac{p}{q}\) on a component \(T\) of \(\partial M\), we allow \(\gcd(p,q)>1\). In this case, we consider \(\frac{p}{q}\) as the corresponding fraction in reduced form, i.e. \(\frac{p}{q}\) is identified with \(\frac{u}{v}\) for which \(u=\frac{p}{\gcd(p,q)}\), \(v=\frac{q}{\gcd(p,q)}\).
Now we review some known results on CTF filling slopes of \(M\). For convenience, we state the following two theorems in our setting, although they do not require \(\varphi\) to be pseudo-Anosov.
Roberts ([Ro1], [Ro2]) proves the following theorem:
**Theorem 1.3** (Roberts).: _Suppose that \(\Sigma\) has exactly one boundary component._
_(a) If \(\varphi\) is right-veering, then \(M(s)\) admits a co-orientable taut foliation for any rational slope \(s\in(-\infty,1)\)._
_(b) If \(\varphi\) is left-veering, then \(M(s)\) admits a co-orientable taut foliation for any rational slope \(s\in(-1,+\infty)\)._
_(c) If \(\varphi\) is neither right-veering nor left-veering, then \(M(s)\) admits a co-orientable taut foliation for any rational slope \(s\in(-\infty,+\infty)\)._
_Moreover, the core curve of the filling solid torus is transverse to the foliation in each case._
For the case that \(\Sigma\) has multiple boundary components, Kalelkar and Roberts ([KalR]) prove
**Theorem 1.4** (Kalelkar-Roberts).: _Let \(k\) denote the number of boundary components of \(M\). There is a neighborhood \(J\subseteq\mathbb{R}^{k}\) of \((0,\ldots,0)\) such that for any rational multislope \((s_{1},\ldots,s_{k})\in J\), \(M(s_{1},\ldots,s_{k})\) admits a co-orientable taut foliation. Moreover, the core curves of the filling solid tori are transverse to the foliation._
Let \(\mathcal{F}^{s},\mathcal{F}^{u}\) denote the stable and unstable foliations of \(\varphi\).
**Definition 1.5**.: We call \(\varphi\)_co-orientable_ if \(\mathcal{F}^{s}\) is co-orientable, and we call \(\varphi\)_co-orientation-preserving_ (resp. _co-orientation-reversing_) if \(\varphi\) is co-orientable and preserves (resp. reverses) the co-orientation on \(\mathcal{F}^{s}\).
**Remark 1.6**.: In Definition 1.5, \(\mathcal{F}^{s}\) can be replaced by \(\mathcal{F}^{u}\). If \(\mathcal{F}^{s}\) is co-oriented, then the co-orientation on \(\mathcal{F}^{s}\) defines continuously varying orientations on the leaves of \(\mathcal{F}^{u}\), which implies that \(\mathcal{F}^{u}\) is orientable and thus is co-orientable. It follows that \(\mathcal{F}^{u}\) is co-orientable if and only if \(\mathcal{F}^{s}\) is co-orientable, and \(\varphi\) preserves (resp. reverses) the co-orientation on \(\mathcal{F}^{u}\) if and only if \(\varphi\) is co-orientation-preserving (resp. co-orientation-reversing).
We refer to [T], [M] for some information on the co-orientability of \(\varphi\), and to [BB, Lemma 4.3], [DGT] for criteria for \(\varphi\) being co-orientation-preserving/reversing. Co-orientability of \(\varphi\) implies that each singularity of \(\mathcal{F}^{s}\) in \(Int(\Sigma)\) has an even number of prongs and that each component of \(\partial\Sigma\) contains an even number of singularities of \(\mathcal{F}^{s}\). Note that if \(\varphi\) is non-co-orientable, then \(\Sigma\) has a double cover branched over \(\{\)singularities of \(\mathcal{F}^{s}\) in \(Int(\Sigma)\) with an odd number of prongs\(\}\), denoted \(\widetilde{\Sigma}\), such that the pull-back of \(\mathcal{F}^{s}\) to \(\widetilde{\Sigma}\) is co-orientable, and \(\varphi\) lifts to two co-orientable pseudo-Anosov homeomorphisms \(\varphi_{1},\varphi_{2}\) on \(\widetilde{\Sigma}\) which are co-orientation-preserving and co-orientation-reversing, respectively; see for example [LT, page 14].
For general pseudo-Anosov mapping tori of compact surfaces with more than one boundary component, the co-orientation-preserving case is the only one known to have an explicit multi-interval in which all rational multislopes are CTF filling multislopes. The following theorem is implicitly contained in [Ga2]:
**Theorem 1.7** (Gabai).: _Assume that \(\varphi\) is co-orientable and co-orientation-preserving. Let \(k\) denote the number of boundary components of \(M\). For any multislope \((s_{1},\ldots,s_{k})\in(\mathbb{Q}\cup\{\infty\})^{k}\) such that each \(s_{i}\) is not the degeneracy slope of the corresponding boundary component, \(M(s_{1},\ldots,s_{k})\) admits a co-orientable taut foliation. Moreover, the core curves of the filling solid tori are transverse to the foliation._
Unlike the co-orientation-preserving case, many \(M(s_{1},\ldots,s_{k})\) (with each \(s_{i}\) different from the degeneracy slope) do not admit a co-orientable taut foliation when \(\varphi\) is co-orientation-reversing, since there are many hyperbolic fibered L-space knots in \(S^{3}\) and lens spaces that have co-orientation-reversing monodromy (see Proposition 1.13, Examples 1.16\(\sim\)1.20). We concentrate on the co-orientation-reversing case in this paper.
**Remark 1.8**.: Foliations in the above theorems are very useful in showing that many of these Dehn fillings have left orderable fundamental group. See [BH] and [H] for results on LO filling slopes derived from foliations in Theorem 1.3. And see [Z] for a result on LO filling multislopes derived from foliations in Theorem 1.7.
### The main results
Let \(T_{1},\ldots,T_{k}\) denote the boundary components of \(M\). For each \(i\in\{1,\ldots,k\}\), we choose a boundary component \(C_{i}\) of \(\Sigma\) such that \(C_{i}\times\{0\}\subseteq T_{i}\), and let \(c_{i}\) denote the order of \(C_{i}\) under \(\varphi\) (i.e. \(c_{i}=\min\{k\in\mathbb{N}_{+}\mid\varphi^{k}(C_{i})=C_{i}\}\)). Let \((p_{i};q_{i})\) denote the degeneracy locus of the suspension flow of \(\varphi\) on each \(T_{i}\). We note that if \(\varphi\) is co-orientable, then \(2\mid p_{i}\), and moreover, if \(\varphi\) is co-orientation-reversing, then \(q_{i}\equiv c_{i}\ (\mathrm{mod}\ 2)\) (this is because \(2\mid q_{i}\) if and only if \(\varphi^{c_{i}}\) is co-orientation-preserving, see Remark 1.11 (a)).
**Theorem 1.9**.: _Assume that \(\varphi\) is co-orientable and co-orientation-reversing (then \(2\mid p_{i}\) and \(q_{i}\equiv c_{i}\ (\mathrm{mod}\ 2)\) for each \(i\)). Then \(M(s_{1},\ldots,s_{k})\) admits a co-orientable taut foliation for any multislope \((s_{1},\ldots,s_{k})\in(J_{1}\times\ldots\times J_{k})\cap(\mathbb{Q}\cup\{ \infty\})^{k}\), where each \(J_{i}\) is a set of slopes on \(T_{i}\) such that_
\[J_{i}=\begin{cases}(-\infty,\frac{p_{i}}{q_{i}+c_{i}})\cup(\frac{p_{i}}{q_{i}-c_{i}},+\infty)\cup\{\infty\}&\text{if $q_{i}>c_{i}>0$}\\ (-\infty,\frac{p_{i}}{2q_{i}})&\text{if $q_{i}=c_{i}>0$}\\ (-\frac{p_{i}}{c_{i}-q_{i}},\frac{p_{i}}{q_{i}+c_{i}})&\text{if $c_{i}>q_{i}\geqslant 0$}\\ (-\frac{p_{i}}{|q_{i}|+c_{i}},\frac{p_{i}}{c_{i}-|q_{i}|})&\text{if $-c_{i}<q_{i}<0$}\\ (-\frac{p_{i}}{2|q_{i}|},+\infty)&\text{if $q_{i}=-c_{i}<0$}\\ (-\infty,-\frac{p_{i}}{|q_{i}|-c_{i}})\cup(-\frac{p_{i}}{|q_{i}|+c_{i}},+\infty)\cup\{\infty\}&\text{if $q_{i}<-c_{i}<0$}.\end{cases}\]
_Moreover, the core curves of the filling solid tori are transverse to the leaves of these foliations._
In the case that \(\varphi\) preserves each component of \(\partial\Sigma\), \(c_{i}=1\) for each \(1\leqslant i\leqslant k\).
**Corollary 1.10**.: _Assume that \(\varphi\) is co-orientable, co-orientation-reversing and \(\varphi(C)=C\) for each component \(C\) of \(\partial\Sigma\). Then \(M(s_{1},\ldots,s_{k})\) admits a co-orientable taut foliation for any multislope \((s_{1},\ldots,s_{k})\in(J_{1}\times\ldots\times J_{k})\cap(\mathbb{Q}\cup\{ \infty\})^{k}\), where each \(J_{i}\) is a set of slopes on \(T_{i}\) such that_
\[J_{i}=\begin{cases}(-\infty,\frac{p_{i}}{q_{i}+1})\cup(\frac{p_{i}}{q_{i}-1}, +\infty)\cup\{\infty\}&\text{if }q_{i}>1\\ (-\infty,\frac{p_{i}}{2})&\text{if }q_{i}=1\\ (-\frac{p_{i}}{2},+\infty)&\text{if }q_{i}=-1\\ (-\infty,-\frac{p_{i}}{|q_{i}|-1})\cup(-\frac{p_{i}}{|q_{i}|+1},+\infty)\cup \{\infty\}&\text{if }q_{i}<-1.\end{cases}\]
**Remark 1.11**.: (a) Note that \(p_{i}\) is the number of singular points of \(\mathcal{F}^{s}\) contained in \(C_{i}\). We assign each \(C_{i}\) an orientation consistent with the orientation on the longitude of \(T_{i}\), and let \(v_{1},\ldots,v_{p_{i}}\) denote the \(p_{i}\) singular points of \(\mathcal{F}^{s}\) in \(C_{i}\) (consecutive along the orientation on \(C_{i}\)). Then \(\varphi^{c_{i}}(v_{j})=v_{j+q_{i}}\) (mod \(p_{i}\)) for each \(1\leqslant j\leqslant p_{i}\).
(b) Assume that \(\varphi\) is co-orientable, co-orientation-reversing and \(\partial\Sigma\) is connected. Then the degeneracy slope of \(\partial M\) is not \(\infty\), and so \(\partial\Sigma\) has nonzero fractional Dehn twist coefficient. Thus, one of Theorem 1.3 (a), (b) applies to \(M\) and Theorem 1.3 (c) doesn't apply. In this case, Theorem 1.9 expands the known range of CTF filling slopes when \((p_{1};q_{1})\neq(2;1)\).
(c) In Theorem 1.9, if we regard the set of slopes \(\mathbb{R}\cup\{\infty\}\) on each \(T_{i}\) as \(\mathbb{R}P^{1}\cong S^{1}\), then \(J_{i}\) is one of the two open intervals in \(\mathbb{R}\cup\{\infty\}\) between \(\frac{p_{i}}{q_{i}+c_{i}},\frac{p_{i}}{q_{i}-c_{i}}\) that doesn't contain the degeneracy slope \(\frac{p_{i}}{q_{i}}\) on \(T_{i}\). For convenience, we will call \(J_{i}\) an interval and call \(J_{1}\times\ldots\times J_{k}\) a multi-interval.
### The application to Dehn surgeries on knots
In this subsection, we apply Corollary 1.10 to Dehn surgeries on knots in \(S^{3}\) or spherical manifolds. First, we describe some properties of L-space filling slopes on knot manifolds.
A knot manifold is _Floer simple_ if it has at least two distinct L-space filling slopes (compare with [RR, Proposition 1.3]), and a knot in a closed 3-manifold is called an _L-space knot_ if its exterior is Floer simple. We note from [RR] that for a Floer simple knot manifold \(N\), there is an interval \(\mathcal{L}(N)\) of slopes such that
\[\{\text{L-space filling slopes of }N\}=\mathcal{L}(N)\cap(\mathbb{Q}\cup\{ \infty\}),\]
and in particular, \(\mathcal{L}(N)\) either consists of all slopes except the homological longitude, or is a closed interval ([RR, Theorem 1.6]). We call \(\mathcal{L}(N)\) the _maximal L-space filling interval_ of \(N\). If \(\mathcal{L}(N)\) is a closed interval, then we call its complement the _maximal NLS filling interval_ of \(N\).
A knot in a closed 3-manifold is called a _fibered knot_ if its exterior is a once-punctured surface bundle over \(S^{1}\). Note that all L-space knots in \(S^{3}\) are fibered ([Gh], [Ni]). Let \(K\) be a hyperbolic fibered L-space knot in a closed 3-manifold \(W\) which is either \(S^{3}\) or a spherical manifold, and we fix an orientation on \(W\). Let \(N\) denote the exterior of \(K\) in \(W\), let \(T=\partial N\), and let \(\phi\) denote the monodromy of \(N\). We assign \(N\) an orientation induced from \(W\). Combining Theorem 1.3 with [OS], \(\phi\) is either left-veering or right-veering. Fix the canonical coordinate system on \(\partial N\). Let \(\delta_{T},d(T)\) denote the degeneracy slope on \(T\) and the degeneracy locus of the suspension flow of \(\phi\) on \(T\), respectively, and we denote \(d(T)\) by \((p;q)\) (then \(\delta_{T}=\frac{p}{q}\)). Let \(m_{K}\) denote the slope on \(\partial N\) with \(N(m_{K})=W\). Then
\[\Delta(m_{K},d(T))=\gcd(p,q)\Delta(m_{K},\delta_{T})<2\]
since \(W\) admits no essential lamination ([GO]), where \(\Delta(m_{K},d(T)),\Delta(m_{K},\delta_{T})\) denote the minimal geometric intersection numbers of \(m_{K},d(T)\) and of \(m_{K},\delta_{T}\). Thus \(m_{K}\) belongs to one of Cases 1\(\sim\)3:
**Case 1.**\(m_{K}=\delta_{T}\).
**Case 2.**\(\Delta(m_{K},\lambda_{T})=1\). In this case, either (1) \(m_{K}=\infty\) or (2) \(m_{K}=1\) and \(\delta_{T}=2\). In the case (2), we can choose the opposite orientation on \(W\) to make \(m_{K}\) become \(\infty\) under the coordinate system induced from this orientation (see Conventions 2.2, 2.3). In the case of \(m_{K}=\infty\), \(|q|=1\) and \(2\leqslant p\leqslant 4g-2\) ([Ga3]). Thus \(d(T)\) has multiplicity \(1\).
**Case 3.**\(m_{K}\neq\delta_{T}\) and \(\Delta(m_{K},\lambda_{T})>1\). Equivalently, \(m_{K}\notin\{\delta_{T},\infty,1\}\). In this case, \(\gcd(p,q)=\Delta(m_{K},\delta_{T})=1\), and thus \(d(T)\) has multiplicity \(1\).
We call \(K\) a _type-I_ (resp. _type-II_, _type-III_) knot in \(W\) if Case 1 (resp. Case 2, Case 3) holds. As explained in Case 2, we may always assume \(m_{K}=\infty\) when \(K\) is a type-II knot in \(W\). Let \(g\) denote the fibered genus of \(N\). When \(W=S^{3}\), \(K\) can only be a type-II knot, and the maximal NLS filling interval of \(N\) is \((-2g+1,+\infty)\) if \(\phi\) is left-veering and is \((-\infty,2g-1)\) if \(\phi\) is right-veering ([KMOS]). However, when \(W\) is a lens space and \(K\) is a type-II knot in \(W\), the maximal NLS filling interval of \(N\) need not be \((-2g+1,+\infty)\) or \((-\infty,2g-1)\) (for instance, Examples 1.16, 1.18). Also, \(K\) may be a type-I knot or a type-III knot when \(W\) is a lens space (Examples 1.16, 1.17).
**Remark 1.12**.: We note that if \(\phi\) is left-veering (resp. right-veering), then we can choose an opposite orientation on \(W\) to get a right-veering (resp. left-veering) monodromy of \(N\). In the case of \(\delta_{T}\neq 2\), the meridian on \(T\) doesn't change, and any other slope \(s\in\mathbb{Q}\) becomes \(-s\) under the canonical coordinate system induced from the opposite orientation on \(N\).
We give some examples in the remainder of this subsection. For \(q\in\mathbb{N}\) with \(q\geqslant 3\), the \((-2,3,2q+1)\)-pretzel knot \(K\) in \(S^{3}\) is a hyperbolic L-space knot with Seifert genus \(g(K)=q+2\) ([LM], [O1]). We have
**Proposition 1.13**.: _Let \(K\) denote the \((-2,3,2q+1)\)-pretzel knot in \(S^{3}\) with \(q\geqslant 3\). We fix an orientation on \(S^{3}\) so that \(K\) has right-veering monodromy._
_(a) \(K\) has co-orientable and co-orientation-reversing monodromy._
_(b) The degeneracy slope on \(K\) is \(4g(K)-2\)._
_(c) All rational slopes in \((-\infty,2g(K)-1)\) are CTF surgery slopes of \(K\)._
_(d) For each \(n\in\mathbb{N}\) with \(n\geqslant 2\), the \(n\)-fold cyclic branched cover of \(S^{3}\) over \(K\), denoted \(\Sigma_{n}(K)\), admits a co-orientable taut foliation._
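We record the arithmetic relating (b) and (c) (our own consistency check; the detailed proof of Proposition 1.13 is given in Section 4). The fiber of \(K\) is a Seifert surface, so \(c=1\); since \(K\subseteq S^{3}\), the degeneracy locus on the boundary of the exterior has multiplicity \(1\), so (b) gives \((p;q)=(4g(K)-2;1)\), and the case \(q=c=1\) of Corollary 1.10 yields
\[J=\Big(-\infty,\ \frac{4g(K)-2}{2}\Big)=(-\infty,\ 2g(K)-1),\]
which is exactly the range of CTF surgery slopes stated in (c).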
**Remark 1.14**.: (1) Proposition 1.13 (c) has been proved in [Kri].
(2) In Proposition 1.13 (d), the foliations are obtained from Theorem 1.7 when \(n\) is even and from Corollary 1.10 when \(n\) is odd. When \(n\geqslant 4g(K)-2=4(q+2)-2=4q+6\), it can be deduced from Theorem 1.3 that \(\Sigma_{n}(K)\) admits a co-orientable taut foliation (see [BH] for an explanation). It follows from (d) and [OS] that \(\Sigma_{n}(K)\) is a non-L-space. In fact, it's already known from [BBG, Theorem 1.2] that \(\Sigma_{n}(K)\) is a non-L-space (see [Nie, Subsection 3.2] for an explanation that [BBG, Theorem 1.2] can be applied to \(\Sigma_{n}(K)\)). By [BGH, Theorem 1.2], \(\Sigma_{n}(K)\) also has left orderable fundamental group.
It follows that
**Corollary 1.15**.: _For every \((-2,3,2q+1)\)-pretzel knot \(K\) in \(S^{3}\) with \(q\geqslant 3\), and for each \(n\geqslant 2\), the \(n\)-fold cyclic branched cover of \(K\) satisfies each of (1), (2), (3) of the L-space conjecture._
In the census [D3], there are \(3242\) \(1\)-cusped hyperbolic fibered \(3\)-manifolds whose monodromies satisfy that all singularities of the stable foliations have an even number of prongs. \(805\) of them have co-orientation-preserving monodromy, \(2214\) of them have co-orientation-reversing monodromy, and the remaining \(223\) do not have co-orientable monodromy. Let \(\mathcal{N}\) denote the set of these \(2214\) manifolds with co-orientation-reversing monodromy. For each \(N\in\mathcal{N}\), we can obtain a CTF filling interval from our main results. In [D1], the manifolds in \(\mathcal{N}\) are tested for being Floer simple, and some Dehn fillings of them are tested for being L-spaces. We can find some L-space filling slopes and NLS filling slopes of the manifolds in \(\mathcal{N}\) from [D2] (the data associated with [D1]).
We list some examples in \(\mathcal{N}\) below with explicit CTF filling intervals obtained from Corollary 1.10, as well as other data, such as fibered genera, degeneracy slopes, and maximal NLS filling intervals. We first explain how this data is obtained. For each \(1\)-cusped manifold \(N\) in Examples 1.16\(\sim\)1.20,
\(\bullet\) The fibered genus and the degeneracy slope of \(N\) are known from [D3], and \(N\) is the complement of a type-II or type-III hyperbolic fibered L-space knot in some spherical manifold (\(N\) may also be the complement of a type-I knot in another spherical manifold). From the degeneracy slope of \(N\), we can obtain a CTF filling interval directly from Corollary 1.10.
\(\bullet\) If \(N\) is contained in Examples 1.16\(\sim\)1.19, then the two endpoints of the obtained CTF filling interval of \(N\) are verified to be L-space filling slopes in [D2]. Combined with [OS] and [RR], the obtained CTF filling interval of \(N\) is exactly the maximal NLS filling interval of \(N\).
\(\bullet\) Spherical Dehn fillings of \(N\) are known from comparing [D2] with snapPy [CDGW].
Let \(L(p,q)\) denote the lens space obtained from Dehn surgery along the unknot in \(S^{3}\) with the slope \(-\frac{p}{q}\). If we write \(L(p,q)\) with \(p\) specified but \(q\) unspecified, it means the statement holds for the given \(p\) and some \(q\); for example, \(L(5,q)\) denotes an unspecified element of \(\{L(5,1),L(5,2)\}\). Also note that all slopes in the following examples are consistent with Convention 2.2 (rather than the slope conventions of SnapPy). Similar to Convention 1.2, for any \(1\)-cusped \(3\)-manifold \(N\), we denote by \(N(s)\) the Dehn filling of \(N\) with slope \(s\) for any \(s\in\mathbb{Q}\cup\{\infty\}\).
**Example 1.16**.: We list some examples in \(\mathcal{N}\), each of which has three distinct lens space Dehn fillings, and it can be identified with the complement of a type-I\(\sim\)III knot in these three lens spaces respectively. The obtained CTF filling intervals of these examples are equal to their maximal NLS filling intervals.
\begin{tabular}{|l|l|l|l|l|} \hline manifold & genus & degeneracy slope & obtained CTF filling interval & maximal NLS filling interval \\ \hline \(m122\) & \(g=2\) & \(\delta=4\) & \((-\infty,2)\) & \((-\infty,2)\) \\ \cline{2-4} & \(m122(4)=L(28,q_{1})\), \(m122(5)=L(35,q_{2})\), \(m122(\infty)=L(7,q_{3})\). \\ \hline \(m280\) & \(g=2\) & \(\delta=-4\) & \((-2,+\infty)\) & \((-2,+\infty)\) \\ \cline{2-4} & \(m280(-4)=L(44,q_{2})\), \(m280(-3)=L(33,q_{1})\), \(m280(\infty)=L(11,q_{3})\). \\ \hline \(v0751\) & \(g=3\) & \(\delta=-6\) & \((-3,+\infty)\) & \((-3,+\infty)\) \\ \cline{2-4} & \(v0751(-6)=L(78,q_{1})\), \(v0751(-5)=L(65,q_{2})\), \(v0751(\infty)=L(13,q_{3})\). \\ \hline \end{tabular}
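As a sample computation (our own check): the CTF interval for \(m122\) in the table comes from Corollary 1.10 (so \(c=1\)); since \(m122(\infty)\) is a lens space, \(m122\) is the exterior of a type-II knot and its degeneracy locus has multiplicity \(1\), so the degeneracy slope \(4\) gives \((p;q)=(4;1)\). The case \(q=c=1\) of Corollary 1.10 then yields
\[J=\Big(-\infty,\ \tfrac{4}{2}\Big)=(-\infty,2),\]
in agreement with the table; the other intervals listed in these examples are obtained in the same way.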
**Example 1.17**.: We give some more examples in \(\mathcal{N}\) which are complements of type-III knots in lens spaces, and the obtained CTF filling intervals of them are equal to their maximal NLS filling intervals.
\begin{tabular}{|l|l|l|l|l|} \hline manifold & genus & degeneracy slope & obtained CTF filling interval & maximal NLS filling interval \\ \hline \(s297\) & \(g=2\) & \(\delta=-6\) & \((-3,+\infty)\) & \((-3,+\infty)\) \\ \cline{2-4} & \(s297(-5)=L(30,q)\). \\ \hline \(s408\) & \(g=3\) & \(\delta=-8\) & \((-4,+\infty)\) & \((-4,+\infty)\) \\ \cline{2-4} & \(s408(-7)=L(42,q)\). \\ \hline \(o9_{26541}\) & \(g=3\) & \(\delta=-\frac{8}{3}\) & \((-\infty,-4)\cup(2,+\infty)\cup\{\infty\}\) & \((-\infty,-4)\cup(2,+\infty)\cup\{\infty\}\) \\ \cline{2-4} & \(o9_{26541}(-3)=L(87,q)\). \\ \hline \end{tabular}
**Example 1.18**.: We list some examples in \(\mathcal{N}\), each of which is the complement of a type-II knot in some \(L(p,q)\) with relatively small \(p\), and its obtained CTF filling interval is exactly its maximal NLS filling interval.
\begin{tabular}{|l|l|l|l|l|l|} \hline manifold & genus & degeneracy & obtained CTF filling & maximal NLS filling & lens space filling \\ & & slope & interval & interval & \\ \hline \(m146\) & \(g=3\) & \(\delta=-10\) & \((-5,+\infty)\) & \((-5,+\infty)\) & \(m146(\infty)=\mathbb{R}P^{3}\) \\ \hline \(v2585\) & \(g=4\) & \(\delta=14\) & \((-\infty,7)\) & \((-\infty,7)\) & \(v2585(\infty)=\mathbb{R}P^{3}\) \\ \hline \(m036\) & \(g=2\) & \(\delta=-6\) & \((-3,+\infty)\) & \((-3,+\infty)\) & \(m036(\infty)=L(3,1)\) \\ \hline \(s313\) & \(g=3\) & \(\delta=-10\) & \((-5,+\infty)\) & \((-5,+\infty)\) & \(s313(\infty)=L(3,1)\) \\ \hline \(v3327\) & \(g=3\) & \(\delta=10\) & \((-\infty,5)\) & \((-\infty,5)\) & \(v3327(\infty)=L(3,1)\) \\ \hline \end{tabular}
**Example 1.19**.: Here are some examples in \(\mathcal{N}\) which are complements of type-II knots in some spherical manifolds other than lens spaces, and their obtained CTF filling intervals are equal to their maximal NLS filling intervals.
\begin{tabular}{|l|l|l|l|l|} \hline manifold & genus & degeneracy slope & obtained CTF filling interval & maximal NLS filling interval \\ \hline \(t08752\) & \(g=2\) & \(\delta=-6\) & \((-3,+\infty)\) & \((-3,+\infty)\) \\ \hline & \(t08752(\infty)=W\) for some prism manifold \(W\) with \(|H_{1}(W)|=8\). \\ \hline \(o9_{23699}\) & \(g=2\) & \(\delta=-6\) & \((-3,+\infty)\) & \((-3,+\infty)\) \\ & \(o9_{23699}(\infty)=W\) for some tetrahedral manifold \(W\) with \(|H_{1}(W)|=9\). \\ \hline \end{tabular}
**Example 1.20**.: The CTF filling interval obtained from Corollary 1.10 may not be the maximal NLS filling interval for every Floer simple \(1\)-cusped manifold satisfying the assumptions of Corollary 1.10. For instance, \(o9_{19364}\) is an example in \(\mathcal{N}\) which is the complement of an L-space knot in \(S^{3}\) with Seifert genus \(14\) and degeneracy slope \(48\). Corollary 1.10 gives a CTF filling interval \((-\infty,24)\), but the maximal NLS filling interval of \(o9_{19364}\) is \((-\infty,27)\).
### Organization
In Section 2, we set up some conventions and review some background material on branched surfaces. We prove Theorem 1.9 in Section 3. In Subsection 3.1, we construct a branched surface \(B(\alpha)\) in \(M\). We prove that \(B(\alpha)\) is a laminar branched surface in Subsection 3.2. In Subsection 3.3, we describe the boundary train tracks of \(B(\alpha)\) in \(\partial M\) and choose some simple closed curves carried by it. In Subsection 3.4, we prove that the boundary train tracks of \(B(\alpha)\) realize all rational multislopes contained in the multi-interval given in Theorem 1.9. And we complete the proof of Theorem 1.9 in Subsection 3.5. In Section 4, we prove Proposition 1.13.
### Acknowledgements
The author wishes to thank Xingru Zhang for his guidance, patience, encouragement, and for many helpful discussions and comments on this work. He is grateful to Nathan Dunfield for providing him the census of examples [D3] and answering him several questions
about the examples. He thanks Cameron Gordon and Rachel Roberts for telling him the fractional Dehn twist coefficients of \((-2,3,2q+1)\) pretzel knots and some relevant works. He thanks Chi Cheuk Tsang for telling him that the monodromies of \((-2,3,2q+1)\)-pretzel knots have co-orientable stable foliations, and for some helpful comments. He thanks Cagatay Kutluhan, Johanna Mangahas, William Menasco, Tao Li and Diego Santoro for some helpful conversations. He thanks Tech Topology Summer School 2023 and its organizers for providing him a great chance to communicate and discuss, and for their travel support.
## 2. Preliminaries
### Conventions
For a set \(X\), let \(|X|\) denote the cardinality of \(X\). For two metric spaces \(A\) and \(B\), let \(A\setminus\setminus B\) denote the closure of \(A-B\) under the path metric.
For a link manifold \(N\) such that \(\partial N\) is a union of tori \(\bigcup_{i=1}^{n}S_{i}\), a _multislope_ on \(\partial N\) is an \(n\)-tuple of slopes on \(S_{1},\dots,S_{n}\) respectively. For a link in some closed \(3\)-manifold, the multislopes on it are defined as the multislopes on its exterior.
Now we illustrate our conventions of slopes and orientations for pseudo-Anosov mapping tori of compact orientable surfaces. Let \(\Sigma\) be a compact orientable surface with nonempty boundary and let \(\varphi:\Sigma\to\Sigma\) be an orientation-preserving pseudo-Anosov homeomorphism. Let \(M=\Sigma\times I/\stackrel{{\varphi}}{{\sim}}\) be the mapping torus of \(\Sigma\) over \(\varphi\), and we fix an orientation on \(M\).
**Notation 2.1**.: (a) For two slopes \(\alpha,\beta\) in the same boundary component of \(M\), let \(\Delta(\alpha,\beta)\) denote the minimal geometric intersection number of \(\alpha,\beta\).
(b) For two closed oriented curves \(\gamma,\eta\) in the same boundary component of \(M\), let \(\langle\gamma,\eta\rangle\) denote the algebraic intersection number of \(\gamma,\eta\) (see Convention 2.3 (d) for more details on this setting).
We adopt the following conventions for slopes on the boundary components of \(M\), as described in [10]:
**Convention 2.2** (Slope conventions).: Let \(T\) be a boundary component of \(M\). Let \(\delta_{T}\) denote the degeneracy slope of \(T\).
(a) We call the slope of \(T\cap(\Sigma\times\{0\})\) on \(T\) the _longitude_ of \(T\) and denote it by \(\lambda_{T}\). We choose a slope \(\mu_{T}\) on \(T\) such that \(\Delta(\lambda_{T},\mu_{T})=1\) and
\[\Delta(\mu_{T},\delta_{T})\leqslant\Delta(s,\delta_{T})\text{ for any slope $s$ of $T$ with $\Delta(\lambda_{T},s)=1$},\]
and we call \(\mu_{T}\) the _meridian_ of \(T\). We fix a canonical orientation on \(\lambda_{T}\) and on \(\mu_{T}\), see Convention 2.3 (c) for details. Note that \(\mu_{T}\) has a unique choice if \(\Delta(\lambda_{T},\delta_{T})\neq 2\). See (c) for the choice of \(\mu_{T}\) in the case of \(\Delta(\lambda_{T},\delta_{T})=2\).
(b) For an essential simple closed curve \(\gamma\) on \(T\), we identify the slope of \(\gamma\) with the number
\[\frac{\langle\gamma,\lambda_{T}\rangle}{\langle\mu_{T},\gamma\rangle}\in \mathbb{Q}\cup\{\infty\}.\]
(c) If \(\Delta(\lambda_{T},\delta_{T})=2\), then there are two choices of \(\mu_{T}\) satisfying (a). \(\delta_{T}\) is equal to \(-2\), \(2\) in these two choices respectively. We choose \(\mu_{T}\) so that \(\delta_{T}=2\).
Next, we assign orientations to \(\Sigma\times\{0\}\) and the meridians and longitudes on \(\partial M\).
**Convention 2.3** (Orientation conventions).: (a) Note that each orientation on \(\Sigma\times\{0\}\) determines a normal vector field with respect to the orientation on \(M\). We assign \(\Sigma\times\{0\}\) an orientation so that the induced normal vector field is consistent with the increasing orientation on the second coordinates in \(\Sigma\times I\).
(b) The orientation on \(\Sigma\times\{0\}\) induces an orientation on \(\Sigma\). For any oriented curve in \(\Sigma\), it has well-defined left and right sides with respect to the orientation on \(\Sigma\). Throughout this paper, the left and right side of an oriented curve on \(\Sigma\) will always be with respect to the orientation on \(\Sigma\). Each boundary component of \(\Sigma\) has an orientation induced from the orientation on \(\Sigma\) such that,
for any properly embedded arc \(\gamma:I\to\Sigma\), choose a normal vector field \(\{v(x)\mid x\in\gamma(I)\}\) pointing to the right side of \(\gamma\), then \(v(\gamma(0))\) is consistent with the positive orientation on \(\partial\Sigma\) and \(v(\gamma(1))\) is consistent with the negative orientation on \(\partial\Sigma\). For each component \(C\) of \(\partial\Sigma\), we also assign \(C\times\{0\}\subseteq\partial M\) an orientation consistent with the orientation on \(C\).
(c) Let \(T\) be a component of \(\partial M\). We assign the longitude of \(T\) an orientation consistent with the positive orientation on each component of \(T\cap(\partial\Sigma\times\{0\})\). We choose a curve \(\gamma\) on \(T\) such that \(\gamma\) represents the meridian on \(T\) and \(\gamma\) is transverse to the fibered surfaces \(\{\Sigma\times\{t\}\mid t\in I\}\), and we assign \(\gamma\) an orientation consistent with the increasing orientation on the second coordinates. We assign an orientation to the meridian on \(T\) consistent with the orientation on \(\gamma\).
(d) For a boundary component \(T\) of \(M\), we set \(\langle\mu_{T},\lambda_{T}\rangle=-\langle\lambda_{T},\mu_{T}\rangle=1\), where \(\mu_{T},\lambda_{T}\) denotes the meridian and longitude on \(T\) respectively.
Finally, we describe the relation between our canonical coordinate system on \(\partial M\) and the standard meridian/longitude coordinate system for knots in \(S^{3}\), when \(M\) is the exterior of a fibered knot in \(S^{3}\).
**Remark 2.4**.: Let \(M\) be the exterior of a fibered knot \(K\) in \(S^{3}\). Let \(T=\partial M\), and let \(\mu_{K},\lambda_{K}\) denote the standard meridian and longitude of \(K\) in \(S^{3}\). As explained in [2, Corollary 7.4], \(\lambda_{K}=\lambda_{T},\mu_{K}=\mu_{T}\) when \(\Delta(\lambda_{T},\delta_{T})\neq 2\). In the case of \(\Delta(\lambda_{T},\delta_{T})=2\), the two choices of orientations on \(M\) give two distinct canonical coordinate systems on \(T\), denoted \(\mathcal{C}_{1},\mathcal{C}_{2}\). By Convention 2.2 (c), \(\delta_{T}\) has slope \(2\) with respect to both of \(\mathcal{C}_{1},\mathcal{C}_{2}\), which implies that the slopes \(\infty,1\) with respect to \(\mathcal{C}_{1}\) are the slopes \(1,\infty\) with respect to \(\mathcal{C}_{2}\), respectively. It also follows from [2, Corollary 7.4] that \(\mu_{K}\) has slope \(\infty\) with respect to one of \(\mathcal{C}_{1},\mathcal{C}_{2}\) and has slope \(1\) with respect to the other one. So the standard coordinate system on \(T\) coincides with exactly one of \(\mathcal{C}_{1},\mathcal{C}_{2}\).
### Branched surfaces
The branched surface is an important tool to describe foliations and laminations. We review some backgrounds on branched surfaces in this subsection. Our notations follow from [10], [11].
Figure 1. Local models of standard spines.
Figure 2. Local models of branched surfaces.
**Definition 2.5**.: A _standard spine_ is a 2-complex such that every point has a neighborhood modeled as Figure 1.
**Definition 2.6**.: A _branched surface_\(B\) is a standard spine with well-defined cusp structure (at the set of points without Euclidean neighborhoods), such that every point has a neighborhood modeled as Figure 2.
Let \(B\) be a branched surface in a compact orientable 3-manifold \(M\), and let
\[L(B)=\{t\in B\mid t\text{ has no Euclidean neighborhood in }B\}.\]
Call \(L(B)\) the _branch locus_ of \(B\), call each component of \(B\setminus\setminus L(B)\) a _branch sector_, call each point in \(L(B)\) without an \(\mathbb{R}\)-neighborhood in \(L(B)\) a _double point_ of \(L(B)\), and call each component of \(L(B)\setminus\{\text{double points in }L(B)\}\) a _segment_ in \(L(B)\). For each segment \(s\) in \(L(B)\), we assign \(s\) a normal vector in \(B\) with orientation consistent with the direction of the cusp, and we call it the _cusp direction_ at \(s\). \(B\) is _co-orientable_ if there exists a continuous normal vector field on \(B\), or equivalently, all branch sectors of \(B\) have compatible co-orientations.
A _fibered neighborhood_\(N(B)\) of \(B\) is a regular neighborhood of \(B\) locally modeled as Figure 3. We regard \(N(B)\) as an interval bundle over \(B\) and call its fibers _interval fibers_. \(\partial N(B)\) can be decomposed to two (possibly non-connected) compact subsurfaces \(\partial_{h}N(B)\) (called the _horizontal boundary_ of \(N(B)\)) and \(\partial_{v}N(B)\) (called the _vertical boundary_ of \(N(B)\)) such that \(\partial_{h}N(B)\) is transverse to the interval fibers and \(\partial_{v}N(B)\) is tangent to the interval fibers. Let \(\pi:N(B)\to B\) denote the canonical projection that sends every interval fiber to a single point. We call \(\pi\) the _collapsing map_ for \(N(B)\).
**Definition 2.7**.: A lamination \(\mathcal{L}\) is _carried_ by \(B\) if we can choose a fibered neighborhood \(N(B)\) of \(B\) such that \(\mathcal{L}\subseteq N(B)\) and every leaf of \(\mathcal{L}\) is transverse to the interval fibers of \(N(B)\). And \(\mathcal{L}\) is _fully carried_ by \(B\) if \(\mathcal{L}\) is carried by \(B\) and intersects all interval fibers of \(N(B)\).
In [GO], Gabai and Oertel introduce essential branched surfaces to describe essential laminations.
**Definition 2.8** (Essential branched surface).: A branched surface \(B\) in a compact orientable 3-manifold \(M\) is _essential_ if all of the following conditions hold:
(a) There is no disk of contact, where a disk of contact is an embedded disk \(D\subseteq N(B)\) transverse to the interval fibers of \(N(B)\) and \(\partial D\subseteq Int(\partial_{v}N(B))\). And there is no half disk of contact, where a half disk of contact is an embedded disk \(D\subseteq N(B)\) transverse to the interval fibers such that there is a connected segment \(\alpha\subseteq\partial D\) with \(\alpha\subseteq\partial M\cap\partial N(B)\) and \(\partial D-Int(\alpha)\subseteq Int(\partial_{v}N(B))-\partial M\).
(b) \(\partial_{h}N(B)\) is incompressible and \(\partial\)-incompressible in \(M-Int(N(B))\), no component of \(M-Int(N(B))\) is a monogon, and no component of \(\partial_{h}N(B)\) is a sphere or a disk properly embedded in \(M\).
Figure 3. The fibered neighborhood \(N(B)\).
(c) \(M-Int(N(B))\) is irreducible, and \(\partial M-Int(N(B))\) is incompressible in \(M-Int(N(B))\).
(d) \(B\) contains no Reeb branched surface (see [GO] for the definition of Reeb branched surface).
(e) \(B\) fully carries a lamination.
Gabai and Oertel ([GO]) prove that
**Theorem 2.9** (Gabai-Oertel).: _(a) Any essential lamination in a compact orientable \(3\)-manifold is fully carried by an essential branched surface._
_(b) Any lamination in a compact orientable \(3\)-manifold fully carried by an essential branched surface is an essential lamination._
Now we describe the laminar branched surface introduced by Li in [Li1], [Li2].
**Definition 2.10**.: (a) Let \(B\) be a branched surface. A _sink disk_ of \(B\) is a disk component \(D\) of \(B\setminus\setminus L(B)\) such that \(D\cap\partial M=\emptyset\) and the cusp directions at all segments in \(\partial D\) point in \(D\).
(b) A _half sink disk_ of \(B\) is a disk component \(D\) of \(B\setminus\setminus L(B)\) such that \(D\cap\partial M\neq\emptyset\) and the cusp directions at all segments in \(\partial D\cap L(B)\) point in \(D\).
For a branched surface \(B\) in a \(3\)-manifold \(M\), a _trivial bubble_ in \(M-Int(N(B))\) is a \(3\)-ball component \(Q\) of \(M-Int(N(B))\) such that \(\partial Q\cap\partial_{h}N(B)\) has \(2\) components, each of which is a disk, and moreover, \(\pi\mid_{\partial Q\cap\partial_{h}N(B)}\) is injective. We call \(\pi(\partial Q)\) a _trivial bubble_ of \(B\).
**Definition 2.11** (Laminar branched surface).: Let \(B\) be a branched surface in a compact orientable \(3\)-manifold \(M\). \(B\) is a _laminar branched surface_ if \(B\) satisfies Conditions (b)\(\sim\)(d) of Definition 2.8, \(B\) contains no trivial bubble, and \(B\) contains no sink disk or half sink disk.
In [Li1], [Li2], Li proves that
**Theorem 2.12** (Li).: _Let \(M\) be a compact orientable \(3\)-manifold._
_(a) Every laminar branched surface in \(M\) fully carries an essential lamination._
_(b) For every essential lamination in \(M\) which is not a lamination by \(2\)-planes, it is fully carried by a laminar branched surface._
The laminar branched surface is a useful tool to construct taut foliations in Dehn fillings ([Li2, Theorem 2.2]):
**Theorem 2.13** (Li).: _Let \(M\) be a compact orientable, irreducible \(3\)-manifold with tori boundary \(\partial M=\bigcup_{i=1}^{n}T_{i}\). Let \(B\) be a laminar branched surface in \(M\) such that \(\partial M\setminus\setminus B\) is a union of bigons. Suppose that_
\[\boldsymbol{s}=(s_{1},\dots,s_{n})\in(\mathbb{Q}\cup\{\infty\})^{n}\]
_is a rational multislope in \(\partial M\) such that each \(s_{i}\) is realized by the boundary train track \(B\cap T_{i}\) and \(B\) does not carry a torus that bounds a solid torus in \(M(\boldsymbol{s})\), then \(B\) fully carries an essential lamination \(\mathcal{L}_{\boldsymbol{s}}\) that meets each \(T_{i}\) transversely in a collection of simple closed curves of slope \(s_{i}\). Moreover, \(\mathcal{L}_{\boldsymbol{s}}\) can be extended to an essential lamination in the Dehn filling of \(M\) along \(\partial M\) with the multislope \(\boldsymbol{s}\)._
**Remark 2.14**.: In [Li2, Theorem 2.2], Li only states this theorem in the case that \(M\) has connected boundary. As noted in [KalR, Subsection 2.4], the argument holds for the case that \(M\) has multiple boundary components.
## 3. Proof of the main theorem
We prove Theorem 1.9 in this section.
Let \(\Sigma\) be a compact orientable surface with nonempty boundary and let \(\varphi:\Sigma\to\Sigma\) be an orientation-preserving pseudo-Anosov homeomorphism. Let \(M=\Sigma\times I/\stackrel{\varphi}{\sim}\) be the mapping torus of \(\Sigma\) over \(\varphi\). We fix an orientation on \(M\), and we adopt Conventions 2.2, 2.3 for \(M\). We assume
**Assumption 3.1**.: \(\varphi\) is co-orientable and co-orientation-reversing.
Let \(\mathcal{F}^{s},\mathcal{F}^{u}\) denote the stable and unstable foliations of \(\varphi\). Then \(\mathcal{F}^{s},\mathcal{F}^{u}\) are co-orientable. We fix a co-orientation on \(\mathcal{F}^{s}\).
### Construction of the branched surface
In this subsection, we construct a branched surface in \(M\). We first choose a union of oriented properly embedded arcs \(\alpha\subseteq\Sigma\) and then construct a branched surface \(B(\alpha)\) by adding a union of product disks \(\alpha\times I\) to the fibered surface \(\Sigma\times\{0\}\).
Let \(C\) be a boundary component of \(\Sigma\). Let \(p\) denote the number of singularities of \(\mathcal{F}^{s}\) contained in \(C\), and let \(v_{1},v_{2},\dots,v_{p}\in C\) denote these \(p\) singularities (consecutive along the positive orientation on \(C\)). Note that \(2\mid p\) since \(\mathcal{F}^{s}\) is co-orientable. The points \(v_{1},v_{2},\dots,v_{p}\) divide \(C\) into \(p\) segments, denoted \((v_{i},v_{i+1})\) for each \(i\in\{1,\dots,p\}\) (by convention, \(v_{p+1}=v_{1}\)), and we call each of them a _stable segment_ of \(C\). For each stable segment \((v_{i},v_{i+1})\), we choose a point \(w_{i}\) in its interior; then \(w_{i}\) is the starting endpoint of a transversal of \(\mathcal{F}^{s}\), which is either positively oriented or negatively oriented. \((v_{i},v_{i+1})\) is said to be _negative_ (resp. _positive_) if \(w_{i}\) is the starting endpoint of some positively oriented transversal (resp. negatively oriented transversal) of \(\mathcal{F}^{s}\). Note that if a stable segment is positive (resp. negative), then its two adjacent stable segments are negative (resp. positive).
**Construction 3.2**.: (a) Let
\[A_{+}=\{\text{positive stable segments in }\partial\Sigma\},\]
\[A_{-}=\{\text{negative stable segments in }\partial\Sigma\}.\]
Note that \(|A_{+}|=|A_{-}|\). We choose a bijection \(j:A_{-}\to A_{+}\). For each \(\sigma\in A_{-}\), we choose a path \(r_{\sigma}\) that starts at some point in \(Int(\sigma)\), positively transverse to \(\mathcal{F}^{s}\) and disjoint from the singularities of \(\mathcal{F}^{s}\), and ends at some point in \(Int(j(\sigma))\) as follows:
\(\bullet\) We draw a positively oriented transversal \(\tau_{1}:I\to\Sigma\) that starts at some point in \(Int(\sigma)\), and we draw a negatively oriented transversal \(\tau_{2}:I\to\Sigma\) that starts at some point in \(Int(j(\sigma))\). We assume \(\tau_{1},\tau_{2}\) are disjoint from the singularities of \(\mathcal{F}^{s}\). Recall from [13, Corollary 14.15] that every leaf of \(\mathcal{F}^{s}\) not contained in \(\partial\Sigma\) is dense in \(\Sigma\). So there exists a non-singular leaf \(\lambda\) of \(\mathcal{F}^{s}\) and \(t_{1},t_{2}\in I\) such that \(\tau_{1}(t_{1}),\tau_{2}(t_{2})\in\lambda\). Let \(r_{\sigma}^{\prime\prime}\) denote the immersed path that starts at \(\tau_{1}(0)\), goes along \(\tau_{1}([0,t_{1}])\) to \(\tau_{1}(t_{1})\), then goes along \(\lambda\) to \(\tau_{2}(t_{2})\), and finally goes along the inverse direction of \(\tau_{2}([0,t_{2}])\) and ends at \(\tau_{2}(0)\). Then \(r_{\sigma}^{\prime\prime}\) is positively transverse to \(\mathcal{F}^{s}\) except along a subarc contained in \(\lambda\). Let \(\eta\) denote this subarc. Since \(\lambda\) is a non-singular leaf and \(\eta\) is compact, \(\eta\) has a product neighborhood \(\eta\times I\) in \(\mathcal{F}^{s}\). We can isotope \(r_{\sigma}^{\prime\prime}\) in \(\eta\times I\) to make it transverse to \(\mathcal{F}^{s}\) while keeping it disjoint from the singularities of \(\mathcal{F}^{s}\). Then we obtain an immersed path \(r_{\sigma}^{\prime}\) which is positively transverse to \(\mathcal{F}^{s}\). Finally, we smooth every self-intersection point of \(r_{\sigma}^{\prime}\) with respect to the orientation on the path (Figure 4), and we obtain a properly embedded arc and a (possibly
Figure 4. At a double intersection point of a union of paths positively transverse to \(\mathcal{F}^{s}\), we smooth it with respect to the orientations on the paths. Then the paths are still positively transverse to \(\mathcal{F}^{s}\).
empty) union of circles. We delete all these circles, and we denote by \(r_{\sigma}\) the remaining properly embedded arc.
We can construct the arcs \(\{r_{\sigma}\mid\sigma\in A_{-}\}\) one by one so that they have only double intersection points (this can be guaranteed since each \(r_{\sigma}\) is disjoint from the singularities of \(\mathcal{F}^{s}\)) and so that \(\varphi\) does not take any endpoint of \(\{r_{\sigma}\mid\sigma\in A_{-}\}\) to another endpoint of it. We smooth every double intersection point of \(\bigcup_{\sigma\in A_{-}}r_{\sigma}\) (which is a non-singular point of \(\mathcal{F}^{s}\)) with respect to the orientations on the paths (Figure 4), and we obtain a finite union of disjoint oriented properly embedded arcs which are positively transverse to \(\mathcal{F}^{s}\). Let \(\alpha\) denote this union of oriented arcs.
We note that
**Fact 3.3**.: Each negative stable segment contains exactly one starting endpoint of some path in \(\alpha\) and no ending endpoint, and each positive stable segment contains exactly one ending endpoint of some path in \(\alpha\) and no starting endpoint.
Now we construct a branched surface \(B(\alpha)\).
**Definition 3.4**.: (a) Let
\[B(\alpha)=(\Sigma\times\{0\})\cup(\alpha\times I)\]
be a branched surface, where we orient the branch sectors so that for each component \(\gamma\) of \(\alpha\), the cusp direction at \(\gamma\times\{0\}\) points to its left side in \(\Sigma\times\{0\}\), and the cusp direction at \(\gamma\times\{1\}=\varphi(\gamma)\times\{0\}\) points to its right side in \(\Sigma\times\{0\}\).
(b) For each component \(\gamma\) of \(\alpha\), we call \(\gamma\times I\) a _product disk_ and call \(\gamma\times\{0\}\) (resp. \(\gamma\times\{1\}\)) the _lower arc_ (resp. _upper arc_) of this product disk.
(c) Note that \(\partial\alpha\cap\varphi(\partial\alpha)=\emptyset\) and both of \(\alpha,\varphi(\alpha)\) contain no singularity of \(\mathcal{F}^{s}\). There is a union of oriented properly embedded arcs \(\beta\) in \(\Sigma\) such that, \(\beta\) is isotopic to \(\varphi(\alpha)\) relative to the endpoints, \(\beta\) is transverse to \(\mathcal{F}^{s}\), and \(\alpha,\beta\) only have double intersection points. We isotope \(\alpha\times I\) relative to \((\alpha\times\{0\})\cup(\partial\alpha\times I)\) so that the upper arcs of \(\alpha\times I\) are isotoped to \(\beta\times\{0\}\). This makes \(B(\alpha)\) locally modeled as in Figure 2.
### Verifying that \(B(\alpha)\) is laminar
In this subsection, we verify that \(B(\alpha)\) is a laminar branched surface.
**Lemma 3.5**.: _Let \(\rho\) be an oriented simple closed curve or an oriented properly embedded arc in \(\Sigma\) such that \(\rho\) can be divided into finitely many segments which are either positively transverse to \(\mathcal{F}^{s}\) or tangent to \(\mathcal{F}^{s}\), and at least one of these segments is positively transverse to \(\mathcal{F}^{s}\). Then \(\rho\) is essential in \(\Sigma\)._
We offer two proofs of this lemma.
_The first proof of Lemma 3.5._ We first consider the case that \(\rho\) is an oriented simple closed curve. We assume that \(\rho\) is non-essential. Then there is an embedded disk \(D\subseteq\Sigma\) such that \(\rho=\partial D\). Let \(\lambda\) be a leaf of \(\mathcal{F}^{s}\) such that \(\rho,\lambda\) have transverse intersections but have no tangent intersection. Then \(\rho\) is positively transverse to \(\lambda\) at each point of \(\rho\cap\lambda\). Note that \(\mathcal{F}^{s}\) can be split open along the singular leaves to a geodesic lamination with respect to some hyperbolic metric on \(\Sigma\) with geodesic boundary. So \(\lambda\cap D\) is a union of closed segments. At each component \(s\) of \(\lambda\cap D\), \(\rho\) is positively transverse to \(\lambda\) at one endpoint of \(s\) and is negatively transverse to \(\lambda\) at the other endpoint of \(s\) (Figure 5). This is a contradiction.
Now we consider the case that \(\rho\) is an oriented properly embedded arc. If \(\rho\) is non-essential, then there is an embedded disk \(D\subseteq\Sigma\) with \(\partial D\subseteq\rho\cup\partial\Sigma\). Similar to the above case, this is a contradiction.
_The second proof of Lemma 3.5._ We can split open \(\mathcal{F}^{s}\) along its singular leaves to obtain a geodesic lamination \(\Lambda^{s}\) of \(\Sigma\). Let \(\widetilde{\Sigma}\) be the universal cover of \(\Sigma\). Let \(\widetilde{\Lambda}^{s}\) denote the pull-back lamination of
\(\Lambda^{s}\) in \(\widetilde{\Sigma}\), and let \(L(\Lambda^{s})\) denote the leaf space of \(\widetilde{\Lambda^{s}}\). We note that \(L(\Lambda^{s})\) is an order tree, and the co-orientation on \(\mathcal{F}^{s}\) induces a co-orientation on \(\Lambda^{s}\) and an orientation on \(L(\Lambda^{s})\).
We may consider \(\rho\) as a path \(\rho:I\to\Sigma\) such that (1) either \(\rho(0)=\rho(1)\) or \(\rho(0),\rho(1)\in\partial\Sigma\), (2) \(\rho\) can be divided into finitely many segments \(\rho_{1},\ldots,\rho_{n}\) which are either positively transverse to \(\Lambda^{s}\) or tangent to \(\Lambda^{s}\), (3) at least one of \(\rho_{1},\ldots,\rho_{n}\) is positively transverse to \(\Lambda^{s}\). Let \(\widetilde{\rho}:I\to\widetilde{\Sigma}\) be a lift of \(\rho\), and let \(\widetilde{\rho_{i}}\) be the lift of \(\rho_{i}\) to \(\widetilde{\Sigma}\) contained in \(\widetilde{\rho}\). Let
\[\Omega=\{i\in\{1,\ldots,n\}\mid\rho_{i}\text{ is positively transverse to }\Lambda^{s}\}.\]
Then \(\Omega\neq\emptyset\). \(\bigcup_{i\in\Omega}\widetilde{\rho_{i}}\) can be canonically identified with a positively oriented path in \(L(\Lambda^{s})\), and the two endpoints of this path are distinct since \(L(\Lambda^{s})\) is simply connected. It follows that \(\rho\) is essential.
Because each component of \(\alpha\) is positively transverse to \(\mathcal{F}^{s}\),
**Corollary 3.6**.: _Each component of \(\alpha\) is essential in \(\Sigma\)._
To verify that \(B(\alpha)\) contains no trivial bubble, it suffices to show that \(B(\alpha)\) has no 3-ball complementary region. Note that \(M\setminus B(\alpha)=(\Sigma\setminus\langle\alpha\rangle)\times I\). If there is a 3-ball complementary region of \(B(\alpha)\), then \(\Sigma\setminus\langle\alpha\rangle\) must have some disk component. Now we exclude this case.
**Lemma 3.7**.: \(\Sigma\setminus\langle\alpha\rangle\) _contains no disk component._
Proof.: Assume that \(\Sigma\setminus\langle\alpha\rangle\) contains a disk component \(D\). In the following discussions, the clockwise and anticlockwise orientations on \(\partial D\) will always be with respect to the orientation on \(\Sigma\) (then \(D\) is in the left side of \(\partial D\) if \(\partial D\) has anticlockwise orientation). Let \(\gamma:I\to\Sigma\) be a component of \(\alpha\) contained in \(\partial D\), and we may assume that the orientation on \(\gamma\) is consistent with the anticlockwise orientation on \(\partial D\). Now draw a path \(\eta:I\to\partial D\) that starts at \(\gamma(1)\), goes along the anticlockwise orientation on \(\partial D\), and ends at \(\gamma(1)\). Recall that \(\gamma(1)\) is contained in a positive stable segment in \(\partial\Sigma\), and this positive stable segment does not contain any other endpoint of \(\alpha\) (Fact 3.3). So \(\eta\) first goes to a negative stable segment adjacent to the positive stable segment containing \(\gamma(1)\), and then goes through another arc in \(\alpha\) along its orientation, and so on (Figure 6). So the anticlockwise orientation on \(\partial D\) is consistent with the orientation on each component of \(\alpha\) contained in \(\partial D\). Thus, the anticlockwise orientation on \(\partial D\) is positively transverse to \(\mathcal{F}^{s}\) in \(\partial D\cap\alpha\) and tangent to \(\mathcal{F}^{s}\) in \(\partial D\cap\partial\Sigma\). This contradicts Lemma 3.5. So \(\Sigma\setminus\langle\alpha\rangle\) contains no disk component.
It follows that
**Corollary 3.8**.: \(B(\alpha)\) _contains no trivial bubble._
We've verified that each component of \(\alpha\) is essential and \(\Sigma\setminus\langle\alpha\rangle\) contains no disk component. As shown in [S, Lemma 3.16], \(B(\alpha)\) satisfies Conditions (b)\(\sim\)(d) of Definition 2.8.
Figure 5. For a component \(s\) of \(\lambda\cap D\), \(\rho\) is positively transverse to \(\lambda\) and negatively transverse to \(\lambda\) at the two endpoints of \(s\) respectively.
**Remark 3.9**.: [S, Lemma 3.16] has an assumption that the upper arcs and the lower arcs of the product disks intersect efficiently in \(\Sigma\times\{0\}\), but our \(B(\alpha)\) doesn't satisfy this. We explain that [S, Lemma 3.16] still holds for our \(B(\alpha)\) below. Let \(B^{{}^{\prime}}(\alpha)\) be a branched surface obtained from isotoping the product disks of \(B(\alpha)\) relative to \((\alpha\times\{0\})\cup(\partial\alpha\times I)\) so that the upper arcs and the lower arcs of \(\alpha\times I\) intersect efficiently in \(\Sigma\times\{0\}\). Then [S, Lemma 3.16] applies to \(B^{{}^{\prime}}(\alpha)\), and there exists a homeomorphism between \(M-Int(N(B^{{}^{\prime}}(\alpha)))\) and \(M-Int(N(B(\alpha)))\) that takes \(\partial_{h}N(B^{{}^{\prime}}(\alpha)),\partial_{v}N(B^{{}^{\prime}}(\alpha))\) to \(\partial_{h}N(B(\alpha)),\partial_{v}N(B(\alpha))\) respectively. Note that each of Condition (b), (c) of Definition 2.8 only depends on the complementary regions of the branched surface, and the proof for Condition (d) of Definition 2.8 in [S, Lemma 3.16] only uses a property of the complementary regions. It follows that \(B(\alpha)\) satisfies Conditions (b)\(\sim\)(d) of Definition 2.8.
It remains to prove that \(B(\alpha)\) contains no sink disk or half sink disk.
**Proposition 3.10**.: \(B(\alpha)\) _has no sink disk or half sink disk._
Proof.: We first show that every product disk in \(B(\alpha)\) is not a sink disk or a half sink disk. For every product disk \(S\) of \(B(\alpha)\), the cusp directions at both of its upper arc and lower arc point out of \(S\). So \(S\) is not a sink disk or a half sink disk.
Now assume that \(B(\alpha)\) contains a sink disk \(D\). Then \(D\) is not contained in \(\alpha\times I\). So \(D\subseteq\Sigma\times\{0\}\), and thus \(\partial D\subseteq(\alpha\times\{0\})\cup(\beta\times\{0\})\).
For a segment \(\sigma\) of \(\partial D\),
\(\bullet\) Assume \(\sigma\subseteq\alpha\times\{0\}\). Then the cusp direction at \(\sigma\) points to the left side. Since the cusp direction at \(\sigma\) points into \(D\), \(\sigma\) has anticlockwise orientation (Figure 7 (b), where the solid lines are subarcs of \(\alpha\times\{0\}\)). As \(\sigma\subseteq\alpha\times\{0\}\), \(\sigma\) is positively transverse to \(\mathcal{F}^{s}\times\{0\}\subseteq\Sigma\times\{0\}\).
\(\bullet\) Assume \(\sigma\subseteq\beta\times\{0\}\). In this case, the cusp direction at \(\sigma\) points to the right side and points into \(D\), and thus \(\sigma\) has clockwise orientation (Figure 7 (b), where the dashed lines are subarcs of \(\beta\times\{0\}\)). Because \(\varphi\) is co-orientation-reversing and \(\beta\) is isotopic to \(\varphi(\alpha)\) relative to the endpoints, \(\beta\) is negatively transverse to \(\mathcal{F}^{s}\). So \(\sigma\) is negatively transverse to \(\mathcal{F}^{s}\times\{0\}\).
So every segment of \(\partial D\) either has anticlockwise orientation and is positively transverse to \(\mathcal{F}^{s}\times\{0\}\), or has clockwise orientation and is negatively transverse to \(\mathcal{F}^{s}\times\{0\}\). Therefore, the anticlockwise orientation on \(\partial D\) is positively transverse to \(\mathcal{F}^{s}\times\{0\}\) (Figure 7 (c)). However, \(\partial D\)
Figure 6. The disk component \(D\) of \(\Sigma\setminus\langle\alpha\rangle\) assumed to exist in the proof of Lemma 3.7. The dots are singularities of \(\mathcal{F}^{s}\) contained in \(\partial\Sigma\), the blue lines are leaves of \(\mathcal{F}^{s}\), and the segments labeled with \(+,-\) are positive stable segments and negative stable segments, respectively.
is non-essential in \(\Sigma\) since it bounds the disk \(D\). This contradicts Lemma 3.5. So \(B(\alpha)\) contains no sink disk.
Similarly, if \(B(\alpha)\) contains a half sink disk \(D\), then the anticlockwise orientation on \(\partial D\) is positively transverse to \(\mathcal{F}^{s}\times\{0\}\) at every segment contained in \((\alpha\times\{0\})\cup(\beta\times\{0\})\) and is tangent to \(\mathcal{F}^{s}\times\{0\}\) at \(\partial D\cap(\partial\Sigma\times\{0\})\). This also contradicts Lemma 3.5. So \(B(\alpha)\) also contains no half sink disk. It follows that \(B(\alpha)\) is laminar.
Thus
**Corollary 3.11**.: \(B(\alpha)\) _is a laminar branched surface._
### Simple closed curves carried by boundary train tracks
Let \(\tau(\alpha)=B(\alpha)\cap\partial M\). In this subsection, we choose some simple closed curves carried by \(\tau(\alpha)\) and compute their slopes. We first give some descriptions for \(\tau(\alpha)\).
**Definition 3.12**.: Let \(\gamma:I\to\Sigma\) be a component of \(\alpha\). Then the product disk \(\gamma\times I\) intersects \(\partial M\) at \((\{\gamma(0)\}\times I)\cup(\{\gamma(1)\}\times I)\). We call \(\{\gamma(0)\}\times I\) (resp. \(\{\gamma(1)\}\times I\)) a _positive vertical edge_ (resp. _negative vertical edge_) of \(\tau(\alpha)\).
Under the assumption of Definition 3.12, let \(C_{1},C_{2}\) denote the boundary components of \(\Sigma\) that contain \(\gamma(0),\gamma(1)\) respectively. Recall from Convention 2.3 (b), the positive orientation on \(C_{1}\) goes from the left side of \(\gamma\) to its right side at \(\gamma(0)\), and the positive orientation on \(C_{2}\) goes from the right side of \(\gamma\) to its left side at \(\gamma(1)\), see Figure 8 (a) for an example. Recall from Definition 3.4, the cusp direction at \(\gamma\times\{0\}\) points to the left, and the cusp direction at \(\varphi_{*}(\gamma)\times\{0\}\) points to the right. Thus
**Fact 3.13**.: (a) For a positive vertical edge \(\{a\}\times I\) (\(a\in\partial\Sigma\)) of \(\tau(\alpha)\), the cusp direction at the point \((a,0)\) is consistent with the negative orientation on \(\partial\Sigma\times\{0\}\), and the cusp direction at \((a,1)=(\varphi(a),0)\) is consistent with the positive orientation on \(\partial\Sigma\times\{0\}\). For example, see the right one of the two vertical edges in Figure 8 (a).
(b) For a negative vertical edge \(\{b\}\times I\) (\(b\in\partial\Sigma\)) of \(\tau(\alpha)\), the cusp direction at \((b,0)\) is consistent with the positive orientation on \(\partial\Sigma\times\{0\}\), and the cusp direction at \((b,1)=(\varphi(b),0)\) is consistent with the negative orientation on \(\partial\Sigma\times\{0\}\). For example, see the left one of the two vertical edges in Figure 8 (a).
Figure 7. The solid lines are subarcs of \(\alpha\times\{0\}\) and the dashed lines are subarcs of \(\beta\times\{0\}\). For a sink disk \(D\), (a) illustrates the cusp directions at the segments of \(\partial D\), (b) illustrates the orientations on the segments of \(\partial D\), (c) illustrates that the anticlockwise orientation on \(\partial D\) is positively transverse to \(\mathcal{F}^{s}\).
For a positive or negative vertical edge \(\{a\}\times I\) (where \(a\in\partial\Sigma\)), we call \((a,0)\) its _lower endpoint_ and call \((a,1)=(\varphi(a),0)\) its _upper endpoint_. Since \(\varphi\) is co-orientation-reversing, \(\varphi\) takes all positive stable segments to negative stable segments and takes all negative stable segments to positive stable segments. Thus, for a positive vertical edge (resp. negative vertical edge), its lower endpoint is contained in a negative stable segment (resp. positive stable segment) and its upper endpoint is contained in a positive stable segment (resp. negative stable segment); compare with Figure 8 (b).
Let \(C\) be a boundary component of \(\Sigma\) and let \(c\) denote the order of \(C\) under \(\varphi\) (i.e. \(c=\min\{k\in\mathbb{N}_{+}\mid\varphi^{k}(C)=C\}\)). Let \(T\) denote the boundary component of \(M\) containing \(C\times\{0\}\) and let \(\tau=\tau(\alpha)\cap T\). Let \(p\) denote the number of singularities of \(\mathcal{F}^{s}\) contained in \(C\). And let \(v_{1},\dots,v_{p}\) denote the \(p\) singularities of \(\mathcal{F}^{s}\) contained in \(C\) (consecutive along the positive orientation on \(C\)). Let \(q\in\mathbb{Z}\) for which \((p;q)\) is the degeneracy locus of the suspension flow of \(\varphi\) on \(T\). As explained in Remark 1.11 (a), \(v_{j+q}=\varphi^{c}(v_{j})\) (mod \(p\)) for each \(j\in\{1,\dots,p\}\). We note that \(2\mid p\) since \(\varphi\) is co-orientable, and \(q\equiv c\) (mod \(2\)) since \(2\mid q\) if and only if \(\varphi^{c}\) is co-orientation-preserving.
**Definition 3.14**.: Under the assumption as above, we call \((c,p,q)\) the _\(\varphi\)-triple_ for \(C\).
**Proposition 3.15**.: \(\tau\) _carries a simple closed curve of slope \(\frac{p}{q+c}\) and a simple closed curve of slope \(\frac{p}{q-c}\), where \(\frac{p}{q+c}=\infty\) if \(q+c=0\) and \(\frac{p}{q-c}=\infty\) if \(q-c=0\)._
Proof.: Let \(\mu_{T},\lambda_{T}\) denote the meridian and longitude on \(T\) (see Convention 2.3 for the orientations on them). We may assume that every \((v_{2i-1},v_{2i})\) is a negative stable segment and every \((v_{2i},v_{2i+1})\) is a positive stable segment. Let \(t_{j}=(v_{j},0)\in\Sigma\times\{0\}\) for each \(j\in\{1,\dots,p\}\).
Let \(\gamma:I\to\tau\) be the path that starts at \(t_{1}\) and repeats the following steps: (1) once \(\gamma\) reaches \(T\cap(\Sigma\times\{0\})\), \(\gamma\) goes along the positive orientation on \(T\cap(\Sigma\times\{0\})\) until it meets the
Figure 8. (a) is the picture of a product disk \(\gamma\times I\). It intersects \(\partial M\) at two vertical segments, where the right one is a positive vertical edge and the left one is a negative vertical edge. The boundary components of \(\partial\Sigma\times\{0\}\) are labeled with positive orientations on them, and the upper and lower arcs of \(\gamma\times I\) are labeled with orientations induced from \(\gamma\). (b) illustrates the picture of \(\tau(\alpha)\) seen from the the outside of \(M\). The horizontal edges are contained in \(\partial\Sigma\times\{0\}\), and their positive orientations are toward the right side. The horizontal segments labeled with \(-,+\) are negative stable segments, positive stable segments respectively. And the vertical segments labeled with \(-,+\) are negative vertical edges, positive vertical edges respectively.
lower endpoint of some positive vertical edge, (2) when \(\gamma\) meets the lower endpoint of some positive vertical edge, \(\gamma\) goes along this positive vertical edge and reaches \(T\cap(\Sigma\times\{0\})\) again, (3) \(\gamma\) stops at \(t_{1}\) the second time it meets \(t_{1}\). Compare with Figure 9 (a) for the picture when \(\gamma\) starts.
Note that \(\gamma\) arrives at the positive stable segment \((t_{q+c},t_{q+c+1})\pmod{p}\) the second time it meets \(C\times\{0\}\), and then \(\gamma\) goes along the positive orientation on \(C\times\{0\}\) to \(t_{q+c+1}\). If \(t_{q+c+1}=t_{1}\), then \(\gamma\) stops at \(t_{1}\). Otherwise, \(\gamma\) repeats the steps given above, passes \(t_{2(q+c)+1},t_{3(q+c)+1},\ldots\), and finally arrives at \(t_{1}\). Thus
\[\langle\gamma,\lambda_{T}\rangle=\frac{p}{\gcd(p,q+c)},\]
\[\langle\mu_{T},\gamma\rangle=\frac{q+c}{p}\cdot\langle\gamma,\lambda_{T} \rangle=\frac{q+c}{\gcd(p,q+c)},\]
by convention \(\gcd(p,0)=p\). Therefore,
\[\text{slope}(\gamma)=\frac{(\frac{p}{\gcd(p,q+c)})}{(\frac{q+c}{\gcd(p,q+c)}) }=\frac{p}{q+c}.\]
Next, we choose a simple closed curve \(\nu\) carried by \(\tau\) with \(\text{slope}(\nu)=\frac{p}{q-c}\). Let \(\nu:I\to\tau\) be the path starting at \(t_{1}\) such that (1) when \(\nu\) gets to \(T\cap(\Sigma\times\{0\})\), \(\nu\) goes along the negative orientation on \(T\cap(\Sigma\times\{0\})\) until it meets the lower endpoint of some negative vertical edge, (2) when \(\nu\) meets the lower endpoint of some negative vertical edge, \(\nu\) goes along this negative vertical edge and gets to \(T\cap(\Sigma\times\{0\})\) again, (3) \(\nu\) stops at \(t_{1}\) the second time it meets \(t_{1}\). Compare with Figure 9 (b) for the picture when \(\nu\) starts.
\(\nu\) arrives at the positive stable segment \((t_{q-c+1},t_{q-c+2})\pmod{p}\) the second time it meets \(C\times\{0\}\). Then \(\nu\) goes along the negative orientation on \(C\times\{0\}\) to \(t_{q-c+1}\). If \(t_{q-c+1}=t_{1}\), then \(\nu\) stops there. Otherwise, \(\nu\) repeats the above steps, passes \(t_{2(q-c)+1},t_{3(q-c)+1},\ldots\), and finally arrives at \(t_{1}\). Then
\[\langle\nu,\lambda_{T}\rangle=\frac{p}{\gcd(p,q-c)},\]
\[\langle\mu_{T},\nu\rangle=\frac{q-c}{p}\cdot\langle\nu,\lambda_{T}\rangle= \frac{q-c}{\gcd(p,q-c)},\]
and thus
\[\text{slope}(\nu)=\frac{(\frac{p}{\gcd(p,q-c)})}{(\frac{q-c}{\gcd(p,q-c)})}= \frac{p}{q-c}.\]
Figure 9. The horizontal edges are contained in \(\partial\Sigma\times\{0\}\), where the positive orientations on them are toward the right. In the proof of Proposition 3.15, we construct two paths \(\gamma,\nu:I\to\tau\) with \(\text{slope}(\gamma)=\frac{p}{q+c}\), \(\text{slope}(\nu)=\frac{p}{q-c}\). (a) describes \(\gamma\) when it starts, and (b) describes \(\nu\) when it starts.
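The intersection-number arithmetic in the proof above is elementary and can be checked mechanically. The following minimal sketch (Python; the function name and the example values are hypothetical and not part of the original argument) reduces the two slopes \(\frac{p}{q+c}\) and \(\frac{p}{q-c}\) of Proposition 3.15 from a given \(\varphi\)-triple exactly as in the displayed formulas.

```python
from math import gcd

def slopes_from_phi_triple(c, p, q):
    """Slopes of the curves gamma and nu of Proposition 3.15, given the
    phi-triple (c, p, q).  Each slope is returned as a reduced pair
    (<., lambda_T>, <mu_T, .>); a second entry 0 encodes the slope infinity."""
    def reduced(den):
        g = gcd(p, abs(den))          # gcd(p, 0) = p by convention
        return (p // g, den // g)
    return reduced(q + c), reduced(q - c)

# Example: c = 1, p = 6, q = 3 gives slope(gamma) = 6/4 = 3/2 and slope(nu) = 6/2 = 3/1.
print(slopes_from_phi_triple(1, 6, 3))   # ((3, 2), (3, 1))
```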
### Slopes realized by boundary train tracks
We first briefly review some ingredients in measures on train tracks. For a train track \(\tau\), let \(E(\tau)\) denote the set of edges of \(\tau\).
\(\bullet\) A _measure_ \(m:E(\tau)\to\mathbb{R}_{\geqslant 0}\) on \(\tau\) is an assignment of nonnegative numbers to \(E(\tau)\) that satisfies the cusp relation (Figure 10 (a)) at each cusp, i.e. for any three edges \(x,y,z\in E(\tau)\) that have a common endpoint \(p\), if the cusp direction at \(p\) points toward \(z\), then \(m(x)+m(y)=m(z)\).
\(\bullet\) Measures on \(\tau\) are in one-to-one correspondence with measured laminations carried by \(\tau\). For a measure \(m\) on \(\tau\), call the corresponding measured lamination the _companion lamination_ of \(m\) (compare with Figure 10 (b)).
\(\bullet\)\(\tau\) is _orientable_ if its edges have continuously varying orientations. If \(\tau\) is orientable, then every measured lamination carried by \(\tau\) is orientable. Moreover, given any measure on an oriented train track, any closed curve has a well-defined algebraic intersection number with the companion lamination of this measure.
We refer the reader to [12, Chapter 15] for more details.
**Notation 3.16**.: Let \(\tau\) be a train track.
(a) Let \(m_{1},m_{2}\) be two measures on \(\tau\). Then \(m_{1}+m_{2}\) denotes the measure on \(\tau\) with \((m_{1}+m_{2})(e)=m_{1}(e)+m_{2}(e)\) for each \(e\in E(\tau)\).
(b) Let \(\rho\) be a simple closed curve carried by \(\tau\) and let \(t\in\mathbb{R}_{+}\). Then we can regard \(\rho\times[0,t]\) as a measured lamination carried by \(\tau\). Let \(\rho(t)\) denote the measure on \(\tau\) for which \(\rho\times[0,t]\) is the companion measured lamination of \(\rho(t)\).
**Proposition 3.17**.: _Let \(C\) be a component of \(\partial\Sigma\) and let \(T\) be a boundary component of \(M\) containing \(C\times\{0\}\). Let \(\tau=\tau(\alpha)\cap T\). Let \((c,p,q)\) denote the \(\varphi\)-triple for \(C\) (Definition 3.14)._
_(a) If \(q>c>0\), then \(\tau\) realizes all rational slopes in \((-\infty,\frac{p}{q+c})\cup(\frac{p}{q-c},+\infty)\cup\{\infty\}\)._
_(b) If \(q=c>0\), then \(\tau\) realizes all rational slopes in \((-\infty,\frac{p}{2q})\)._
_(c) If \(c>q\geqslant 0\), then \(\tau\) realizes all rational slopes in \((-\frac{p}{c-q},\frac{p}{q+c})\)._
_(d) If \(-c<q<0\), then \(\tau\) realizes all rational slopes in \((-\frac{p}{|q|+c},\frac{p}{c-|q|})\)._
_(e) If \(q=-c<0\), then \(\tau\) realizes all rational slopes in \((-\frac{p}{2|q|},+\infty)\)._
_(f) If \(q<-c<0\), then \(\tau\) realizes all rational slopes in \((-\infty,-\frac{p}{|q|-c})\cup(-\frac{p}{|q|+c},+\infty)\cup\{\infty\}\)._
Proof.: We only prove (a). The proofs of (b)\(\sim\)(f) are similar to (a). Let \(\mu_{T},\lambda_{T}\) denote the meridian and longitude on \(T\). For every vertical edge \(\{a\}\times I\) in \(\tau\), its _upward orientation_ (resp. _downward orientation_) is the orientation on it consistent with the increasing orientation (resp. decreasing orientation) on the second coordinates.
Figure 10. (a) The cusp relation of a measure at a cusp point. (b) The companion lamination of the measure as given in (a).
Assume \(q>c>0\). We first orient \(\tau\) so that \(T\cap(\Sigma\times\{0\})\) has positive orientation, every positive vertical edge in \(\tau\) has upward orientation, and every negative vertical edge in \(\tau\) has downward orientation.
By Proposition 3.15 (a), \(\tau\) carries a simple closed curve \(\gamma\) of slope \(\frac{p}{q+c}\). Let \(v=\frac{p}{\gcd(p,q+c)}\), \(u=\frac{q+c}{\gcd(p,q+c)}\). Then \(\frac{p}{q+c}=\frac{v}{u}\), \(u,v>0\), and \(\gcd(u,v)=1\). We assign \(\gamma\) an orientation consistent with the orientation on \(\tau\), then \(\langle\gamma,\lambda_{T}\rangle=v\), \(\langle\mu_{T},\gamma\rangle=u\).
For every edge \(e\) of \(\tau\), we can choose a simple closed curve \(\rho_{e}\) carried by \(\tau\) such that \(\rho_{e}\) contains \(e\) and \(\operatorname{slope}(\rho_{e})=0\). And we assign each \(\rho_{e}\) an orientation consistent with the orientation on \(\tau\). Then \(\langle\rho_{e},\lambda_{T}\rangle=0\), \(\langle\mu_{T},\rho_{e}\rangle=1\).
Let \(m_{0}\) denote the measure \(\sum_{e\in E(\tau)}\rho_{e}(1)\), and let \(\Lambda_{0}\) denote the companion lamination of \(m_{0}\). Then \(\langle\Lambda_{0},\lambda_{T}\rangle=0\), and thus \(\tau\) realizes the slope \(0\).
We define a one-parameter family of measures \(m_{1}(t)\) with \(t\in(0,+\infty)\) such that
\[m_{1}(t)=\gamma(1)+\sum_{e\in E(\tau)}\rho_{e}(t).\]
Let \(\Lambda_{1}(t)\) denote the companion lamination of \(m_{1}(t)\), and we assign \(\Lambda_{1}(t)\) an orientation induced from the orientation on \(\tau\). Let \(N=|E(\tau)|\). Then
\[\frac{\langle\Lambda_{1}(t),\lambda_{T}\rangle}{\langle\mu_{T},\Lambda_{1}(t) \rangle}=\frac{v}{u+tN}.\]
For any rational number \(x\in(0,\frac{p}{q+c})\), we can choose some \(t>0\) so that
\[\frac{v}{u+tN}=x,\]
which implies that \(\tau\) realizes the slope \(x\).
Now we prove that \(\tau\) realizes all rational slopes in \((-\infty,0)\cup\{\infty\}\cup(\frac{p}{q-c},+\infty)\). We re-orient \(\tau\) so that \(T\cap(\Sigma\times\{0\})\) has negative orientation, every positive vertical edge in \(\tau\) has downward orientation, and every negative vertical edge in \(\tau\) has upward orientation.
By Proposition 3.15 (a), \(\tau\) carries a simple closed curve \(\nu\) of slope \(\frac{p}{q-c}\). We assign \(\nu\) an orientation induced from the orientation on \(\tau\). Let \(s=\frac{p}{\gcd(p,q-c)}\), \(r=\frac{q-c}{\gcd(p,q-c)}\). Then \(\frac{p}{q-c}=\frac{s}{r}\), \(r,s>0\), and \(\gcd(r,s)=1\). And we have \(\langle\nu,\lambda_{T}\rangle=s\), \(\langle\mu_{T},\nu\rangle=r\).
For every edge \(e\in E(\tau)\), we can choose a simple closed curve \(\eta_{e}\) carried by \(\tau\) that contains \(e\) and has slope \(0\). We assign each \(\eta_{e}\) an orientation consistent with the orientation on \(\tau\). Because \(T\cap(\Sigma\times\{0\})\) has negative orientation, \(\langle\eta_{e},\lambda_{T}\rangle=0\), \(\langle\mu_{T},\eta_{e}\rangle=-1\).
We define a one-parameter family of measures \(m_{2}(t)\) with \(t\in(0,+\infty)\) such that
\[m_{2}(t)=\nu(1)+\sum_{e\in E(\tau)}\eta_{e}(t).\]
Let \(\Lambda_{2}(t)\) denote the companion lamination of \(m_{2}(t)\), and we assign \(\Lambda_{2}(t)\) an orientation induced from the orientation on \(\tau\). Recall that \(|E(\tau)|=N\), so
\[\frac{\langle\Lambda_{2}(t),\lambda_{T}\rangle}{\langle\mu_{T},\Lambda_{2}(t) \rangle}=\frac{s}{r-tN}.\]
If we choose \(t=\frac{r}{N}\), then \(\frac{s}{r-tN}=\infty\). For any rational number \(x\in(\frac{p}{q-c},+\infty)\), we can choose some \(0<t<\frac{r}{N}\) so that \(\frac{s}{r-tN}=x\). And for any rational number \(x\in(-\infty,0)\), we can choose some \(t>\frac{r}{N}\) so that \(\frac{s}{r-tN}=x\). Thus, \(\tau\) realizes all rational slopes in \((-\infty,0)\cup\{\infty\}\cup(\frac{p}{q-c},+\infty)\).
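The weights used in this proof can be written down explicitly. The sketch below (Python; the function name and the sampled numbers are only illustrative and are not taken from the text) solves \(\frac{v}{u+tN}=x\) and \(\frac{s}{r-tN}=x\) for the parameter \(t\) in case (a), i.e. assuming \(q>c>0\).

```python
from fractions import Fraction
from math import gcd

def weight_for_slope(p, q, c, N, x):
    """Parameter t > 0 from the proof of Proposition 3.17 (a), assuming q > c > 0.
    N = |E(tau)|; x is the target rational slope (a Fraction)."""
    x = Fraction(x)
    if 0 < x < Fraction(p, q + c):
        g = gcd(p, q + c)
        v, u = p // g, (q + c) // g          # slope(gamma) = v/u
        return (Fraction(v) / x - u) / N     # solves v/(u + t*N) = x
    if x > Fraction(p, q - c) or x < 0:
        g = gcd(p, q - c)
        s, r = p // g, (q - c) // g          # slope(nu) = s/r
        return (r - Fraction(s) / x) / N     # solves s/(r - t*N) = x
    raise ValueError("slope not realized by these two families")

# Example: p = 6, q = 3, c = 1, N = 10 edges; the slope 1/2 lies in (0, 6/4).
print(weight_for_slope(6, 3, 1, 10, Fraction(1, 2)))   # 2/5
```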
### The proof of Theorem 1.9
We first explain that \(B(\alpha)\) carries no torus. Assume that \(B(\alpha)\) carries a torus \(T\). Since all product disks in \(B(\alpha)\) intersect \(\partial M\) and \(T\) is a closed surface, \(\pi(T)\) contains no product disk, and thus \(\pi(T)\subseteq\Sigma\times\{0\}\). However, \(\Sigma\times\{0\}\) carries no closed surface, which contradicts \(\pi(T)\subseteq\Sigma\times\{0\}\). So \(B(\alpha)\) carries no torus.
Choose a multislope \(\mathbf{s}=(s_{1},\ldots,s_{k})\in(\mathbb{Q}\cup\{\infty\})^{k}\) contained in the multi-interval as given in Theorem 1.9. Combining Corollary 3.11 and Proposition 3.17 with Theorem 2.13, since \(B(\alpha)\) carries no torus, \(B(\alpha)\) fully carries an essential lamination \(\mathcal{L}_{\mathbf{s}}\) in \(M\) such that \(\mathcal{L}_{\mathbf{s}}\) intersects each \(T_{i}\) in a collection of simple closed curves of slope \(s_{i}\). Since \(B(\alpha)\) has product complementary regions, each complementary region of \(\mathcal{L}_{\mathbf{s}}\) is an \(I\)-bundle over a surface. We can extend \(\mathcal{L}_{\mathbf{s}}\) to a foliation \(\mathcal{F}_{\mathbf{s}}\) in \(M\) that intersects each \(T_{i}\) in a foliation by simple closed curves of slope \(s_{i}\). We can then extend \(\mathcal{F}_{\mathbf{s}}\) to a foliation \(\widehat{\mathcal{F}_{\mathbf{s}}}\) in \(M(\mathbf{s})\). \(\widehat{\mathcal{F}_{\mathbf{s}}}\) is taut since the union of core curves of the filling solid tori is transverse to \(\widehat{\mathcal{F}_{\mathbf{s}}}\) and intersects all leaves of \(\widehat{\mathcal{F}_{\mathbf{s}}}\).
We assign \(\Sigma\times\{0\}\) a co-orientation. For each product disk of \(B(\alpha)\), we can assign it a co-orientation compatible with the co-orientation on \(\Sigma\times\{0\}\). This defines a co-orientation on \(B(\alpha)\), which induces a co-orientation on \(\mathcal{L}_{\mathbf{s}}\) and on \(\mathcal{F}_{\mathbf{s}}\). It follows that \(\widehat{\mathcal{F}_{\mathbf{s}}}\) is co-orientable. This completes the proof of Theorem 1.9.
## 4. The proof of Proposition 1.13
Let \(q\in\mathbb{N}\) with \(q\geqslant 3\), and let \(K\) be the \((-2,3,2q+1)\)-pretzel knot in \(S^{3}\). Let \(g(K)\) denote the Seifert genus of \(K\) (then \(g(K)=q+2\)). Let \(X=S^{3}-Int(N(K))\), let \(S\) be a fibered surface of \(X\), and let \(\phi:S\to S\) denote the pseudo-Anosov monodromy of \(X\). Fix an orientation on \(S^{3}\) so that \(\phi\) is right-veering.
In this section, we prove Proposition 1.13:
**Proposition 1.13**.: _(a) \(\phi\) is co-orientable and co-orientation-reversing._
_(b) \(K\) has degeneracy slope \(4g(K)-2\)._
_(c) All rational slopes in \((-\infty,2g(K)-1)\) are CTF surgery slopes of \(K\)._
_(d) For each \(n\geqslant 2\), the \(n\)-fold cyclic branched cover of \(K\) admits a co-orientable taut foliation._
_The proof of (a)._ Let \(\mu,\delta\) denote the meridian and degeneracy slope on \(K\) respectively. Since \(\phi\) is right-veering, \(\mu\neq\delta\). Let \(\widetilde{X_{2}}\) denote the double cyclic cover of \(X\), and let \(\Sigma_{2}(K)\) denote the double branched cover of \(K\). Then \(\widetilde{X_{2}}\) is the mapping torus of \(S\) with monodromy \(\phi^{2}\). Because \(K\) is a hyperbolic L-space knot in \(S^{3}\), \(\Delta(\mu,\delta)=1\) (see Case 2). The suspension flow of \(\phi^{2}\) in \(\widetilde{X_{2}}\) induces a pseudo-Anosov flow \(\Upsilon\) of \(\Sigma_{2}(K)\) since \(2\Delta(\mu,\delta)=2\) ([F]). Let \(\mathcal{E}^{s}\) denote the weak stable foliation of \(\Upsilon\). As explained in [BS, A.4], \(\Sigma_{2}(K)\) is a Seifert fibered \(3\)-manifold with base orbifold \(S^{2}(2,3,2q+1)\) (i.e. \(S^{2}\) with three cone points of index \(2,3,2q+1\) respectively). Note that all pseudo-Anosov flows in Seifert fibered \(3\)-manifolds are \(\mathbb{R}\)-covered Anosov flows ([Ba]). Thus \(\Upsilon\) is an \(\mathbb{R}\)-covered Anosov flow and \(\mathcal{E}^{s}\) is an \(\mathbb{R}\)-covered foliation. \(\mathcal{E}^{s}\) contains no compact leaf since the stable foliation of \(\phi\) contains no compact leaf. By [Br, Corollary 7], \(\mathcal{E}^{s}\) can be isotoped to be transverse to the Seifert fibers of \(\Sigma_{2}(K)\). Because the base orbifold of \(\Sigma_{2}(K)\) is orientable, the Seifert fibers of \(\Sigma_{2}(K)\) have continuously varying orientations, and these orientations define a co-orientation on \(\mathcal{E}^{s}\). Thus \(\phi^{2}\) is co-orientable and co-orientation-preserving, and therefore \(\phi\) is co-orientable. Since \(\phi\) is right-veering and \(K\) is a knot in \(S^{3}\), \(\phi\) can only be co-orientation-reversing.
_The proof of (b)._ It can be deduced from [FS] that \(\delta=4g(K)-2\). We give another proof here using (a). Since \(\Upsilon\) is an \(\mathbb{R}\)-covered Anosov flow, \(\Upsilon\) has no singular orbit. So the stable foliation of \(\phi\) has no singularity in \(Int(S)\). It follows that the stable foliation of \(\phi\) has \(4g(K)-2\) singularities in \(\partial S\) ([FM, Proposition 11.4]). As \(\Delta(\mu,\delta)=1\), we have \(\delta=4g(K)-2\).
_The proof of (c)._ This follows from (b) and Corollary 1.10 directly.
_The proof of (d)._ Let \(\widetilde{X_{n}}\) denote the \(n\)-fold cyclic cover of \(X\), which is also the mapping torus of \(S\) with monodromy \(\phi^{n}\). If \(n\) is even, then \(\phi^{n}\) is co-orientation-preserving, so the result can be deduced from Theorem 1.7 directly. Now we assume that \(n\) is odd (then \(\phi^{n}\) is co-orientation-reversing). We denote \(4g(K)-2\) by \(p\). There is a unique \(a\in\mathbb{Z}_{\geqslant 0}\) with \(-\frac{1}{2}p<n-ap\leqslant\frac{1}{2}p\), and let \(b=n-ap\). We fix the canonical coordinate system on \(\partial\widetilde{X_{n}}\). By Remark 1.11 (a), \((p;b)\) is the degeneracy locus
of the suspension flow of \(\phi^{n}\) on \(\partial\widetilde{X_{n}}\). Applying Corollary 1.10 to \(\widetilde{X_{n}}\), a slope \(s\) on \(\partial\widetilde{X_{n}}\) is a CTF filling slope if \(s\notin[\frac{p}{b+1},\frac{p}{b-1}]\) (when \(b>1\)) or \(s\notin[\frac{p}{2},+\infty)\cup\{\infty\}\) (when \(b=1\)). The inverse image of the slope \(\infty\) on \(\partial X\) is the slope \(-\frac{1}{a}\) on \(\partial\widetilde{X_{n}}\). When \(b=1\), we have \(a\geqslant 1\) since \(n>1\), and thus \(-\frac{1}{a}\notin[\frac{p}{2},+\infty)\cup\{\infty\}\). When \(b>1\), we have \(\frac{p}{b+1},\frac{p}{b-1}\in(0,+\infty)\) and \(-\frac{1}{a}\in(-\infty,0)\cup\{\infty\}\), which implies \(-\frac{1}{a}\notin[\frac{p}{b+1},\frac{p}{b-1}]\). The result follows directly.
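The case analysis in this last step is purely arithmetic, so it can be spot-checked numerically. The following sketch (Python; the function name and the sampled covering degrees are illustrative only) computes \(a,b\) for \(p=4g(K)-2\) and verifies that the lifted meridian slope \(-\frac{1}{a}\) avoids the excluded interval, for the cases \(b\geqslant 1\) treated above.

```python
from fractions import Fraction

def meridian_avoids_excluded_interval(g, n):
    """Arithmetic check for the n-fold cyclic cover (n odd): with p = 4g - 2,
    write n = a*p + b, -p/2 < b <= p/2, and test that the lifted meridian
    slope -1/a misses the excluded interval of Corollary 1.10 (cases b >= 1)."""
    p = 4 * g - 2
    r = n % p
    b = r if r <= p // 2 else r - p
    a = (n - b) // p
    lifted = Fraction(-1, a) if a != 0 else None     # None stands for the slope infinity
    if b == 1:
        # excluded: [p/2, +infinity) together with the slope infinity
        return lifted is not None and lifted < Fraction(p, 2)
    if b > 1:
        # excluded: the closed interval [p/(b+1), p/(b-1)]
        lo, hi = Fraction(p, b + 1), Fraction(p, b - 1)
        return lifted is None or not (lo <= lifted <= hi)
    return None   # b < 0 is not covered by this sketch

# Example: q = 3, so g(K) = 5 and p = 18; check a few odd covering degrees.
print([meridian_avoids_excluded_interval(5, n) for n in (3, 5, 7, 9, 19, 37)])
```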
|
2303.01250 | Toward a Universal Theory of Stable Evolution | The backbone of nonequilibrium thermodynamics is the stability structure,
where entropy is related to a Lyapunov function of thermodynamic equilibrium.
Stability is the background of natural selection: unstable systems are
temporary, and stable ones survive. The physical concepts from the stability
structure and the related formalism of constrained entropy inequality are
universal by construction. Therefore, the mathematical tools and the physical
concepts of thermodynamics help formulate dynamical theories of any systems in
social and natural sciences. | Peter Ván | 2023-03-01T12:08:57Z | http://arxiv.org/abs/2303.01250v1 | # Toward a universal theory of stable evolution
###### Abstract.
The backbone of nonequilibrium thermodynamics is the stability structure, where entropy is related to a Lyapunov function of thermodynamic equilibrium. Stability is the background of natural selection: unstable systems are temporary, and stable ones survive. The physical concepts from the stability structure and the related formalism of constrained entropy inequality are universal by construction. Therefore, the mathematical tools and the physical concepts of thermodynamics help formulate dynamical theories of any systems in social and natural sciences.
## 1. Introduction
Universality is a remarkable property of physical theories, quantities and parameters. Temperature, second-order phase transitions, gravity, or the gas constant are universal in the sense that they do not depend on material properties. One of the famous demonstrations of the universality of free fall is the Eötvös-Pekár-Fekete experiment, where the measured ratio of inertial and gravitational masses was the same for several different materials [1]. Since then, many experiments have verified and refined this result for different length scales and materials [2]. For second-order phase transitions, the approach is similar: for different materials and material structures the critical exponents are the same [3, 4]. The universality of various physical constants, like the universal gas constant, also has an empirical component.
In the experiments, one recognises the independence from material. In the related theories, the empirical facts must be explained. Therefore universality becomes a construction principle, like in the case of general relativity, or can remain unexplained, but is a source of insight and methodological development, like in the case of critical exponents. The gas constant is a built-in property of the related theories, and its universality has somehow become natural.
The universality of temperature, i.e. the absolute temperature scale, is peculiar. It is the only case where we trace back the independence from matter to more fundamental aspects. Those are the reversibility and composability of processes and also the Second Law, a particular property of thermodynamic engines (see, e.g. [5]). "Engines" here are abstract representations of materials. Then the absolute temperature and, consequently, the existence of entropy follow. The argumentation can be and should be extended at least to some other thermodynamic quantities, like the pressure [6]. The key concept is the strange, fictitious "process" of equilibrium thermodynamics. In equilibrium thermodynamics, processes are time-independent; the change of thermodynamic state refers to so-called quasi-static processes that run through equilibrium states. This is why the scope of thermodynamics is generally thought to be limited. It is a strange contradiction: because of universality,
substances have temperature independently of their material structure, be it an electromagnetic field, a fluid or an ensemble of particles, and, at the same time, the starting point of the deductive justification has limited validity. This dilemma about the validity range of thermodynamics is remarkable and thought-provoking (compare [7, 8]).
Another important manifestation of universality is related to matter-related evolution equations, the ordinary or partial differential equations describing the processes. The Fourier equation of heat conduction and the Fick equation of diffusion are prominent examples: these are the same differential equations that are valid for very different materials and mechanisms. The equation works well for heat conduction and diffusion, in fluids and solids as well. The material's structure is only reflected in the parameters of the differential equation; the heat conduction coefficient, diffusion coefficient, and density are different, but the equation is the same. This observation motivated, for example, the research direction called synergetics [9]. That kind of universality of the evolution equations will be called _process universality_. It is mainly related to nonequilibrium thermodynamics: in the particular case of synergetics, it has been linked to order parameters of phase transitions, the fields describing non-equilibrium structures. Nevertheless, that kind of explanation remains at the empirical level: no principle in synergetics could explain the universality of the evolution equations in detail.
The appearance of thermodynamic concepts and principles in biological and social processes can be another example of empirical universality. One can see - or rather feel - the applicability, but without any physical insight that could explain it. In the case of biological evolution and economics, thermodynamics not only emerges as a helpful analogy but also leads to a useful conceptual framework1. One may ask what the reason is for the appearance of thermodynamics in so many areas of the sciences and humanities, and why its success is limited.
Footnote 1: There are too many and too different suggested connections of biological systems and thermodynamics. Classical irreversible thermodynamics focused on the concept of dissipative structures [10]. Some other exciting suggestions are [11, 12, 13, 14]. It is also remarkable that the mechanism of natural selection is based on a stability argument [15]. In economics, there were also high hopes in classical and irreversible thermodynamics, [16, 17], and these approaches were further refined also for its process models [18, 19]. Recent approaches focus on statistical aspects, see, e.g. [20].
This paper argues that thermodynamic state changes of homogeneous bodies can be understood as real processes and that conditions for asymptotic stability of thermodynamic equilibrium are based on concepts of thermostatics. In this framework, i.e., for non-equilibrium thermodynamics of homogeneous bodies, which can be called ordinary thermodynamics, the total entropy is the Lyapunov function that ensures the stability of equilibrium. Thus, equilibrium thermodynamics is a dynamical theory in disguise, where the stability properties of the dynamics interpret the second law and entropy. Hence, a conceptual background emerges for the above-mentioned process universality. The stability aspect is why thermodynamics appears in so many areas, and it is the key to extend its applicability.
This paper explains the stability structure of equilibrium thermodynamics in the case of the simplest thermodynamic system. Then the universality of temperature is analysed in the light of the stability background, and, in the end, the scope of the stability based approach to social and scientific dynamics is shortly discussed.
## 2. Second Law and Lyapunov stability
Classical equilibrium thermodynamics (hereafter called _thermostatics_) developed during the 19th century. By this time, mechanics was the basis of natural philosophy and the model for all other new physical disciplines. However, thermostatics, the newest of the classical branches of physics2, is not a dynamical theory, and it is not based on an evolution equation. Non-equilibrium thermodynamics appeared as a classical field theory; its evolution equations describe the evolution of continua in space and time. A dynamic theory of homogeneous bodies would be a natural reduction, which was attempted in the monograph of de Groot and Mazur [22], among others. However, the homogenisation of the continuum equations of non-equilibrium thermodynamics did not yield a theory compatible with thermostatics. Subsequently, the research of Truesdell and Bharatha showed that the other, bottom-up construction is not straightforward either: it is not obvious how to complement the first law with an additional differential equation while keeping the structure of thermostatics at the same time [23].
Footnote 2: Only the second and the first laws were recognised then; the third law was formulated in 1912 and the zeroth law in 1939 [21].
Further difficulties in the non-equilibrium thermodynamics of homogeneous bodies are illustrated by later developments, by _finite-time_ and _endoreversible_ thermodynamics [24, 25, 26], and also by the thermodynamics of discrete systems [27]. There, time-dependent, real processes are treated without evolution equations and without a dynamic interpretation of the Second Law [28, 29, 30].
The problem is the connection between the dynamic and static aspects of entropy. The dynamic part of the Second Law is related to insulated systems, where the entropy is expected to grow, but nothing happens in insulated homogeneous bodies; time-dependent processes occur only in open thermodynamic systems. The solution of the paradox and the consequent complete dynamic formulation of the thermodynamics of homogeneous bodies was given by Matolcsi [31] and is called _ordinary thermodynamics_ [32].
In the following, a brief presentation of the core of ordinary thermodynamics shows that thermodynamic stability, zeroth law and increasing entropy are the conditions for asymptotic stability of thermodynamic equilibrium, and, most remarkably, the total entropy of the thermodynamic system is a Lyapunov function of the thermodynamic equilibrium. This paper's purpose is not to critique thermodynamic theory or to resolve its paradoxes in detail. These are dealt with in many respects in the books mentioned above [23, 32, 33].
### Thermostatics and thermodynamics
The key concept of thermodynamics is _equilibrium_. However, since there are no real processes in the classical theory of homogeneous bodies, and the process terms used - quasistatic, reversible, or irreversible - are vaguely defined, the concept of thermodynamic equilibrium is complex3. Nevertheless, thermodynamic quantities are taken to be meaningful only in equilibrium in all introductory thermodynamics books [34, 35]. Equilibrium is implicitly defined; its meaning can be, among other things, time independence, homogeneity and the absence of dissipation. The zeroth law of thermodynamics is intended to clarify the conditions.
The concept of thermodynamic equilibrium is formulated here self-consistently, without any microscopic, statistical background. In this conception, according to the zeroth law, equilibrium means the _reducibility_ and _separability_ of physical systems. Reducibility, since we compress the combined action of many atomic, molecular, and mesoscopic physical quantities into some thermodynamic variables. Separability means that individual physical quantities, the state variables, characterise thermodynamical bodies, even if they are interacting.
Time independence is not an essential element of the concept of thermodynamics. The relationship between thermodynamics and thermostatics can be clarified by understanding the dynamic role of thermodynamic potentials, especially entropy.
To describe the time variation of thermodynamic quantities of homogeneous bodies, we can use ordinary differential equations, the _evolution equations_ of the given variables. Since we will be dealing with mechanical systems - fluids - these evolution equations are sometimes related to motion; therefore, the evolution equations of thermodynamic theory must incorporate the corresponding form of Newton's equation, both dissipative and nondissipative.
## 3. Thermostatics
Three basic hypotheses are postulated. They are not mathematical axioms because a complete mathematical precision would obscure the physical background, but some precision is unavoidable for clarity4. First of all, the existence of a thermodynamic body as a separate entity of reality is a fundamental assumption.
Footnote 4: For example, I will assume all functions to be differentiable, invertible, etc., although we know well that the discussion of potentials or phase boundaries, requires fixing the differentiability and also the domains of the functions.
**A1:**_There are independent thermodynamic bodies. Extensive state variables and intensive state functions characterise them._
This is the static part of the zeroth law.
A thermodynamic body is characterized by the state space consisting of _\(N\) extensive thermodynamic quantities_, \((X^{1},X^{2},...,X^{N})\) which form a vector space \(\mathbb{X}\) and _\(N\) intensive thermodynamic quantities_, a state function \((Y_{1},Y_{2},...,Y_{N}):\mathbb{X}\rightarrow\mathbb{X}^{*}\), where \(\mathbb{X}^{*}\) is the dual space of \(\mathbb{X}\). The components of the intensive state function are experimentally given. The most important assumption here is separability, i.e. thermodynamic bodies can be characterised by their own independent state functions even when they are apparently under the influence of other thermodynamic bodies or the environment due to physical interactions or conservation laws5.
Footnote 5: The additivity or a more general concept of composability is not trivial either, also with considering the simplest microscopic constitution [36]. Additivity is not to be confused with extensivity.
A defining property of extensive state variables is that they are proportional to the extension of the thermodynamic body. However, the conditions of extensivity are not treated here; they are simply the canonical set of state variables.
**A2:**_The intensive state function has a potential, the potential is the entropy._
That is, for a thermodynamic body, there exists an entropy function \(S:\mathbb{X}\rightarrow\mathbb{R}\), whose derivative is \((Y_{1},Y_{2},...,Y_{N})\). Traditionally this is expressed by the Gibbs relation
\[\mathrm{d}S=Y_{A}\mathrm{d}X^{A}=Y_{1}\mathrm{d}X^{1}+Y_{2}\mathrm{d}X^{2}+... +Y_{N}\mathrm{d}X^{N}. \tag{1}\]
which means more precisely that
\[\frac{\partial S(X^{1},X^{2},...,X^{N})}{\partial X^{A}}=Y_{A}(X^{1},X^{2},...,X ^{N}) \tag{2}\]
Here an upper index indicates a vector, a lower index a covector, and a repeated index denotes the duality mapping (summation). This requirement imposes conditions on the experimentally defined functions: the derivative of the intensive state function is symmetric6.
Footnote 6: The potential conditions, the so called Maxwell relations, are rarely measured directly. We accept the potential property and design the experiments accordingly.
**A3:**_The entropy is concave_.
This requirement is called _thermodynamic stability_. Then it follows that the second derivative of the entropy, a second order tensor, is negative definite7.
Footnote 7: Actually it is only semidefinite if Euler homogeneity is required, its kernel is spanned by all \((X^{1},X^{2},...,X^{N})\), where thermodynamic stability holds, multiplied by a nonnegative number, [32].
The above conditions are gradually weaker from a physical point of view. Phase transitions and boundaries violate thermodynamic stability; property _A3_ is local. The existence of macroscopic entropy, defined in _A2_, is rarely questioned, mainly in the case when it is calculated assuming ideal theoretical microstructures. _A1_ is the most profound assumption; the separability and the possibility of reduced complexity are rudimentary. Theories that would violate _A1_ are not only not thermodynamic, they are not physical: this kind of reductionism is fundamental in the existing theories of physics.
In what follows, it will be shown in detail, using the simplest examples, that the above fundamental properties have a common role: they are conditions ensuring the asymptotic stability of equilibrium of thermodynamic systems within the framework of a dynamical theory. This, in turn, highlights how and in what sense thermodynamics is fundamental and why it is part of all physical disciplines: the existence of asymptotically stable states of matter is essential for experimental reproducibility and, thus, for the existence of objective natural science.
## 4. Thermodynamics
Regarding dynamics, the fundamental assumption is that there is a dynamical law, a differential equation that defines and determines processes. We do not specify that in the form of a formal postulate; a representative example will be given in the next section. This subsection aims to clearly separate the two independent parts of the Second Law and emphasize that there is an evolution equation in the background.
**A4:**_A system is a collection of interacting bodies, the processes of the system are the solutions of an evolution equation._
The evolution equation is an ordinary differential equation, but it is not arbitrary:
**A5:**_A requirement is imposed on the dynamical equation which ensures that the entropy is an increasing function along the processes in an insulated system._
More precisely, this means that a process of a body is a function of time \(t\mapsto(X^{1}(t),X^{2}(t),...,X^{N}(t))\), and the entropy change along the process is \(t\mapsto S(X^{1}(t),X^{2}(t),...,X^{N}(t))\). Insulation is a physical condition, expressed by the conservation of the extensive quantities in the particular cases. The emphasis is on the phrase "along the processes", which are determined by the evolution equation. A demonstrative example and a short analysis of some traps in using \(A1-A5\) in modeling are given in the following section.
## 5. Ordinary thermodynamics - homogeneous fluids
The processes of thermodynamic bodies are functions of time in ordinary thermodynamics. In this respect, it is analogous to the mechanics of mass points. However, the thermodynamic state space is not related to spacetime; space does not play a role at all. Bodies of ordinary thermodynamics are models of matter with a homogeneous distribution in space.
### **Thermostatics of fluids.**
In gases and liquids, the classical extensive state variables are the internal energy \(E\), the volume \(V\) and the particle number \(N\) (or mass _M_). Entropy is a function of these: \(S(E,V,N)\).
According to \(A2\), entropy is a thermodynamic potential, and its partial derivatives are the entropic intensive state functions. Therefore, if the equations of state, i.e. the intensive state functions, are given, the entropy is defined as a potential in terms of its partial derivatives:
\[S(E,V,N),\qquad\frac{\partial S}{\partial E}=\frac{1}{T},\qquad\frac{\partial S }{\partial V}=\frac{p}{T},\qquad\frac{\partial S}{\partial N}=-\frac{\mu}{T}. \tag{3}\]
The related differential form, the Gibbs relation, is:
\[\mathrm{d}E=T\mathrm{d}S-p\mathrm{d}V+\mu\mathrm{d}N. \tag{4}\]
Here \(T\) is the temperature, \(p\) is the pressure and \(\mu\) is the chemical potential. The differential form is helpful for the transformation of variables. In the following, we assume that the number of particles is constant and entropy is a function of volume and internal energy only. For gases, the usual intensive state functions, the equations of state, are the thermal and caloric ones, \(p(E,V)\) and \(T(E,V)\). If the entropic intensives \(1/T\) and \(p/T\) satisfy
\[\frac{\partial}{\partial V}\frac{1}{T}=\frac{\partial}{\partial E}\frac{p}{T}, \tag{5}\]
then there is an entropy function with the partial derivatives given in (3). Up to three variables, _A2_ practically does not require physical conditions, because the temperature is constructed as an integrating divisor, and the requirement of extensivity, the scaling of thermodynamic properties with size, preserves the potential property of the two-variable function for the third variable, as was shown in the early analysis of Gyula Farkas [38, 39].
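As a quick illustration of the potential condition (5), the following short symbolic computation (Python with sympy; the van der Waals equations of state below are standard textbook forms and are assumed here only for illustration) verifies that the thermal and caloric equations of state of a van der Waals gas indeed satisfy (5).

```python
import sympy as sp

# Textbook van der Waals equations of state, assumed for illustration:
#   caloric:  E = c*N*k*T - a*N**2/V,   thermal:  p = N*k*T/(V - N*b) - a*N**2/V**2
E, V, N, k, a, b, c = sp.symbols('E V N k a b c', positive=True)

T = (E + a*N**2/V) / (c*N*k)                   # temperature as a function of (E, V)
inv_T = 1 / T                                  # entropic intensive 1/T(E, V)
p_over_T = N*k/(V - N*b) - a*N**2/(V**2 * T)   # entropic intensive p/T(E, V)

# The potential (Maxwell-type) condition (5): d(1/T)/dV - d(p/T)/dE must vanish
print(sp.simplify(sp.diff(inv_T, V) - sp.diff(p_over_T, E)))   # prints 0
```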
The _thermodynamic stability_, _A3_, is a further restriction. Then the second derivative of entropy,
\[D^{2}S(E,V)=\begin{pmatrix}\frac{\partial}{\partial E}\frac{1}{T}&\frac{ \partial}{\partial V}\frac{1}{T}\\ \frac{\partial}{\partial E}\frac{p}{T}&\frac{\partial}{\partial V}\frac{p}{T} \end{pmatrix}, \tag{6}\]
is negative definite. It is easy to see that it is equivalent to the following inequalities:
\[\frac{\partial T}{\partial E}(E,V)>0,\qquad\frac{\partial p}{\partial V}(T,V)<0. \tag{7}\]
Please note the variables of the second inequality. For an ideal gas, (7) is valid in the whole canonical state space, but for a van der Waals gas, for example, it is not: the region under the so-called spinodal curve in the \((p,V)\) diagram violates the second inequality of thermodynamic stability.
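The spinodal region mentioned above is easy to locate numerically. A minimal sketch (Python; reduced van der Waals units are used, and the chosen reduced temperature is only an example) finds where a subcritical isotherm violates the second inequality of (7).

```python
import numpy as np

# Reduced van der Waals isotherm: p_r = 8*T_r/(3*V_r - 1) - 3/V_r**2.
# Below the critical temperature (T_r < 1) there is a volume range with
# dp/dV > 0, i.e. where the second inequality of (7) fails: the spinodal region.
Tr = 0.9
Vr = np.linspace(0.55, 5.0, 4000)
pr = 8 * Tr / (3 * Vr - 1) - 3 / Vr**2
dpdV = np.gradient(pr, Vr)
unstable = Vr[dpdV > 0]
print(f"stability fails for V_r roughly in [{unstable.min():.2f}, {unstable.max():.2f}]")
```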
A _reservoir_ is a thermodynamic body whose temperature and pressure are constant in any process. Therefore, the equations of state are constant functions, and one can easily calculate the entropy of a reservoir, the _ambient entropy_. Let us denote
the constant temperature and pressure of the reservoir by \(T_{0}\) and \(p_{0}\), respectively. Then the respective entropy is
\[S_{a}(E_{a},V_{a})=\frac{1}{T_{0}}E_{a}+\frac{p_{0}}{T_{0}}V_{a}+S_{a}, \tag{8}\]
where \(E_{a}\) and \(V_{a}\) are the internal energy, and the volume of the reservoir, respectively, \(S_{a}\) is constant. Reservoirs represent idealised, simple environments.
### Termodynamics
Evolution equations of thermodynamics cannot be arbitrary; there are some general restrictions and evident properties of equilibrium that must be fulfilled. For example, the first law of thermodynamics, i.e. the balance of internal energy, can be related to a differential equation of the following form
\[\dot{E}=q(E,V,..)+w(E,V,..). \tag{9}\]
Here, the dot denotes the time derivative, \(q\) is the thermal energy transferred per unit time (heating), and \(w\) is the mechanical power, the value of the work done on the homogeneous body per unit time (working). Also, the three dots indicate that the first law is trivial in insulated systems, there the energy is conserved, one expect work and heat exchange only in systems with interactive bodies. The quantities for heat and power are not interpretable unless the thermodynamic body under consideration is in contact with other bodies or at least with its environment. \(q\) and \(w\) are the quantities characterising the interaction. Therefore, they must also depend on the characteristics of the environment or adjacent thermodynamic body or bodies. In the following, we will consider a simple thermodynamic system as shown in Figure (1), a gas in thermal and mechanical interaction with its environment. The dotted spaces in the above formula (9) should be replaced by the characteristic parameters of the environment.
A particular case is Newton's cooling law, which models the heat exchange of a body and its environment. That example shows some of the expected properties of a complete theory:
\[q(E,V,T_{0},p_{0})=-\alpha(T(E,V)-T_{0}). \tag{10}\]
\(T\) and \(T_{0}\) are the temperatures of the body and the environment, and \(\alpha>0\) is the positive heat exchange coefficient. If the mechanical power is zero and the internal energy equals the heat capacity times the temperature, then we get a simple differential equation for the internal energy. When solved, it gives the time variation of the internal energy, \(E(t)\), modelling, e.g., a mug of hot tea in a room temperature environment. However, the combination of heat exchange and mechanical work is not straightforward.
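As a minimal sketch of this remark (the parameter values below are made up for illustration), the cooling of the tea can be integrated directly, assuming \(E=C\,T\) with a constant heat capacity \(C\) and zero mechanical power:

```python
C, alpha, T0 = 500.0, 5.0, 293.0   # heat capacity [J/K], exchange coefficient [W/K], ambient [K]
T, dt = 363.0, 1.0                 # initial tea temperature [K] and time step [s]

for _ in range(3600):              # one hour of cooling
    E_dot = -alpha * (T - T0)      # first law (9) with w = 0 and q from (10)
    T += E_dot / C * dt            # since E = C*T, dT/dt = E_dot / C

print(f"Temperature after one hour: {T:.1f} K (ambient {T0:.1f} K)")
```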
It is also confusing that the Gibbs relation, (4) with \(dN=0\), is associated with the first law. One can find formulas such as
\[\mathrm{d}E=\delta Q+\delta W=T\mathrm{d}S-p\mathrm{d}V,\]
where \(\mathrm{d}\) is a differential as before, but \(\delta\) does not just denote some variation, indicating the inconsistent mathematics when combining (9) and (4). It is straightforward to reinterpret the above equations with time derivatives:
\[\dot{E}=q+w\stackrel{{?}}{{=}}T\dot{S}-p\dot{V}=\dot{E}. \tag{11}\]
The first equation is the first law, and the last is the time derivative of the internal energy from the Gibbs relation, (4). The question is how they could be valid at the same time. From a mathematical point of view they are unrelated. On the left-hand side, \(q\) and \(w\) characterise the interaction, depending on both the environment and body characteristics, while on the right-hand side only quantities appear that characterise the thermodynamic body, the gas in the cylinder in Fig. 1. The only possible consistent explanation requires evolution equations.
In the following, for the thermodynamic system shown in Fig. 1, we interpret the first law as the energy balance in the following form:
\[\dot{E}=q(E,V,E_{a},V_{a})+w(E,V,E_{a},V_{a}). \tag{12}\]
However, this is not a complete evolution equation, and the interaction functions are unknown [23]. Therefore, for the volume, we introduce
\[\dot{V}=f(E,V,E_{a},V_{a}), \tag{13}\]
where \(f\) characterises the interaction of the body and the environment. Now, one must consider the other expected physical conditions: the requirements of equilibrium, conservation laws and the Second Law. Those restrict the possible forms of the interaction functions, \(q,w\) and \(f\).
In a dynamical theory _equilibrium_ is time-independence. It is introduced in terms of constitutive functions, usually associated with intensive state variables. In our case, it is expected that when temperatures and pressures are equal, the thermodynamic system does not change; the right-hand side of the above differential equation (12)-(13) is zero. Therefore, it is assumed that the interaction quantities depend on the extensive state variables through the intensive ones and are zero when the intensives are equal:
\[q(E,V,E_{a},V_{a})=q\big{(}T(E,V),p(E,V),T_{0},p_{0}\big{)},\quad\text{and}\quad q\big{(}T_{0},p_{0},T_{0},p_{0}\big{)}=0, \tag{14}\] \[f(E,V,E_{a},V_{a})=f\big{(}T(E,V),p(E,V),T_{0},p_{0}\big{)},\quad\text{and}\quad f\big{(}T_{0},p_{0},T_{0},p_{0}\big{)}=0. \tag{15}\]
Note that this seemingly complicated statement is a basic assumption in thermodynamics; it is part of the zeroth law, [40, 41, 34, 42, 28].
Figure 1. A simple thermodynamic system, a gas connected to a reservoir environment. \(T_{0}\) and \(p_{0}\) are the temperature and the pressure of the reservoir.
Mechanical interaction, \(w\), is assumed in an ideal form, proportional to the rate of volume change, and therefore written as
\[w=-pf. \tag{16}\]
Then, the equilibrium solution of the evolution equation, (12)-(13), is obtained as the solution \((E_{0},V_{0})\) of the algebraic equations
\[T(E_{0},V_{0})=T_{0},\qquad p(E_{0},V_{0})=p_{0}. \tag{17}\]
It is possible that the equilibrium is not unique: for a given constant ambient temperature and pressure, (17) may have several solutions. For example, the Van der Waals equation of state has this property.
Then we consider that the body-environment system is insulated, therefore
1. The volume of the thermodynamical system is constant: \(V+V_{a}=V_{tot}=const.\). From this, it follows that \[\dot{V}+\dot{V}_{a}=0.\] (18)
2. The energy of the thermodynamical system is constant: \(E+E_{a}=E_{tot}=const.\). Then \[\dot{E}+\dot{E}_{a}=0.\] (19)
The entropy of the _whole system_ is the sum of the entropies of the body and the environment. Its change along the process can be calculated without solving the evolution equation:
\[\frac{d}{dt}(S+S_{a}) = \frac{d}{dt}\big{(}S(E,V)+S_{a}(E_{a},V_{a})\big{)}= \tag{20}\] \[= \left(\frac{1}{T}\dot{E}+\frac{p}{T}\dot{V}-\frac{1}{T_{0}}\dot{ E}-\frac{p_{0}}{T_{0}}\dot{V}\right)=\left(\frac{1}{T}-\frac{1}{T_{0}}\right)(q-pf)+ \left(\frac{p}{T}-\frac{p_{0}}{T_{0}}\right)f=\] \[= \frac{1}{T_{0}}\left((T_{0}-T)\frac{q}{T}+(p-p_{0})f\right).\]
Here we first used the constraints (18)-(19), then the definitions of the entropies of the body and the environment, (3) and (8), and finally the evolution equations (12)-(13). Thus (20) is the derivative of the total entropy, \(S+S_{a}\), along a process defined by the system of differential equations (12)-(13); in the following, an overdot on a state function, e.g. \(\dot{L}\), denotes this derivative along the evolution equations. According to _A4_, the total entropy increases, therefore the interaction functions must fulfil the following inequality:
\[(T_{0}-T)\frac{q}{T}+(p-p_{0})f\geq 0. \tag{21}\]
This inequality is understood as a constraint on the functions \(q\) and \(f\). Equality can occur only in equilibrium. Its physical meaning is straightforward. For example, in the case of \(f\equiv 0\), it follows from the above inequality that the direction of the heat flow is opposite to the difference between the temperature of the body and the ambient temperature. Therefore (21) is Clausius' formulation of the Second Law, [43; 32]. On the other hand, if \(q\equiv 0\), the volume changes in the direction of the pressure difference. That is, if the body's pressure is greater than that of the environment, its volume increases, and if it is smaller, its volume decreases.
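The statements of this and the following paragraphs can be made concrete with a small numerical sketch. The interaction functions \(q=-\lambda(T-T_{0})\) and \(f=\kappa(p-p_{0})\) used below are example choices compatible with (14)-(16) and with (21), since \((T_{0}-T)q/T+(p-p_{0})f=\lambda(T-T_{0})^{2}/T+\kappa(p-p_{0})^{2}\geq 0\); they are not specified in the text, and a monatomic ideal gas is assumed:

```python
import numpy as np

N_kB = 1.0                        # N k_B in arbitrary units
T0, p0 = 1.0, 1.0                 # reservoir temperature and pressure
lam, kap = 0.5, 0.2               # example transport coefficients, both positive

T_of = lambda E, V: 2.0 * E / (3.0 * N_kB)     # caloric equation of state (monatomic ideal gas)
p_of = lambda E, V: N_kB * T_of(E, V) / V      # thermal equation of state

def L_fun(E, V):
    # Body entropy (up to a constant) plus the ambient entropy expressed through (18)-(19):
    # the total entropy, i.e. the Lyapunov function of the equilibrium, up to constants.
    return N_kB * (np.log(V) + 1.5 * np.log(E)) - E / T0 - p0 * V / T0

E, V, dt = 3.0, 0.4, 1e-3         # initial nonequilibrium state and time step
L_prev = L_fun(E, V)
for _ in range(100000):
    T, p = T_of(E, V), p_of(E, V)
    q = -lam * (T - T0)           # thermal interaction: vanishes when T = T0, cf. (14)
    f = kap * (p - p0)            # mechanical interaction: vanishes when p = p0, cf. (15)
    E += (q - p * f) * dt         # energy balance (12) with ideal working (16)
    V += f * dt                   # volume evolution (13)
    assert L_fun(E, V) >= L_prev - 1e-9   # the total entropy never decreases, cf. (20)-(21)
    L_prev = L_fun(E, V)

print(f"T -> {T_of(E, V):.4f} (T0 = {T0}), p -> {p_of(E, V):.4f} (p0 = {p0})")
```

Running the sketch, the state relaxes to the equilibrium (17) while the total entropy grows monotonically, which is exactly the content of the stability statement below.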
Several conditions were required - the concavity of entropy, the dynamic part of the zeroth law for equilibrium, (14)-(15), and the growth of entropy, (21) - and the following statement, which explains their role, is their consequence:
**Stability.**_The equilibrium solution, (17), of the differential equation (12)-(13) is asymptotically stable_.
It is valid, because we can show that \(L(E,V)=S(E,V)+\hat{S}_{a}(E,V)\) is the Lyapunov function of the equilibrium. Here, \(\hat{S}_{a}(E,V)=S_{a}\big{(}(E_{tot}-E),(V_{tot}-V)\big{)}\), i.e. the entropy of the environment expressed by the entropy of the body using the constraints.
Then, the derivative of \(L\) is zero in equilibrium, and its second derivative is equal to the second derivative of the entropy of the body, which is concave by the requirement of thermodynamic stability. That is, \(L\) has a strict maximum in thermodynamic equilibrium.
Moreover, \(\dot{L}\), the derivative of \(L\) along the evolution equations, has a strict minimum in equilibrium according to the inequality (21); hereby the asymptotic stability follows from Lyapunov's theorem.
Therefore, the usual assumptions of thermodynamics, in particular
* the existence of entropy as a thermodynamic potential,
* thermodynamic stability, i.e. the concavity of entropy,
* the interpretation of the entropy of the environment by the conservation constraints,
* the equilibrium conditions,
* classical working and
* nonnegative entropy production rate, as a condition for interactions
together result in the conditions of Lyapunov's theorem of asymptotic stability, the stability and attractivity of the thermodynamic equilibrium (see [44] or e.g. in [45], theorem 6.2.).
There are some interesting consequences:
1. The Lyapunov function multiplied by the temperature of the environment gives, up to an additive constant and the sign convention, the _exergy_, the maximum available work of the system: \(T_{0}L(E,V)=T_{0}S(E,V)-E-p_{0}V\). Therefore exergy is not a fundamental concept.
2. The derivative of the body entropy along the evolution equation, the entropy production of the thermodynamic body, is the heating divided by the temperature: \[\dot{S}=DS\cdot(q-pf,f)=\frac{1}{T}(q-pf)+\frac{p}{T}f=\frac{q}{T}.\] (22) Now, the following equation is well-defined: \[\dot{E}=q+w=T\dot{S}-p\dot{V}.\] (23) Therefore, the relation of the first law and the Gibbs relation, (11), is clarified. Usually \(dS=\delta Q/T\) defines the so-called _reversible_ state changes in thermodynamic textbooks. Now, we are dealing with well-defined processes, and \(q/T\) is the derivative of the entropy of the body along the differential equations (12)-(13).
3. The above evolution equation leads to the interpretation of the concept of _quasi-static process_. Namely, if the temperature and pressure of the environment are equal to the temperature and pressure of the body at any moment, the body is in equilibrium, i.e. the energy and volume changes cease, and the process stops. In this sense, the process passes through
a series of thermodynamic equilibria; it is quasi-static. The name quasi-static is misleading and paradoxical; perhaps it is better to call this property _controllable_. The instant equilibration indicates the lack of memory and inertia in the system.
The simple thermodynamic system and evolution equation can be extended to several thermodynamic state variables, bodies and substances: the previous considerations and conditions regarding Lyapunov stability provide a well-defined mathematical framework: ordinary thermodynamics is nonequilibrium thermodynamics of homogeneous bodies.
From our point of view, the clear physical meaning is the most remarkable: thermodynamics is apparently a theory of stability in a dynamic sense. It is important to emphasise that stability is not merely a part of thermodynamics, as it is treated in classical irreversible thermodynamics [10, 46]; rather, the theory itself organises the experience, establishing a set of concepts with which to construct dynamics with asymptotically stable equilibrium. Asymptotic stability is the central notion of thermodynamics.
Haddad and coworkers also recognised the role of Lyapunov stability and the possibility of a rigorous treatment, [47, 48]. They indicate that thermodynamic concepts can be fruitful in optimisation and control, and also that the mathematics of control theory clarifies and refines thermodynamical considerations.
Now we investigate the concept of absolute temperature in the light of the stability framework.
## 6. Absolute and universal
Temperature is a useful concept only if it is universal; otherwise the temperatures of bodies made of different materials could not be compared. That was the motivation of Lord Kelvin, then still William Thomson, to introduce absolute temperature [49]. The adjective absolute refers to the property that temperature must be independent of the material8.
Footnote 8: It is sometimes referred to as absolute temperature scale. It is not about absolute zero temperature; several recent textbooks and Wikipedia are misleading.
Absoluteness is universality, and the related arguments are based on the Kelvin-Planck form of the Second Law and the concept of a reversible engine. In the original reasoning, the Second Law is formulated for cyclic processes of a heat engine. In Figure 2a, two heat reservoirs with empirical temperatures \(T_{1}>T_{2}\) represent constant temperature environments that are connected to the thermodynamic body (the "heat engine" in the classical terminology), where the heat exchanges with the reservoirs in one cycle are \(Q_{1}\) and \(Q_{2}\). There is also a mechanical part of the system; therefore mechanical work can be performed and consumed to regulate the operation, e.g. to temporarily connect or disconnect the reservoirs, enabling or disabling the heat exchange. The total mechanical work in a cycle, \(W\), is the difference of the absorbed and emitted heats, \(W=Q_{1}-Q_{2}\), according to the First Law. The structure of the engine can be arbitrary, and at the end of a cycle period, the state of the engine is restored. The Kelvin-Planck formulation of the Second Law claims that \(Q_{2}\), the emitted heat in a cycle, cannot be zero. No engine can perform work while only absorbing heat in a cycle. There is no perpetuum mobile of the second kind.
It is not assumed that the reservoirs are connected continuously to the engine or would be connected to the engine simultaneously. Then, an analysis of the system
reveals that the Kelvin-Planck statement follows from the existence of body entropy as a potential, [23, 32]. Also, the maximum efficiency is the Carnot efficiency. However, the existence of entropy of the body - therefore the Kelvin-Planck form - cannot say anything about the interactions, about heat exchange or about work, including their directions. Nor is it assumed that the engine's temperature is the same as the temperature of the reservoir9.
Footnote 9: That is the niche where finite-time thermodynamics exists, and where the freedom of optimising for maximum power appears, [24]. However, the fundamental importance of the Second Law remains intact. That can also be understood when compared to the original literature, like the works of Carnot, Kelvin and Maxwell, among others, [28].
The mechanical power during operation depends on time: there is a function \(\pi:t\mapsto\pi(t)\), and \(W=\int_{cycle}\pi(t)\,dt\). There are engines whose operation can be reversed in the sense that there is a power function that regulates the machine so that it absorbs the heat \(Q_{2}\) in a cycle from the reservoir with temperature \(T_{2}\). The required work, \(W^{\prime}\), and the heat emitted to the \(T_{1}\) reservoir, \(Q_{1}^{\prime}\), are in general different10. The power function for reversed operation is not necessarily the forward power with a negative sign. The timing and duration of the reversed operation, the opening and closing of the thermal and mechanical parts, and the insulation from and reconnection to the reservoirs may be different, too. The engine is called _reversible_ if the work required for reversed operation is the same as for forward operation, while the same amount of heat is absorbed from and emitted to the \(T_{2}\) reservoir in the reversed and forward modes, respectively.
Footnote 10: A reverse cycle air conditioner, a heat pump, is an example. It is not completely mechanical, it requires electricity and there are other differences, but otherwise it can be operated forward and backward, in a cyclic mode. However, its original state will never be restored.
Reversible operation is somehow more intricate than one may expect. First of all, good thermal engines are not the best heat pumps. The _thermal efficiency_ is defined as \(\eta=\frac{W}{Q_{1}}\): the more work is produced from less absorbed heat, the better. The _effectiveness_ of a heat pump, assuming a reversed operation in Figure 2a, is defined as \(\epsilon=\frac{Q_{2}}{W}\): the more heat is absorbed by less work, the better. Since \(\eta=\frac{1}{1+\epsilon}\), a bad thermal engine is a good heat pump and a good heat pump is a bad thermal engine, if one assumes reversible operation. However, the worst possible heat pump, with \(Q_{2}=0\), would dissipate all work to heat without absorbing any heat from the \(T_{2}\) reservoir. That kind of operation cannot be reversible, as follows from the above mentioned Kelvin-Planck form of the Second Law. Also, if one takes a heat pump with low effectiveness, one does not necessarily find a thermal engine that would have a larger thermal efficiency than the heat pump in a fictional reversed operation.
A reversible engine has the largest possible thermal efficiency; it is the best possible thermal engine. The classical reasoning is seemingly simple (see e.g. in [50]): if any machine had a larger efficiency than the reversible one, it would be easy to construct a perpetuum mobile of the second kind. If the thermal efficiency of the forward engine, \(\eta=\frac{W}{Q_{1}}\), is larger than the efficiency of the reversible one, \(\eta^{\prime}=\frac{W^{\prime}}{Q_{1}^{\prime}}\), then the first combined with the reversible one operated in a reverse mode becomes a perpetuum mobile, because at the end of the cycle the exchanged heat with \(T_{2}\) is zero, and \(1-\eta=\frac{Q_{2}}{Q_{1}}<1-\eta^{\prime}=\frac{Q_{2}}{Q_{1}^{\prime}}\), therefore \(Q_{1}^{\prime}<Q_{1}\) and \(W^{\prime}<W\). The combined engine absorbs \(Q_{1}-Q_{1}^{\prime}\) heat from the \(T_{1}\) reservoir while performing \(W-W^{\prime}\) work. Therefore the reversible engine has the largest possible efficiency not violating the
Second Law. However, the simplicity is misleading: a reversible engine in reverse operation is not the worst possible heat pump, and one cannot argue with reversible heat pumps instead of reversible thermal engines. This is because mechanical work can be converted to heat without difficulty, and the extra work reduces the efficiency of the machine. On the other hand, if that dissipated mechanical work is directed toward the \(T_{1}\) reservoir, contributing to \(Q_{1}\), then it can be useful as direct heating. One should be careful not to mix technical requirements with theoretical concepts.
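The combination argument above can be followed with a toy bookkeeping example (all numbers are made up for illustration): a hypothetical engine with an efficiency above the reversible one, run together with the reversible engine in reverse, would extract net work from the hot reservoir alone.

```python
Q1, W = 100.0, 45.0             # hypothetical engine with efficiency eta = W / Q1 = 0.45
Q2 = Q1 - W                     # heat emitted to the T2 reservoir in a cycle (First Law)

eta_rev = 0.40                  # reversible engine between the same reservoirs
Q1_rev = Q2 / (1.0 - eta_rev)   # reversed reversible engine returning the same Q2 to T2
W_rev = Q1_rev - Q2             # work consumed by the reversed operation

print("net work out:", W - W_rev)        # 8.33... > 0
print("net heat from T1:", Q1 - Q1_rev)  # 8.33... > 0, taken from a single reservoir
print("net heat to T2:", Q2 - Q2)        # 0.0: the cold reservoir is untouched over a full cycle
```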
Thermal efficiency can depend only on the temperatures of the reservoirs. If a difference in construction or materials led to a difference in efficiency, then one could combine the two engines as in the previous case; with the less efficient one operating backwards, the same perpetuum mobile could be constructed. Therefore the efficiency is universal and can be used to define the absolute temperature [49, 50, 5, 28].
The Second Law and the reversible operation are the most apparent conditions in the argumentation above. The reversible operation of a reversible engine does not require step-by-step reversibility of the corresponding processes; the forward and reverse power functions can be different. The possibility of cyclic operation is trivial from a technical point of view. It must be ideal in the sense that the engine must recover its original state at the end of the cycle. However, a reversible engine can operate in a reverse mode while the heat and work exchanges are the same as in the case of forward operation, but with the opposite sign.
In the simplest case, if the engine is a single thermodynamic body with the evolution equations of the previous section, the recoverability of the original state is provided if the contact with the reservoir and the original external pressure is restored. Then the original equilibrium is recovered because the processes are
Figure 2. a) Schematic representation of the heat engine: a thermodynamic system composed of a body, two thermal reservoirs with temperatures \(T_{1}\) and \(T_{2}\), and some other, non-specified parts absorbing and emitting mechanical work. \(Q_{1}\), \(Q_{2}\) and \(W\) are the heat and work emitted and absorbed in a cycle. b) Two engines combined in a forward and reversed mode of operation. The work produced by the first engine is used by the second engine to enforce reversed operation. If the second engine is a reversible one and \(W>W^{\prime}\), then the combination of the two becomes a Kelvin-Planck perpetuum mobile.
controllable in the sense of the previous section. For simple equations of state, e.g. for an ideal gas, complete control of the engine is provided by the regulation of the external pressure \(p_{a}(t)\): a Carnot process can run either forwards or backwards, and the conditions of a well-defined universal temperature are fulfilled.
In summary, the universality of the temperature is based on the Second Law.
## 7. The role of universality
Universality in physics has empirical and analytical aspects. The widespread appearance of thermodynamical concepts in sciences and humanities can be considered as an indication of empirical universality, [51]. One may look for a common background, arguing that general assumptions lead to general consequences. This was demonstrated with the concept of absolute temperature, where the universality was justified by the Kelvin-Planck form of the Second Law. Moreover, the Kelvin-Planck form is based on the existence of entropy as a thermostatic potential; it is clear that it has nothing to do with process directions11. Postulating the existence of a potential is a weak condition, in particular considering the lack of any microscopic background.
Footnote 11: The Clausius form of the Second Law is independent of the Kelvin-Planck statement. The requirements regarding the interaction of thermodynamic bodies are independent of the requirements on the material properties of the bodies. This is clear from the stability statements, but it can also be proved directly, see [23, 32].
In this paper, it was argued that thermodynamics is a stability theory. It was demonstrated for homogeneous thermodynamic bodies. There, the seemingly independent concepts, assumptions and properties of equilibrium thermodynamics can be unified as parts of Lyapunov stability conditions in the framework of a dynamical theory with genuine evolution equations and with only general assumptions about the material properties or the interactions. Then thermodynamics - in a general sense - provides a simple mathematical structure that ensures the existence and stability of equilibrium. The evolution equations were constructed according to thermodynamics, and the thermodynamic conditions fitted well into the stability framework. Remarkably, the differential equation for the volume change was constructed using only stability arguments. If the conditions, like the fundamental balances, are adequately considered in the stability calculations, i.e. in the specification of the state space and in the exploitation of the entropy balance, then the theory, including the evolution equations, is universal. It is as universal as the assumptions are general.
The existence and stability of equilibrium is the most general assumption that one may expect for any natural evolution. Without stability, objective, repeated experiments are impossible. The unstable state of matter transforms into a stable one. Therefore a constructive approach where the only requirement is the stability of equilibrium leads to universal evolution. That is also valid for evolution in spacetime. Therefore, it is not surprising that evolution equations derived with the help of a new constructive methodology of nonequilibrium thermodynamics are robust models in heat conduction or continuum mechanics [52, 53], and that thermodynamic concepts emerge in connection with the most fundamental theories of physics [54]. For example, the field equation of Newtonian gravity emerges without any further ado [55]. The Second Law and a purely thermodynamics-based stability framework appear to be fundamental, beyond expectations [56, 57].
It is remarkable that a structural interpretation, e.g. a particle-based argument of statistical physics, may destroy universality, but does not necessarily do so. If the statistical concept is compatible with the stability structure, e.g. it defines the thermodynamic state variables, then thermodynamics becomes emergent. At the same time, an inconsistent microscopic identification of the macrovariables necessarily reduces the universality. It is also remarkable that evolution equations of probability distribution functions, like the Boltzmann equation, can be interpreted in a thermodynamic framework, too [58].
The Second Law is expected to play a role in various disciplines of science, but its application in the case of economics, ecology, biology or anywhere else is somehow mysterious. Random behaviour of individual objects alone does not justify the emergence of thermodynamic quantities like energy or entropy, nor the tendency toward their uniform distribution. However, thermodynamics provides a straightforward and natural framework when some form of equilibrium exists, and stability is expected or required. It is no wonder that growth models of economics, the structural stability of ecology, and the dynamical aspects of evolutionary game theory are similar to thermodynamic evolution, [59, 60]. There are, however, still several steps to take towards a universal theory of stable evolution.
## 8. Acknowledgement
The work was supported by the National Research, Development and Innovation Office grant FK134277. The research reported in this paper is part of project no. BME-NVA-02, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021 funding scheme.
The author thanks the referees for their valuable and detailed comments and remarks.
|
2310.19527 | On the Theory of Risk-Aware Agents: Bridging Actor-Critic and Economics | Risk-aware Reinforcement Learning (RL) algorithms like SAC and TD3 were shown
empirically to outperform their risk-neutral counterparts in a variety of
continuous-action tasks. However, the theoretical basis for the pessimistic
objectives these algorithms employ remains unestablished, raising questions
about the specific class of policies they are implementing. In this work, we
apply the expected utility hypothesis, a fundamental concept in economics, to
illustrate that both risk-neutral and risk-aware RL goals can be interpreted
through expected utility maximization using an exponential utility function.
This approach reveals that risk-aware policies effectively maximize value
certainty equivalent, aligning them with conventional decision theory
principles. Furthermore, we propose Dual Actor-Critic (DAC). DAC is a
risk-aware, model-free algorithm that features two distinct actor networks: a
pessimistic actor for temporal-difference learning and an optimistic actor for
exploration. Our evaluations of DAC across various locomotion and manipulation
tasks demonstrate improvements in sample efficiency and final performance.
Remarkably, DAC, while requiring significantly less computational resources,
matches the performance of leading model-based methods in the complex dog and
humanoid domains. | Michal Nauman, Marek Cygan | 2023-10-30T13:28:06Z | http://arxiv.org/abs/2310.19527v3 | # Decoupled Actor-Critic
###### Abstract
Actor-Critic methods are in a stalemate of two seemingly irreconcilable problems. Firstly, critic proneness towards overestimation requires sampling temporal-difference targets from a conservative policy optimized using lower-bound Q-values. Secondly, well-known results show that policies that are optimistic in the face of uncertainty yield lower regret levels. To remedy this dichotomy, we propose Decoupled Actor-Critic (DAC). DAC is an off-policy algorithm that learns two distinct actors by gradient backpropagation: a conservative actor used for temporal-difference learning and an optimistic actor used for exploration. We test DAC on DeepMind Control tasks in low and high replay ratio regimes and ablate multiple design choices. Despite minimal computational overhead, DAC achieves state-of-the-art performance and sample efficiency on locomotion tasks.
## 1 Introduction
Deep Reinforcement Learning (RL) is still in its infancy, with a variety of tasks still unsolved [64; 24] or solved only within an unsatisfactorily large number of environment interactions [77; 58]. Whereas increasing the replay ratio (ie. the number of parameter updates per environment interaction step) is a promising general approach for increasing the sample efficiency and final performance of RL agents [30; 8; 47; 42], it is characterized by quickly diminishing gains [12] combined with linearly increasing computational cost [54; 35]. Moreover, the limitations of robot hardware and data acquisition frequency constrain the maximum achievable replay ratio [62]. As such, it is worthwhile to pursue orthogonal techniques such as enhancing the properties of the underlying model-free agents. One continuously researched theme is how a particular algorithm handles the _exploration-exploitation_ dilemma [27; 18; 10; 14].
In Actor-Critic (AC) algorithms, it's common to employ a single policy for both exploration (gathering new data to improve the current best policy) and exploitation (leveraging gathered data to determine the best policy) [60; 56; 72; 57; 45]. Algorithms like TD3 [17] or SAC [21] achieve exploration by introducing symmetric noise to an exploitative action. However, this noisy exploitation strategy necessitates careful balancing of policy entropy [13]. Whereas insufficient entropy leads to suboptimal policies due to inadequate exploration [21], excessive entropy results in suboptimal policies due to noisy critic network updates and, consequently, poor Q-value approximator convergence [69]. Additionally, optimizing the policy towards Q-value lower-bound leads to an inadequate exploration of the state-action subspace that yields critic disagreement [10; 45].
Using a single policy for both exploration and exploitation in AC algorithms has its roots in the Policy Gradient (PG) Theorem [65] which states that PG is a function of Q-values under the current policy. Thus, approaches building on PG would often use SARSA-type updates to train the critic [60; 22; 6], as SARSA converges to on-policy Q-values [64]. This in turn reinforces a single-policy setup for AC algorithms. Recently, there have been works in relaxing the PG Theorem toward a dual-policy, fully off-policy setup [39]. An example of a dual-policy implementation is Optimistic Actor-Critic (OAC) [10]. OAC uses two policies: optimistic for exploration (ie. sampling actions when interacting with the environment); and conservative for exploitation (ie. sampling actions for temporal-difference learning). Both policies are extracted from a single conservative actor. Whereas
the conservative policy is directly parameterized by the actor, the optimistic policy stems from a local linear approximation of Q-value upper-bound constrained by a desired Kullback-Leibler (KL) divergence. This yields an approximation of a policy that is Optimistic in the Face of Uncertainty (OFU) [71; 46]. Unfortunately, OAC exploration is highly dependent on the chosen hyperparameter values.
To address the above shortcomings, we propose Decoupled Actor-Critic (DAC). DAC tackles the exploration-exploitation dilemma by adopting a novel decoupled actor AC approach. As such, DAC employs two actors, each independently optimized using gradient backpropagation with a different objective. The optimistic actor is trained to maximize an optimistic Q-value upper-bound while adjusting optimism levels automatically. This actor is responsible for exploration (sampling transitions added to the experience buffer). In contrast, the conservative actor is trained using standard lower-bound soft policy learning [21] and is used for sampling temporal-difference (TD) targets and for evaluation. Secondly, DAC addresses the shortcomings of OAC: by relaxing the first-order Taylor approximation and explicitly modeling the second policy via an actor network, DAC can accurately approximate the maximum of a Q-value upper-bound of arbitrary complexity [29]. We highlight the main contributions of DAC below:
* We propose a novel off-policy dual-actor AC setup where each actor is trained via gradient backpropagation of a specialized objective. We define the optimistic policy objective and formulate a robust framework that introduces easily interpretable hyperparameters.
* We implement a module that automatically adjusts the level of optimism applied during Q-value upper-bound approximation, as well as the impact of the KL penalty. This in turn allows DAC to accommodate various levels of epistemic and aleatoric uncertainties and different reward scales without hyperparameter tuning.
* We show that DAC outperforms model-free benchmarks in terms of both sample efficiency and final performance, in both low and high replay regimes. To facilitate further research, we perform extensive ablations on various design choices (over \(2000\) training runs). We release training logs, as well as implementations of DAC under the following URL.
## 2 Preliminaries
In this paper, we address policy learning in continuous action spaces. We consider an infinite-horizon Markov Decision Process (MDP) [53] which is described with a tuple \((S,A,R,p,\gamma)\), where states \(S\) and actions \(A\) are continuous, \(R(s,a,s^{\prime})\) is the transition reward, \(p(s^{\prime}|s,a)\) is a transition kernel and \(\gamma\in(0,1]\) is a discount factor. A policy \(\pi(a|s)\) is a state-conditioned action distribution. Value is the expected discounted return from following the policy at a given state
Figure 1: DAC achieves significant improvements on DeepMind Control Suite despite minimally higher computational costs than SAC, demonstrating state-of-the-art performance in complex locomotion tasks. Figure 1a reports compute efficient experiments, where algorithms perform only \(3\) updates per environment step. Figure 1b reports a sample efficient, high replay ratio experimental setup. Figure 1c shows that DAC matches the performance of SR-SAC despite using a \(5\)-times lower replay ratio and no parameter resets, whereas SR-DAC outperforms both. We shade the area between the low and high replay configurations of the algorithms. Figure 1d provides a comparison between low replay DAC and DreamerV3, highlighting DAC’s competitive performance despite the significantly higher complexity, runtime, and computational demands of the model-based algorithm. We detail the setting in Section 4 and Appendix F. 10 seeds, mean and 95% bootstrapped CI.
\(V^{\pi}(s)=\int\pi(a|s)\,p(s^{\prime}|s,a)\left[R(s,a,s^{\prime})+\gamma V^{\pi}(s^{\prime})\right]\,da\,ds^{\prime}\). Q-value is the expected discounted return from performing an action and following the policy thereafter, \(Q^{\pi}(s,a)=\int p(s^{\prime}|s,a)\left[R(s,a,s^{\prime})+\gamma V^{\pi}(s^{\prime})\right]\,ds^{\prime}\). A policy is said to be optimal if it maximizes the discounted return for the starting state distribution. Actor-Critic (AC) for continuous action spaces performs simultaneous gradient-based learning of Q-values (_critic_) and of a policy (_actor_) that seeks a local optimum of said Q-values [60, 9]. Critic parameters are updated by minimizing SARSA-type temporal-difference errors [64]. Modern AC methods employ a variety of countermeasures against the overestimation of Q-values, with bootstrapping using a target network [68] and lower-bound Q-value approximation [17] being the most prominent. Soft SARSA updates include policy stochasticity according to the following [21]:
\[\mathcal{L}^{\theta}=\Big{(}Q_{\theta}^{\pi}(s,a)-\big{(}R(s,a)+\gamma\left(Q_{lb}^{\pi}(s^{\prime},a^{\prime})-\alpha\log\pi_{\phi}(a^{\prime}|s^{\prime})\right)\big{)}\Big{)}^{2}\quad a^{\prime}\sim\pi_{\phi}\quad s,a,s^{\prime}\sim\mathcal{D} \tag{1}\]
Where \(\pi_{\phi}\) is the actor; \(Q_{\theta}^{\pi}\) is the critic; \(Q_{lb}^{\pi}\) is the Q-value lower-bound; \(\alpha\) is the entropy temperature; and \(\mathcal{D}\) denotes the experience buffer [44]. To achieve a locally optimal policy, the actor takes gradient steps aimed at maximizing the critic's lower-bound [45]. The policy can use an exploration schedule [17] or optimize its variance through soft policy improvement based on an entropy target [21]:
\[\mathcal{L}^{\phi}=-Q_{lb}^{\pi}(s,a)+\alpha\log\pi_{\phi}(a|s)\quad a\sim \pi_{\phi}\quad s\sim\mathcal{D} \tag{2}\]
As the actor models a parameterized distribution, gradients can be computed using the reparametrization trick [36]. When enforcing action domain constraints through hyperbolic tangent, minimizing policy log probabilities not only enhances exploration but also encourages the policy to maintain means within the non-saturated region of the hyperbolic tangent [70]. Additionally, the temperature can be automatically adjusted to ensure that average log probabilities match a specified target [21]:
\[\mathcal{L}^{\alpha}=-\alpha\big{(}\log\pi_{\phi}(a_{i}|s_{i})+\mathcal{H}^{ *}\big{)}\quad\alpha\in(0,\infty)\quad a\sim\pi_{\phi}\quad s\sim\mathcal{D} \tag{3}\]
Where \(\mathcal{H}^{*}\) is the fixed entropy target which is often a function of action dimensionality. Contrary to fixed exploration scheduling, this method allows for heterogeneous variances across states. Given the optimization objective, this mechanism promotes exploration in states that offer lower Q-value gradients. For both actor and critic, an ensemble statistic of \(k\) critic networks gives the Q-value lower-bound. Most commonly, an ensemble of \(k=2\) is used. Then:
Figure 2: Pessimistic underexploration and state-action space coverage on the Pendulum task with state representation embedded into \(1\) dimension. The dots represent 500 state-action samples gathered using the latest policy (conservative (black) or optimistic (red)). Figure 2a displays the standard deviation (\(\sigma\)) of the two critics, with smaller values observed in well-explored state-action regions. In Figure 2b, we depict conservative policy probabilities. Due to lower-bound optimization, the actor prioritizes state-action subspaces that have already been explored and do not yield critic disagreement. Figure 2c illustrates optimistic policy probabilities. Despite having similar entropy levels, following the upper-bound policy results in better coverage within critic disagreement regions.
\[Q^{\pi}_{lb}(s,a)=\min\bigl{(}Q^{1}_{\pi}(s,a),Q^{2}_{\pi}(s,a)\bigr{)}=\underbrace{ \frac{1}{2}\left(Q^{1}_{\pi}(s,a)+Q^{2}_{\pi}(s,a)\right)}_{\text{Mean}}- \underbrace{\frac{1}{2}\left|Q^{1}_{\pi}(s,a)-Q^{2}_{\pi}(s,a)\right|}_{\text{ Standard Deviation}} \tag{4}\]
This observation generalizes to the following Q-value lower-bound [10; 45]:
\[Q^{\pi}_{lb}(s,a)=Q^{\mu}_{\pi}(s,a)+\beta^{lb}\;Q^{\sigma}_{\pi}(s,a) \tag{5}\]
Where \(Q^{\mu}_{\pi}\) is the critic ensemble mean; \(Q^{\sigma}_{\pi}\) is the critic ensemble standard deviation; and the hyperparameter \(\beta^{lb}\) controls the level of conservatism of the algorithm (ie. decreasing \(\beta^{lb}\) leads to a bigger penalization of critic disagreement). Setting \(\beta^{lb}=-1\) is equivalent to the standard minimum of the two critics. Optimizing the actor with respect to the Q-value lower-bound demotes state-actions for which the critic ensemble disagrees. This is referred to as pessimistic underexploration [10; 6]. OAC tackles the underexploration by exploring according to an optimistic policy that is itself extracted from the conservative actor \(\pi^{c}_{\phi}\). As such, OAC explores according to a transformed conservative policy \(\pi_{o}\) given by the following Lagrangian:
\[\begin{split}&\pi_{o}=\arg\max_{\pi}\;\underset{a\sim\pi}{\mathbb{E}}\,Q^{\pi}_{ub}(s,a)\quad\text{subject to}\quad D_{\text{KL}}\bigl{(}\pi^{c}_{\phi}(s)\;\|\;\pi(s)\bigr{)}\leq\delta\\ &with\quad Q^{\pi}_{ub}(s,a)=Q^{\mu}_{\pi}(s,a)+\beta^{ub}\;Q^{\sigma}_{\pi}(s,a)\end{split} \tag{6}\]
Where \(\delta\) is the boundary hyperparameter, \(Q^{\pi}_{ub}\) is the Q-value upper-bound, which OAC maximizes via a linear first-order Taylor approximation around the conservative policy mean, and the hyperparameter \(\beta^{ub}\) controls the level of optimism. OAC exploration was shown to improve sample efficiency and performance as compared to SAC [10].
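For concreteness, the quantities of this section can be sketched in a few lines of code (a self-contained toy with hand-written stand-ins for the critics and the policy log-probabilities; it is not the authors' implementation, and all numerical values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def q1(s, a): return float(np.sum(s) - 0.5 * np.sum(a**2))        # toy critic Q^1
def q2(s, a): return float(np.sum(s) - 0.5 * np.sum(a**2) + 0.3)  # toy critic Q^2

def q_bound(s, a, beta):
    # Generalised bound of Eq. (5): ensemble mean plus beta times ensemble standard deviation
    qs = np.array([q1(s, a), q2(s, a)])
    return qs.mean() + beta * qs.std()

s, a = rng.normal(size=4), rng.normal(size=2)
# Eq. (4): with two critics, beta_lb = -1 recovers the element-wise minimum
assert np.isclose(q_bound(s, a, beta=-1.0), min(q1(s, a), q2(s, a)))

# Soft TD target of Eq. (1) for one transition (s, a, r, s') and a next action a' ~ pi(.|s')
r, gamma, alpha = 1.0, 0.99, 0.2
s_next, a_next, logp_next = rng.normal(size=4), rng.normal(size=2), -1.3  # pretend sample / log-prob
td_target = r + gamma * (q_bound(s_next, a_next, -1.0) - alpha * logp_next)
critic_loss = (q1(s, a) - td_target) ** 2        # squared TD error for one ensemble member

# Actor loss of Eq. (2) and temperature loss of Eq. (3) for an action a ~ pi(.|s)
logp = -1.1
actor_loss = -q_bound(s, a, -1.0) + alpha * logp
target_entropy = -2.0                            # H*, commonly set to -|A|
alpha_loss = -alpha * (logp + target_entropy)
print(critic_loss, actor_loss, alpha_loss)
```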
## 3 Decoupled Actor-Critic
Traditionally, AC algorithms use a single actor network for three main tasks: _exploration_ (ie. sampling an action to add a transition to the experience buffer); _temporal-difference learning_ (ie. sampling an action to calculate the TD target); and _evaluation_ (ie. sampling an action to assess the performance of an agent). Using a single actor for all tasks requires a delicate balance between optimism and conservatism. Exploration tends to favor optimistic behavior policies due to lower regret guarantees [71], while TD learning leans towards conservatism due to the critic's tendency to overestimate [26].
Figure 3: Varying reward scales, uncertainty levels, and Q-value non-stationarity pose challenges in setting fixed optimism (\(\beta^{ub}\)) and KL penalty weights. We examine three versions of the Cheetah Run task: regular (blue), equivalent to the vanilla DMC task; scale (orange), where we rescale Q-values by multiplying rewards; and uncertainty (pink), where we add Gaussian noise to rewards to increase aleatoric uncertainty. In Figure 3a, we observe how the ratios \(Q^{\sigma}_{\pi}/Q^{\mu}_{\pi}\) change during training for these task variants. As training progresses, the divergence between policies maximizing the Q-value lower and upper-bounds decreases for any fixed \(\beta^{ub}\). In Figure 3b, we illustrate the DAC optimism adjustment mechanism, which adapts \(\beta^{ub}\) to achieve a desired empirical KL divergence between the optimistic and conservative actors. This allows for task-dependent and phase-specific levels of optimism \(\beta^{ub}\). Finally, Figure 3c presents DAC’s KL penalty weight (\(\tau\)) on a logarithmic scale. Similarly to optimism, DAC adjusts the impact of the KL until the divergence reaches target levels.
DAC addresses this dichotomy by introducing two distinct actor networks: an optimistic one and a conservative one. The optimistic actor is trained to maximize the upper-bound of the Q-value and is exclusively used for exploration. On the other hand, the conservative actor is trained to maximize the lower-bound of the Q-value and is employed for both TD learning and evaluation. By performing conservative Q-value updates on optimistic state-action samples, DAC achieves more effective exploration without the issue of Q-value overestimation.
```
1:Input Models:\(\pi^{c}_{\phi}\) - conservative actor; \(\pi^{o}_{\eta}\) - optimistic actor; \(Q^{\pi}_{\theta}\) - critic ensemble; \(Q^{\pi}_{t}\) - target critic; \(\alpha\) - entropy temperature; \(\beta^{ub}\) - optimism; \(\tau\) - KL penalty weight;
2:Input Hyperparameters:\(f_{\sigma}\) - variance multiplier described in Eq. 7; \(x\) - copying frequency; \(\mathcal{KL}^{*}\) - target KL divergence described in Eq. 9; \(\beta^{ub}_{0}\) - initial \(\beta^{ub}\); \(\tau_{0}\) - initial \(\tau\)
3:\(s^{\prime},r,t=\texttt{env.step}(a)\quad with\quad a\sim f_{\sigma}(\pi^{o}_{ \eta}(a|s))\quad\) {sample from the optimistic actor}
4:buffer.add\((s,a,r,s^{\prime},t)\)
5:if\(train\_step\) modulo \(x=0\); then
6:\(\eta\leftarrow\phi\); \(\beta^{ub},\tau\leftarrow\beta^{ub}_{0},\tau_{0}\) {copy conservative parameters; reinitialize \(\beta^{ub}\) and \(\tau\)}
7:endif
8:for\(i=1\)to ReplayRatio do
9:\(s,a,r,s^{\prime}\sim\texttt{buffer.sample}\)
10:\(\theta\leftarrow\theta-\nabla_{\theta}\mathcal{L}^{\theta}(s,a,r,s^{\prime},a ^{\prime})\quad with\quad a^{\prime}\sim\pi^{c}_{\phi}\quad\) {update critic according to Eq. 1}
11:\(\phi\leftarrow\phi-\nabla_{\phi}\mathcal{L}^{\phi}(s,a)\quad with\quad a\sim \pi^{c}_{\phi}\quad\) {update conservative actor according to Eq. 2}
12:\(\eta\leftarrow\eta-\nabla_{\eta}\mathcal{L}^{\eta}(s,a)\quad with\quad a\sim f _{\sigma}(\pi^{o}_{\eta})\quad\) {update optimistic actor according to Eq. 7}
13:\(\alpha\leftarrow\alpha-\nabla_{\alpha}\mathcal{L}^{\alpha}\quad\) {update entropy temperature according to Eq. 3}
14:\(\beta^{ub}\leftarrow\beta^{ub}-\nabla_{\beta}\mathcal{L}^{\beta}\) {update optimism according to Eq. 9}
15:\(\tau\leftarrow\tau-\nabla_{\tau}\mathcal{L}^{\tau}\) {update KL penalty weight according to Eq. 10}
16:\(Q^{\pi}_{t}=\texttt{Polyak}(Q^{\pi}_{\theta},Q^{\pi}_{t})\quad\) {standard Polyak averaging}
17:endfor
```
**Algorithm 1** Decoupled Actor-Critic Step
The pseudo-code illustrates a single DAC training step, where changes with respect to SAC are colored. We summarize the most important novelties of the proposed algorithm: \([1]\)_Decoupled Actors_ - the conservative actor is used for TD learning (pseudo-code line \(10\)) and the optimistic actor is used for exploration (pseudo-code line \(3\)); \([2]\)_Unique Variance_ - the exploration policy can have a different level of entropy as compared to the TD learning policy (pseudo-code line \(3\)); \([3]\)_Optimistic Policy Objective_ - the optimistic actor learns to maximize the regularized Q-value upper-bound (pseudo-code line \(12\)) with the levels of optimism and KL penalty weight adjusted such that the divergence target is met (pseudo-code lines \(14\) and \(15\)). We describe all DAC modules in the following subsections and provide a detailed comparison to OAC in Appendix B.2.
### Conservative Actor, Entropy Temperature and Critic
The conservative actor denoted as \(\pi^{c}_{\phi}\), optimizes a standard soft policy target described in Equation 2. Using a soft policy target allows for state-dependent exploration and regularizes the policy such that the hyperbolic tangent output remains unsaturated. Furthermore, the non-zero variance of the conservative actor regularizes the critic TD learning. Since the data in \(\mathcal{D}\) is collected exclusively by the optimistic actor, the conservative actor is updated fully off-policy. Following standard SAC, we update the entropy temperature and the critic via Equations 3 and 1 respectively. For both updates, the sampling is performed from the conservative actor \(\pi^{c}_{\phi}\). Whereas in principle detaching exploration from exploitation allows for zero variance when sampling the TD targets, we find that including some levels of noise regularizes the critic. Finally, the critic uses layer normalization [4] before every activation, which we found to slightly increase the base agent's performance. We discuss the design choices in more detail in Appendix A.
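The placement of layer normalization described above can be illustrated with a plain-NumPy forward pass (a sketch with random weights; it is not the authors' network code, and the layer sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def critic_forward(params, s, a):
    # MLP critic: Dense -> LayerNorm -> ReLU for every hidden layer, then a linear head
    x = np.concatenate([s, a], axis=-1)
    for W, b in params[:-1]:
        x = np.maximum(layer_norm(x @ W + b), 0.0)
    W, b = params[-1]
    return x @ W + b                              # scalar Q-value estimate

dims = [4 + 2, 256, 256, 1]                       # state dim 4, action dim 2, two hidden layers
params = [(rng.normal(size=(i, o)) / np.sqrt(i), np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
print(critic_forward(params, rng.normal(size=4), rng.normal(size=2)))
```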
### Optimistic Actor
The optimistic actor, denoted as \(\pi^{o}_{\eta}\), optimizes an optimistic policy objective defined as follows:
\[\mathcal{L}^{\eta}=-\underbrace{\big{(}Q_{\pi}^{\mu}(s,a)+\beta^{ub}\;Q_{\pi}^{\sigma}(s,a)\big{)}}_{\text{Q-value upper-bound}}+\underbrace{\tau D_{\text{KL}}\big{(}\pi_{\phi}^{c}(s)\parallel\pi_{\eta}^{o}(s)\big{)}}_{\text{Divergence penalty}}\quad a\sim f_{\sigma}(\pi_{\eta}^{o})\quad s\sim\mathcal{D} \tag{7}\]
Where \(\beta^{ub}\) is the optimism, \(\tau\) is the KL penalty weight, \(D_{\text{KL}}\) is the KL divergence between the conservative and optimistic policies, and \(f_{\sigma}\) is the standard deviation multiplier. Optimizing for Q-value upper-bound results in a policy that is optimistic in the face of uncertainty, but also promotes actions that generate critic disagreement. Since ensemble disagreement is often treated as a proxy for sample novelty [74; 25], following such a policy yields more diverse samples and as a result better coverage of the state-action space [50; 41]. Whereas coverage is not explicitly optimized for in traditional RL, there is a growing body of research that hints toward the importance of data diversity in the context of RL [73; 16; 78]. KL divergence, the second objective term, regularizes the optimistic policy. While policies can be represented by various parameterized distributions, we implement both actors as simple diagonal normal distributions, transformed by the hyperbolic tangent activation. We compute the KL divergence in a closed form using the change of variables:
\[D_{\text{KL}}\big{(}\pi_{\phi}^{c}(s)\parallel\pi_{\eta}^{o}(s)\big{)}=\sum_{i=1}^{|\mathcal{A}|}\bigg{(}\log\frac{\sigma_{\eta}^{i}(s)}{\sigma_{\phi}^{i}(s)}+\frac{\sigma_{\phi}^{i}(s)^{2}+\big{(}\mu_{\phi}^{i}(s)-\mu_{\eta}^{i}(s)\big{)}^{2}}{2\;\sigma_{\eta}^{i}(s)^{2}}-\frac{1}{2}\bigg{)} \tag{8}\]
We derive the above statement in Appendix A. Using the KL penalty stabilizes the off-policy learning by ensuring that the sampled trajectories are probable under the conservative actor policy [64; 10]. Secondly, it guarantees that the optimistic policy optimizes for a specified level of variance, which can be distinct from that of \(\pi_{\phi}^{c}\). To this end, we define the function \(f_{\sigma}\) as a simple standard-deviation multiplication. As such, the optimistic actor will have a standard deviation \(f_{\sigma}\)-times bigger than the conservative policy (this is implemented by simply multiplying the modeled standard deviations by \(f_{\sigma}\)). This mechanism allows for separate entropy for TD learning and exploration while retaining standard convergence guarantees of AC algorithms. In fact, as \(\lim_{\mathcal{D}\rightarrow\infty}Q_{\pi}^{\sigma}(s,a)=0\) [68], it follows that in the limit both actors recover a policy that differs only by \(f_{\sigma}\). As shown in Figure 4, including the KL penalty is essential for the approach's success.
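A schematic NumPy version of the objective (7), with the closed-form divergence (8) and the multiplier \(f_{\sigma}\), may look as follows (toy critics and hand-picked parameter values; gradients, batching and the tanh change of variables for log-probabilities are omitted, so this is only a sketch, not the released implementation):

```python
import numpy as np

def kl_diag_gauss(mu_c, sig_c, mu_o, sig_o):
    # Closed-form KL(pi^c || pi^o) for diagonal Gaussians, cf. Eq. (8);
    # the KL is unchanged by applying the same tanh transform to both policies.
    return np.sum(np.log(sig_o / sig_c)
                  + (sig_c**2 + (mu_c - mu_o)**2) / (2.0 * sig_o**2) - 0.5)

def q_mu(a):    return -np.sum((a - 0.3)**2)           # toy critic-ensemble mean
def q_sigma(a): return 0.1 * np.sum(np.abs(a - 0.8))   # toy critic-ensemble disagreement

mu_c, sig_c = np.array([0.2, -0.1]), np.array([0.3, 0.3])   # conservative head (held fixed here)
mu_o, sig_o = np.array([0.5,  0.0]), np.array([0.4, 0.4])   # optimistic head (to be optimised)

beta_ub, tau, f_sigma = 1.0, 5.0, 1.5
rng = np.random.default_rng(0)
a = np.tanh(mu_o + f_sigma * sig_o * rng.normal(size=2))    # reparameterised sample, a ~ f_sigma(pi^o)

upper_bound = q_mu(a) + beta_ub * q_sigma(a)                # optimistic Q-value upper-bound
loss = -upper_bound + tau * kl_diag_gauss(mu_c, sig_c, mu_o, sig_o)   # Eq. (7)
print(loss)
```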
### Adjustment of \(\beta^{ub}\) and \(\tau\)
Since values of \(Q_{\pi}^{\mu}\) and \(Q_{\pi}^{\sigma}\) depend on reward scales, as well as aleatoric and epistemic uncertainty of the environment, the value of \(\beta^{ub}\) cannot be easily set. Furthermore, as shown in Figure 3, fixed
Figure 4: Evaluating the impact of various design choices in DAC. The ablated design choices include: \((+KL)\) KL penalty on both actors; \((-\pi)\) only optimistic actor; \((-KL)\) not using KL at all; \((+Det)\) a deterministic conservative actor; \((-\tau)\) a fixed value of \(\tau\); \((-\beta)\) a fixed value of \(\beta^{ub}\); \((-\tau\&\beta)\) fixed values of both; \((-\sigma)\) same variance on both actors; \((-copy)\) not copying parameters during the training; and \((-LN)\) DAC without layer normalisation. As follows, the application of KL is of great importance, with both using KL penalty on both agents and not using it at all leading to bad policies. \(\beta^{ub}\) adjustment has more impact on the performance than \(\tau\) adjustment. Finally, using DAC with parameter copying and layer normalization with DAC is beneficial. RR=3, 500k steps, 10 tasks, 10 seeds, and 95% bootstrapped CI. We detail the tested variations in Appendix F.
levels of \(\beta^{ub}\) yield a decreasing impact of uncertainty on the optimistic policy. DAC leverages the observation that for \(\beta^{ub}=-\beta^{lb}\) the optimistic actor recovers the objective of the conservative actor. Then, \(\beta^{ub}\) can be defined such that the divergence between the conservative baseline policy and the optimistic policy reaches a desired level. To this end, we implement a module that automatically adjusts the level of optimism \(\beta^{ub}\):
\[\mathcal{L}^{\beta^{ub}}=\bigg{(}\beta^{ub}+\beta^{lb}\bigg{)}\bigg{(}\frac{D_{ \mathrm{KL}}\big{(}\pi_{\phi}^{c}(s)\parallel\pi_{\eta}^{o}(s)\big{)}}{| \mathcal{A}|}-\mathcal{KL}^{*}\bigg{)}\quad\beta^{ub}\in(\beta^{lb},\infty)\quad s \sim\mathcal{D} \tag{9}\]
Where \(\mathcal{KL}^{*}\) is the KL divergence target between the optimistic and transformed conservative policies, \(D_{\mathrm{KL}}\) is the empirical KL divergence (estimated on a batch of \(B\) states sampled from \(\mathcal{D}\)), and \(|\mathcal{A}|\) is the action dimensionality. If the empirical KL divergence is bigger than the KL target, then \(\beta^{ub}\) is reduced with a limit at \(\beta^{lb}\). On the other hand, if the empirical KL divergence is smaller than the target, then \(\beta^{ub}\) is increased with a limit at \(\infty\). This update mechanism allows us to define the optimism level as a divergence between the optimistic and conservative policies. We update the KL penalty weight \(\tau\) in the opposite direction:
\[\mathcal{L}^{\tau}=-\tau\bigg{(}\frac{D_{\mathrm{KL}}\big{(}\pi_{\phi}^{c}(s) \parallel\pi_{\eta}^{o}(s)\big{)}}{|\mathcal{A}|}-\mathcal{KL}^{*}\bigg{)} \quad\tau\in(0,\infty)\quad s\sim\mathcal{D} \tag{10}\]
By dividing by \(|\mathcal{A}|\), we measure the divergence per degree of freedom. As \(\tau\) is increased when the empirical KL divergence is bigger than the desired KL target, DAC can regularize the divergence between the two actors even if \(\beta^{ub}\) is at its negative limit. Conversely, an automatic reduction of \(\tau\) accompanies the increase of \(\beta^{ub}\) if reaching the divergence target proves challenging. This adaptive approach, as illustrated in Figure 3c, accommodates different scales of Q-values and contrasts with setups like OAC, where optimism is predefined by fixing \(\beta^{ub}\) at a specific value. However, if the adjustment mechanism operates too slowly, the KL penalty may not be effectively enforced during training, potentially causing the two actors to diverge. This divergence can result in fully off-policy learning, insufficient coverage in the conservative policy region, and ultimately, suboptimal agent performance. We believe that this issue may be connected to the deadly triad [67; 64] and to recent findings highlighting the limitations of fully off-policy learning, such as in the tandem setting [12]. To mitigate the divergence problem, we observed that initializing both actors with identical parameter values (\(\phi_{0}=\eta_{0}\)) makes them less likely to diverge. Additionally, during training, we employ a hard parameter copy of the conservative actor and reinitialize the optimistic actor with copies of these parameters.
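The adjustment of \(\beta^{ub}\) and \(\tau\) can be sketched as two scalar gradient updates driven by the same divergence error (schematic code with made-up divergence measurements and an exaggerated learning rate for visibility; the exponential parameterisations used to keep \(\beta^{ub}\in(\beta^{lb},\infty)\) and \(\tau>0\) are assumptions of this sketch, not details taken from the paper):

```python
import numpy as np

beta_lb, kl_target, action_dim, lr = -1.0, 0.05, 6, 0.1

# Unconstrained parameters: beta_ub = beta_lb + exp(theta) and tau = exp(phi).
theta, phi = np.log(1.0), np.log(10.0)

for kl_measured in (0.9, 0.9, 0.9, 0.1, 0.1, 0.1):      # pretend per-batch divergence estimates
    err = kl_measured / action_dim - kl_target          # shared error term of Eqs. (9) and (10)
    # dL_beta/d(beta_ub) = err (Eq. 9): gradient descent lowers beta_ub when the KL is too large
    theta -= lr * err * np.exp(theta)                    # chain rule through beta_ub = beta_lb + e^theta
    # dL_tau/d(tau) = -err (Eq. 10): gradient descent raises tau when the KL is too large
    phi += lr * err * np.exp(phi)
    print(f"beta_ub = {beta_lb + np.exp(theta):.4f}, tau = {np.exp(phi):.4f}")
```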
## 4 Experiments
We consider a set of \(10\) proprioceptive DeepMind Control Suite (DMC) tasks [66], listed in Appendix D, for which we run experiments in low and high replay regimes. In both, we use \(10\) random seeds and \(10^{6}\) environment steps for each task. We build our experiments on the JaxRL code base [37]. Since all considered algorithms other than SAC are extensions thereof, we fix the pool of common hyperparameters at values that are known to work well with SAC [47; 12]. All algorithm-specific hyperparameters have fixed values between low and high replay ratio settings and are reported in Appendix E. Similarly, all algorithms except RedQ use the same network architectures and a standard ensemble of two critics [17; 21; 10; 45; 6]. For all experiments, we report robust evaluation statistics generated via the RLiable package [1]. The results for both low and high replay setups are presented in Figures 1 and 6, with training curves for each task included in Appendix G. Additionally, we run
Figure 5: Impact of DAC hyperparameters on the final performance. All tested setups outperformed baseline SAC (orange), demonstrating DAC robustness and stability. The thick dot is the configuration used in the main experiment. \(500k\) steps, \(10\) tasks, \(10\) seeds, mean and \(95\)% bootstrapped CI.
ablations on various design choices and hyperparameters, which we report in Figures 4 and 5. We provide further experimental results in Appendix C and information on the experimental settings in Appendix F.
**Low Replay** Firstly, we consider a low replay ratio of \(3\) gradient steps per environment step. Such a low replay ratio does not induce loss of plasticity or overfitting in the tested algorithms [47; 12; 42]. As such, no parameter resets are allowed [47; 12]. We consider the following baselines: OAC [10]; ND-TOP [45]; SAC [21]; and TD3 [17]. We find that low replay DAC achieves significantly better performance than the baseline algorithms (Figures 1a and 6a). Furthermore, low replay DAC matches the performance of SR-SAC (Scaled-by-Resetting SAC), despite SR-SAC having a \(5\)-times bigger replay ratio and resets (Figure 1c).
**High Replay** Furthermore, we consider a high replay ratio of \(15\) gradient steps per environment step. Such a replay ratio is known to degrade the performance of most algorithms unless regularization is used [8; 47; 12]. To this end, all algorithms perform full-parameter resets at the \(50000\)th step, as well as every \(250000\) environment steps [47; 12]. In this setup, we consider the following algorithms: SR-SAC [12], as well as SR-TOP and SR-OAC (ie. variants of ND-TOP and OAC with high replay and full-parameter resets). As shown in Figures 1b and 6b, SR-DAC achieves better performance than the baseline algorithms and significantly surpasses the state-of-the-art SR-SAC.
## 5 Limitations
DAC divergence minimization presents unique optimization challenges. Unlike typical uses of KL divergence, where the target distribution remains fixed (eg. Variational Autoencoders (VAE) [36]), DAC deals with a constantly evolving policy that is continually improving. Consequently, the optimistic actor needs to keep up with the conservative actor's changes. As depicted in Figure 4, DAC heavily relies on maintaining a low divergence between the actors. While DAC adjustment mechanisms proved effective in the tested environments, there is no guarantee that they will suffice in more complex ones.
The second drawback of DAC lies in its inherent use of two actor networks, which results in slightly increased memory and computational demands compared to the standard Soft Actor-Critic (SAC) approach. In practice, the wall-clock time of DAC is around \(10\)% greater than that of SAC and is indistinguishable from the overhead induced by OAC, which requires additional backpropagation through the critic ensemble. Moreover, since DAC initializes both actors with identical parameters, they must share the same network architecture. However, as indicated by Figure 4, simply copying parameters between them offers only minimal performance enhancement. In light of this, we believe that the necessity for identical architectures can be mitigated by employing techniques like delayed policy updates [17] or by learning rate scheduling.
Figure 6: RLiable results for two regimes. Tasks used in the evaluation are listed in Appendix D. DAC achieves the best final performance, with a sizeable performance gap in the low replay ratio regime. We use \(10\) tasks, \(10\) random seeds, and \(10^{6}\) environment steps. The bars indicate the \(95\)% bootstrapped CI. We provide detailed training curves in Appendix G.
## 6 Future Work
One of the critical functions of DAC is limiting the divergence between the two actors (see Figure 4). This aspect raises an interesting question about the potential tradeoff between the performance gains achieved by adhering to a low-regret optimistic policy and the performance losses incurred from fully off-policy updates. To control the divergence between the two actors, we employ a KL penalty, although we believe that alternative divergence or distance metrics could also be effective. The main reason for using KL divergence in our implementation of DAC is that it is known to have a closed-form solution for Tanh-Normal distributions which we use to model both policies. We think that implementing DAC with regularization other than KL might result in better learning stability.
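For reference, the closed form mentioned above reduces to the Gaussian case: since \(\tanh\) is a deterministic, invertible transformation, the KL divergence between two Tanh-Normal policies equals the KL divergence between the underlying Normal distributions. For diagonal covariances this is the standard per-dimension identity (a textbook fact, not specific to DAC), summed over action dimensions:

\[D_{\mathrm{KL}}\big{(}\mathcal{N}(\mu_{1},\sigma_{1}^{2})\,\|\,\mathcal{N}(\mu_{2},\sigma_{2}^{2})\big{)}=\log\frac{\sigma_{2}}{\sigma_{1}}+\frac{\sigma_{1}^{2}+(\mu_{1}-\mu_{2})^{2}}{2\sigma_{2}^{2}}-\frac{1}{2}.\]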
Novel mechanisms used by DAC are orthogonal to many recent improvements in DRL. As such, investigating synergies between DAC and techniques like receding TD horizon [34, 58], critic regularization [20], discount factor annealing [76, 58], AVTD [42], TOP [45] or increasing model size [58, 24] might improve DAC performance. Furthermore, distributional critics offer a capability to directly model both aleatoric and epistemic uncertainties [5, 11, 45]. We think that this aligns with the DAC, as it builds policies leveraging epistemic uncertainty. Similarly, expanding the size of the critic ensemble could lead to synergies and improvements surpassing those achieved by conventional ensemble AC approaches [41, 31]. Finally, as shown in Figure 4, the deterministic version of DAC underperforms its stochastic counterpart. Investigating the factors contributing to this difference in performance is a compelling avenue for research.
## 7 Conclusions
In this paper, we introduced DAC, an off-policy algorithm that leverages two distinct actors trained via specialized objectives. One actor, known as the conservative actor, is dedicated to TD learning and evaluation tasks, while the other, the optimistic actor, is used in exploration. This allows DAC to perform conservative Q-value updates at optimistic state-action samples. As a result, DAC directly addresses the optimism-pessimism dilemma commonly encountered in Actor-Critic agents.
To evaluate the effectiveness of the proposed method, we conducted experiments on a set of \(10\) complex locomotion tasks, considering two different replay ratio regimes. Our results demonstrated that DAC significantly outperforms established benchmark algorithms in terms of both performance and sample efficiency. To assess the impact of individual DAC components, we conducted extensive ablation studies consisting of over \(2000\) runs. Finally, we showcased the robustness of DAC across a range of hyperparameter settings, underscoring its suitability for practical applications.
#### Reproducibility
We provide the DAC implementation, results and scripts used to generate the results at the following URL. For pseudo-code, implementation details, or additional information about specific design choices, please refer to Section 3 and Appendix A. Additionally, we share the considered experimental settings in Section 4 and Appendix F. Finally, we point the reader towards Appendix E where one can find a list of the hyperparameters used in the main experiments.
#### Acknowledgments
We would like to thank Piotr Milos and Gracjan Goral for their valuable help and discussions. Marek Cygan is co-financed by the National Centre for Research and Development as a part of the EU-supported Smart Growth Operational Programme 2014-2020 (POIR.01.01.01-00-0392/17-00). The experiments were performed using the Entropy cluster funded by NVIDIA, Intel, the Polish National Science Center grant UMO-2017/26/E/ST6/00622, and the ERC Starting Grant TOTAL.
|
2301.05897 | Model-based Transfer Learning for Automatic Optical Inspection based on
domain discrepancy | Transfer learning is a promising method for AOI applications since it can
significantly shorten sample collection time and improve efficiency in today's
smart manufacturing. However, related research enhanced the network models by
applying TL without considering the domain similarity among datasets, the data
long-tailedness of a source dataset, and mainly used linear transformations to
mitigate the lack of samples. This research applies model-based TL via domain
similarity to improve the overall performance and data augmentation in both
target and source domains to enrich the data quality and reduce the imbalance.
Given a group of source datasets from similar industrial processes, we define
which group is the most related to the target through the domain discrepancy
score and the number of samples each has. Then, we transfer the chosen
pre-trained backbone weights to train and fine-tune the target network. Our
research suggests increases in the F1 score and the PR curve up to 20% compared
with TL using benchmark datasets. | Erik Isai Valle Salgado, Haoxin Yan, Yue Hong, Peiyuan Zhu, Shidong Zhu, Chengwei Liao, Yanxiang Wen, Xiu Li, Xiang Qian, Xiaohao Wang, Xinghui Li | 2023-01-14T11:32:39Z | http://arxiv.org/abs/2301.05897v1 | # Model-based Transfer Learning for Automatic Optical Inspection based on domain discrepancy
###### Abstract
Transfer learning is a promising method for AOI applications since it can significantly shorten sample collection time and improve efficiency in today's smart manufacturing. However, related research enhanced the network models by applying TL without considering the domain similarity among datasets, the data long-tailedness of a source dataset, and mainly used linear transformations to mitigate the lack of samples. This research applies model-based TL via domain similarity to improve the overall performance and data augmentation in both target and source domains to enrich the data quality and reduce the imbalance. Given a group of source datasets from similar industrial processes, we define which group is the most related to the target through the domain discrepancy score and the number of samples each has. Then, we transfer the chosen pre-trained backbone weights to train and fine-tune the target network. Our research suggests increases in the F1 score and the PR curve up to 20% compared with TL using benchmark datasets.
machine vision, automatic optical inspection (AOI), transfer learning, domain similarity, data augmentation, supervised learning, domain discrepancy.
## 1 Introduction
In Automatic Optical Inspection, defect detection and classification in products from multiple areas is critical for ensuring the quality of industrial products. Due to the increasing popularity and development of computer vision methods, researchers try to implement the latest technologies for defect detection and feature extraction. Abd Al Rahman M. Abu Ebayyeh and Alireza Mousavi [1] reviewed research articles on AOI systems and algorithms for detecting defects in commonly inspected components in the electronics industry during the last two decades. Their review covers multiple defect features, image acquisition techniques, inspection algorithms, and sorting mechanisms. They highlighted various methods for object recognition, but we will focus on Convolutional Neural Networks since they outperform RNNs on prolonged and repetitive tasks [2], offer options beyond detection and classification methods, extract features that improve generalization, and are computationally efficient and suitable for parameter tuning.
In recent years, multiple research papers have paid attention to enhancing elements of neural network architectures to meet the criteria required to detect minor and varied defects proper of Automatic Optical Inspection. For instance, Rui Huang et al. [3] modified the YOLOv3 model for detecting electronic components in complex backgrounds by adding a confidence score on each bounding box through logistic regression, replacing the loss function with BCE and independent logistic, and substituting the Darknet53 backbone with Mobilenet classifiers. Yibin Huang et al. [4] proposed a model for saliency detection of surface defects that consists of MCue to generate resized inputs, U-Net, and Push networks to define the specific location of predicted defects. Based on YOLOv4 architectures and their applications in defect detection and classification of rail surfaces, Noreen Anwar et al. [5] changed the activation functions of the CSPDarknet-53 with SELU, used SAM at upsampling and downsampling points, and redefined the loss function in terms of object classification, object confidence, and object location offset, all with balance coefficients. Junjie Xing and Mingping Jia [6] created a CNN backbone for the classification model (SCN) with symmetric modules plus three convolution branches with an FPN
structure for feature identification, adding an optimized IoU as the loss function. Clearly, the previous publications show how authors modify models without considering real-world data quality, distribution, and similarity among samples [7][8]. Usually, defect inspection tasks involve imbalanced datasets with a long-tailed and open-ended distribution. A classifier must categorize among majority and minority classes, generalize from a few known instances and recognize novelty upon a never observed input [9]. Other research in this field pointed out typical techniques to deal with the lack of samples by data augmentation [10, 11], transfer learning-aided models pretrained on benchmark datasets without domain similarities with the target domain [12, 13], specialized network structures to extract meaningful features from the targets [14], or fine-tuning the transferred weights [15].
The main contributions of our research are summarized as follows:
* In this context, we explore whether and how the existing defect-detection datasets and benchmarks can be applied explicitly in model-based transfer learning for Automatic Optical Inspection. Since metal surface defect detection and other optical inspection datasets differ from standard benchmarks in colorspace, categories, and data distribution, we aim to enhance the quality of pre-trained weights to achieve better results.
* We propose a domain discrepancy score and a source domain selection method based on the Wasserstein distance, the Gini coefficient, and class overlapping to measure the difference among multiple distributions and obtain a score that describes the similarity among datasets. The lower the score, the closer the resemblance between the datasets. This algorithm lets us choose a similar source dataset for a target domain, together with data augmentation techniques to match the source dataset with the target as closely as possible.
## 2 Methodology
Transfer learning aims to enhance the performance of the hypothesis function for a target task by discovering and transferring latent knowledge from a source domain and its tasks, where neither the domains nor the source and target tasks must be equivalent. Thus, transfer learning relaxes the hypothesis that training data must be independent and identically distributed with the test data. However, real-world data usually contain structures among the data instances, and samples in some categories are typically insufficient. Hence, transfer learning is suitable to overcome the reliance on large quantities of high-quality data by leveraging helpful information from other related domains.
Previous works in this field have demonstrated the effectiveness of transferring pretrained weights using benchmark datasets such as COCO and ImageNet. Indeed, they all showed relevant improvements in precision, recall, and other derived metrics and reductions in computing time and resources. Nevertheless, their source datasets may not be related to the target samples in terms of colorspace, categories, image size, or even skewed class instances. Thus, our proposal rates the source domain's similarity with respect to the target through a domain discrepancy score. It evaluates the domain discrepancy from the available source dataset with the target domain so that we can select the best match and so adapt the source domain appropriately.
Based on the binning method proposed by Tianyu Han et al. [16], our algorithm keeps categories from the source dataset that overlap with the target samples and removes the rest. We use this new class-similar source dataset to train a model whose earlier layers (backbone) contain more generic features, suitable for fine-tuning the subsequent model that comprises the target domain. Since the target dataset and any other source domain may have severe data imbalance, data augmentation over the minority classes increases the number of samples and partially mitigates this issue. Likewise, instances from the majority classes could be redundant or cause noise due to their resemblance with the target. Adjusting their quantity through undersampling relieves data skewness and leaves enough samples that match the target dataset.
### Earth Mover's Distance as a metric for domain similarity
Since a domain \(D^{(l)}=\left\{\mathbb{X}^{(l)},P\left(\mathbb{X}^{(l)}\right)\right\}\) comprises a feature space \(\mathbb{X}^{(l)}\) and a marginal probability distribution \(P\left(\mathbb{X}^{(l)}\right)\), and a task \(T^{(l)}=\left\{Y^{(l)},f(\cdot)\right\}\) also contains a label space \(Y^{(l)}\) and a conditional probability distribution \(f(\cdot)\), research on this topic usually assumes that domains share the same feature space or simply does not consider it. On the other hand, if the domains have distinct feature spaces or label spaces, one has to project the data onto the same feature or label space and then use statistical distance estimations as a follow-up step [17]. This paper aims to match similarities among distributions and reduce their gap through a two-step domain screening based on the EMD and a modified Bin Similarity algorithm based on [16].
### Source dataset scoring and selection
#### 2.2.1 Signatures estimation
Each dataset image \(\mathbf{x}_{m}^{(i)}\) possesses many pixels that characterize a distribution and describe the artifacts contained in a picture. Representing the overall distribution of the features of all the photos in the database with bins, either fixed (histogram) or adaptive, may not attain an equilibrium between expressiveness and efficiency [18]. Instead, using variable-size descriptions of distributions or signatures provides an alternative that takes advantage of the dominant clusters extracted from the original distribution to form a compressed representation. A signature comprises the main clusters of a distribution with a weight that denotes the size of each cluster. A clustering approach like the k-means objective helps us set such pixels of an image \(\mathbf{x}_{m}^{(i)}\) in \(K\) clusters and then group a full dataset \(X_{i}=\left\{\mathbf{x}_{1}^{(i)},...,\mathbf{x}_{N}^{(i)}\right\}\) into a few cohesive groups. Although our samples include label spaces, we temporarily omit them in the first step for all datasets. We want to represent them as a whole instead of selecting the most similar classes from the source domain to the target domain, as in [16]. The reason is that given a target dataset, we must pick one of the available magnetic tile datasets, which usually share a relevant number of classes because their fabrication process, environment, and material properties are similar.
Mathematically, given input image datasets \(X_{i}\), \(X_{T}\), and the number of bins \(K\), the signatures estimation algorithm can be defined as a function \(SE(X_{i},K)\), which computes a signature for each dataset plus their respective weights. Algorithm 1 presents the SE function.
Figure 1: Overview of our domain similarity and subset selection scheme. The system consisted of five parts. (1) The source dataset scoring executed the domain discrepancy analysis to select a source domain dataset from the repository close to the target dataset regarding the EMD, category overlapping, the number of samples, and the Gini score. (2) The dataset subset selection algorithm minimized the divergence between domains by removing samples from the source dataset to match as many categories as possible. (3) Data augmentation mitigated the data imbalance among categories by increasing the minority classes’ samples. (4) Pretraining the network on the selected sub-dataset using benchmark pre-trained weights. (5) Transference of the best pre-trained weights belonging to the network backbone to the target network, while initializing the rest randomly. Then, the target network was trained using a data-augmented target dataset.
We first define the number of clusters to form and the number of centroids to generate. For this purpose, the k-means objective (distortion function) minimizes samples' average squared Euclidean distance from their cluster center. Such a center is the mean centroid \(c_{m,k}^{(i)}\) of the instances in a cluster \(\omega_{m,k}^{(i)}\):
\[c_{m}^{(i)}=kmeans\big{(}x_{m}^{(i)}\big{)}=\left\{c_{m,1}^{(i)},c_{m,2}^{(i)},...,c_{m,K}^{(i)}\right\} \tag{1}\]
\[c_{m,k}^{(i)}=\frac{1}{\left|\omega_{m,k}^{(i)}\right|}\sum_{\tilde{x}_{m}^{(i)}\in\omega_{m,k}^{(i)}}\tilde{x}_{m}^{(i)} \tag{2}\]
Here \(\tilde{x}_{m}^{(i)}\) is a pixel value in the RGB color space belonging to picture \(x_{m}^{(i)}\); each image contains \(width\times height\) pixels \(\tilde{x}_{m}^{(i)}\). Our proposal takes advantage of the triangle inequality to accelerate k-means [19] since it becomes more effective as the number K of clusters increases, which opens the possibility of a thorough similarity analysis. We can take any optimized k-means algorithm to calculate the proper number of K centroids through the residual sum of squares (RSS). This function measures how well the centroids represent the members of their clusters via the squared distance of each \(\tilde{x}_{m}^{(i)}\) from its mean centroid, summed over all \(\tilde{x}_{m}^{(i)}\) belonging to cluster \(\omega_{m,k}^{(i)}\):
\[RSS_{k}=\sum_{\tilde{x}_{m}^{(i)}\in\omega_{m,k}^{(i)}}\left| \tilde{x}_{m}^{(i)}-c_{m,k}^{(i)}\right|^{2} \tag{3}\] \[RSS=\sum_{k=1}^{K}RSS_{k} \tag{4}\]
Since each dataset has thousands of images, calculating K with a small sampling reduces the computing time and resources thanks to the similarity among elements of the same domain. Indeed, this premise applies only if the pictures belong to the same product category and share similar capturing conditions. Otherwise, image pre-processing may mitigate contrast, illumination, and distortion variations. Moreover, if the domain consists of different products, we can search for similar sub-datasets from the source domain and split it accordingly.
Having specified a fixed number of K centroids, we calculate the k-means for all images of the datasets as in the previous steps, excluding the RSS estimation. Once the final \(c_{m,k}^{(i)}\) for each picture is obtained, we can define the cluster weight \(w_{m,k}^{(i)}\) as the ratio of pixels \(\tilde{x}_{m}^{(i)}\) that belong to it, with the total sum of such weights equal to 1.
**Algorithm 1: Signatures Estimation**

**Input**: \(X_{1},...,X_{I},X_{T}\): \(I\) source datasets and a target dataset, \(i\in\{1,...,I\}\); \(x_{1}^{(i)},...,x_{M}^{(i)}\): \(M\) input images for dataset \(i\), \(m\in\{1,...,M\}\).

**Output**: \(\left\{s^{(i)}=\left(\left(c_{1}^{(i)},...,c_{K}^{(i)}\right),\left(w^{\prime(i)}_{1},...,w^{\prime(i)}_{K}\right)\right)\right\}\): main \(K\) centroids and their weights.

**Statement**: For each dataset \(X_{i},X_{T}\):

1. Define the average number of centroids \(K\) per dataset to generate through the distortion function \(RSS\).
2. Calculate centroids \(c_{m,k}^{(i)}\) and weights \(w_{m,k}^{(i)}\) for all images \(x_{m}^{(i)}\) using k-means.
3. Average the k-means clusters \(c_{m,k}^{(i)}\) and assign normalized weights \(w^{\prime(i)}_{k}\) to each mean cluster \(\omega_{k}^{(i)}\).
\[\sum\nolimits_{k=1}^{K}w_{m,k}^{(i)}=1 \tag{5}\]
\[w_{m}^{(i)}=\left\{w_{m,1}^{(i)},w_{m,2}^{(i)},...,w_{m,K}^{(i)}\right\} \tag{6}\]
Thus, a signature comprises both centroids and weights to form a compressed representation of an image.
\[\left\{s_{m}^{(i)}=\left(c_{m}^{(i)},w_{m}^{(i)}\right)\right\} \tag{7}\]
Since each image has its signature and they all form a dataset, we propose to condense the \(M\) signatures into only \(K\) main clusters \(c_{k}^{(i)}\) using k-means, but considering all \(c_{m}^{(i)}\) and their weights \(w_{m}^{(i)}\) as inputs instead of pixel values and frequencies. The main weights \(w_{k}^{(i)}\) may not meet the constraint in equation 5, so adjusting their values helps meet the prior criterion.
\[w^{\prime(i)}_{k}=\frac{w_{k}^{(i)}}{\sum_{k}w_{k}^{(i)}} \tag{8}\]
Finally, the signature that represents a dataset is described as follows:
\[\left\{s^{(i)}=\left(\left(c_{1}^{(i)},...,c_{K}^{(i)}\right),\left(w^{\prime (i)}_{1},...,w^{\prime(i)}_{K}\right)\right)\right\} \tag{9}\]
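To make the procedure concrete, a minimal sketch of Algorithm 1 is given below. It assumes a fixed \(K\), relies on scikit-learn's k-means, and the function names (`image_signature`, `dataset_signature`) are ours rather than part of the released implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def image_signature(image, K):
    """Signature of one RGB image: K centroids and their pixel-fraction weights (Eqs. 1-2, 5-7)."""
    pixels = image.reshape(-1, 3).astype(float)        # (width*height, 3) RGB values
    km = KMeans(n_clusters=K, n_init=10).fit(pixels)
    centroids = km.cluster_centers_                     # c_{m,k}
    counts = np.bincount(km.labels_, minlength=K)
    weights = counts / counts.sum()                     # w_{m,k}, summing to 1 (Eq. 5)
    return centroids, weights

def dataset_signature(images, K):
    """Dataset signature: cluster all per-image centroids, weighted by w_{m,k} (Eqs. 8-9)."""
    all_c, all_w = [], []
    for img in images:
        c, w = image_signature(img, K)
        all_c.append(c)
        all_w.append(w)
    all_c = np.vstack(all_c)
    all_w = np.concatenate(all_w)
    km = KMeans(n_clusters=K, n_init=10).fit(all_c, sample_weight=all_w)
    centroids = km.cluster_centers_                     # c_k^{(i)}
    # accumulate the weights assigned to each main cluster, then renormalize (Eq. 8)
    weights = np.array([all_w[km.labels_ == k].sum() for k in range(K)])
    weights = weights / weights.sum()
    return centroids, weights
```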
#### 2.2.2 Earth Mover's Distance for domain discrepancy estimation among datasets
The Earth Mover's Distance measures the difference between the source and target domains with features. Our approach provides a ground distance that assesses dissimilarity between datasets signatures to address the problem of lifting these metrics from individual elements to distributions. It implies obtaining distances between picture color distributions in colorspace terms. Indeed, the solution is the minimum amount of "work" required to transform one signature into the other [18]. We also aim to allow these distances for partial matches to compare one distribution with a subset of another.
Formally, the EMD is a linear programming problem: Let \(s^{(i)}=\left\{\left(c_{1}^{(i)},w_{1}^{(i)}\right),...,\left(c_{M}^{(i)},w_{M}^{(i)}\right)\right\}\) be the first signature with \(M\) clusters; \(s^{(j)}=\left\{\left(c_{1}^{(j)},w_{1}^{(j)}\right),...,\left(c_{N}^{(j)},w_{N}^{(j)}\right)\right\}\) the second signature with \(N\) clusters; and \(G=\left[g_{u,v}\right]\) the ground distance matrix, where \(g_{u,v}\) is the ground distance between clusters \(c_{u}^{(i)}\) and \(c_{v}^{(j)}\). This last measure is simply the Euclidean distance in the color space, \(g_{u,v}=\sqrt{(R_{u}-R_{v})^{2}+(G_{u}-G_{v})^{2}+(B_{u}-B_{v})^{2}}\), between the analyzed clusters. Next, we must find a flow \(F=\left[f_{u,v}\right]\), with \(f_{u,v}\) the flow between \(c_{u}^{(i)}\) and \(c_{v}^{(j)}\), that minimizes the overall cost.
\[WORK(X_{i},X_{T},F)=\sum_{u=1}^{M}\sum_{v=1}^{N}g_{u,v}f_{u,v} \tag{10}\]
Subject to the following constraints
\[f_{u,v}\geq 0\hskip 28.452756pt1\leq u\leq M,\hskip 28.452756pt1\leq v\leq N \tag{11}\]
\[\sum\nolimits_{v=1}^{N}f_{u,v}\leq w_{u}^{\prime(i)}\hskip 56.905512pt1\leq u\leq M \tag{12}\]
\[\sum\nolimits_{u=1}^{M}f_{u,v}\leq w_{v}^{\prime(j)}\hskip 56.905512pt1\leq v\leq N \tag{13}\]
\[\sum\nolimits_{u=1}^{M}\sum\nolimits_{v=1}^{N}f_{u,v}=\min\left(\sum \nolimits_{u=1}^{M}w_{u}^{\prime(i)},\sum\nolimits_{v=1}^{N}w_{v}^{\prime(j)}\right) \tag{14}\]
So far, the EMD calculation demands an initial flow \(F_{0}\) close enough to the final solution so that we save computational time and resources. To do so, we took the work of Edward Russell [20], which extended Dantzig's algorithm (simplex) to calculate a starting basis for the transportation problem that produces a near-optimal basis, and then we optimize it through the Sequential Least-Squares Programming (SLSQP) [21]. Finally, we can obtain the Earth Mover's Distance with the optimal flow \(F\) previously estimated as the subsequent work normalized by the total flow:
\[EMD(s_{i},s_{T})=\frac{\sum_{u=1}^{M}\sum_{v=1}^{N}g_{u,v}f_{u,v}}{\sum_{u=1}^ {M}\sum_{v=1}^{N}f_{u,v}} \tag{15}\]
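For illustration, the transportation problem of Eqs. (10)-(15) can also be solved with a general-purpose linear-programming routine. The sketch below uses SciPy's `linprog` instead of the Russell initial basis and SLSQP refinement described above, so it is only meant to make the constraints and the final normalization explicit.

```python
import numpy as np
from scipy.optimize import linprog

def emd(centroids_a, weights_a, centroids_b, weights_b):
    """Earth Mover's Distance between two signatures (Eqs. 10-15)."""
    wa, wb = np.asarray(weights_a, float), np.asarray(weights_b, float)
    M, N = len(wa), len(wb)
    # Ground distance: Euclidean distance between cluster centroids in RGB space.
    G = np.linalg.norm(np.asarray(centroids_a)[:, None, :] - np.asarray(centroids_b)[None, :, :], axis=-1)
    cost = G.reshape(-1)                      # objective coefficients for the flattened flow f_uv
    # Row constraints (Eq. 12): sum_v f_uv <= w_a[u]
    A_row = np.zeros((M, M * N))
    for u in range(M):
        A_row[u, u * N:(u + 1) * N] = 1.0
    # Column constraints (Eq. 13): sum_u f_uv <= w_b[v]
    A_col = np.zeros((N, M * N))
    for v in range(N):
        A_col[v, v::N] = 1.0
    # Total flow equals min(sum w_a, sum w_b) (Eq. 14); f_uv >= 0 (Eq. 11) via bounds.
    A_eq = np.ones((1, M * N))
    b_eq = [min(wa.sum(), wb.sum())]
    res = linprog(cost, A_ub=np.vstack([A_row, A_col]), b_ub=np.concatenate([wa, wb]),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    flow = res.x
    return float(cost @ flow / flow.sum())    # Eq. 15: work normalized by total flow
```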
#### 2.2.3 Long-tailedness Metrics: The Gini Coefficient
Measuring the long-tailedness of data is a relevant issue to solving the long-tailed visual recognition problem. Although there are multiple long-tailedness metrics, such as the imbalance factor, standard deviation, or mean/median, the Gini coefficient [22] can effectively differentiate long-tailed and balanced datasets. The reason is that it is not affected by extreme samples, the absolute number of data, and has a bounded distribution (0,1) [23].
To obtain it, we must follow the next steps:
1. Compute the normalized cumulative distribution \(\{\mathbb{C}_{i}\}\), assuming that the \(\mathbb{k}\) categories and their respective numbers of samples \(m_{i}\), (\(i=1,2,...,\mathbb{k}\)) are in ascending order: \[\mathbb{C}_{i}=\frac{1}{\mathbb{k}}{\sum}_{j=1}^{i}m_{j}\] (16)
2. Calculate the area \(B\) under the Lorenz Curve \(L(x),x\in[0,1]\): \[L(x)=\begin{cases}\mathbb{C}_{i},&x=\frac{i}{\mathbb{k}}\\ \mathbb{C}_{i}+(\mathbb{C}_{i+1}-\mathbb{C}_{i})(\mathbb{k}x-i),&\frac{i}{ \mathbb{k}}<x<\frac{i+1}{\mathbb{k}}\end{cases}\] (17) \[B=\int_{0}^{1}L(x)dx={\sum}_{i=1}^{\mathbb{k}}\frac{\mathbb{C}_{i}+ \mathbb{C}_{i-1}}{2}\cdot\frac{1}{\mathbb{k}}\] (18) \[A=0.5-B\] (19)
3. Estimate the Gini coefficient: \[\delta=\frac{A}{A+B}>0\] (20)
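A direct translation of these steps into code might look as follows. Here the cumulative distribution is normalized by the total sample count so that the Lorenz curve ends at 1 (our reading of Eq. 16), and the trapezoidal rule of Eq. 18 gives the area under the curve.

```python
import numpy as np

def gini_coefficient(class_counts):
    """Gini coefficient of a label distribution; 0 for perfectly balanced, close to 1 for long-tailed."""
    m = np.sort(np.asarray(class_counts, dtype=float))   # ascending order
    k = len(m)
    C = np.concatenate([[0.0], np.cumsum(m) / m.sum()])  # normalized cumulative distribution (Eq. 16)
    B = np.sum((C[1:] + C[:-1]) / 2.0) / k                # area under the Lorenz curve (Eqs. 17-18)
    A = 0.5 - B                                           # Eq. 19
    return A / (A + B)                                    # Eq. 20
```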
#### 2.2.4 Domain discrepancy score and dataset selection
Now that we have a premise to describe the similarity across datasets, we need a function that describes the similarity between the target dataset and the available sources. Thus, we propose a function that consists of the product of the EMD and the Gini score divided by the set cardinality of the label-space intersection of the target and the source tasks. Since a dataset for object detection and classification has multiple instances belonging to the same category and its cardinality is proportional to the sum of samples per category, we convert such a multiset \(Y^{(l)}\) to a set \(Y^{\prime(l)}\) by simply taking the intersection of our multisets with the universe \(U\), which yields the number of categories instead of the number of elements, balancing the relevance of all categories in a dataset. Finally, the lower the score, the more similar the datasets are.
\[disc\big{(}D^{(T)},D^{(l)}\big{)}=\frac{EMD(s_{l},s_{T})\cdot\delta}{|Y^{\prime(T)}\cap Y^{\prime(l)}|+\varepsilon} \tag{21}\]
Finally, we pick the three smallest scores and choose the dataset with the most samples among them. The reason is that more images and labels result in higher performance in terms of the evaluation metrics described in section 3.2.
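Putting the pieces together, the scoring and selection step can be sketched as below. The `emd` and `gini_coefficient` routines are the sketches given earlier, applying the Gini coefficient to the source label distribution is our reading of the text, and `eps` plays the role of \(\varepsilon\) in Eq. (21), avoiding division by zero when the label spaces do not overlap.

```python
def discrepancy_score(emd_value, source_class_counts, source_labels, target_labels, eps=1e-6):
    """Domain discrepancy score of Eq. (21): lower means the source is more similar to the target."""
    overlap = len(set(source_labels) & set(target_labels))   # |Y'^(T) ∩ Y'^(l)| at category level
    return emd_value * gini_coefficient(source_class_counts) / (overlap + eps)

def select_source(scored_sources):
    """scored_sources: list of (name, score, n_samples); keep the 3 smallest scores, pick the largest dataset."""
    candidates = sorted(scored_sources, key=lambda s: s[1])[:3]
    return max(candidates, key=lambda s: s[2])[0]
```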
### Target Label-Space Conditioned Subset selection
In traditional machine learning, learning a model from a set of training samples assumes that training and test data come from the same distribution and share a similar joint probability distribution. On the other hand, conventional domain adaptation aims to solve the prediction function \(f_{T}(\cdot)\) of the target task \(T_{T}\) in the target domain \(D_{T}\) through the knowledge acquired from the source domain \(D_{i}\) and the source task \(T_{i}\), assuming the feature and label spaces remain unchanged while their probability distributions may change between domains [24]. In manufacturing applications and other real-world settings, violating this last constraint is common since the datasets can emerge from different tasks or distributions. Hence, easing the label-space gap by removing samples containing tags outside the target task is the first step to reducing such discrepancy. Our purpose is to make the categories present in the source data a subset of the target label set. This idea is worthwhile given a group of datasets that could be considered subsets of the target label set, and it enhances the target model performance by taking all the source domain knowledge included in the shared classes. Some authors, such as Saito et al. [25] and Hong Liu et al. [26], took advantage of this premise and utilized either adversarial training or binary classifiers to align target samples with source-known elements or reject them as unknown target ones. Since our datasets contain tags with the same names, we do not apply further processing: we remove the source samples that do not match the target task space and keep the rest as they are, as sketched below.
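A minimal sketch of this filtering step is shown below, assuming annotations are stored as (image, labels) pairs with label names shared across datasets; the function name is ours, and keeping a sample only when all of its labels lie inside the target label space is one reading of the paragraph above.

```python
def select_subset(source_annotations, target_label_set):
    """Keep only the source samples whose labels all lie inside the target label space."""
    return [(image_path, labels)
            for image_path, labels in source_annotations
            if all(lab in target_label_set for lab in labels)]
```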
### Data Augmentation
Generalization is a goal of any deep convolutional network, since the objective is to perform well on never-seen samples after training on previously seen data. Indeed, models with poor generalization tend to overfit the training data, so augmenting the number of samples represents a way to expand a dataset, save labeling costs, and improve classification performance. Although data augmentation cannot overcome all biases present in the minority classes, this technique prevents or significantly lessens multiple biases such as occlusions, lighting, scales, or changes in the background. Our scope covers basic image manipulations, including flipping, rotation, contrast, and noise injection, because those changes are typical of a manufacturing environment. Nevertheless, these methods only create data by image-level linear transformations and may not represent new distributions introduced by unknown defects with changes in the defects' shape or lighting orientations [27].
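The basic manipulations listed above can be sketched with plain NumPy, without committing to a particular augmentation library; the probability and parameter ranges below are placeholders, not the values used in our experiments.

```python
import numpy as np

def augment(image, rng=np.random.default_rng()):
    """One random basic augmentation pass: flip, 90-degree rotation, contrast change, Gaussian noise."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                            # horizontal flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))    # random multiple of 90 degrees
    contrast = rng.uniform(0.8, 1.2)                  # placeholder contrast range
    out = np.clip((out - out.mean()) * contrast + out.mean(), 0, 255)
    out = out + rng.normal(0.0, 5.0, size=out.shape)  # noise injection, placeholder sigma
    return np.clip(out, 0, 255).astype(np.uint8)
```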
### Transfer Learning
Transfer learning aims to enhance the performance of the hypothesis function for a target task by discovering and transferring latent knowledge from a source domain and its tasks, where neither the domains nor the source and target tasks must be equivalent. Hence, this technique is suitable to overcome the reliance on large quantities of high-quality data by leveraging helpful information from other related domains. Kim et al. [15] explored a couple of weight transference methods from a source network trained on ImageNet 2012 to a target network with data provided by DAGM [28], either freezing the transferred weights or fine-tuning them. Our proposal not only fine-tunes a network using a benchmark dataset but also uses a close domain in terms of the domain discrepancy score. As shown in Figure 1, we first train on the closest source dataset starting from weights pre-trained on COCO, transferring only those weights belonging to the network backbone and initializing the rest randomly. We then transfer this source network's backbone weights to the target network and again initialize the remaining layer weights randomly. Finally, fine-tuning the transferred weights was a key point to obtain the highest performance compared with other methods, as stated in [15].
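The weight-transfer step can be expressed generically in PyTorch as sketched below. This is not the YOLOv5-specific code used in this work: `backbone_prefix` is a hypothetical name filter for the backbone layers, the source checkpoint is assumed to be a plain `state_dict`, and all remaining layers simply keep their random initialization before fine-tuning.

```python
import torch

def transfer_backbone(source_ckpt_path, target_model, backbone_prefix="backbone."):
    """Copy backbone weights from a trained source checkpoint; leave all other layers randomly initialized."""
    source_state = torch.load(source_ckpt_path, map_location="cpu")   # assumed to be a state_dict
    target_state = target_model.state_dict()
    transferred = {k: v for k, v in source_state.items()
                   if k.startswith(backbone_prefix) and k in target_state
                   and v.shape == target_state[k].shape}
    target_state.update(transferred)
    target_model.load_state_dict(target_state)
    return target_model   # ready for fine-tuning on the target dataset
```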
## 3 Experiments
### Datasets
Our data is a group of real-world class-imbalanced datasets containing 47561 gray-scale images with 20 manually labeled defect categories, arranged in 11 magnetic tile datasets, as shown in Table 1. They all follow a typical long-tailed distribution, where the dataset with the highest imbalance rate used in this paper corresponds to jy-381-2, and the lowest belongs to dc-1. Pictures do not have any preprocessing, so the data can reflect the distribution of multiple defect types on a production line.
### Evaluation metrics for imbalanced data
Although a classifier should offer a balanced degree of predictive accuracy for both the minority and majority classes on the dataset, they usually provide a severely imbalanced degree of accuracy. The reason is that traditional metrics such as precision and recall are focused on the positive category only, avoiding the problems encountered by multi-class focus metrics in the case of long-tailed distributions [29]. Thus, we decided to use both the F-measure and PR-curve. The first measurement is the weighted harmonic mean of precision (P) and recall (R) of a classifier, taking \(\alpha\)=1 (F1 score). The PR curve can complement the previous score because it evaluates changes in distributions, observes variability in performance, and is practical in highly skewed domains. The absence of TN in its equation is functional in imbalanced classes like ours.
### Implementation details
We implement the proposed algorithm in Python using a Jupyter Notebook run in Ubuntu 20.04. Our experiments were performed on a PC with an Intel(R) Core(TM) i5-10400F 2.90GHz CPU and an NVIDIA RTX 3060 GPU. Regarding neural network models, we utilized YOLOv5 [30] because it offers model scaling and is easy to implement and modify. Its architecture loads COCO pre-trained weights from the official repository. This model was trained with a batch size of 11 on a single GPU for 50 epochs. All the source code and pre-trained models of this project are available at [https://github.com/ErikValle/RTLAOI-DD](https://github.com/ErikValle/RTLAOI-DD).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{3-13} \multicolumn{1}{c|}{} & \multicolumn{11}{c|}{**Dataset**} \\ \hline
**Label** & **Defects** & **dc-1** & **jy-381-2** & **jy-381-4** & **lc-101** & **lc-201** & **nj-101** & **nj-201** & **xh-1** & **xh-2** & **xh-3** & **xh-4** & **Sum** \\ \hline \(y_{0}\) & white crack & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 267 & 0 & 0 & 267 \\ \hline \(y_{1}\) & standard chipping & 0 & 3041 & 1676 & 3875 & 4070 & 1021 & 1944 & 3102 & 1515 & 1209 & 1613 & 23066 \\ \hline \(y_{2}\) & standard crack & 349 & 62 & 250 & 2178 & 631 & 3058 & 2787 & 1403 & 1002 & 341 & 54 & 12115 \\ \hline \(y_{3}\) & chamfer & 0 & 1 & 212 & 84 & 393 & 1 & 2 & 40 & 0 & 0 & 0 & 733 \\ \hline \(y_{4}\) & multifaceted & 2 & 3 & 0 & 2778 & 294 & 705 & 762 & 373 & 138 & 272 & 0 & 5327 \\ \hline \(y_{5}\) & crystallization & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 14 & 4 & 234 & 0 & 252 \\ \hline \(y_{6}\) & contour chipping & 0 & 10 & 325 & 183 & 744 & 27 & 561 & 5 & 94 & 79 & 7 & 2035 \\ \hline \(y_{7}\) & superficial chipping & 0 & 6 & 1 & 3 & 20 & 7 & 43 & 263 & 2 & 3 & 0 & 348 \\ \hline \(y_{8}\) & ambiguity & 2 & 0 & 0 & 2 & 0 & 1 & 0 & 14 & 0 & 3 & 5 & 27 \\ \hline \(y_{9}\) & plane chipping & 4 & 0 & 0 & 2 & 1 & 7 & 2 & 6 & 0 & 0 & 0 & 22 \\ \hline \(y_{10}\) & light inking & 0 & 0 & 0 & 0 & 8 & 0 & 26 & 1 & 0 & 338 & 2 & 375 \\ \hline \(y_{11}\) & triangular row & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 4 \\ \hline \(y_{12}\) & bump & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 26 & 342 & 368 \\ \hline \(y_{13}\) & fine cracks & 18 & 0 & 0 & 315 & 2 & 119 & 101 & 146 & 3 & 2 & 3 & 709 \\ \hline \(y_{14}\) & impurities & 10 & 0 & 0 & 64 & 8 & 850 & 645 & 73 & 15 & 328 & 141 & 2134 \\ \hline \(y_{15}\) & chipping & 0 & 34 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 34 \\ \hline \(y_{16}\) & abnormal chamfer & 0 & 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 6 \\ \hline \(y_{17}\) & crack & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 3 \\ \hline \(y_{18}\) & gas hole & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ \hline \(y_{19}\) & stains & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ \hline & **Samples** & **385** & **3165** & **2464** & **9486** & **6171** & **5796** & **6873** & **5444** & **3040** & **2835** & **2169** & **47828** \\ \hline & **images** & **252** & **3154** & **2443** & **7008** & **5294** & **5134** & **6293** & **4946** & **2792** & **2542** & **2050** & **47561** \\ \hline \end{tabular}
\end{table}
Table 1: Magnetic tile datasets: number of samples per dataset and category.
### Domain discrepancy scores
In order to prove that the domain discrepancy score delivers a way to select a suitable dataset for transfer learning, we used each dataset as a target and the rest as sources, as established in Table 2. Notice that some datasets obtained a score equal to zero, which means that we are using the source domain as a target (EMD = 0) or that both datasets share only a class (\(\delta\) = 0). Recall that the lower the score, the greater the similarity between the two datasets. Following this principle, it can be observed from Table 2 that NJ-101, LC-101 and XH-3 are the most similar to DC-1.
We performed a group of experiments to verify the performance of applying our transfer learning strategy on four target datasets using pre-trained weights from the closest source datasets, as shown in Figure 2. To interpret the outcomes, we refer to the target domain DC-1 and its experiments transferring weights from a pretrained network using: COCO (black line), the closest domain (LC-101) without subset selection (blue line), the previous dataset but adding the subset selection approach (red line), NJ-101 using the last two methods (purple and green lines, respectively), and XH-1 adjusted. Figure 2 shows how LC-101, which has the highest number of samples among the three datasets with the lowest domain discrepancy scores, obtained the best F1 and PR scores, but its outcomes only slightly surpassed the other two inputs.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{8}{|c|}{**Source datasets**} \\ \cline{3-13} \multicolumn{2}{|c|}{} & \multicolumn{1}{|c|}{**dc-1**} & **jv-381-2** & **jv-381-4** & **lc-101** & **lc-201** & **ni-101** & **ni-201** & **xh-1** & **xh-2** & **xh-3** & **xh-4** \\ \hline \multirow{5}{*}{DC-1} & **dc-1** & 0 & 13.2222 & 0.0000 & 3.1342 & 7.1206 & 2.5852 & 8.7600 & 6.2698 & 13.6900 & 6.1870 & 13.4656 \\ \cline{2-13} & **jv-381-2** & 14.4008 & 0 & 4.3584 & 6.1309 & 2.2382 & 6.9052 & 9.5422 & 9.2553 & 12.0100 & 11.7779 & 22.5224 \\ \cline{2-13} & **jv-381-4** & 0 & 6.0989 & 0 & 7.2453 & 2.3069 & 7.6034 & 7.6264 & 6.7784 & 8.6271 & 9.9443 & 18.0893 \\ \cline{2-13} & **lc-101** & 3.7638 & 9.3354 & 6.5446 & 0 & 4.6349 & 0.8726 & 3.3026 & 2.0188 & 6.2050 & 3.4617 & 9.3213 \\ \cline{2-13} & **lc-201** & 7.9384 & 3.1054 & 2.2474 & 4.2335 & 0 & 3.9278 & 5.5972 & 5.1799 & 8.0777 & 5.4744 & 13.4756 \\ \cline{2-13} & **nj-101** & 3.0947 & 8.6234 & 6.1676 & 0.8576 & 4.2141 & 0 & 3.8647 & 2.6163 & 6.3437 & 4.0969 & 9.7979 \\ \cline{2-13} & **nj-201** & 11.1672 & 14.4550 & 7.6569 & 3.5589 & 6.4790 & 4.2496 & 0 & 1.8773 & 1.8912 & 1.1713 & 3.3375 \\ \cline{2-13} & **xh-1** & 7.2164 & 12.0095 & 6.0675 & 1.8969 & 5.2294 & 2.5016 & 1.6373 & 0 & 3.5316 & 0.9520 & 5.3757 \\ \cline{2-13} & **xh-2** & 14.0635 & 16.4956 & 9.3723 & 5.2274 & 8.3172 & 5.4108 & 1.3925 & 3.3588 & 0 & 1.9608 & 2.9308 \\ \cline{2-13} & **xh-3** & 10.8744 & 16.4718 & 9.4725 & 3.4586 & 7.6191 & 4.1345 & 1.2351 & 1.2738 & 2.5667 & 0 & 2.8788 \\ \cline{2-13} & **xh-4** & 16.3409 & 22.7409 & 11.9447 & 7.8019 & 12.2151 & 7.8602 & 2.2840 & 4.9038 & 2.4237 & 2.2619 & 0 \\ \hline \end{tabular}
\end{table}
Table 2: Domain discrepancy scores
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{**Subset Selection**} & \multicolumn{2}{|c|}{**Full dataset**} \\ \hline
**Target** & **Source** & **F1 score** & **PR score** & **F1 score** & **PR score** \\ \hline \multirow{5}{*}{DC-1} & NJ-101 & [email protected] & 0.741 [email protected] & [email protected] & 0.596 [email protected] \\ \cline{2-6} & JY-381-2 & [email protected] & 0.285 [email protected] & [email protected] & 0.289 [email protected] \\ \cline{2-6} & LC-101 & [email protected] & 0.754 [email protected] & [email protected] & 0.752 [email protected] \\ \cline{2-6} & XH-1 & [email protected] & 0.76 [email protected] & [email protected] & 0.67 [email protected] \\ \cline{2-6} & COCO & - & - & [email protected] & 0.256 [email protected] \\ \hline \multirow{5}{*}{JY-381-4} & JY-381-2 & [email protected] & 0.777 [email protected] & [email protected] & 0.742 [email protected] \\ \cline{2-6} & LC-201 & [email protected] & 0.891 [email protected] & [email protected] & 0.715 [email protected] \\ \cline{2-6} & NJ-201 & [email protected] & 0.726 [email protected] & [email protected] & 0.907 [email protected] \\ \cline{2-6} & COCO & - & - & [email protected] & 0.699 [email protected] \\ \hline \multirow{5}{*}{NJ-101} & XH-1 & [email protected] & 0.698 [email protected] & [email protected] & 0.624 [email protected] \\ \cline{2-6} & LC-101 & [email protected] & 0.644 [email protected] & [email protected] & 0.619 [email protected] \\ \cline{2-6} & COCO & - & - & [email protected] & 0.532 [email protected] \\ \cline{2-6} & LC-201 & [email protected] & 0.408 [email protected] & [email protected] & 0.437 [email protected] \\ \cline{2-6} & NJ-201 & [email protected] & 0.423 [email protected] & [email protected] & 0.453 [email protected] \\ \cline{2-6} & XH-3 & [email protected] & 0.396 [email protected] & [email protected] & 0.411 [email protected] \\ \cline{2-6} & COCO & - & - & [email protected] & 0.386 [email protected] \\ \hline \end{tabular}
\end{table}
Table 3: Experiments on selected target datasets from different source datasets and the COCO dataset. They take the pre-trained weights of the YOLOv5 backbone and randomly initialize the rest of the layers.
## 5 Conclusions
In this study, we proposed a domain discrepancy score to evaluate existing source datasets for a target dataset in terms of the EMD, categories overlapping, the number of samples, and the Gini score. The principal aim was to find highly similar sub-datasets from source datasets to target tasks through our dataset screening based on domain similarity. The network was first pretrained on a benchmark dataset (COCO) and subsequently transferred its backbone weights to the source network, which takes a subset of the source dataset as input. The fine-tuned parameters from the source network belonging to the backbone are assigned to the target network and later fine-tuned on the target dataset. We performed groups of experiments on different magnetic tile datasets to compare our model with typical transfer learning techniques such as full source domain or simply loading pre-trained weight from the COCO dataset. The results show that our method retrieved up to 20% higher F1 and PR scores than using a benchmark dataset as a source domain. In the future, we want to include the number of samples per category in the discrepancy score, add more defect inspection datasets to our experiments, and expand the techniques used in data augmentation to get richer inputs.
## Acknowledgment
This work was supported in part by the Interdisciplinary Foundation of Shenzhen International Graduate School of Tsinghua University (Grant No. JC2021003), in part by the Shenzhen Stable Supporting Program (Grant No. WDZC20200820200655001).
Figure 2: Metrics after transferring pretrained weights using the COCO dataset, LC-101, NJ-101, and our approach. a) F1 scores and b) PR curves. |
2306.10583 | Hierarchical entanglement shells of multichannel Kondo clouds | Impurities or boundaries often impose nontrivial boundary conditions on a
gapless bulk, resulting in distinct boundary universality classes for a given
bulk, phase transitions, and non-Fermi liquids in diverse systems. The
underlying boundary states however remain largely unexplored. This is related
with a fundamental issue how a Kondo cloud spatially forms to screen a magnetic
impurity in a metal. Here we predict the quantum-coherent spatial and energy
structure of multichannel Kondo clouds, representative boundary states
involving competing non-Fermi liquids, by studying quantum entanglement between
the impurity and the channels. Entanglement shells of distinct non-Fermi
liquids coexist in the structure, depending on the channels. As temperature
increases, the shells become suppressed one by one from the outside, and the
remaining outermost shell determines the thermal phase of each channel.
Detection of the entanglement shells is experimentally feasible. Our findings
suggest a guide to studying other boundary states and boundary-bulk
entanglement. | Jeongmin Shim, Donghoon Kim, H. -S. Sim | 2023-06-18T15:22:40Z | http://arxiv.org/abs/2306.10583v1 | # Hierarchical entanglement shells of multichannel Kondo clouds
###### Abstract
Impurities or boundaries often impose nontrivial boundary conditions on a gapless bulk, resulting in distinct boundary universality classes for a given bulk, phase transitions, and non-Fermi liquids in diverse systems. The underlying boundary states however remain largely unexplored. This is related with a fundamental issue how a Kondo cloud spatially forms to screen a magnetic impurity in a metal. Here we predict the quantum-coherent spatial and energy structure of multichannel Kondo clouds, representative boundary states involving competing non-Fermi liquids, by studying quantum entanglement between the impurity and the channels. Entanglement shells of distinct non-Fermi liquids coexist in the structure, depending on the channels. As temperature increases, the shells become suppressed one by one from the outside, and the remaining outermost shell determines the thermal phase of each channel. Detection of the entanglement shells is experimentally feasible. Our findings suggest a guide to studying other boundary states and boundary-bulk entanglement.
## Introduction
Boundary quantum critical phenomena [1; 2] appear in gapless systems of quantum impurities [3; 4; 5; 6; 7; 8; 9; 10; 11], magnets with surfaces [12], edge states of topological orders [13], and qubit dissipation [14; 15]. There, the presence of a boundary causes various boundary criticalities that affect the bulk, depending on boundary-bulk coupling. A character of boundaries has been revealed by the boundary or impurity entropy [16; 17; 18; 19] that is the entropy difference between the presence and absence of the boundary. This entropy corresponds to the constant term in the dependence of the ground-state entanglement entropy on the location of the entanglement partition [18]. The entropy is a bulk quantity, as the partition is placed at long distance from the boundary, and it has been obtained by using the boundary conformal field theory (BCFT) [8; 9; 10; 20; 21; 22], a standard approach for the criticalities.
While bulk quantities have been understood, boundary states are yet to be explored [23; 24; 25; 26]. The Kondo singlet [23] in the single-channel Kondo effect, a many-body state of metallic electrons formed to screen a local impurity spin, implies that quantum entanglement between a bulk and its boundary is essential for understanding the quantum coherent boundary-bulk coupling [27; 28; 29]. The spatial distribution of the particles forming the boundary-bulk entanglement will be a key information of boundary quantum criticalities and related many-body effects. As the partition for the boundary-bulk entanglement is placed right at the boundary [27; 28; 29; 30], the entanglement differs from the boundary entropy. There are difficulties in studying the entanglement. In BCFTs, the boundary degrees of freedom are absorbed into the bulk as boundary conditions, and bulk properties at long distance from the boundary are considered. Experimentally detecting entanglement typically requires inaccessible multiparticle observables. Understanding about the entanglement is desired.
Multichannel Kondo effects, where multiple channels of conduction electrons compete to screen an impurity spin, serve as a paradigm of many-body physics and boundary criticalities [6; 7; 8; 9; 10]. For example, in the \(k\)-channel Kondo (\(k\)CK) effect, \(k\) electron channels compete to screen an impurity spin \(1/2\). It is described by the Hamiltonian
\[H_{k\mathrm{CK}}=\sum_{j=1}^{k}J_{j}\mathbf{S}_{\mathrm{imp}}\cdot\mathbf{S}_ {j}(0)+\sum_{j=1}^{k}H_{j}. \tag{1}\]
Here, the impurity spin \(\mathbf{S}_{\mathrm{imp}}\) locally couples to the spin \(\mathbf{S}_{j}(0)\) of electrons in the \(j\)th channel with strength \(J_{j}>0\), and \(H_{j}\) describes free electrons in the \(j\)th channel. In the Affleck-Ludwig BCFT [8; 9; 10], the channel isotropic case of \(J_{1}=\cdots=J_{k}\) is transformed into a free electron Hamiltonian with a nontrivial boundary condition, by mapping \(H_{j}\) to a semi-infinite one dimension, and fusing the impurity with the boundary of the one dimension. It exhibits a boundary criticality. In channel anisotropic cases, the competition between the channels results in quantum phase transitions [2], various non-Fermi liquids (NFLs) [6; 8], and fractionalizations [31], making the effects rich. Thermal phases and their renormalization flows of the channel anisotropic Kondo effects were experimentally observed by using quantum dots or metallic islands [32; 33; 34; 35; 36].
The boundary states of the Kondo effects involve a Kondo cloud [24; 25; 26] formed by the conduction electrons screening the impurity spin. Theoretically the cloud has been studied [17; 18; 19; 37; 38; 39] mostly for channel isotropic cases. For anisotropic 2CK effects, a quantity called the excess charge density was used to study a real-space structure that indicates spatial regions corresponding to the local moment and strong coupling phases [40]. However this quantity can hardly quantify the spatial distribution of a Kondo cloud, as it can be negative at certain distances from the impurity spin and even increase with the distance. The properties of the cloud, such as its channel-resolved spatial distribution, its entanglement with the impurity, its correspondence to the transition or crossover between distinct NFL phases, and its thermal suppression, are yet to be studied. It also remains unknown how to detect the clouds in the multichannel cases, while a cloud was recently observed [41; 42] in the single channel case.
The entanglement between an impurity and its Kondo cloud is a boundary-bulk entanglement [27; 28; 29; 30]. The spatial distribution of the electrons forming this entanglement will characterize how the cloud spatially screens the impurity quantum coherently. In this work, we propose how to theoretically quantify and experimentally measure the distribution by applying a perturbation of local symmetry breaking (LSB) at a distance from the impurity. The distribution is found to exhibit channel-dependent hierarchical entanglement shells of NFL, Kondo Fermi liquid (FL), or non-Kondo FL characters in the channel anisotropic cases. Each shell is identified by a power-law decay of the distribution with the distance, whose exponent is determined by the scaling dimension of the boundary operator describing the character. As the temperature increases, the shells are suppressed one by one from the outside, and the remaining outermost shell determines the thermal phase of each channel. The entanglement shell structure shows that different NFLs and FLs hierarchically coexist around the boundary with spatial and energetic separation, reflecting the renormalization of the quantum coherent impurity screening (quantified by the entanglement) in the presence of the channel competition.
## Results
**Quantifying boundary entanglement distribution --** We study the entanglement negativity \(\mathcal{N}\equiv\left\|\rho^{\mathrm{T_{I}}}\right\|_{1}-1\) between the impurity and the channels in the \(k\)CK effects. \(\rho\) is the density matrix of the whole system, \(\left\|\cdot\right\|_{1}\) is the trace norm, and \(\mathrm{T_{I}}\) means the partial transpose on the impurity. This negativity is twice the conventional definition [43, 44] so that its maximum value is 1. It measures quantum coherence of the screening. The screening happens by the maximal entanglement \(\mathcal{N}=1\) independent of \(k\) in the channel isotropic cases at zero temperature [30].
To quantify the spatial distribution of the entanglement, we apply an LSB perturbation breaking the Kondo SU(2) symmetry in a channel \(n\) at distance \(L\) from the impurity [Fig. 1a], and study the reduction \(\rho_{n}\) of the negativity from the value \(\mathcal{N}_{0}(T)\) in the absence of the LSB to \(\mathcal{N}(L,T;n)\) in the presence of the LSB,
\[\rho_{n}(L,T)\equiv\mathcal{N}_{0}(T)-\mathcal{N}(L,T;n), \tag{2}\]
at temperature \(T\). \(\rho_{n}\) varies between 0 and 1. Larger \(\rho_{n}\) implies that at the distance \(L\) there exist more electrons participating in the entanglement. Therefore the \(L\) dependence of the reduction \(\rho_{n}(L,T)\) quantifies the spatial distribution of the Kondo cloud in the channel \(n\).
The negativity has a direct relation [30] with the impurity magnetization \(\mathbf{M}=\langle\mathbf{S}_{\mathrm{imp}}\rangle\) at zero temperature (Supplementary Note 1),
\[\mathcal{N}=\sqrt{1-\frac{4\mathbf{M}^{2}}{\hbar^{2}}}, \tag{3}\]
where \(\mathbf{S}_{\mathrm{imp}}\) is the impurity spin operator. This shows that the magnetization is larger as the impurity spin is less screened by, equivalently less entangled with, conduction electrons. This relation is valid at zero temperature in general situations of the Kondo effects, and it is a good approximation at low temperature \(T\ll\mathrm{T_{K}}\), where \(T_{\mathrm{K}}\) is the Kondo temperature.
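As a simple illustration of Eq. (3) (our example, not part of the derivation of Ref. [30]), consider a pure state of the impurity and a single screening spin, \(|\psi\rangle=a|\!\uparrow_{\rm imp}\downarrow\rangle+b|\!\downarrow_{\rm imp}\uparrow\rangle\) with \(|a|^{2}+|b|^{2}=1\). Then \(\mathbf{M}=\hat{z}\,(\hbar/2)(|a|^{2}-|b|^{2})\) and

\[\sqrt{1-\frac{4\mathbf{M}^{2}}{\hbar^{2}}}=\sqrt{4|a|^{2}|b|^{2}}=2|a||b|=\mathcal{N},\]

which interpolates between the maximally entangled case \(|a|=|b|\) (\(\mathcal{N}=1\), \(\mathbf{M}=0\)) and the unscreened product state \(b=0\) (\(\mathcal{N}=0\), \(|\mathbf{M}|=\hbar/2\)).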
For details, we consider a Hamiltonian \(H_{k\mathrm{CK}}+H_{\mathrm{LSB}}\). The Kondo Hamiltonian \(H_{k\mathrm{CK}}\) is shown in Eq. (1). Here each channel is described by free electrons in a semi-infinite one dimensional system and the impurity spin is located at the boundary of the one dimension. \(H_{\mathrm{LSB}}\) describes the LSB by a local magnetic field \(B\) along \(x\) axis coupled to the spin \(S_{n,x}(L)\) in a channel \(n\) at distance \(L\) from the impurity,
\[H_{\mathrm{LSB}}=BS_{n,x}(L). \tag{4}\]
Figure 1: **Channel-isotropic Kondo cloud.****a** An impurity spin couples to three channels with equal strengths \(J_{1}=J_{2}=J_{3}\). A perturbation \(B\) breaks the SU(2) spin symmetry at distance \(L\) from the impurity in channel 1. The cloud distribution \(\rho_{1}(L)\) in channel 1 is read out from the \(L\) dependence of the entanglement \(\mathcal{N}\) between the impurity and the channels. **b** Schematic cloud distribution. Crossover between the core and the tail happens around the cloud length \(\xi_{\mathrm{K}}\). **c** Numerical renormalization group (NRG) results of \(\rho_{1}(L)\) at zero temperature for the isotropic single-channel (1CK), two-channel (2CK), and three-channel Kondo (3CK) effects. **d** Log-log plot of **c**. The tail follows the power-law decay \(L^{-2\Delta}\) in agreement with the boundary conformal field theory (BCFT).
In the presence of the LSB, we compute the negativity between the impurity and the channels at finite temperature by using the numerical renormalization group (NRG) method (Supplementary Notes 2-4) that we have developed [29]. We also obtain the negativity at zero temperature by using Eq. (3) and analytically computing the magnetization based on the BCFT in the presence of the LSB (Supplementary Note 5).
**Isotropic multichannel Kondo clouds --** We first consider the channel isotropic case of \(J_{1}=J_{2}=\cdots=J_{k}=J\). At \(T\sim T_{\rm K}\), there occurs thermal crossover from the infrared Kondo fixed point to the ultraviolet local moment (LM) phase. The Kondo phase is a FL in the single-channel case [4, 5] and a NFL in the multichannel cases of \(k\geq 2\)[6, 8].
**Entanglement shells of anisotropic multichannel Kondo clouds --** We next consider channel anisotropic cases of \(k\) channels. It is known that there are multiple crossover temperatures [6]. At \(T\gtrsim T_{\rm K}\), the LM phase happens. At \(T^{*}\lesssim T\lesssim T_{\rm K}\), the Kondo effect by the \(k\) channels (\(k\)CK) occurs, where \(T^{*}\) is a crossover temperature determined by the anisotropy. Below \(T^{*}\) there can appear \(k^{\prime}\)-channel Kondo effects with \(k^{\prime}<k\). The zero temperature phase is a \(k^{\prime\prime}\)CK with \(k^{\prime\prime}\leq k^{\prime}\), where \(k^{\prime\prime}\) is the number of the channels having the largest coupling. These are shown in the phase diagrams of Figs. 2a and 3a.
We first discuss the Kondo cloud of the anisotropic \(k\)CKs at zero temperature. We find that the spatial distribution \(\rho_{n}\) has the core and the tail of a shell structure [Figs. 2 and 3a-h].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline shell & 1CK & 2CK & 3CK & \(k\)CK (\(k\geq 2\)) & non-Kondo FL \\ \hline \(\Delta\) & \(1\) & \(1/2\) & \(2/5\) & \(2/(2+k)\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 1: Exponent \(\Delta\) of the power-law decay \(\rho_{n}(L)\propto(\xi_{\rm K}/L)^{2\Delta}\) of the cloud distribution for each type of entanglement shell.
\(\rho_{n}\) is much larger in the core, which appears over \(L\lesssim\xi_{\rm K}\), than in the tail, as in the isotropic case. The tail has multiple hierarchical shells with distinct entanglement scaling behaviors. In the innermost shell, all the \(k\) channels follow the power-law decay \(\rho_{n}(L)\propto(\xi_{\rm K}/L)^{2\Delta}\) with \(\Delta=2/(2+k)\). This shell corresponds to the NFL of the isotropic \(k\)CK, as identified by Eq. (5) and shown in Table 1, and appears at \(\xi_{\rm K}\lesssim L\lesssim\xi^{*}\) with \(\xi^{*}=\hbar v/(k_{\rm B}T^{*})\). The core and the innermost shell are identical between the channels, although the coupling strengths \(J_{i}\) are different.
On the other hand, the other shells are channel dependent. In the outermost shell, the \(k^{\prime\prime}\) channels that share the largest coupling strength behave differently from the others. These largest-
Figure 3: **Three-channel cloud shells and their thermal evaporation.** The three-channel Kondo (3CK) model of couplings \(J_{1,2}=J+(\delta J)/2\) and \(J_{3}=J-\delta J\) is considered. \(\delta J\) is the channel anisotropy. **a-d** The phase diagram of the model, shown in **a**, is composed of the local moment (LM), single-channel Kondo (1CK), two-channel Kondo (2CK), and three-channel Kondo (3CK) phases. At a point of \(\delta J<0\) and zero temperature \(T=0\) marked by the red star in the phase diagram **a**, the cloud distribution is drawn in **b**, the log-log plot of numerical renormalization group (NRG) results of the distribution \(\rho_{1}(L)\) is in **c**, and the log-log plot of \(\rho_{3}(L)\) is in **d**. \(\rho_{2}\) is identical to \(\rho_{1}\). In **b**, the core, 1CK, 3CK, and non-Kondo Fermi liquid (FL) shells are identified. \(T_{\rm K}\) is the Kondo temperature, \(T^{*}\) is the crossover temperature, \(\xi_{\rm K}\) is the Kondo length, and \(\xi^{*}\) is the crossover length. **e-h** The same plots, but at a point of \(\delta J>0\) and \(T=0\). **i-l** The same plots, but at a point of \(\delta J>0\) and \(T=T^{*}\). **m-p** The same plots, but at a point of \(\delta J>0\) and \(T=T_{\rm K}\). As temperature increases, the outer shells disappear one by one.
coupling channels exhibit a power-law decay of the distribution \(\rho_{n}(L)\) with \(\Delta=1\) for \(k^{\prime\prime}=1\) (namely when one channel has stronger coupling than all the others) and \(\Delta=2/(2+k^{\prime\prime})\) for \(k^{\prime\prime}\geq 2\). These channels in the shell exhibit the zero-temperature \(k^{\prime\prime}\)CK phase, as implied by Eq. (5) (see also Table 1). The other \(k-k^{\prime\prime}\) channels of weaker coupling in this shell also have a nonzero distribution \(\rho_{n}\), albeit smaller than that of the \(k^{\prime\prime}\) channels. They follow the power-law decay of \(\rho_{n}(L)\) with \(\Delta=1\), showing a non-Kondo FL, i.e., a Fermi liquid that does not exhibit the Kondo effect, as discussed below. Hence the outermost shell of the Kondo cloud is composed of the NFL (resp. FL) of the \(k^{\prime\prime}\)CK in the \(k^{\prime\prime}\) channels of the strongest coupling for \(k^{\prime\prime}\geq 2\) (resp. \(k^{\prime\prime}=1\)) and the non-Kondo FL in the other channels.
We now discuss the non-Kondo FL behavior in the \(k-k^{\prime\prime}\) channels of weaker coupling. The value of \(\Delta=1\) implies that these channels are Fermi liquids. Although the value is identical to that of the 1CK case (see Table 1), these weaker-coupling channels do not exhibit Kondo behavior. For example, in an anisotropic 2CK model [4; 5; 45; 46], the channel of stronger coupling exhibits the \(\pi\) scattering phase shift as in the 1CK case, while the weaker-coupling channel does not. It is interesting that a spin cloud with an algebraic tail (indicated by the non-vanishing entanglement between the impurity and the channels) nevertheless develops in these weaker-coupling channels. A recent work [26] reported a similar finding that a spin cloud appears in a non-Kondo phase of a superconductor coupled to a magnetic impurity.
In Figs. 2 and 3a-h, these features of the outermost shell are shown for the 2CK of \(J_{1}=J+\delta J\) and \(J_{2}=J-\delta J\), and the 3CK of \(J_{1,2}=J+(\delta J)/2\) and \(J_{3}=J-\delta J\). The shell appears at \(L\gtrsim\xi^{*}\), where \(\xi^{*}\propto|\delta J|^{-2}T_{\rm K}^{-1}\) for the 2CK and \(\xi^{*}\propto|\delta J|^{-5/2}T_{\rm K}^{-1}\) for the 3CK [35]. At \(L\gtrsim\xi^{*}\) in the 2CK, the channel 1 of stronger coupling has the 1CK FL, while the channel 2 has a non-Kondo FL. We find, using the bosonization [47; 48] (Supplementary Note 6), that the channel 2 shows nonzero distribution \(\rho_{2}\) smaller than the channel 1, following \(\rho_{2}/\rho_{1}\cong T^{*}/\nu T_{\rm K}^{2}\) at \(L\gg\xi^{*}\) [Fig. 2e]. \(\nu\) is the density of states. In the 3CK with \(\delta J>0\), the channels 1 and 2 having the largest coupling exhibit the 2CK NFL in the outermost shell, while the channel 3 shows a non-Kondo FL. In the 3CK with \(\delta J<0\), the channel 3 of the largest coupling shows the 1CK FL in the outermost shell, while the other channels exhibit a non-Kondo FL.
In general anisotropic \(k\)CKs, there appear intermediate shells corresponding to a \(q_{1}\)CK, a \(q_{2}\)CK, \(\cdots\) (from outer to inner) between the innermost and outermost shells, with the hierarchy \(k^{\prime\prime}<q_{1}<q_{2}<\cdots<k\) determined by the coupling strengths \(J_{n=1,2,\cdots,k}\). In the shell of the \(q_{i}\)CK, the \(q_{i}\) channels having larger coupling than the others exhibit the \(q_{i}\)CK NFL, while the other \(k-q_{i}\) channels show a non-Kondo FL. For example, we find that in the most general case of the 3CK with \(J_{1}>J_{2}>J_{3}\), the Kondo cloud is composed of the core, the innermost 3CK shell, the intermediate 2CK shell (having the 2CK NFL in two channels of larger coupling and a non-Kondo FL in the other), and the outermost 1CK shell (having the 1CK FL in the channel of the largest coupling and a non-Kondo FL in the others) at zero temperature (Supplementary Note 4).
**Thermal evaporation of entanglement shells --** To examine the thermal decoherence of the entanglement shells and hence the Kondo cloud, we compute \(\rho_{n}(L,T)\) in Eq. (2) at finite temperatures, using the NRG. \(\rho_{n}(L,T)=\mathcal{N}_{0}(T)-\mathcal{N}(L,T;n)\) quantifies the difference of the entanglement between the absence and presence of the LSB at temperature \(T\); \(\mathcal{N}_{0}(T)\) measures the entanglement that survives against thermal fluctuations at \(T\), while \(\mathcal{N}(L,T;n)\) measures the entanglement at \(T\) further reduced by the LSB at distance \(L\) in channel \(n\). More reduction occurs as the impurity spin is more entangled with (i.e., more screened by) electrons at \(L\). Hence, \(\rho_{n}(L,T)\) quantifies the entanglement distribution at \(T\) with varying \(L\). Note that in the absence of the LSB, the entanglement algebraically decays thermally [30], \(\mathcal{N}_{0}(T)=1-a_{k}(T/T_{\rm K})^{2\Delta}\) at \(T\ll T_{\rm K}\), where \(a_{k}>0\) is a constant.
For the 3CK with \(\delta J>0\), Figs. 3e-p show the temperature dependence of the entanglement shells. Thermal fluctuations suppress shells outside the thermal length \(\hbar v/(k_{\rm B}T)\), while leaving shells inside almost unaffected. The outer shells are thus thermally "evaporated" one by one. At \(T\ll T^{*}\), the outermost shell, located at \(L>\xi^{*}\), shows the 2CK NFL in the channels 1 and 2, as discussed above. At \(T^{*}\lesssim T\lesssim T_{\rm K}\), the outermost shell is almost suppressed. Then the remaining inner shell at \(\xi_{\rm K}\lesssim L\lesssim\xi^{*}\), whose character is the 3CK NFL, determines the thermal phase. When the temperature further increases to \(T\gtrsim T_{\rm K}\), only the core at \(L\lesssim\xi_{\rm K}\) survives and represents the LM thermal phase.
This clearly shows that the hierarchical shells of the boundary entanglement at zero temperature are a manifestation of the renormalization group flow in the development of the Kondo effects. Inner shells are "bound" more strongly to, i.e., more entangled with, the impurity, and are therefore more robust against thermal fluctuations. In other words, inner shells set the boundary condition for the bulk conduction electrons of higher energies, hence determining the phases at higher temperatures. Note that a related temperature dependence of a single-channel Anderson impurity model was discussed in Ref. [40].
**How to detect boundary entanglement shells --** Equation (3) implies that the entanglement distribution \(\rho_{n}(L)\), hence, the Kondo cloud can be experimentally detected by monitoring the change of the impurity magnetization with varying the position \(L\) of an LSB in a channel \(n\). The relation is exact at zero temperature and a very good approximation at \(T\ll T_{\rm K}\) and \(L\lesssim\hbar v/(k_{\rm B}T)\) where thermal fluctuations negligibly affect \(\rho_{n}(L)\) as demonstrated in Fig. 3.
We propose an experiment based on a charge-Kondo circuit [34, 35] with which multichannel Kondo effects can be manipulated. It has a metallic dot coupled to \(k\) quantum Hall edge channels (Fig. 4). Energy-degenerate charge states \(|N\rangle\) and \(|N+1\rangle\) of the dot form the pseudospin \(1/2\), and the excess charge \(\Delta Q\equiv Q-(N+1/2)e\) of the dot plays the role of the magnetization \(M/\hbar\) of the pseudospin. Here \(N\) and \(Q\) denote the number of electrons and the charge operator for the dot, respectively, and \(e\) is the electron charge.
We show that a quantum point contact placed on a channel \(n\) at distance \(L\) from the dot results in an LSB breaking the SU(2) pseudospin symmetry (Fig. 4, Supplementary Note 7). At \(T=0\), the negativity in the absence of the LSB is \(\mathcal{N}_{0}(T=0)=1\)[30], while the negativity in the presence of the LSB is \(\mathcal{N}(L,T=0;n)=\sqrt{1-4(\Delta Q/e)^{2}}\) [see Eq. (3)]. These give \(\rho_{n}(L,T=0)=\mathcal{N}_{0}(T=0)-\mathcal{N}(L,T=0;n)=1-\sqrt{1-4(\Delta Q/e)^{2}}\simeq 2(\Delta Q/e)^{2}\) for small \((\Delta Q/e)\ll 1\). At low temperature \(T\ll T_{\mathrm{K}}\) where thermal fluctuation on \((\Delta Q/e)^{2}\) is negligible, \(\rho_{n}(L,T)\) can be approximated as the zero-temperature value of \(\rho_{n}(L,T=0)\simeq 2(\Delta Q/e)^{2}\). It is possible to measure \(\Delta Q(L)\), hence \(\rho_{n}(L)\), by monitoring electric current through another quantum point contact [49] near the dot. The entanglement shells in isotropic and anisotropic \(k\)CKs can be experimentally identified with realistic parameters (Supplementary Note 7).
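To make the proposed readout concrete, the minimal sketch below (not part of the original analysis; the charge values are purely illustrative) converts a measured excess dot charge \(\Delta Q/e\) into the entanglement distribution \(\rho_{n}(L,T=0)\) using the relations above.

```python
import numpy as np

# Sketch (illustrative): convert measured excess dot charge dQ/e into the
# entanglement distribution rho_n(L, T=0), using
# N(L, T=0; n) = sqrt(1 - 4 (dQ/e)^2) and rho_n = N_0 - N with N_0 = 1.
def rho_from_charge(dq_over_e):
    dq = np.asarray(dq_over_e, dtype=float)
    negativity_with_lsb = np.sqrt(1.0 - 4.0 * dq**2)
    return 1.0 - negativity_with_lsb

dq_values = np.array([0.01, 0.05, 0.10])   # assumed measured values of dQ/e
print(rho_from_charge(dq_values))
print(2.0 * dq_values**2)                  # small-charge approximation 2 (dQ/e)^2
```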
## Discussion
Our work demonstrates how a spin cloud screening a local magnetic impurity in a metal differs at the fundamental level from a charge cloud screening an excess charge. For the demonstration, we developed a theory of the boundary-bulk entanglement in multichannel Kondo effects. Utilizing an LSB, the spatial distribution and thermal suppression of the entanglement can be computed and experimentally detected. The distribution is a visualization of the spatial and energy structure of the quantum coherent Kondo spin screening cloud.
The boundary-bulk entanglement is applicable to general boundary quantum critical phenomena, as follows. The entanglement quantifies the quantum coherent coupling between the boundary and the bulk in boundary criticalities. Its spatial structure carries information about competing phases or boundary conditions, as suggested by the hierarchical shells of Kondo clouds. In spin-\(1/2\) boundary criticalities, it can be obtained using the boundary magnetization and Eq. (3). In more general cases, it may be calculated with BCFT boundary operators [30].
An LSB that breaks the boundary-bulk coupling symmetry will be useful for identifying the boundary structure of boundary criticalities. The spatial structure is estimated from the change of the entanglement as a function of the location of the LSB, while the partition for the entanglement is placed at the boundary. This differs from the usual approach [16], where entanglement is studied by placing the entanglement partition in the bulk.
The boundary-bulk entanglement will be experimentally accessible. As in Eq. (3), it may have a simple relation with a boundary observable when the entanglement has a simple form, such as Kondo singlets near a fixed point of boundary criticalities. Such a simple relation between an entanglement measure and an observable is rare, which is another advantage of the boundary-bulk entanglement.
We anticipate that the boundary-bulk entanglement is an essential aspect of boundary criticalities and related effects such as Kondo lattices and heavy fermions [50, 51, 52].
## Data availability
All the calculation details are provided in Supplementary Information.
|
2302.02470 | Towards inferring network properties from epidemic data | Epidemic propagation on networks represents an important departure from
traditional mass-action models. However, the high-dimensionality of the exact
models poses a challenge to both mathematical analysis and parameter inference.
By using mean-field models, such as the pairwise model (PWM), the complexity
becomes tractable. While such models have been used extensively for model
analysis, there is limited work in the context of statistical inference. In
this paper, we explore the extent to which the PWM with the
susceptible-infected-recovered (SIR) epidemic can be used to infer disease- and
network-related parameters. The widely-used MLE approach exhibits several
issues pertaining to parameter unidentifiability and a lack of robustness to
exact knowledge about key quantities such as population size and/or proportion
of under reporting. As an alternative, we considered the recently developed
dynamical survival analysis (DSA). For scenarios in which there is no model
mismatch, such as when data are generated via simulations, both methods perform
well despite strong dependence between parameters. However, for real-world
data, such as foot-and-mouth, H1N1 and COVID19, the DSA method appears more
robust to potential model mismatch and the parameter estimates appear more
epidemiologically plausible. Taken together, however, our findings suggest that
network-based mean-field models can be used to formulate approximate
likelihoods which, coupled with an efficient inference scheme, make it possible
to not only learn about the parameters of the disease dynamics but also that of
the underlying network. | István Z. Kiss, Luc Berthouze, Wasiur R. KhudaBukhsh | 2023-02-05T19:59:33Z | http://arxiv.org/abs/2302.02470v1 | # Towards inferring network properties from epidemic data
###### Abstract
Epidemic propagation on networks represents an important departure from traditional mass-action models. However, the high-dimensionality of the exact models poses a challenge to both mathematical analysis and parameter inference. By using mean-field models, such as the pairwise model (PWM), the complexity becomes tractable. While such models have been used extensively for model analysis, there is limited work in the context of statistical inference. In this paper, we explore the extent to which the PWM with the susceptible-infected-recovered (SIR) epidemic can be used to infer disease- and network-related parameters. The widely-used MLE approach exhibits several issues pertaining to parameter unidentifiability and a lack of robustness to exact knowledge about key quantities such as population size and/or proportion of under reporting. As an alternative, we considered the recently developed dynamical survival analysis (DSA). For scenarios in which there is no model mismatch, such as when data are generated via simulations, both methods perform well despite strong dependence between parameters. However, for real-world data, such as foot-and-mouth, H1N1 and COVID19, the DSA method appears more robust to potential model mismatch and the parameter estimates appear more epidemiologically plausible. Taken together, however, our findings suggest that network-based mean-field models can be used to formulate approximate likelihoods which, coupled with an efficient inference scheme, make it possible to not only learn about the parameters of the disease dynamics but also that of the underlying network.
Keywords: Epidemics, Networks, Inference.
## 1 Introduction
Exact mathematical models for describing the spread of epidemics on networks are often insoluble or intractable for large networks[16, 13]. 'Mean-field' models provide a solution by introducing approximations and focusing on quantities at the population level, such as the expectation of the number of infected or susceptible individuals, or the number of direct connections between two such groups [15]. Many mean-field models exist to describe the dynamics of epidemic processes on networks. They usually take the form of a system of ODEs describing these processes [6]. Such models typically involve applying a 'closure' to exact models. Closures rely on assumptions about the underlying contact network and/or even the dynamics (usually simplifying ones), and these assumptions bring the complexity of a given system to manageable levels [20].
Modelling epidemics on networks using mean-field approximations is a well studied and active area of research [17, 1]. In both theoretical and applied settings, it is used for parameter estimation, prediction and informing intervention or policy making [2], as recently demonstrated during the COVID-19 global pandemic [18]. However, there is a lack of understanding as to how such models
operate in combination with the explicit inclusion of contact structures via networks, especially when placed in the context of statistical parameter inference. As such, an investigation is warranted into whether current methods could be improved upon, or otherwise better informed, by incorporating models of epidemics on networks and by including structured population-level information and/or assumptions.
As previously mentioned, existing mean-field models are characterised by varying levels of complexity based on the assumptions used to close the exact system. This often requires making a statement about the links in the network, e.g., the number of edges that form [SI] (susceptible-infected) pairs, or [ISI] (infected-susceptible-infected) triples. For example, contact homogeneity - that is, a fixed number of links between each node in the network - is a common assumption [7, 13]. In this work, we use the 'pairwise' mean-field model, closed at the level of triples. Pairwise models are based on a bottom-up approach starting at node-level and building towards links and thereafter triples. This makes them very intuitive and the 'go-to choice' in many different areas. Moreover, pairwise models extend naturally to networks with heterogeneous degrees, weighted networks or even more complex epidemic dynamics.
The aim of this paper is to investigate to what extent this model can be used for inference purposes, and more specifically, for gaining insights about both the value of the parameters of the disease dynamics and that of the contact network, thus expanding the current body of work in the field (a review of which can be found in [14]).
In Section 2, we outline the principle of epidemics on networks as stochastic processes before detailing the pairwise system of ODEs constituting the so-called mean-field SIR model. Section 3 describes simulated data - namely, the output from the forward model with noise and Gillespie simulations, which we used to benchmark the performance of our inference schemes - as well as three real-world datasets: (i) the 2001 UK foot-and-mouth disease outbreak, (ii) The A(H1N1) outbreak in Washington State University (WSU) campus at Pullman, and (iii) the third wave of COVID-19 in India. Section 4 details the two inference schemes we considered, namely, maximum likelihood estimation and dynamical survival analysis. Section 5 presents a comparative analysis of these two schemes, both when ground-truth data is available (simulated data) and when it is not (real-world datasets). An interpretation of these results is provided in Section 6, along with potential new research directions.
## 2 Model
### Epidemics on networks as a stochastic process
The starting point is the modelling of population contact structures as a network of nodes connected by links which represent possible routes of disease transmission. The network can be represented by an adjacency matrix \(G=(g_{ij})_{i,j=1,2,\ldots,N}\), where \(N\) is the number of nodes, the entries are either zero or one, the matrix is symmetric, and all elements on the main diagonal are zero, i.e., no self-loops are allowed. In this paper, we will focus on regular or homogeneous networks where each node has exactly \(n\) links.
When modelled as a continuous-time Markov Chain, a stochastic susceptible-infected-recovered (SIR) epidemic on a network results in a state space of size \(3^{N}\) since each of the \(N\) nodes can be independently S, I or R, and each state, that is, a labelled network, needs an equation [6]. This of course makes the model intractable both theoretically and numerically, even at modest values of \(N\). Of course, Gillespie [5] simulations can help deal with the problem and enable us to produce
true stochastic paths of the process; see Figure 1 for example. This is based on the simple principle that, in the Markovian framework, infection and recovery are independent Poisson processes with rates \(\tau\) and \(\gamma\), respectively. Here, \(\tau\) is the per-link rate of infection, i.e., the rate at which the I (infected) node in an I-S link infects the S (susceptible) node; this process is network-dependent. All infected nodes recover independently of the network and of each other at rate \(\gamma\).
One way to move beyond simulations while dealing with the challenges of intractable high-dimensional models is to use mean-field models that focus on some expected quantity from the exact system, such as the expected number of infected nodes or the expected number of pairs of various types (e.g., S-S and S-I). One widely used model is the pairwise model [6] which is briefly described below.
### Pairwise model as an approximation of epidemics on networks
In essence, the pairwise model focuses on a hierarchical construction where the expected number of nodes in state \(A\) at time \(t\), \([A](t)\), depends on the expected number of pairs of various types (e.g., \([AB]\)) and these, in turn, depend on triples such as \([ABC]\). Here, the counting is done in all possible directions, meaning that \([SS]\) pairs are counted twice and \([SI]=[IS]\). With this in mind, the pairwise model becomes
\[[\dot{S}] =-\tau[SI];\ \ [\dot{I}]=\tau[SI]-\gamma[I];\ \ [\dot{R}]=\gamma[I], \tag{1}\] \[[\dot{SI}] =-(\tau+\gamma)[SI]+\tau([SSI]-[ISI]);\ \ [\dot{SS}]=-2\tau[SSI]. \tag{2}\]
This system is not self-consistent as pairs depend on triples and equations for these are needed. This, however, would lead to an explosion in system size as triples will then depend on quadruples connected in ways different from the simple line graphs over four nodes. To tackle this dependency on higher-order moments, the triples in equation (2) are closed using the following relation,
\[[ASB]=\frac{n-1}{n}\frac{[AS][SB]}{[S]}, \tag{3}\]
where \(A,B\in\{S,I\}\). Applying this closure leads to
\[[\dot{S}] =-\tau[SI], \tag{4}\] \[[\dot{I}] =\tau[SI]-\gamma[I],\] (5) \[[\dot{R}] =\gamma[I],\] (6) \[[\dot{SI}] =-(\tau+\gamma)[SI]+\tau\frac{n-1}{n}\frac{[SI]([SS]-[SI])}{[S]},\] (7) \[[\dot{SS}] =-2\tau\frac{n-1}{n}\frac{[SS][SI]}{[S]}, \tag{8}\]
which is now a self-contained system. For a chosen set of parameters \((n,\tau,\gamma)\) and initial conditions, the system above can be numerically integrated, furnishing us with \([I](t)\) for example. As it turns out, see Figure (1), this low-dimensional mean-field model is exact in the asymptotic limit of \(N\rightarrow\infty\), and the numerical solution of the PW model is indistinguishable from the average of stochastic realisations. We note that there are necessary and sufficient conditions which guarantee that the PW model is exact in the limit of large network sizes. In particular, it is true for networks with Binomial (with Regular being a special case of Binomial), Poisson and Negative Binomial degree distributions [12, 10]. Using that \(R_{0}=\frac{\tau(n-1)}{\tau+\gamma}\), the closed pairwise equations can be re-parameterised to include \(R_{0}\) explicitly. Using \(\xi\) to denote \(\xi=\frac{n-1}{n}\), the re-parameterised system
now reads
\[[\dot{S}] =-\frac{\gamma R_{0}}{(n-1)-R_{0}}[SI], \tag{9}\] \[[\dot{I}] =+\frac{\gamma R_{0}}{(n-1)-R_{0}}[SI]-\gamma[I],\] (10) \[[\dot{R}] =+\gamma[I],\] (11) \[[\dot{SI}] =-\left(\frac{\gamma R_{0}}{(n-1)-R_{0}}+\gamma\right)[SI]+\xi \frac{\gamma R_{0}}{(n-1)-R_{0}}\frac{[SI]([SS]-[SI])}{[S]},\] (12) \[[\dot{SS}] =-2\xi\frac{\gamma R_{0}}{(n-1)-R_{0}}\frac{[SS][SI]}{[S]}. \tag{13}\]
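As an illustration of how the closed pairwise model can be solved in practice, the following minimal sketch integrates Eqs. (4)-(8) with SciPy. The parameter values and the initial pair counts (which assume initially infected nodes placed uniformly at random) are purely illustrative and not taken from the paper's code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pairwise_rhs(t, y, tau, gamma, n):
    # State: [S], [I], [R], [SI], [SS] as expected counts, Eqs. (4)-(8)
    S, I, R, SI, SS = y
    xi = (n - 1) / n
    dS = -tau * SI
    dI = tau * SI - gamma * I
    dR = gamma * I
    dSI = -(tau + gamma) * SI + tau * xi * SI * (SS - SI) / S
    dSS = -2.0 * tau * xi * SS * SI / S
    return [dS, dI, dR, dSI, dSS]

# Illustrative parameters; initial pairs assume infected nodes placed at random
N, n, tau, gamma, I0 = 10_000, 6, 0.3, 1.0, 100
S0 = N - I0
y0 = [S0, I0, 0.0, n * I0 * S0 / N, n * S0**2 / N]
sol = solve_ivp(pairwise_rhs, (0.0, 25.0), y0, args=(tau, gamma, n), max_step=0.1)
print(sol.y[1].max())  # peak prevalence [I](t)
```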
## 3 Data
Typically, real-world data for inference comes as daily counts of some quantity of interest (e.g., daily new cases or daily deaths) at discrete time steps, that is
\[(\mathbf{y},\mathbf{t})=\{(y_{1},t_{1}),\ldots,(y_{n_{obs}},t_{n_{obs}})\}, \tag{14}\]
where \(y_{i}\in\{0,\ldots,N\}\) are the counts and \(t_{i}\in[0,T]\), with \(0\leq t_{1}<t_{2}<\cdots<t_{n_{obs}}\leq T\), are the corresponding observation times. In this paper, we will consider three types of data, which are described below.
### Data: PWM output with noise
Since the mean-field model is an approximation of the true stochastic process, we start by simulating data directly from the mean-field model, with varying levels of noise dispersion added, in order to assess the ability of the inference schemes to recover the expected parameters, i.e., those used to generate the data (before noise). Since we mainly fit to daily reported cases, we first solve the PW model numerically with a given set of parameters and compute the daily new cases on day \(i\), \([S](i)-[S](i+1)\). Observations begin on the first day, at the earliest, and the initial conditions of the PWM are set at \(t=0\). Noise is introduced using draws from the Negative
Figure 1: Prevalence based on Gillespie simulations. Thin lines/cloud in grey are the outcome of \(\sim 100\) individual realisations (10 networks with 10 realisations each) of an SIR stochastic epidemic on regular networks (\(n=6\)), with their average plotted in thick red lines. Epidemics are started with \(I_{0}=100\) (left panel) and \(I_{0}=250\) infectious nodes chosen at random (middle and right panels) and only epidemics that reach \(2I_{0}\) are kept and averaged over. The numerical solution of the corresponding pairwise model is plotted as a continuous black line. All networks have \(N=10000\) nodes and the recovery rate is \(\gamma=1\). From left to right, \(\tau\) takes value 0.3, 0.4 and 0.5, respectively.
Binomial distribution. This is done such that the mean of the distribution is given by the model and the variance is controlled by the experimenter. For the Negative Binomial, and given a daily new cases count, \(y_{d}\), from the true model without noise, we draw a sample from
\[X\sim NB\left(m(k)=\frac{1}{k},p=\frac{1}{1+ky_{d}}\right), \tag{15}\]
where the mean of this distribution is \(y_{d}\), the variance is given by \(y_{d}+y_{d}^{2}k\) with \(k\) the dispersion parameter, and the negative binomial distribution is interpreted as giving the probability of observing \(y_{d}\) failures given \(m\) successes, that is
\[\mathcal{P}(X=y_{d})=\binom{y_{d}+m-1}{y_{d}}p^{m}(1-p)^{y_{d}}. \tag{16}\]
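A minimal sketch of this noise model is given below. It draws noisy counts with mean \(y_{d}\) and variance \(y_{d}+ky_{d}^{2}\) via the equivalent gamma-Poisson mixture; the daily case values used in the example are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_nb_noise(daily_cases, k):
    """Noisy counts with mean y_d and variance y_d + k*y_d^2 (Eqs. (15)-(16)),
    drawn via the equivalent gamma-Poisson mixture."""
    y = np.asarray(daily_cases, dtype=float)
    lam = rng.gamma(shape=1.0 / k, scale=k * y)   # mean y, variance k*y^2
    return rng.poisson(lam)

# Assumed daily new cases from the PW model (illustration only)
y_model = np.array([1.0, 3.2, 9.5, 22.0, 41.0, 55.0, 48.0, 30.0, 15.0, 6.0])
print(add_nb_noise(y_model, k=0.0005))
```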
### Data: stochastic simulations
Since the real challenge is to fit to stochastic data, in the first instance, we consider simulated data constructed by using the Gillespie algorithm [5] for a stochastic SIR epidemic on an explicit network of contacts. The idea behind the simulation is rather simple. Each node has its own rate, resulting in a rate vector \((r_{i})_{i=1,2,\ldots,N}\). A susceptible node with \(m\) infected neighbours will have rate \(\tau m\) and an infected node will have rate \(\gamma\). Recovered or removed nodes have rate zero as they no longer play a role in the dynamics. The time to the next event is drawn from an exponential distribution with rate \(R=\sum_{i}r_{i}\), and the event itself is chosen at random from all \(N\) possible events, proportionally to the rates, e.g., event \(j\) is chosen with probability \(r_{j}/R\). Typical simulation plots are shown in Figure 1.
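The following sketch implements this event-driven simulation for a regular network; it is a simplified, unoptimised illustration (all rates are recomputed at every event) rather than the code used to produce Figure 1, and all parameter values are illustrative.

```python
import numpy as np
import networkx as nx

def gillespie_sir(G, tau, gamma, i0, rng):
    """Unoptimised Gillespie SIR on a network (illustrative sketch)."""
    N = G.number_of_nodes()
    state = np.zeros(N, dtype=int)                       # 0 = S, 1 = I, 2 = R
    state[rng.choice(N, size=i0, replace=False)] = 1
    t, times, prevalence = 0.0, [0.0], [i0]
    while True:
        rates = np.zeros(N)
        for v in range(N):
            if state[v] == 0:                            # S: rate tau * (# infected neighbours)
                rates[v] = tau * sum(state[u] == 1 for u in G[v])
            elif state[v] == 1:                          # I: recovery at rate gamma
                rates[v] = gamma
        total = rates.sum()
        if total == 0.0:                                 # no infected nodes left
            break
        t += rng.exponential(1.0 / total)                # time to next event
        v = rng.choice(N, p=rates / total)               # event chosen proportionally to rates
        state[v] += 1                                    # S -> I or I -> R
        times.append(t)
        prevalence.append(int((state == 1).sum()))
    return np.array(times), np.array(prevalence)

rng = np.random.default_rng(0)
G = nx.random_regular_graph(d=6, n=500, seed=0)          # homogeneous (regular) network
times, I = gillespie_sir(G, tau=0.3, gamma=1.0, i0=5, rng=rng)
print(I.max(), times[-1])
```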
### Data: real epidemic data
In addition to assessing the robustness of the inference schemes on synthetic data for which ground truth is known, we considered real-world outbreak data from three different data sets:
1. _The 2001 Foot-and-mouth (FMD) disease outbreak in the UK._ The 2001 FMD outbreak in the UK started towards the end of February in 2001 and ended in September 2001, impacting more than 2000 farms. Control efforts resulted in the culling of millions of livestock [3], see Figure 15.
2. _The A(H1N1) outbreak in Washington State University (WSU) campus at Pullman, Washington_. In April 2009, there was an outbreak of influenza virus in Veracruz, Mexico. After this initial outbreak, a new strain of the virus, A(H1N1)pdm09, started to spread around the world in the autumn. See [19, 9] for more details about this triple reassortment virus, which spread even among young, healthy adults. As a result, multiple outbreaks on college campuses were seen, one of which was on the Washington State University (WSU) campus in Pullman, Washington in late August 2009. Within the space of three months, almost 2300 students came to the campus health centre with influenza-like illnesses that were treated as influenza A(H1N1) infections. Figure 15 shows the daily new cases starting on 22 August 2009.
3. _The third wave of COVID-19 in India._ The COVID-19 pandemic has killed millions of people across the globe. Here, we consider the third wave in India. Similar to the other two datasets, we have daily incidence and prevalence of cases, recoveries and deaths from 15 February 2021 to 31 June 2021 (see Figure 15).
## 4 Inference methods
While most inference methods are based on the optimisation of a likelihood function, the likelihood function itself can be formulated based on different considerations of the underlying model and data. The most direct method typically focuses on matching model output and data as closely as possible, i.e., it is an error minimisation process. More sophisticated methods consider the underlying stochastic model in a more direct way and involve the timing of events, even if simplifying assumptions may be needed. To ensure that investigation into the possibility of inferring epidemic and network parameters using the pairwise model is not affected or biased by the inference scheme used, we consider two different methods as described below.
### Maximum-likelihood-based approach
When fitting data produced by the PW model with a likelihood based on the PW model, we are simply testing how well the true parameters can be recovered. This scenario does not require any approximation. When fitting to stochastic data from an exact epidemic or a real epidemic, however, we are making the assumption that the exact forward model can be approximated by the PW model.
In this paper, we use the negative-binomial distribution as the likelihood of choice because of its flexibility. The distribution models the number of failures given a target number of successes, \(m\), and the probability of success in each experiment, \(p\). For these parameters, we have the expressions:
\[m(k)=\frac{1}{k},\quad p(y(\theta,t_{i}),k)=\frac{1}{1+ky(\theta,t_{i})}, \tag{17}\]
with \(k>0\) being the dispersion parameter, which we also attempt to infer. In this case, the distribution has mean \(y_{\theta}(t_{i})\) and variance \(y_{\theta}(t_{i})+y_{\theta}(t_{i})^{2}k\). This yields the following likelihood
\[\mathcal{L}_{NB}((\theta,k)|(\mathbf{y},\mathbf{t}))=\prod_{i=1}^{n_{obs}}\binom{y_{i}+m-1}{y_{i}}p_{i}^{m}(1-p_{i})^{y_{i}},\qquad p_{i}=p(y(\theta,t_{i}),k), \tag{18}\]
Using \(\mathcal{L}_{NB}\) effectively decouples the mean and the variance of the distribution describing the data. This is expected to be sufficient to capture the variability of the data resulting from either natural stochasticity or variability due to how data was collected.
Parameter estimation was performed by minimising the negative log-likelihood using the widely used direct search Nelder-Mead method. Because this technique can converge to non-stationary points, fifteen initial conditions were used for each estimation process. To avoid biasing the search, initial conditions were drawn using Latin hypercube sampling, maximising the minimum distance between points. Because Latin hypercube sampling cannot prevent inappropriate parameter settings, initial conditions were only accepted if the ratio \(\tau/\gamma\) was not too large. Specifically, we enforced that the denominator in the expression of \(\tau\), i.e., \(n-1-R_{0}\), was greater than or equal to 1.5 (chosen empirically). On average, 10 out of 15 initial conditions survived. Code and data used to produce the results in Section 5 are available from [https://github.com/berthouz/EpiPWMInf](https://github.com/berthouz/EpiPWMInf) for fitting the ODE realisations with negative binomial noise, [https://github.com/berthouz/EpiPWMInfwI0](https://github.com/berthouz/EpiPWMInfwI0) for fitting the Gillespie stochastic realisations and [https://github.com/berthouz/EpiPWMInfwI0N](https://github.com/berthouz/EpiPWMInfwI0N) for fitting the real-world datasets. The key difference between these will be explained in the relevant results sections.
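For concreteness, the sketch below combines the closed pairwise model, the negative-binomial likelihood of Eq. (18) and a single Nelder-Mead run. It is a simplified illustration rather than the released code: the initial conditions, observation horizon and synthetic counts are assumed, and only one starting point is used instead of the Latin hypercube design described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from scipy.stats import nbinom

def daily_cases(theta, t_grid, N, I0=1.0):
    """Daily new cases [S](i) - [S](i+1) from the closed PW model, theta = (R0, n, gamma)."""
    R0, n, gamma = theta
    tau = gamma * R0 / (n - 1 - R0)
    xi = (n - 1) / n

    def rhs(t, y):
        S, I, SI, SS = y
        return [-tau * SI,
                tau * SI - gamma * I,
                -(tau + gamma) * SI + tau * xi * SI * (SS - SI) / S,
                -2.0 * tau * xi * SS * SI / S]

    S0 = N - I0
    y0 = [S0, I0, n * I0 * S0 / N, n * S0**2 / N]        # assumed initial pair counts
    t_eval = np.append(t_grid, t_grid[-1] + 1)
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval)
    S = sol.y[0]
    return np.maximum(S[:-1] - S[1:], 1e-9)

def neg_log_lik(params, counts, t_grid, N):
    R0, n, gamma, k = params
    if min(R0, gamma, k) <= 0 or (n - 1 - R0) < 1.5:     # admissibility check, as in the text
        return np.inf
    mu = daily_cases((R0, n, gamma), t_grid, N)
    m, p = 1.0 / k, 1.0 / (1.0 + k * mu)                 # Eq. (17)
    return -np.sum(nbinom.logpmf(counts, m, p))

# Assumed synthetic counts and a single (illustrative) Nelder-Mead start
t_grid = np.arange(0, 100)
counts = np.round(daily_cases((2.0, 6.0, 1 / 14), t_grid, 10_000)).astype(int)
fit = minimize(neg_log_lik, x0=[1.5, 8.0, 0.1, 0.001],
               args=(counts, t_grid, 10_000), method="Nelder-Mead")
print(fit.x)  # estimates of (R0, n, gamma, k)
```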
### Dynamical Survival Analysis
The statistical methodology of dynamical survival analysis (DSA) has recently been developed in a series of papers [9, 4, 22, 8] to address some of the shortcomings of traditional inference methods used in infectious disease epidemiology. In essence, the method combines classical dynamical systems theory with tools from survival analysis. The crux of the methodology lies in interpreting the mean-field ODEs (representing population proportions) as describing probability distributions of transfer times, such as the time to infection or the time to recovery. Such a change in perspective allows one to use population-level mean-field ODEs to describe the dynamics of scaled compartment sizes as well as to write a likelihood function for individual-level trajectories based on transfer times, which may be censored, truncated or even aggregated.
To apply the DSA methodology, let us first define \([D]=[SI]/[S]\), which satisfies
\[[\dot{D}]=\tau(1-\xi)[D]^{2}+\left(\xi n\tau[S]^{(2\xi-1)}-\tau- \gamma\right)[D],\]
with initial condition \([D](0)=n\rho\) and \([S](0)=1\), where, as before, \(\xi=(n-1)/n\) and \([S]\) satisfies the pairwise mean-field equation with \([S](0)=1\) and \([I](0)=\rho\). The reason we normalize the system so that \([S](0)=1\) will be clear when we describe the DSA likelihood. Now, dividing the above equation by \([\dot{S}]=-\tau[S][D]\), solving for \([D]\) in terms of \([S]\) with initial condition \([S](0)=1\), and then putting the solution back into \([\dot{S}]=-\tau[S][D]\), we get
\[-[\dot{S}]=n\tau\left(1-[S]^{\xi}\right)[S]^{\xi}+\frac{\gamma+ \tau}{1-\xi}[S]\left(1-[S]^{\xi-1}\right)+n\tau\rho[S]^{\xi},\]
with initial condition \([S](0)=1\). In essence, DSA interprets the susceptible curve as an improper survival function for the time to infection of a randomly chosen initially susceptible individual. That is, \([S](t)=\mathsf{P}(T_{I}>t)\), where the random variable \(T_{I}\) describes the time to infection. Because \([S](t)\) is interpreted as a survival function, we set \([S](0)=1\). This survival function is improper because \(\lim_{t\rightarrow\infty}[S](t)=\mathsf{P}(T_{I}=\infty)>0\). However, we can transform it into a proper survival function by conditioning it on a final observation time \(T\in(0,\infty)\). We define the probability density function \(h_{T}\) on \([0,T]\) as follows:
\[h_{T}(t)=-\frac{[\dot{S}](t)}{(1-[S](T))}.\]
Given a random sample of infection times \(t_{1},t_{2},\ldots,t_{n}\), the likelihood contribution of the infection times is given by
\[\ell_{I}(\xi,\tau,\gamma,\rho\mid t_{1},t_{2},\ldots,t_{n})=\prod _{i=1}^{n}h_{T}(t_{i}). \tag{19}\]
Note that DSA does not require knowledge of removal times. However, if individual recovery or removal times are known, they may be used to enhance the quality of inference. The likelihood contribution of a random sample of individual recovery times \(t^{\prime}_{1},t^{\prime}_{2},\ldots,t^{\prime}_{m}\) is given by
\[\ell_{R}(\xi,\tau,\gamma,\rho\mid t^{\prime}_{1},t^{\prime}_{2}, \ldots,t^{\prime}_{m})=\prod_{i=1}^{m}r_{T}(t^{\prime}_{i}), \tag{20}\]
where
\[r_{T}(t)=\frac{\int_{0}^{t}h_{T}(u)\gamma e^{-\gamma(t-u)}\mathrm{d}u}{\int_ {0}^{T}\int_{0}^{t}h_{T}(u)\gamma e^{-\gamma(t-u)}\mathrm{d}u\mathrm{d}t}\]
is the density of the individual recovery times. The density \(r_{T}\) is a convolution of two densities: \(h_{T}\) for the time of infection and the density of an exponential distribution with rate \(\gamma\) corresponding to the infectious period. In practice, it is convenient to differentiate the density \(r_{T}(t)\) with respect to \(t\) and then solve a system of ODEs.
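A minimal numerical sketch of this construction is given below: it solves the scalar ODE for \([S]\), forms the conditioned density \(h_{T}\), and checks that it integrates to one. The parameter values are illustrative and are not estimates from any of the datasets.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def dsa_infection_density(n, tau, gamma, rho, T, num=500):
    """Solve the scalar DSA ODE for [S] (with [S](0) = 1) and return h_T on [0, T]."""
    xi = (n - 1) / n

    def dS(t, y):
        S = y[0]
        return [-(n * tau * (1.0 - S**xi) * S**xi
                  + (gamma + tau) / (1.0 - xi) * S * (1.0 - S**(xi - 1.0))
                  + n * tau * rho * S**xi)]

    t = np.linspace(0.0, T, num)
    sol = solve_ivp(dS, (0.0, T), [1.0], t_eval=t, rtol=1e-8, atol=1e-10)
    S = sol.y[0]
    dS_vals = np.array([dS(ti, [si])[0] for ti, si in zip(t, S)])
    h_T = -dS_vals / (1.0 - S[-1])          # conditioned density of infection times
    return t, S, h_T

# Illustrative parameters (not estimates from any dataset)
t, S, h = dsa_infection_density(n=6, tau=0.1, gamma=1 / 7, rho=1e-4, T=150.0)
print(trapezoid(h, t))                      # h_T integrates to (approximately) one
```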
Finally, the DSA likelihood function based on a random sample of infection times \(t_{1},t_{2},\ldots,t_{n}\) and a random sample of recovery times \(t^{\prime}_{1},t^{\prime}_{2},\ldots,t^{\prime}_{m}\) is given by
\[\ell(\xi,\tau,\gamma,\rho\mid t_{1},t_{2},\ldots,t_{n};t^{\prime}_{1},t^{ \prime}_{2},\ldots,t^{\prime}_{m})=\ell_{I}(\xi,\tau,\gamma,\rho\mid t_{1},t_{2 },\ldots,t_{n})\ell_{R}(\xi,\tau,\gamma,\rho\mid t^{\prime}_{1},t^{\prime}_{2}, \ldots,t^{\prime}_{m}). \tag{21}\]
For practical convenience (and as with the MLE-based approach), we work with the loglikelihood function, i.e., the logarithm of the likelihood function, rather than the likelihood function. It is, of course, possible to maximise the DSA likelihood function \(\ell\) in equation (21) to get point estimates of the parameter set \((\xi,\tau,\gamma,\rho)\). Such a procedure would then be called a maximum likelihood approach and the difference between the two inference schemes discussed here would simply be that they maximise two different likelihood functions. An alternative way to perform parameter inference using DSA is to adopt a semi-Bayesian approach via a Laplace approximation to the posterior. In this paper, we adopted a fully Bayesian approach. Specifically, we drew posterior samples of \((\xi,\tau,\gamma,\rho)\) using a Hamiltonian Monte Carlo (HMC) scheme implemented in the _Stan_ programming language [21] interfaced with **R**. The code will be made available upon request.
Some of the datasets used in this paper (see relevant sections) provides daily new infection cases, rather than infection and/or recovery times. As mentioned earlier, the DSA methodology does not require knowledge of removal times. When these are not available, one can simply work with the likelihood function \(\ell_{I}\) (or the corresponding loglikelihood) in equation (19). Infection times, in turn, can be constructed from daily new cases as follows: If we observe 10 new cases on day \(t\), then we simply draw 10 random samples from a uniform distribution over \([t-0.5,t+0.5]\). By repeating this procedure for all days for which daily new case counts are available and combining the individual infection times (samples from the uniform distributions), we can transform the original count data into data on infection times. A random sample of those infection times can then be fed into the likelihood function \(\ell_{I}\) in equation (19). In datasets in which daily recoveries are available, we can construct individual recovery times in a similar fashion: If we observe 5 recoveries on day \(t\), we draw a random sample of size 5 from a uniform distribution over \([t-0.5,t+0.5]\). We repeat this procedure for all days for which we have daily number of recoveries available, and then combine the individual recovery times. A random sample of this data on individual recovery times is then fed into the likelihood function \(\ell_{R}\) in equation (20).
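The sketch below illustrates this construction of (approximate) individual event times from daily counts; the counts used are assumed, and the subsample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def counts_to_event_times(daily_counts, sample_size=None):
    """Spread each day's count uniformly over [t-0.5, t+0.5] and optionally subsample."""
    times = np.concatenate([
        rng.uniform(day - 0.5, day + 0.5, size=int(c))
        for day, c in enumerate(daily_counts, start=1) if c > 0
    ])
    if sample_size is not None and sample_size < times.size:
        times = rng.choice(times, size=sample_size, replace=False)
    return np.sort(times)

# Assumed daily new case counts (illustration only)
daily_new_cases = [2, 5, 11, 23, 40, 38, 25, 12, 6, 2]
infection_times = counts_to_event_times(daily_new_cases, sample_size=100)
print(infection_times.size, infection_times[:5])
```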
## 5 Results
### ML-based inference using data produced by the PW model
As a very first step toward assessing the ability of the inference scheme to recover the expected parameters, we first fitted the PW model (see Eqs. (9)-(13)) to daily cases data generated by the PW model and contaminated by noise whose dispersion was manipulated as described below. Here, initial conditions for parameters \(R_{0}\), \(k\), \(n\) and \(\gamma\) were taken from \([0.2,10]\), \([0.00001,0.05]\), \([3,20]\) and \([0.001,0.1]\), respectively.
The top row of Figure 2 shows the histograms of parameters obtained when fitting \(M=1000\) data series, i.e., solving Eqs. (9)-(13) with true \([R_{0},n,\gamma,I_{0}]=[2,6,1/14,1]\) and \(N=10000\). Here,
noise was simulated according to Eq. (15) using \(k=0.0005\) (i.e., very low dispersion). These results confirm that the mean values are close to the true parameters, which is expected because the value of \(k\) is very small.
To illustrate the sensitivity of the estimation process to the value of the dispersion parameter, we repeated the fitting process when considering 5 levels of dispersion, from \(0.0005\) to \(0.01\). As shown by the bottom left panel in Figure 2, as the dispersion level increases, so does the range of inferred \(R_{0}\) values. Nevertheless, the mean estimated value remains close to the true value in all cases.
Likewise, we found the inference process to be robust to the choice of time horizon (full epidemic \(t_{max}=150\), partial epidemic including the peak \(t_{max}=80\), epidemic up to the peak \(t_{max}=70\), partial epidemic not including peak \(t_{max}=60\)). As shown by the bottom right panel in Figure 2, as the time horizon reduces, the range of inferred \(R_{0}\) values increases but the average remains close to the true value. Importantly, whilst the inclusion of the peak does narrow the range of inferred values, it is not necessary for the inference process to correctly recover the expected value of \(R_{0}\).
Figure 2: Inferring \([R_{0},n,\gamma,k]\) based on \(M=10^{3}\) data realisations generated using \([R_{0},n,\gamma,k]=[2,6,1/14,5\times 10^{-4}]\) with \(N=10^{4}\), \(I_{0}=1\). Dashed lines indicate the true values of the parameters. Expected values were \([1.999,6.005,0.0714,0.00061]\), respectively.
### Identifiability
As Fig. 3 shows, the inferred values of \(\tau\) and \(n\) describe a hyperbola-like curve, which indicates a clear identifiability problem; that is, the values of \(\tau\) and \(n\) cannot be disentangled. However, we make two important remarks. First, it is possible to characterise this hyperbola analytically. Second, the values of \(\tau\) and \(n\) combine favourably into the expression for \(R_{0}\), whose inferred values are well behaved (see bottom panels in Fig. 2).
To formally characterise the hyperbola, we rely on quantities that can be derived analytically from the PW model. These are the leading eigenvalue (or growth rate under some transformation) and the final epidemic size. These are given below in terms of \(\tau\) as a function of \(n\).
\[\tau =\frac{\lambda_{L}^{*}+\gamma^{*}}{n-2}, \tag{22}\] \[\tau =\gamma\frac{{s_{\infty}^{*}}^{1/n}-{s_{\infty}^{*}}^{2/n}}{{s_{ \infty}^{*}}^{2/n}-{s_{\infty}^{*}}}, \tag{23}\]
where \(\lambda_{L}^{*}\) and \({s_{\infty}^{*}}={S_{\infty}^{*}}/{N}\) are obtained by setting all parameters to some desired values, \((n,\tau,\gamma)=(n^{*},\tau^{*},\gamma^{*})\); note that often \(R_{0}\) instead of \(\tau\) is given, with knowing the value of either being sufficient to have a well-defined system. The growth rate follows from the linear stability analysis of the pairwise model at the disease-free equilibrium, while the implicit formula for the final epidemic size can be found in [13] and is used here with initial conditions corresponding to the disease-free steady state.
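The two unidentifiability curves can be evaluated directly from Eqs. (22)-(23), as in the sketch below. Here \(\lambda_{L}^{*}\) is obtained by inverting Eq. (22) at the true \((n^{*},\tau^{*},\gamma^{*})\), while the final susceptible fraction \(s_{\infty}^{*}\) is treated as an assumed input (e.g., read off a long run of the PW model); all numerical values are illustrative.

```python
import numpy as np

def tau_from_growth_rate(n, lambda_star, gamma_star):
    """Eq. (22): tau(n) matching the target growth rate."""
    return (lambda_star + gamma_star) / (n - 2)

def tau_from_final_size(n, s_inf_star, gamma):
    """Eq. (23): tau(n) matching the target final epidemic size."""
    s = s_inf_star
    return gamma * (s**(1.0 / n) - s**(2.0 / n)) / (s**(2.0 / n) - s)

# Assumed targets: (n*, tau*, gamma*) define lambda_L* via Eq. (22); s_inf* is an
# assumed final susceptible fraction (e.g. read off a long run of the PW model).
n_star, tau_star, gamma_star = 6, 0.1, 1 / 14
lambda_star = tau_star * (n_star - 2) - gamma_star
s_inf_star = 0.2
n_grid = np.linspace(3.0, 30.0, 100)
print(tau_from_growth_rate(n_grid, lambda_star, gamma_star)[:3])
print(tau_from_final_size(n_grid, s_inf_star, gamma_star)[:3])
```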
### ML-based inference using data from exact stochastic simulations
Five hundred Gillespie realisations were generated using parameters \([R_{0},n,\gamma,I_{0}]=[2,6,1/7,1]\) and \(N=10000\). Of these 500 realisations, \(M=370\) realisations did not die out. Figure 4 shows the histograms of the parameters estimated from fitting those realisations. Unlike with noisy realisations of the ODE, we also subjected \(I_{0}\) to the inference process. Results (not shown)
Figure 3: Scatter plots of the parameter estimates on the \(n\), \(\tau\) plane with the two unidentifiability curves calculated as per Eqs. 22 (dotted line), and 23 (dashed line). The star denotes the true values, i.e., true \(n\) and calculated value of \(\tau\) given true values of \(R_{0}\) and \(n\). Main panel: scatter plot when the full epidemic is used for inference. Inset: scatter plot when the time horizon does not include the peak, i.e., \(t_{max}=60\). Note that an arbitrary cut-off of \(n<500\) was used for clarity of the plot.
obtained when assuming \(I_{0}=1\) during estimation revealed that the inclusion of \(I_{0}\) in the estimation process was key to being able to account for the stochasticity in the onset of the epidemic, or more precisely, the time elapsed before the growth becomes exponential. For the purpose of initialising Latin hypercube sampling, values were taken in \([0.01,10]\). This particular choice has no bearing on our findings (results not shown). The mean of the estimated \(I_{0}\) was found to be \(1.355\), i.e., close to the expected \(1\); however, it showed a broad distribution of values, ranging from \(0.012\) to \(5.534\).
Comparing these histograms to those shown in Figure 2, we find that whilst the mean estimated values do not significantly differ, the variance in estimation is, not surprisingly, substantially larger. To quantify this more precisely, we calculated the mean (and standard deviation) of the confidence intervals on \(R_{0}\) over all \(M=370\) realisations. Specifically, we determined the nominal \(99\%\) profile likelihood confidence interval widths for \(R_{0}\) as described in [11]. Confidence intervals are \(0.534\pm 0.203\) compared to \(0.498\pm 0.071\) when fitting the ODE realisations with noise (dispersion level \(k=0.0005\)). These results are representative of those obtained when calculating confidence intervals for the other parameters (not shown).
### Inference based on DSA
Before describing the results of DSA on the synthetic data, we highlight that, unlike the MLE-based approach which either assumes or infers both population size and initial number of infected individuals (see also Section 5.5.1), DSA inherently assumes an infinite population size (for both susceptible and infected individuals). Therefore, we do not infer the initial number of infected individuals. However, the ratio of initially infected to susceptible individuals, the parameter \(\rho\), can be meaningfully inferred. In fact, having observed a finite number of infections in a given observation window \([0,T]\), DSA is also able to infer an _effective population size_ using the discount
Figure 4: Inferring \([I_{0},R_{0},n,\gamma,k]\) based on \(M=370\) data realisations generated using \([I_{0},R_{0},n,\gamma]=[1,2,6,1/7]\) with \(N=10^{4}\). Dashed lines indicate the true values of the parameters. Mean estimated values were \([1.355,2.029,6.522,0.141,0.0071]\), respectively.
estimator [9, 4]:
\[n_{T}=\frac{k_{T}}{1-[S](T)}, \tag{24}\]
where \(k_{T}\) is the number of infections observed by time \(T>0\). It should be noted that estimates of the effective population size depend on the observation time \(T\), and could be substantially different from the true population size when applying the method to a real epidemic. Nevertheless, as evidenced by the posterior distributions of the parameters \((\tau,R_{0},n,\gamma,\rho,n_{T})\) shown in Figure 5, for this synthetic dataset, the method is able to infer the parameters well. The posterior distributions are unimodal, centred around the true values of the parameters. Here, at first random samples of individual infection and recovery times (of size 5000 each) were constructed from the count dataset (one single trajectory of the Gillespie simulation) by drawing samples from appropriate uniform distributions (see Section 4.2). These random samples were then fed into the HMC scheme using four parallel Markov chains. Uninformative, flat priors were used.
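As a small illustration, Eq. (24) translates directly into code; the values below are assumed rather than taken from any fit.

```python
def effective_population_size(k_T, S_T):
    """Eq. (24): discount estimator from k_T observed infections and the fitted [S](T)."""
    return k_T / (1.0 - S_T)

print(effective_population_size(k_T=5000, S_T=0.5))  # assumed illustrative values
```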
For this dataset, the parameter values estimated by both approaches are comparable. However, it is important to note that the two methods adopt two quite different likelihood constructions. Whilst the MLE-based approach relies on counts and the size of the population to construct the likelihood function, the DSA likelihood function only requires a random sample of infection times (and recovery times, if available). In other words, whilst the MLE-based approach assigns a likelihood to the epidemic trajectories, DSA identifies the probability laws of individual transfer times (infection and recovery times). These are often, even if censored, or truncated, more reliable and easily observed or derived statistical data than counts. For instance, even when we have partially observed count data on daily new infections, one can create a random sample of infection times (possibly censored/truncated). Even when the entire population is _not_ monitored and only a set of randomly chosen individuals are followed through time and their transfer times are noted, the DSA methodology is still applicable. This advantage of DSA is particularly important when we fit the PW model to real epidemic data, which we do in the next section.
Figure 5: Posterior distributions of \((\tau,R_{0},n,\gamma,\rho,n_{T})\) using the DSA method on the synthetic data. The red triangles indicate the true values of the parameter. The means and the medians of the posterior distributions are \((0.0994,1.992,5.997,0.144,0.0002,10049)\) and \((0.0989,1.989,5.891,0.144,0.0002,10047)\), respectively.
### Inference from real-world data
#### 5.5.1 System size and the MLE approach
In deploying the MLE approach to the above data, we used our knowledge of the true value of \(N\). With real-world datasets, however, such information is typically not available. Whilst this is not an issue for DSA since it can infer an effective system size, it is for the MLE-based approach particularly in light of the unidentifiability issue discussed in 5.2. In what follows, we infer the value of \(N\) along with the other parameters, accepting that the increase in dimensionality of the parameter space will likely exacerbates unidentifiability. Here, we investigate the robustness of the inference process when inferring known parameters on the stochastic realisations presented in Section 5.3. The data presented in Figure 6 result from the 289 out of a possible 370 realisations who satisfied the following conditions: (a) good fit (as quantified by the ratio 1.2 to the smallest likelihood value 217.25 obtained over the 370 realisations) - this excluded 66 estimates, (b) reasonable \(n\) (i.e., \(n<500\) arbitrarily - this excluded 13 estimates) and (c) reasonable \(\gamma\) (i.e., \(\gamma<1\) - this excluded a further 2 estimates - interestingly those estimates had very large \(N\), specifically 26038.74 and 30168.98 but still showed very low nLL (244.02 and 228.3 respectively). The median values for the 6 parameters were: \(I_{0}=1.256792\), \(R_{0}=2.11\), \(n=8.84\), \(\gamma=0.129\), \(k=0.00002\) and \(N=9877.83\). These values are reasonably close to the theoretical values (\(I_{0}=1\), \(R_{0}=2\), \(n=6\), \(\gamma=0.14\) and \(N=10000\)) which is encouraging. In particular, the percentage error in \(N\) is under \(1.5\%^{\prime}\) (For reference, the percentage error for DSA on the same data is in the order of \(0.01\%\)). Nevertheless, as shown by Figure 6, there is substantial variance in the estimates including significantly higher values of both \(N\) and \(R_{0}\) (e.g., 70 estimates have \(R_{0}>4\)) despite excellent fits.
To illustrate this point, we plotted the estimates on the \((\tau,n)\) plane (see Figure 7) and confirmed that they conform to the unidentifiability curve previously identified. The inset shows two stochastic realisations and the corresponding fits, with one fit producing an estimate for the degree \(n\) close to the true value (6) and one producing an estimate orders of magnitude larger (275). As shown by the
Figure 6: Inferring distributions for \([I_{0},R_{0},n,\gamma,k,N]\) for the stochastic realisations. The ground truth parameter values (\(R_{0}\), \(n\), \(\gamma\) and \(N\)) are denoted by vertical dashed lines. Data shown correspond to 289 out of the 370 stochastic realisations (see detail in text).
Figure (as well as the likelihood values), the fits are equally excellent. Inferred parameters for the data with the expected degree were: \(I_{0}=2.38\), \(R_{0}=2.18\), \(n=6.05\), \(\gamma=0.111\) and \(N=9689.76\), i.e., close to the ground truth data. In contrast, the inferred parameters for the data with the large degree were: \(I_{0}=0.24\), \(R_{0}=10.56\), \(n=274.92\), \(\gamma=0.024\) and \(N=9281.53\).
#### 5.5.2 FMD data
Unlike DSA, the MLE approach only provides a single point estimate. This makes it difficult to provide a meaningful comparison of the two methods. To mitigate this issue, we repeated the MLE inference process 100 times, each time using a different set of initial conditions. To construct the equivalent of a posterior, we included all parameter values obtained in each of the 100 rounds, provided the nLL was sufficiently close to the best nLL over the 100 rounds. The number of estimates excluded for each dataset will be reported; it highlights the fact that the search algorithm can get stuck in very sub-optimal local minima.
Histograms of inferred parameters for the FMD dataset using the MLE approach are shown in Figure 8. 11 out of 100 estimates were excluded because of an anomalous outcome of the inference process. The estimates with the lowest nLL are \(I_{0}=10.54\), \(R_{0}=2.58\), \(n=153.67\), \(\gamma=0.0723\), \(k=0.010\), and \(N=1817.2\). There is quite a bit of dispersion around the parameters, with fairly fat tails. For example, whilst the median for \(R_{0}\) (2.71, see Table 1) is relatively close to the best estimate, we also observe some fairly large values (in fact 10 out of 100 estimates were excluded because of \(R_{0}>10\)). The best and median estimate for \(N\) was 1817 and 1747 respectively. This number is very likely implausible as well more than 2000 farms will have been involved in the epidemic, but see DSA results below. Likewise the inferred average degree seems far overinflated. The value of \(\gamma\eqsim 0.07\) implies 14 days for the infection period. Note that previous studies, see [4] for example, have reported a mean of 10.2 days. Importantly, the fits are good with all (accepted) estimates showing a very narrow range of nLL values (from 233.03 to 248.67 with a mean of 236.31 and a std of 4.06). This once again provides evidence of the fact that the MLE approach ascribes
Figure 7: Main panel: Scatter plot of the parameter estimates on the \(n\), \(\tau\) plane with the two unidentifiability curves calculated as per Eqs. 22 (dotted line), and 23 (dashed line). The star denotes the true values, i.e., true \(n\) and calculated value of \(\tau\) given true values of \(R_{0}\) and \(n\). Only those estimates who did not provide a good fit, as per the criterion above) were excluded, resulting in 304 surviving estimates. Inset: Empirical data and fit for two stochastic realisations corresponding to the triangles in the main panel with two significantly different inferred degree \(n\) (see detail in text).
a likelihood to the trajectory produced by the inferred parameters rather than to the parameters themselves.
The posterior distributions obtained by the DSA method on the FMD dataset are shown in Figure 9. It is important to note that, unlike with the MLE approach, these results were obtained when using an informative prior, an exponential distribution with mean 10.2 days, for the \(\gamma\) parameter, following the analysis in [4]. The posterior distributions are unimodal. The mean estimates are consistent with previously reported values, for example in [4]. Interestingly, and as with the MLE approach, the estimated effective population size is less than 2000. This is not to be confused with the number of farms, however (see brief explanation in Section 5.5.5).
#### 5.5.3 H1N1-N18234
The A(H1N1) dataset presents an interesting challenge as it has a long, persistent tail with visible stochastic effects. We therefore present two sets of results: one where we infer parameters on the full dataset (i.e., including the tail) and one where we restrict to \(T=42\). Figure 10 shows the results of the MLE-based approach for both scenarios. As clearly evidenced by the bottom right panel of Figure 15, when the full horizon is considered, the fits are poor, the noisy tail seemingly obfuscating the true trajectory of the epidemic. Not surprisingly, the parameter estimates appear meaningless and highly variable from one round of inference to the other despite similar nLL values (see Table 1). When restricting to \(T=42\), the fits are good and the parameter estimates are slightly better behaved, albeit not unimodal and with implausibly large \(n\) considering the inferred population size \(N\). In fact, only 51 out of 100 parameter estimates survived once we excluded 3 estimates for being poor fits, 13 for excessive values of \(R_{0}\) (\(>10\)) and 33 for excessive values of \(\gamma\) (\(>1\)). Interestingly, we note the high value of \(k\) inferred in both scenarios, with MLE correctly recognising the high dispersion of the counts.
When deploying DSA, once again, a prior was used for \(\gamma\) (\(\gamma^{-1}=5.5\)) based on published literature (see [19, 9]). Figures 11 and 12 show the posterior distributions of the parameters \((\tau,R_{0},n,\rho,n_{T})\)
Figure 8: Distributions of \([I_{0},R_{0},n,\gamma,k,N]\) using MLE on the FMD data using 100 rounds of inference with different initial conditions. The median values are listed in Table 1 and compared with the DSA approach in Section 5.5.5. Five estimates for which \(n>100\) (154, 156, 279, 294 and 368) were excluded from the figure (but not the statistics) for improved readability of the histogram.
based on the full and partial data respectively. As with the MLE-based approach, when fitting to the full data, the DSA fit is poor, and in fact, very similar to that of the MLE approach (see bottom right panel of Figure 15). When removing the noisy tail of the data, the quality of inference improves significantly with both MLE and DSA producing near identical fits (bottom left panel of Figure 15). However, unlike with the FMD dataset, the inferred parameters are quite different although interestingly the ML-estimated population size and the DSA effective size are very similar (see Table 1).
#### 5.5.4 COVID-19 in India
Figure 13 shows the histograms of the estimates obtained by the ML-based approach on the final dataset. Here, unlike with the previous dataset, there was high consistency between estimates over the 100 rounds, with no exclusions needed. Curiously, this homogeneity of results is associated with an apparent mismatch between the fitted model and the data, as shown by the top right panel in Figure 15.
As in the synthetic data study, random samples of individual infection and recovery times (of size 5000 each) were constructed from the count dataset. These random samples were then fed into the HMC scheme using four parallel Markov chains. Uninformative, flat priors were used. The posterior distributions of the parameters \((\tau,R_{0},n,\gamma,\rho,n_{T})\) using the DSA method are shown in Figure 14. The estimated parameters correspond to probability distributions that have similar measures of central tendency as those reported in an earlier analysis of the data in [4].
Interestingly, for both methods, the majority of the probability mass in the (posterior) distribution for the degree \((n)\) is concentrated around small values, indicating a low contact pattern. This is in agreement with various non-pharmaceutical interventions such as lockdowns that were put in place to reduce the spread of the virus. Finally, both ML-estimated population size and DSA effective size are in the same order of magnitude.
Figure 9: Posterior distributions of \((\tau,R_{0},n,\gamma,\rho,n_{T})\) using DSA on the FMD dataset. The red triangles indicate the means of the posterior distributions. The means and medians of the posterior distributions are \((0.0266,2.095,9.659,0.0859,0.0079,1901)\) and \((0.0233,2.054,9.982,0.0737,0.0078,1819)\), respectively.
#### 5.5.5 Comparison across real-world datasets
Figure 15 shows the data for all three real-world outbreaks together with fits produced when taking the best parameter estimates using the ML-based approach and the median values of the posteriors produced by DSA. Whilst our investigation of the COVID-19 dataset supports a like-for-like comparison between inference schemes, there are differences in the way the analyses of the FMD and the A(H1N1) datasets were carried out. Specifically, whereas no prior was involved in the MLE-based approach, informative priors (based on published literature) were used for the Hamiltonian Monte Carlo scheme for DSA. This reflects an important and fundamental difference between the MLE-based approach and the DSA methodology (here implemented via a Hamiltonian Monte Carlo scheme), namely that the latter follows a Bayesian route. It should be noted, however, that the effect of the choice of priors should vanish in the limit of a large number of data points.
With this in mind, we can make several observations:
Figure 10: Distributions of \([I_{0},R_{0},n,\gamma,k,N]\) using MLE on the H1N1 data (with the horizon restricted to 42 days, top panel, and the full data, bottom panel) using 100 rounds of inference with different initial conditions. The median values are listed in Table 1.
* In general, the fit to the real data is good except in two cases. In the COVID-19 data, despite relatively similar parameters between methods, the DSA fit appears to capture the trend of the data a lot better than the MLE fit where a clear mismatch is being observed. The scenario in which the full H1N1 epidemic is subjected to inference highlights the challenge of highly variable, potentially noisy, data, as well as the impact of the observation period. In particular, as shown by the bottom two panels of Figure 15, the longer observation window allows the long and noisy tail of the epidemic to dominate, with both approaches missing the rise and fall in the daily new cases.
* Table 1 provides two sets of estimates for the MLE approach. As indicated previously, this is for comparison purposes: the MLE process was repeated 100 times using different initial
Figure 11: Posterior distributions of \((\tau,R_{0},n,\rho,n_{T})\) using DSA on the full A(H1N1) outbreak data. The means and the medians of the posterior distributions are \((0.0373,0.9880,8.369,0.0255,10146)\) and \((0.0269,0.9892,8.665,0.0264,9286)\), respectively.
Figure 12: Posterior distributions of \((\tau,R_{0},n,\rho,n_{T})\) using DSA on the A(H1N1) outbreak data restricted to time horizon \(T=42\). The means and medians of the posterior distributions are \((0.0437,1.843,10.650,0.0189,2179)\) and \((0.0418,1.845,10.908,0.0189,2177)\), respectively.
conditions. The estimate denoted 'best' is therefore the 'true' MLE estimate (in the sense of being the one with maximum likelihood over all estimates of all rounds). Nevertheless, in the above, we kept all estimates provided their likelihood was close enough to that of the best one. In many cases, we observe a large difference between best and median. This is yet another manifestation of the unidentifiability problem whereby vastly different values of the mean-degree can result in likelihoods very close to the best one (i.e., with the same quality of fit). Interestingly, we note that, in general (a few estimates were excluded as per the text), the impact of unidentifiability did not affect \(R_{0}\) as much as other parameters.
* The estimates for \(I_{0}\) and population size, \(N\), are relatively similar across both inference approaches, except for A(H1N1) when the full dataset is considered and COVID-19. For the
Figure 14: Posterior distributions of \((\tau,R_{0},n,\gamma,\rho,n_{T})\) using the DSA method on the COVID-19 dataset. The means and the medians of the posterior distributions are \((0.3745,1.168,2.522,0.0963,0.0002,23139638)\) and \((0.3401,1.164,2.493,0.0961,0.0002,23101076)\), respectively.
Figure 13: Distributions of \([I_{0},R_{0},n,\gamma,k,N]\) using MLE on the COVID-19 dataset using 100 rounds of inference with different initial conditions. The median values are listed in Table 1.
A(H1N1) outbreak, the MLE method appears to overestimate \(N\) by a large margin. Note that the Washington State University campus is located in a relatively small town with a student population of around 18000 and a resident population of around 9000 [9]. For the COVID-19 wave in India, the DSA median estimate of 5204 for \(I_{0}\) appears smaller than the true count of 11592 new cases on 16 February 2021, whereas the MLE method seems to overestimate it (median and best estimates of 33682 and 33130, respectively). It should be noted that the effective population size is a by-product of the DSA method (see Section 5.4). Strictly speaking, the parameters \(I_{0}\) and \(N\) are far less meaningful in DSA than in MLE, which requires them. However, keeping track of the DSA estimates \(n_{T}\) of the effective population sizes at times \(T\) is valuable in that it gives us a sense of the possible size of the epidemic and could therefore be used for monitoring an ongoing epidemic [8].
* Comparing the distributions obtained by DSA and MLE for the FMD data, we find the range of average degrees obtained by DSA to be much better behaved than that obtained by MLE, with mean and median being close and with a numerical value that seems more realistic. This observation holds for all datasets, with DSA producing more realistic estimates. This is ultimately linked to the fundamental difference between how the likelihoods in the MLE and DSA approaches are formulated. Whilst the MLE method simply minimises the mismatch between model trajectory and data, the DSA likelihood captures the underlying probability laws of individual infection and recovery times. More specifically, it models the underlying survival function through the \([S](t)\) curve parameterized by \((n,\tau,\gamma,\rho)\) (and implicitly, by the observation time \(T\)).
Figure 15: Illustration of the real-world outbreak data (top-left - 2001 FMD outbreak in the UK, top-right - third wave of COVID-19 in India, bottom panels - H1N1 outbreak with short (left) and long (right) horizons) together with output from the pairwise model with point estimates from MLE (values with best likelihood) and DSA (median values). All parameter values are given in Table 1.
## 6 Discussion
In this paper, we have investigated the ability of a network-based mean-field model, i.e., the pairwise model, to infer not only disease parameters but also some properties of the underlying network. Outbreak data encapsulate the interplay between contact network and epidemic spreading. However, daily new cases or other data incorporate network information only implicitly. Hence, it is interesting to investigate whether from such data one can learn about the underlying contact network. Several challenges arise; for example, an epidemic with a small transmission rate on a dense network may look very similar to an epidemic with a large transmission rate spreading on a sparser network. Hence, it is not a given that outbreak data hold a specific enough signature of the contact network. In fact, our investigation revealed an anti-correlation between the value of the transmission rate and the density of the network. Regardless, the estimates of both parameters peaked at around the desired values, especially when ground truth was known.
While the pairwise model used in the paper assumes that the network is regular and only accounts for the number of links each node has, it is possible to relax this seemingly restrictive assumption. In [8], DSA was used for an SIR epidemic on a configuration model network with Poisson degree distribution. Recently, it has been shown [12] that the pairwise model remains exact for networks with binomial, Poisson or negative binomial degree distribution; see also [10, Corollary 1, Section 5.2] where a similar result was derived for a susceptible-infected (SI) process on configuration model random graphs. The difference in the degree distributions manifests itself in the PW model via the type of closure one uses. For example, if the underlying network has a Poisson degree distribution, then \(\xi\) is simply set to \(\xi=1\), and the parameter of the Poisson distribution, and hence the network, enters the PW model via the initial conditions. A similar modification is possible for networks where the degree distribution is negative binomial, thus separating mean from variance. These all offer extensions and improvements above and beyond what the PW model was able to capture about the network. Moreover, employing the edge-based compartmental model, another network-based mean-field model, which uses the probability generating function corresponding to the degree distribution of the network, makes it possible to aim for learning the degree distribution of the
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r} & \(I_{0}\) & \(R_{0}\) & \(n\) & \(\gamma\) & \(k\) & \(N\) \\ \hline \hline
FMD (MLE median) & 11 & 2.71 & 26.32 & 0.0577 & 0.0123 & 1,748 \\
FMD (MLE best) & 11 & 2.58 & 153.67 & 0.0723 & 0.0101 & 1,817 \\ \hline
FMD (DSA median) & 14 & 2.05 & 9.98 & 0.0737 & - & 1,819 \\ \hline \hline
H1N1-N18234 (MLE median, 42 days) & 76 & 2.69 & 229.81 & 0.1073 & 0.7688 & 2,098 \\
H1N1-N18234 (MLE best, 42 days) & 76 & 2.70 & 2094.78 & 0.1073 & 0.7679 & 2,095 \\ \hline
H1N1-N18234 (DSA median, 42 days) & 39 & 1.85 & 10.91 & 0.1818 & - & 2,177 \\ \hline \hline
H1N1-N18234 (MLE median, 80 days) & 116243 & 2.07 & 1582.07 & 0.0142 & 1.2296 & 119,131 \\
H1N1-N18234 (MLE best, 80 days) & 3463 & 0.85 & 2.61 & 0.0270 & 1.2468 & 20,256 \\ \hline
H1N1-N18234 (DSA median, 80 days) & 252 & 0.99 & 8.67 & 0.1818 & - & 9,286 \\ \hline \hline
COVID-19 (MLE median) & 33682 & 1.72 & 3.70 & 0.0323 & 0.0474 & 20,213,142 \\
COVID-19 (MLE best) & 33130 & 1.70 & 3.68 & 0.0333 & 0.0474 & 20,254,332 \\ \hline
COVID-19 (DSA median) & 5204 & 1.16 & 2.50 & 0.0961 & - & 23,101,076 \\
\end{tabular}
\end{table}
Table 1: Summary statistics of the inferred parameters for the three empirical datasets considered in this study when using both MLE and DSA approaches. Estimates for \(I_{0}\) and \(N\) were rounded to the nearest integer for readability.
underlying network.
The crucial advantage of the DSA methodology is the change in perspective about the mean-field ordinary differential equations. In the DSA approach, we view the ODEs as descriptions of probability laws of individual times of infection and recovery, as opposed to their traditional interpretations as limiting proportions or scaled sizes of compartments. By doing so, we are able to directly model the underlying survival functions corresponding to the individual times of infection and recovery, and thereby bring to bear the entire toolkit of survival analysis for the purpose of parameter inference. Even though the DSA methodology has now been applied to several compartmental models, both Markovian and non-Markovian, under both mass-action and network-based contact patterns, the law-of-large-numbers-based DSA methodology needs further improvement to adjust for stochastic effects when applied to finite (often small) populations.
## 7 Acknowledgements
L. Berthouze and I.Z. Kiss acknowledge support from the Leverhulme Trust for the Research Project Grant RPG-2017-370. The authors thank Prof Theodore Kypraios for useful discussions about approximate likelihoods.
## 8 Data availability statement
H1N1 outbreak data are available at [https://github.com/cbskust/SDS.Epidemic](https://github.com/cbskust/SDS.Epidemic), and data about the third COVID-19 wave in India can be found at [https://data.covid19india.org/documentation/csv/](https://data.covid19india.org/documentation/csv/). All other datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2301.04551 | Non-linear, bivariate stochastic modelling of power-grid frequency
applied to islands | Mitigating climate change requires a transition away from fossil fuels
towards renewable energy. As a result, power generation becomes more volatile
and options for microgrids and islanded power-grid operation are being broadly
discussed. Therefore, studying the power grids of physical islands, as a model
for islanded microgrids, is of particular interest when it comes to enhancing
our understanding of power-grid stability. In the present paper, we investigate
the statistical properties of the power-grid frequency of three island systems:
Iceland, Ireland, and the Balearic Islands. We utilise a Fokker-Planck approach
to construct stochastic differential equations that describe market activities,
control, and noise acting on power-grid dynamics. Using the obtained parameters
we create synthetic time series of the frequency dynamics. Our main
contribution is to propose two extensions of stochastic power-grid frequency
models and showcase the applicability of these new models to non-Gaussian
statistics, as encountered in islands. | Ulrich Oberhofer, Leonardo Rydin Gorjão, G. Cigdem Yalcin, Oliver Kamps, Veit Hagenmeyer, Benjamin Schäfer | 2023-01-11T16:27:11Z | http://arxiv.org/abs/2301.04551v2 | # Non-linear, bivariate stochastic modelling of power-grid frequency applied to islands
###### Abstract
Mitigating climate change requires a transition away from fossil fuels towards renewable energy. As a result, power generation becomes more volatile and options for microgrids and islanded power-grid operation are being broadly discussed. Therefore, studying the power grids of physical islands, as a model for islanded microgrids, is of particular interest when it comes to enhancing our understanding of power-grid stability. In the present paper, we investigate the statistical properties of the power-grid frequency of three island systems: Iceland, Ireland, and the Balearic Islands. We utilise a Fokker-Planck approach to construct stochastic differential equations that describe market activities, control, and noise acting on power-grid dynamics. Using the obtained parameters we create synthetic time series of the frequency dynamics. Our main contribution is to propose two extensions of stochastic power-grid frequency models and showcase the applicability of these new models to non-Gaussian statistics, as encountered in islands.
power grid, frequency, stochastic modelling, Fokker-Planck, statistics, data-driven modelling, microgrids.
## I Introduction
### Motivation & problem
Controlling power-grid frequency is important in the design and operation of a stable power system. A shortage of power manifests itself in a decrease of the frequency and many control schemes to stabilise and balance the power system rely on frequency measurements [1]. Hence, understanding power-grid frequency dynamics and statistics is critical.
Obtaining such an understanding is not trivial: Power grids are complex systems driven by both stochastic and deterministic influences [2, 3, 4]. Renewable generation [5], but also short-term consumer behaviour [6, 7] are effectively random inputs to the power balance, while day-ahead trading, scheduled generation and overall demand trend are deterministic [3]. Therefore, detailed model-based approaches that describe the dynamics of rotor-angle, angular velocity, and voltages [1] are complemented by data-driven approaches [4, 8]. Still, these approaches often focus on larger synchronous areas and not on small, island grids.
Islands are particularly useful case studies for understanding the statistics of power-grid-related variables, since geographical islands are often isolated and only (weakly) coupled via DC lines to other (continental) synchronous areas [9]. Islands serve as a bedrock to study the effects of low-inertia systems, particularly those that rely much more on renewable (non-inertial) energy sources [4, 10]. Furthermore, islanded grids, such as natural islands or islanded microgrids, could play a more important role in the future, e.g. when large synchronous areas are split into smaller areas [11, 12]. Such splitting into smaller cells may prevent large-scale cascading failures [13]. Within the present article, we evaluate data recorded in Ireland, Iceland, and the Balearic Islands. Ireland is of particular interest here because of its high share of wind energy generation, reaching 43% of annual generation in 2020 [14]; it could thereby act as an inspiration for how highly renewable systems can be operated. Both Ireland and the Balearic Islands have DC connections to larger regions: Ireland is connected to Great Britain [15], while the Balearic Islands are connected to mainland Spain (and thereby Continental Europe). Meanwhile, Iceland is isolated without any connection to another grid and has a unique generation and demand mixture of geothermal and wind power, as well as data centres and aluminium plants [16].
### Literature review
The study of islands and (islanded) microgrids has received great interest [17, 18, 19]. Meanwhile, the area of stochastic modelling of the power-grid frequency has attracted much attention from a broad interdisciplinary audience [20, 21, 22, 23, 24, 25, 26]. Fokker-Planck equations have been used to obtain a stochastic description of the observed dynamics [2, 27], leading to data-based models [4] and quantitative comparisons for continental synchronous areas [28]. These Fokker-Planck-based approaches have been recently further refined [29] for short time series. Complementary, machine learning might assist in estimating suitable parameters [30] or renewable energy generation can be modelled using the same mathematics [31].
### Structure
The present article is structured as follows. We first introduce the stochastic modelling approach via a Fokker-Planck equation and our newly proposed models in Sec. II. We then demonstrate how our approach reproduces key characteristics of empirical power-grid frequency statistics in Sec. III. We continue with a discussion of our results in Sec. IV and close with an outlook in Sec. V.
## II Methods
### Deriving models
In this work, we approximate the dynamics of the power-grid frequency via stochastic differential equations (SDEs) [32, 33], based on the equation-of-motion of the aggregated swing equation [2, 4, 34, 35] for the bulk angular velocity \(\omega\) and bulk angle \(\theta\) of a power network, given by
\[\begin{split}\frac{\mathrm{d}\theta}{\mathrm{d}t}& =\dot{\theta}=\omega,\\ \frac{\mathrm{d}\omega}{\mathrm{d}t}&=\dot{\omega}= c_{1}(\omega)+c_{2}(\theta,\omega)+\Delta P+\epsilon(\omega)\xi,\end{split} \tag{1}\]
with \(c_{1}(\omega)\) the primary or droop control function, \(c_{2}(\theta,\omega)\) the secondary or integral control function, \(\Delta P\) the power mismatch, \(\epsilon(\omega)\) the noise amplitude (potentially dependent on \(\omega\)), and \(\xi\) Gaussian white noise. We note that this stochastic swing equation has been normalised by an unknown inertial constant \(M\). The (bulk) angular velocity is connected to the power-grid frequency as \(f=f_{0}+\frac{\omega}{2\pi}\), with reference frequency \(f_{0}=50\) Hz in our case. Hence, frequency deviations (\(f-f_{0}\)) are proportional to the angular velocity \(\omega\). The power mismatch \(\Delta P\) represents all deterministic influences on the grid's power balance, in particular, due to power dispatch: At regular intervals, typically every \(15\) to \(60\) minutes, the scheduled generation is updated to the continuously changing load. These discrete updates induce deterministic frequency deviations [3, 36], which we model via \(\Delta P\). The primary-control function \(c_{1}(\omega)\) and the integral secondary-control function play a role in restoring power-grid angular velocity to its nominal value [1]. The white noise process \(\xi\) may be interpreted as the time derivative of a Wiener process \(\mathrm{d}W/\mathrm{d}t=\xi\) and accounts for all high-frequency fluctuations and microscopic noise observed in a power grid [37].
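To make Eq. (1) concrete, the following sketch integrates the stochastic swing equation with a simple Euler–Maruyama scheme, assuming linear primary control, linear secondary control, additive noise and a single dispatch step; all numerical values are placeholders rather than fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1                       # time step in seconds (placeholder)
n = int(3600 / dt)             # one hour of synthetic data

c1 = -0.05                     # linear primary control:   c1(w) = c1 * w
c2 = -0.002                    # linear secondary control: c2(theta, w) = c2 * theta
eps = 0.01                     # additive noise amplitude
dP = np.zeros(n)
dP[int(1800 / dt):] = 0.02     # one power-dispatch step after 30 minutes

theta = np.zeros(n)
omega = np.zeros(n)
for k in range(n - 1):
    theta[k + 1] = theta[k] + omega[k] * dt
    omega[k + 1] = (omega[k] + (c1 * omega[k] + c2 * theta[k] + dP[k]) * dt
                    + eps * np.sqrt(dt) * rng.standard_normal())

f = 50.0 + omega / (2 * np.pi)  # bulk frequency in Hz
```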
As this is a stochastic system, we have to rely on probabilistic results, i.e., instead of a trajectory of \(\omega\), we report and model the evolution of the probability density function \(\rho(\theta,\omega)\) of the rotor-angle \(\theta\) and the angular velocity \(\omega\) via a Fokker-Planck equation based on [4, 38, 39]
\[\frac{\partial\rho}{\partial t}=-\frac{\partial}{\partial\theta}(\omega\rho)- \frac{\partial}{\partial\omega}\left(\left(c_{1}(\omega)+c_{2}(\theta,\omega )\right)\rho\right)+\frac{\partial^{2}}{\partial\omega^{2}}\left(\frac{ \epsilon(\omega)^{2}}{2}\rho\right). \tag{2}\]
Assuming that secondary control acts on a different time scale, we might neglect the \(\theta\) dynamics in the estimation of \(c_{1}(\omega)\) and \(\epsilon(\omega)\) and simply consider the 1D case:
\[\frac{\partial\rho}{\partial t}=-\frac{\partial}{\partial\omega}\left(c_{1}( \omega)\rho\right)+\frac{\partial^{2}}{\partial\omega^{2}}\left(\frac{ \epsilon(\omega)^{2}}{2}\rho\right). \tag{3}\]
Previous works have focused solely on solving the 1D Fokker-Planck equation (3) by neglecting the \(\theta\) dynamics and effectively obtaining an expression for \(\rho(\omega)\)[4, 29, 37]. As a further simplification, previous work assumed that the primary control is fully linear \(c_{1}(\omega)\sim c_{1}\omega\) and that noise is purely additive \(\epsilon(\omega)\sim\epsilon\). Secondary control \(c_{2}\) was then either completely neglected or calculated in a second step. These simplified models result in an augmented Ornstein-Uhlenbeck SDE, to which we know the explicit closed-form solution [4].
To estimate the control parameters \(c_{1}\), \(c_{2}\), and the noise amplitude \(\epsilon\) purely from data, we turn to the Nadaraya-Watson non-parametric kernel-density estimation of the Kramers-Moyal coefficients \(D_{n}(x)\) [40, 41, 42, 43], which reads:
\[\begin{split} D_{\mathbf{n}}(\mathbf{x})&\sim\frac{1}{n!} \frac{1}{\Delta t}\langle(\mathbf{x}(t+\Delta t)-\mathbf{x}(t))^{n}|\mathbf{x}(t)=\mathbf{x} \rangle\\ &\sim\frac{1}{n!}\frac{1}{\Delta t}\frac{1}{N}\sum_{i=1}^{N-1}( \mathbf{x}_{i+1}-\mathbf{x}_{i})^{\mathbf{n}}K(\mathbf{x}-\mathbf{x}_{i}),\end{split} \tag{4}\]
where \(\mathbf{x}\) is either \(\omega\) for the 1D case or \((\omega,\theta)\) for the 2D case. \(N\) is the number of data points from a timeseries,
Fig. 1: Power-grid frequency recordings from Iceland, Ireland, and the Balearic Islands. Black lines indicate the Gaussian filter used for detrending.
\(K(\cdot)\) is a kernel with bandwidth \(h\) (cf. [42, 43]), and \(\Delta t\) is the sampling rate. The order of the Kramers-Moyal coefficient \(\mathbf{n}\) depends on the dimension of \(\mathbf{x}\): in 1D, \(\mathbf{n}=n\) is an integer; in 2D, \(\mathbf{n}=(n,m)\) is a tuple of two integers. Thereby we estimate \(c_{1}\), \(c_{2}\), and \(\epsilon\) directly from data, in either a 1D or a 2D setting (see Fig. 2). The Kramers-Moyal coefficients allow us to disentangle the Fokker-Planck equations as given in (2) and (3). We have, in 1D [25, 32, 33]:
\[\begin{split} D_{1}(\omega)&=c_{1}(\omega),\\ D_{2}(\omega)&=\epsilon(\omega)^{2}/2,\end{split} \tag{5}\]
and in 2D:
\[\begin{split} D_{1,0}(\theta,\omega)&\approx\omega,\\ D_{0,1}(\theta,\omega)&=c_{1}(\omega)+c_{2}( \theta),\\ D_{0,2}(\theta,\omega)&=\epsilon(\omega)^{2}/2. \end{split} \tag{6}\]
The estimated coefficients \(D_{1}\), \(D_{2}\), as well as \(D_{0,1}\) and \(D_{0,2}\) can be intricate functions of \(\theta\) and \(\omega\), see Fig. 2. We note that _a priori_ we could consider a noise term in the rotor-angle \(\theta\) as well, which would lead us to investigate \(D_{2,0}\). However, due to the underlying equation of motion \(\dot{\theta}=\omega\), any noise in \(\theta\) would in turn result in a measurable noise in \(\omega\), which we observe via measurements in the frequency \(f\). Therefore, we consider only noise in \(\omega\).
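A minimal NumPy sketch of the 1D estimator in Eq. (4) is given below, using a Gaussian kernel normalised by the summed kernel weights (the Nadaraya–Watson form) and a synthetic Ornstein–Uhlenbeck series in place of measured data; the bandwidth, grid and test parameters are illustrative choices, and the polynomial fits at the end mirror the functional forms used for Model 3 below.

```python
import numpy as np

def km_coefficients_1d(omega, dt, bw, grid):
    """Estimate the drift D1 and diffusion D2 of a 1D time series on a grid of states."""
    x, dx = omega[:-1], np.diff(omega)
    D1, D2 = np.empty_like(grid), np.empty_like(grid)
    for j, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / bw) ** 2)            # Gaussian kernel weights
        D1[j] = (w * dx).sum() / (w.sum() * dt)           # drift      ~ c1(omega)
        D2[j] = (w * dx ** 2).sum() / (2 * w.sum() * dt)  # diffusion  ~ eps(omega)^2 / 2
    return D1, D2

# Synthetic Ornstein-Uhlenbeck series as a stand-in for the detrended angular velocity.
rng = np.random.default_rng(0)
dt, c1_true, eps_true = 0.1, -0.05, 0.01
omega = np.zeros(200_000)
for k in range(len(omega) - 1):
    omega[k + 1] = omega[k] + c1_true * omega[k] * dt + eps_true * np.sqrt(dt) * rng.standard_normal()

grid = np.linspace(-3 * omega.std(), 3 * omega.std(), 50)
D1, D2 = km_coefficients_1d(omega, dt, bw=0.1 * omega.std(), grid=grid)

drift_fit = np.polyfit(grid, D1, 3)       # cubic drift, as used for Model 3
diffusion_fit = np.polyfit(grid, D2, 2)   # quadratic diffusion, as used for Model 3
```

For the OU test case, the estimated \(D_{1}(\omega)\) is approximately \(c_{1}\omega\) and \(D_{2}(\omega)\) is approximately \(\epsilon^{2}/2\), which recovers the parameters used to generate the series.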
Data for all three islands (Iceland, Ireland, and the Balearic Islands) are recorded via the electrical data recorder (EDR), see [44, 45, 46] for details, and frequency measurements \(f\) with a time resolution of \(1\) second or \(0.1\) seconds are available for several weeks. We separate the power-grid dynamics into a slowly moving trend (described by \(\Delta P\) and \(c_{2}\), as explained in [4]) and short-term fluctuations (captured mostly by \(c_{1}\) and \(\epsilon\)) by applying a Gaussian filter with a window of \(60\) seconds (see Fig. 1).
### Models
In this article, we compare four models, two reference cases and two expansions of previous models:
* **Model 1 (reference)**: Basic Ornstein-Uhlenbeck process, only one damping constant and noise: \[\dot{\omega}=c_{1}\omega+\epsilon\xi,\] (7) where the constant values for \(c_{1}\) and \(\epsilon\) are estimated directly from the original data using the Kramers-Moyal coefficients, without any detrending. Note that here the \(\epsilon\) has to include all deterministic and stochastic variations of the frequency, while \(c_{1}\) has to capture all restoring forces (primary control, secondary control, deterministic relaxation).
* **Model 2 (reference)**: The linear-response model employed in [4]: \[\dot{\theta}=\omega,\quad\dot{\omega}=c_{1}\omega+c_{2}\theta+\Delta P+ \epsilon\xi.\] (8) The constants \(c_{1}\) and \(\epsilon\) are derived by detrending the time series and estimating the drift and diffusion using the Kramers-Moyal coefficients [43]. The secondary control constant is then estimated by combining the trajectory and the estimate of \(c_{1}\), while the deterministic dynamic is included via a time-dependent Heaviside function as the power mismatch \(\Delta P\), see [4] for details.
* **Model 3 (our contribution)**: Our first extended model with non-linear response and multiplicative noise [37]: \[\dot{\theta}=\omega,\quad\dot{\omega}=c_{1}(\omega)+c_{2}(\theta,\omega)+ \Delta P+\epsilon(\omega)\xi,\] (9) wherein we non-parametrically approximate \(c_{1}(\omega)\) and \(\epsilon(\omega)\) from the detrended time series. Fitting the empirical functions more closely, we use a polynomial of order \(3\) for the drift and a quadratic function in \(\omega\) to describe the diffusion, see Fig. 2. The power mismatch \(\Delta P\) is modelled by a time-dependent Heaviside function. These steps, mimicking power dispatch and trading activities, take place every \(30\) minutes for the Irish grid and every \(60\) minutes for the Balearic grid, while no such steps are included for the flat Icelandic profile. We approximate the power step as the change in frequency \(\Delta P\approx\dot{\omega}\) in an appropriate time interval around changes of power feed-in. As in Model 2, we estimate the secondary control \(c_{2}(\theta,\omega)\) by solving equation (9) neglecting the noise (setting \(\epsilon=0\)), where we approximate the non-linear primary control \(c_{1}(\omega)=q_{3}\omega^{3}+q_{1}\omega\) by its first-order Taylor polynomial \(c_{1}^{\text{Taylor}}(\omega)\). Further, after a change of the deterministic power dispatch, the frequency jumps and then decays back approximately exponentially following \(\exp(-t/\tau)\). Plugging in the Taylor expansion for \(c_{1}(\omega)\), this is further
Fig. 2: Drift and diffusion from Ireland. Polynomial fits are used in Model 3 for both drift and state-dependent diffusion (top rows). For Model 4, we estimate 2D Kramers–Moyal coefficients (bottom row).
simplified to \(1/\tau\approx c_{2}(\theta,\omega)/\left(c_{1}^{\text{Taylor}}(\omega)\theta\right)\)[4]. Hence, we use a first-order expansion of \(c_{1}(\omega)\) to compute \(c_{2}(\theta,\omega)\):
\[c_{2}(\theta,\omega)\approx\frac{1}{\tau}\left(3q_{3}\omega^{2}+q_{1}\right)\theta. \tag{10}\]
In addition, for the Balearic and the Irish grids, we increase the primary control by a factor \(3\) for high-frequency deviations (\(|f-f_{0}|\gtrsim 150\,\text{mHz}\)), to mimic power exchange via HVDC lines [15, 19].
* **Model 4 (our contribution)**: Our second extended model separates the frequency into stochastic fluctuations and a trend: \[\dot{\theta}_{\text{fluct}} =\omega_{\text{fluct}},\] \[\dot{\omega}_{\text{fluct}} =c_{1}(\omega_{\text{fluct}})+c_{2}(\theta_{\text{fluct}})+ \epsilon(\omega_{\text{fluct}})\xi\] \[\omega =\omega_{\text{fluct}}+\omega_{\text{trend}},\] (11) where \(\omega_{\text{trend}}\) is given by a multiple of a Gaussian-filtered daily profile. In this model, we focus on the stochastic dynamics of the frequency and consider a strengthened filtered daily profile as the deterministic drive of the model in order to recreate the entire dynamics. The multiplication is necessary to receive a suitable width of the distribution as the daily profile averages over the large deterministic fluctuations. In contrast to Models 2 and 3, we estimate the secondary control \(c_{2}(\theta)\) directly by calculating the Kramers-Moyal coefficients from the bivariate (2D) Fokker-Planck equation (2). In order to obtain a time series for the voltage angle \(\theta\), we integrate over the detrended angular velocity \(\omega\). The control \(c_{1}\), \(c_{2}\), and noise \(\epsilon\) can therefore be estimated from the Kramers-Moyal coefficients (6). For simplicity, we assume \(c_{1}(\omega)\sim\omega\), \(c_{2}(\theta)\sim\theta\), \(\epsilon(\omega)\sim\sqrt{\text{const.}+\omega^{2}}\). Furthermore, as in Model 3, an increase of \(c_{1}\) for high-frequency deviations simulates the influence of the HVDC response.
We note that the parameters \(c_{1}\), \(c_{2}\), and \(\epsilon\) are not identical between models but play a similar role, i.e. they represent primary control, secondary control, and the fluctuation amplitude, respectively. For the implementation details see [47].
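As an illustration of how Model 3 differs from the linear reference case above, the sketch below integrates a Model-3-like SDE with a cubic drift, multiplicative noise and half-hourly dispatch steps; all coefficients are placeholders standing in for the polynomial fits of Fig. 2, and the HVDC-like stiffening of the primary control at large deviations is included only schematically.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.1
n = int(6 * 3600 / dt)                      # six hours of synthetic data

q1, q3 = -0.02, -5.0                        # cubic primary control c1(w) = q3 w^3 + q1 w
e0, e2 = 5e-3, 0.5                          # multiplicative noise: eps(w)^2 = e0 + e2 w^2
tau = 120.0                                 # secondary-control relaxation time (seconds)

# Piecewise-constant power mismatch, stepping at every 30-minute dispatch interval.
steps = rng.normal(0.0, 0.02, size=12)
dP = np.repeat(steps, int(1800 / dt))[:n]

theta = np.zeros(n)
omega = np.zeros(n)
for k in range(n - 1):
    c1 = q3 * omega[k] ** 3 + q1 * omega[k]
    if abs(omega[k]) > 2 * np.pi * 0.15:    # schematic HVDC response above ~150 mHz
        c1 *= 3.0
    c2 = (3 * q3 * omega[k] ** 2 + q1) * theta[k] / tau      # Eq. (10)
    eps = np.sqrt(e0 + e2 * omega[k] ** 2)
    theta[k + 1] = theta[k] + omega[k] * dt
    omega[k + 1] = (omega[k] + (c1 + c2 + dP[k]) * dt
                    + eps * np.sqrt(dt) * rng.standard_normal())
```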
### Quantifying models
The target of the present article is to adequately reproduce the statistics of power-grid frequency recordings of any generic power grid with minimal information, i.e., with an almost purely data-driven approach. How well does this method perform when applied to islanded grids?
To evaluate the quality of a synthetic-generated probability density function \(\rho_{\text{syn}}(x)\) against an empirical one \(\rho_{\text{emp}}(x)\), we employ the Kullback-Leibler divergence \(D_{\text{KL}}(\rho_{\text{emp}}|\rho_{\text{syn}})\)[48]
\[D_{\text{KL}}\left(\rho_{\text{emp}}\mid\rho_{\text{syn}}\right)=\int\rho_ {\text{emp}}(x)\ln\left[\frac{\rho_{\text{emp}}(x)}{\rho_{\text{syn}}(x)} \right]\mathrm{d}x. \tag{12}\]
A smaller \(D_{\text{KL}}\) value implies a better fit between the synthetic and the empirical distributions.
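Eq. (12) can be approximated directly from two samples via shared histograms, as in the short sketch below; the bin count and the small regularising constant for empty bins are pragmatic choices rather than part of the definition.

```python
import numpy as np

def kl_divergence(emp, syn, bins=100):
    """Discrete approximation of D_KL(rho_emp || rho_syn) from two samples."""
    edges = np.linspace(min(emp.min(), syn.min()), max(emp.max(), syn.max()), bins + 1)
    p = np.histogram(emp, bins=edges)[0] + 1e-12   # regularise empty bins
    q = np.histogram(syn, bins=edges)[0] + 1e-12
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example with Gaussian stand-ins for empirical and synthetic frequency samples.
rng = np.random.default_rng(0)
emp = rng.normal(0.0, 0.05, 200_000)
syn = rng.normal(0.0, 0.06, 200_000)
print(kl_divergence(emp, syn))
```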
Moreover, given the importance of understanding the response times of the power-grid frequency, we also evaluate the autocorrelation function of the empirical and synthetic data, as given by:
\[C_{x}(\tau)=\sigma^{-2}\langle(x_{t}-\mu)(x_{t+\tau}-\mu)\rangle, \tag{13}\]
where \(x\) is a time series (frequency or frequency increments, empirical or synthetic), \(\mu\) is the mean value of \(x\), and \(\sigma\) is the standard deviation of \(x\).
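A direct evaluation of Eq. (13) can look as follows; the lag range corresponds to 90 minutes of 1-second data, and the toy series merely stands in for a measured or synthetic frequency trajectory.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalised autocorrelation C_x(tau) for lags 0, ..., max_lag."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    n = len(x)
    return np.array([np.mean((x[:n - lag] - mu) * (x[lag:] - mu)) / var
                     for lag in range(max_lag + 1)])

rng = np.random.default_rng(0)
toy = np.cumsum(rng.standard_normal(20_000)) * 1e-3   # placeholder for a frequency series
C = autocorrelation(toy, max_lag=5400)                # 90 minutes at 1-second resolution
```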
Fig. 3: Probability density of the frequency \(f\) (left), its increments \(\Delta f\) (center), and the autocorrelation function for \(90\) minutes (right).
Code to reproduce the results is available online [47] and data are freely available, see [45, 46] and www.power-grid-frequency.org.
## III Results
Let us review the results in three steps: What can we learn about the dynamic and statistical properties of the islands? Which characteristics are reproduced by the models? How do the different models perform quantitatively?
### Characteristics of the data
The frequency statistics of islands are quite complex and more intricate than in continental regions. Simply inspecting the empirical data (black lines in Fig. 3), we note
* cut-offs for the absolute frequency deviations in Ireland and the Balearic Islands. These likely arise because these islands can balance their power via HVDC lines connected to large synchronous areas.
* highly non-Gaussian statistics, both in the frequency and in the increments. For reference, a Gaussian statistic would be indicated via an inverted parabola.
* complex autocorrelation functions that decay very rapidly (Iceland) or display more regular peaks (Balearic).
### Characteristics of the models
The proposed models capture some of the empirical characteristics, depending on their complexity.
Model 1 by construction only induces Gaussian frequency and Gaussian increment distributions. Hence, it misses the cut-off for large values and the heavy tails in both frequency and increments. The autocorrelation decays exponentially but misses the peaks caused by deterministic influences.
Model 2 includes a deterministic power mismatch that is not adapted to the characteristics of the grids and is therefore too small for both the Irish and the Balearic grid. As in Model 1, the increments are Gaussian, missing the heavy tails. The autocorrelation function decays approximately exponentially, with some visibility of deterministic effects, particularly in Ireland.
Model 3 reproduces the multi-modal distributions in Ireland and the Balearic islands and even includes frequency cut-offs at large values. These characteristics are possible due to the cubic primary control and an even stronger control for large deviations, e.g. at \(|f|\gtrsim 150\) mHz in the Balearic grid. The increments display heavy tails, as in the real data, due to multiplicative noise, i.e. \(\epsilon(\omega)\) being explicitly state-dependent in this model. The autocorrelation function reproduces the decay and some small peaks at the \(60\)-minute mark.
Model 4 has some weaknesses in approximating the empirical probability density and the autocorrelation function, as the latter exhibits large peaks at the \(60\)-minute mark. This is mostly due to the heuristic estimation of the frequency trend. Meanwhile, the increments follow the main characteristics of the empirical data. As in Model 3, the multiplicative noise \(\epsilon(\omega)\) (state-dependent) facilitates non-Gaussian increments.
### Comparison of performance
Going beyond the qualitative comparison of the characteristics, we also compare how well the different models quantitatively fit the empirical data, measured via the Kullback-Leibler divergence, see Fig. 4. We note that the standard Ornstein-Uhlenbeck process (Model 1) always provides a decent description of the frequency statistic (circles) and, by design, matches the empirical standard deviation well. Meanwhile, it tends to be among the worst performers in the increment analysis (squares), as it can only reproduce Gaussian increments. This oversimplification becomes most apparent when investigating the increment tails (Fig. 3). The previously developed data-driven model [4] (Model 2) encounters problems when applied to islands without any adjustments. In particular, the overall probability distribution (circles) is among the poorest-performing models for all three islands.
The analysis of the two new models shows that they are promising developments but that there still remains potential for improvement. Model 3 performs very well on the Icelandic data, while Model 4 is the best model for Ireland. In particular, in the increment analysis (squares), one of the new models is always the best-performing one. This indicates two points: First, our advanced modelling is especially useful for modelling the stochastic dynamics, as seen in the increment statistics. Second, a fully generalised model, applicable to data from continents (as previously done with Model 2) and diverse islands (as done here), is not yet available.
## IV Discussion
Overall, we show that stochastic power-grid frequency models, aided by a Fokker-Planck description of the underlying physical process, can reproduce the statistics, increments, and two-point correlation of power-grid frequency recordings from various grids without access to in-depth information of each power-grid system's network details. We show that solely from power-grid frequency recordings, we can estimate the strength of primary and secondary control as well as the amplitude of the noise or high-frequency fluctuations. Armed only with these data-driven functions, we can construct stochastic differential equations that reproduce both the statistics as well as the autocorrelation structure of a large class of power grids. In this work, in particular, we go one step further and examine the
Fig. 4: Kullback–Leibler divergence between the empirical and 4 generated synthetic PDFs for the frequency (left) and the increments (right).
Icelandic, Irish [24], and Balearic power grids, which have garnered far less attention in the scientific community than other major grids, such as Continental Europe [4, 45] or Great Britain [27, 29]. The two new models offered in this work - a one-dimensional, non-linear and a two-dimensional Fokker-Planck model - consistently approximate the increment statistics of the frequency very well. Meanwhile, the autocorrelation and aggregated frequency statistics are much harder to describe.
## V Conclusion and Outlook
Understanding the dynamics and statistics of these smaller power grids is crucial for the understanding of insular grids, which are themselves small networks and do not necessarily obey the linear control mechanisms observed and understood in major grids. A representation of power-grid frequency dynamics in a Fokker-Planck setting is central to the diagnosis of irregularities in any power grid. It offers, firstly, an understanding of the statistics and therein the effects and cost of control, as well as the duration for which frequency excursions exceed statutory frequency limits. Representing power-grid frequency via partial differential equations also permits representing the frequency as a stochastic differential equation. This, in turn, allows for the generation of synthetic time series, which can be studied as objects in their own right. Realistic synthetic time series open the door to studying these power grids with data-intensive methods, such as artificial intelligence. Interestingly, compared to earlier work [27], we find that bimodal distributions in power-grid frequency statistics could arise from the deterministic power mismatch.
We present an extension of stochastic modelling, which should be further enhanced in the future. Aside from applying our method to more data from different synchronous areas, a more detailed and realistic extraction and modelling of the deterministic power mismatch will be important to better describe empirical data. We again note that the investigated island grids display a large variation between one another. Hence, it also remains an open challenge to develop a generalised model applicable to various islands or microgrids.
|
2308.07553 | Enhancing the Antidote: Improved Pointwise Certifications against
Poisoning Attacks | Poisoning attacks can disproportionately influence model behaviour by making
small changes to the training corpus. While defences against specific poisoning
attacks do exist, they in general do not provide any guarantees, leaving them
potentially countered by novel attacks. In contrast, by examining worst-case
behaviours Certified Defences make it possible to provide guarantees of the
robustness of a sample against adversarial attacks modifying a finite number of
training samples, known as pointwise certification. We achieve this by
exploiting both Differential Privacy and the Sampled Gaussian Mechanism to
ensure the invariance of prediction for each testing instance against finite
numbers of poisoned examples. In doing so, our model provides guarantees of
adversarial robustness that are more than twice as large as those provided by
prior certifications. | Shijie Liu, Andrew C. Cullen, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein | 2023-08-15T03:46:41Z | http://arxiv.org/abs/2308.07553v2 | # Enhancing the Antidote: Improved Pointwise Certifications
###### Abstract
Poisoning attacks can disproportionately influence model behaviour by making small changes to the training corpus. While defences against specific poisoning attacks do exist, they in general do not provide any guarantees, leaving them potentially countered by novel attacks. In contrast, by examining worst-case behaviours Certified Defences make it possible to provide guarantees of the robustness of a sample against adversarial attacks modifying a finite number of training samples, known as pointwise certification. We achieve this by exploiting both Differential Privacy and the Sampled Gaussian Mechanism to ensure the invariance of prediction for each testing instance against finite numbers of poisoned examples. In doing so, our model provides guarantees of adversarial robustness that are more than twice as large as those provided by prior certifications.
1School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
2Defence Science and Technology Group, Adelaide, Australia
[email protected], [email protected], [email protected],
[email protected], [email protected]
## Introduction
Despite their impressive performance, many modern machine learning models have been shown to be vulnerable to adversarial data perturbations [1, 16, 13]. This adversarial sensitivity is a significant concern now that machine learning models are increasingly being deployed in sensitive applications. Of particular concern are data poisoning attacks, where an adversary manipulates the training set to change the decision boundary of learned models. The risk of such attacks is heightened by the prevalence of large, user-generated datasets that are constructed without vetting. The fact that these attacks can render a model useless further underscores the need for robust defence mechanisms. Some examples of models that are vulnerable to data poisoning attacks include email spam filters and malware classifiers. These models have been shown to be susceptible to attacks that either render the model ineffective [1], or that produce targeted misclassifications [1].
That defences intrinsically counter specific poisoning attacks means that even state-of-the-art defences [1, 13] can be vulnerable to new attacks. To circumvent this inherent dependency of defences on attacks, recent work has begun to consider the construction of guarantees of predictive invariance against bounded numbers of poisoned training examples. This is known as _certified robustness_, and is commonly achieved through the addition of calibrated noise via _randomised smoothing_ [1]. While these certifications have been successfully applied to poisoning attacks on labels and/or input features [14, 15], their applicability has been limited to attacks that modify training examples, rather than the more general insertion/deletion operations. On the other hand, classifiers trained with _differential privacy_ (DP) can be shown to be certifiably robust against poisoning attacks, even under insertion/deletion operations [1, 12]. However, to date, such certifications do not provide _pointwise guarantees_, which ensure the robustness of individual samples against a finite number of poisoned training examples. This omission still leaves a vulnerability that can be exploited by a motivated adversary to compel the model to misclassify a particular testing sample. Recent works [1, 13] leveraging bagging have achieved pointwise guarantees against poisoning attacks that allow insertion/deletion. However, some of these methods are specialized to particular learning approaches.
In this work, we establish a general framework for deriving pointwise-certifiably robust guarantees against data poisoning attacks that can influence both the label and feature sets. Such guarantees ensure that the predicted class of an individual sample is invariant to a finite number of changes to the training dataset. Prior works have leveraged DP to improve statistical properties of certification against data poisoning across a dataset. In contrast, we are the first to extend DP to certify individual samples. By producing an _improved group privacy bound for the Sampled Gaussian Mechanism_, our new approach even yields certifications that hold for more changes to the training dataset than had been identified by prior approaches [1, 13, 14, 15]. Our specific achievements can be summarized as follows:
* A general framework providing _pointwise-certified robustness_ guarantees for models that are trained with differentially-private learners.
* The framework provides a _general poisoning attack defence_ against insertion, deletion, and modification attacks on both the label and feature sets. The defence improves the existing differential privacy based approaches, and its
efficiency is enhanced through optimised group privacy in the Sampled Gaussian Mechanism and sub-sampled training.
* Our defence method certifies robustness against more than double the number of poisoned examples achieved by existing certified approaches, as demonstrated by experiments on MNIST, Fashion-MNIST and CIFAR-\(10\).
## Data Poisoning Attacks and Defences
Training-time or data poisoning attacks [1, 13] enable malicious users to manipulate training data and modify the learned model. The expressiveness of machine learning model families makes modern machine learning particularly susceptible to such attacks [1, 1]. These attacks can be taxonomically classified as either label attacks, which only modify dataset labels [12]; feature attacks, in which the training features are modified [12]; or example attacks, such as the backdoor attack, which seek to influence both labels and features of the training corpus [12]. Defending against any of these attacks is inherently complex, as their existence implies that the attacker has access to both the training architecture and dataset. Although previous works have examined attackers who solely modify the training data, our threat model assumes a more comprehensive scenario, whereby attackers have the freedom to introduce or remove samples from the training dataset, as outlined in Table 1. However, this freedom is subject to certain constraints that aim to reduce the probability of detection.
Threat Model. We consider supervised multi-class classification on a training dataset \(\mathcal{D}_{1}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), where each example comprises an input instance \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) and a label \(y_{i}\in\mathcal{L}=\{1,\ldots,L\}\). Consider a (possibly randomised) learner \(M\) taking \(\mathcal{D}_{1}\) to parameters \(\mathbf{\theta}\in\Theta\). We refer to the learned parameters and the _model_ interchangeably.
In this paper we consider alternate forms of inferred scores per class for randomised learners on a given input instance \(\mathbf{x}\in\mathbb{R}^{m}\), denoted \(I_{l}(\mathbf{x},\mathbf{\theta})\), such that \(\sum_{l\in\mathcal{L}}I_{l}(\mathbf{x},\mathbf{\theta})=1\) and \(I_{l}(\mathbf{x},\mathbf{\theta})\in[0,1]\), necessitating alternate choices of \(I_{l}(\cdot)\). Consider a function \(\mathbf{y}(\mathbf{x},\mathbf{\theta})\) that returns a deterministic vector of predicted class scores in \(\mathbb{R}^{L}\), with the \(i\)th component denoted \(y_{i}(\mathbf{x},\mathbf{\theta})\in\mathbb{R}\). For example, the softmax layer of a deep network outputs a score per class; these \(y_{i}\) sit in \([0,1]\) and sum to unity.
**Definition 1** (Inference by multinomial label).: Define the _multinomial label_ inference function as
\[I_{l}(\mathbf{x},M(\mathcal{D}_{1}))=\Pr[\arg\max_{i}y_{i}(\mathbf{x},M( \mathcal{D}_{1}))=l]\enspace.\]
Conditioned on a deterministic model \(\theta\), we may make a prediction on \(\mathbf{x}\) as the class with the highest \(y_{i}\) score; however, with these predictions being random, as induced by \(M\), we make inferences given a training dataset as the most likely prediction.
**Definition 2** (Inference by probability scores).: Define the _probability scores_ inference function as
\[I_{l}(\mathbf{x},M(\mathcal{D}_{1}))=\mathbb{E}[y_{l}(\mathbf{x},M(\mathcal{D }_{1}))]\enspace.\]
In other words, we consider the \(\mathbf{y}\) scores as random variables (due to the randomness in learner \(M\)) conditional on training dataset \(\mathcal{D}_{1}\) and input instance \(\mathbf{x}\). We infer the class \(l\) with the largest expected score \(y_{l}\).
These two inference rules capture alternate approaches to de-randomising class predictions and offer different relative advantages in terms of robustness. We discuss this further in the "Outcomes-Guaranteed Certifications" section.
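Definitions 1 and 2 can be made concrete with a small sketch: given class-score vectors \(\mathbf{y}(\mathbf{x},\mathbf{\theta}_{k})\) collected over many independent draws of the randomised learner, the multinomial-label rule takes the most frequent argmax, whereas the probability-scores rule takes the largest mean score. The sampling below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 1000, 10                               # K draws of the randomised learner, L classes
scores = rng.dirichlet(np.ones(L), size=K)    # stand-in for y(x, M(D_1)) over K draws

# Definition 1: inference by multinomial label.
votes = scores.argmax(axis=1)
I_label = np.bincount(votes, minlength=L) / K
pred_label = int(I_label.argmax())

# Definition 2: inference by probability scores.
I_scores = scores.mean(axis=0)
pred_scores = int(I_scores.argmax())
```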
The attacker is assumed to have perfect knowledge of the dataset \(\mathcal{D}_{1}\), the learner \(M\), and the inference rule \(I\) (_i.e._, a white-box attacker), with unbounded computational capabilities. However, in order to minimise the likelihood of an attack being detected, it is assumed that the attacker makes only a finite number \(r\in\mathbb{N}\)--known as the radius--of changes to the dataset. To reflect the assumed level of access of the attacker, these changes can take the form of additions, deletions, or modifications. We consider the attacker as attempting to achieve
\[\arg\max_{l\in\mathcal{L}}I_{l}(\mathbf{x},M(\mathcal{D}_{2}))\neq\arg\max_{ l\in\mathcal{L}}I_{l}(\mathbf{x},M(\mathcal{D}_{1}))\enspace, \tag{1}\]
subject to the bound
\[\mathcal{B}(\mathcal{D}_{1},r):=\{\mathcal{D}_{2}:|\mathcal{D}_{1}\ominus \mathcal{D}_{2}|\leq r\}\enspace. \tag{2}\]
Here \(|\mathcal{D}_{1}\ominus \mathcal{D}_{2}|\) measures the _size_ of the symmetric difference between datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), or in other words, the minimum number of operations required to map \(\mathcal{D}_{1}\) to \(\mathcal{D}_{2}\). The objective of the defence is to achieve _pointwise-certified robustness_ for an individual sample \(\mathbf{x}\) when passed through \(I\circ M\). While such a threat model can be applied to any model, henceforth we will limit our consideration to randomised learners incorporating certified defences, of the form described in the remainder of the paper.
Certified Defences. The concept of pointwise-certified robustness has been widely used as a testing-time defence [1, 1], and has recently been extended to training time [10, 12]. Pointwise robustness is advantageous over statistical guarantees on bounds of the objective function [13, 14], as it ensures that the predictions of the attacked learner will remain unchanged for finite, bounded perturbations. The nature of these perturbations and the certification radius \(r\) are intrinsically linked to the underlying threat model.
**Definition 3** (Pointwise-Certified Robustness).: A learner is said to be _\(r\)-pointwise-certified robust_ against poisoning attacks, at input instance \(\mathbf{x}\), if there exists no \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},r)\) such that Equation (1) is true. A learner \(M\) is said to be _\((\eta,r)\)-pointwise-certified robust_ if it is \(r\)-pointwise-certified robust with probability at least \(1-\eta\) in the randomness of \(M\).
In other words, the prediction of the poisoned model remains the same (or the same w.h.p.), as the poisoned dataset does not alter the probabilities of labels sufficiently to change the predicted classification.
One approach for achieving pointwise certification is randomised smoothing [10, 11], in which a new classifier is created such that its prediction is defined as the most probable class returned by the original classifier under calibrated input noise. While this
noise is often applied directly to the input samples, it has been shown that model bagging can also generate output randomness in a fashion that allows for certifications against data poisoning attacks [14, 15, 16].
## Outcomes Guarantee
By exploiting both DP and the Sampled Gaussian Mechanism (SGM), our certification framework empirically improves pointwise certifications against data poisoning. Such certificates can be used to quantify the confidence in a sample's prediction, in the face of potential dataset manipulation. To support our enhancements, we will first define some key properties of DP, then propose the outcomes guarantee that generalises to most DP mechanisms, and finally introduce the SGM with improved group privacy.
Differential Privacy. DP is a framework [11, 12] that quantifies the privacy loss due to releasing aggregate statistics or trained models on sensitive data. As a versatile notion of smoothness to input perturbations, DP has successfully been used as the basis of several certification regimes.
**Definition 4** (Approximate-DP).: A randomised function \(M\) is said to be \((\epsilon,\delta)\)-approximate DP (ADP) if for all datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) for which \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},1)\), and for all measurable output sets \(\mathcal{S}\subseteq\mathrm{Range}(M)\):
\[\Pr[M(\mathcal{D}_{1})\in\mathcal{S}]\leq\exp(\epsilon)\Pr[M(\mathcal{D}_{2}) \in\mathcal{S}]+\delta\enspace, \tag{3}\]
where \(\epsilon>0\) and \(\delta\in[0,1)\) are chosen parameters.
Smaller values of the privacy budget \(\epsilon\) tighten the (multiplicative) influence of a participant joining dataset \(\mathcal{D}_{2}\) to form \(\mathcal{D}_{1}\), bounding the probability of any downstream privacy release. The confidence parameter \(\delta\) then relaxes this guarantee, such that no bound on the privacy loss is provided for low-probability events.
To bound the residual risk from ADP, Renyi-DP was introduced by [11]. Renyi-DP quantifies privacy through sequences of function composition, as required when iteratively training a deep net on sensitive data, for example. As we shall see in this paper, this tighter analysis leads to improved certifications in practice.
**Definition 5** (Renyi divergence).: Let \(P\) and \(Q\) be two distributions on \(\mathcal{X}\) defined over the same probability space, and let \(p\) and \(q\) be their respective densities. The Renyi divergence of finite order \(\alpha\neq 1\) between \(P\) and \(Q\) is defined as
\[\mathrm{D}_{\alpha}(P\|Q)\triangleq\frac{1}{\alpha-1}\ln\int_{\mathcal{X}}q( x)\left(\frac{p(x)}{q(x)}\right)^{\alpha}\mathrm{d}x\enspace. \tag{4}\]
**Definition 6** (Renyi Differential Privacy).: A randomised function \(M\) preserves \((\alpha,\epsilon)\)-Renyi-DP, with \(\alpha>1,\epsilon>0\), if for all datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},1)\):
\[\mathrm{D}_{\alpha}\left(M(\mathcal{D}_{1})\|M\left(\mathcal{D}_{2}\right) \right)\leq\epsilon\enspace. \tag{5}\]
**Definition 7** (Outcomes guarantee).: Let \(\mathcal{K}\) be a set of strictly monotonic functions on the reals, and \(r\) a natural number. A randomised function \(M\) is said to preserve a _\((\mathcal{K},r)\)-outcomes guarantee_ if, for every \(K\in\mathcal{K}\) and all datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},r)\),
\[\Pr[M(\mathcal{D}_{1})\in\mathcal{S}]\leq\mathrm{K}(\Pr[M(\mathcal{D}_{2}) \in\mathcal{S}])\enspace. \tag{6}\]
Both ADP and RDP are recovered as specific cases of the outcomes guarantee with, respectively,
\[\mathcal{K}_{\epsilon,\delta}(x) =\exp(\epsilon)x+\delta \tag{7}\] \[\mathcal{K}_{\epsilon,\alpha}(x) =(\exp(\epsilon)x)^{\frac{\alpha-1}{\alpha}}\enspace. \tag{8}\]
The RDP family is obtained by applying Hölder's inequality to the integral of the density function in the Renyi divergence [11].
This definition formalises a discussion on "bad-outcomes guarantee" due to [11]. With this definition, we are able to generalise our certification framework to the essential structure across variations of differential privacy. Note this definition incorporates _group privacy_[11]: extending DP to pairs of datasets that differ in up to \(r\) datapoints \(\mathcal{D}_{1}\in\mathcal{B}(\mathcal{D}_{2},r)\).
Our framework also relies upon the _post-processing_ property [11] of DP: any computation applied to the output of a DP algorithm retains the same DP guarantee. This property, which simplifies the DP analysis in multi-layered models acting on a DP-preserving input, holds for any ADP, RDP, or indeed outcome-guaranteed mechanism.
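To make the role of these bound functions concrete, the following is a minimal sketch (our own illustration, not code from the paper) of the families in Equations (7) and (8) together with their inverses, which later play the roles of \(\mathrm{K}_{upper}\) and \(\mathrm{K}_{lower}^{-1}\) in the certification condition of Theorem 10. All parameter values are arbitrary.

```python
import math

def K_adp(x, eps, delta):
    """ADP bound function of Equation (7): K(x) = exp(eps) * x + delta."""
    return math.exp(eps) * x + delta

def K_adp_inv(y, eps, delta):
    """Inverse of the ADP bound, clipped at 0 so it remains a valid probability."""
    return max(0.0, (y - delta) * math.exp(-eps))

def K_rdp(x, eps, alpha):
    """RDP bound function of Equation (8): K(x) = (exp(eps) * x)^((alpha-1)/alpha)."""
    return (math.exp(eps) * x) ** ((alpha - 1.0) / alpha)

def K_rdp_inv(y, eps, alpha):
    """Inverse of the RDP bound: x = y^(alpha/(alpha-1)) * exp(-eps)."""
    return y ** (alpha / (alpha - 1.0)) * math.exp(-eps)

# Arbitrary example: upper/lower bounds on a class probability p = 0.8.
p, eps, delta, alpha = 0.8, 0.5, 1e-5, 8.0
print("ADP bounds:", K_adp_inv(p, eps, delta), K_adp(p, eps, delta))
print("RDP bounds:", K_rdp_inv(p, eps, alpha), K_rdp(p, eps, alpha))
```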
Sampled Gaussian Mechanism with Improved Group Privacy.While many DP mechanisms have been proposed and widely studied for machine learning [1, 12], they typically rely upon the addition of noise to training samples. In contrast, the Sampled Gaussian Mechanism (SGM) [11] adds randomness both through the injection of noise and the sub-sampling process. Each element of the training batch is sampled without replacement with uniform probability \(q\) from the training dataset, while each weight update step also introduces additive Gaussian noise to the gradients. When applied to a model \(M\), the SGM has been shown [11] to preserve \((\alpha,\epsilon)\)-Renyi-DP, where \(\epsilon\) is determined by the parameters \((\alpha,M,q,\sigma)\). We denote the computation steps of \(\epsilon\) in SGM as function \(\mathrm{SG}\) such that \(\epsilon=\mathrm{SG}(\alpha,M,q,\sigma)\).
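As a concrete illustration of the mechanism only, the sketch below performs a single SGM-style update in plain NumPy: Poisson sub-sampling with rate \(q\), per-example gradient clipping to norm \(C\), and additive Gaussian noise of scale \(\sigma C\). The per-example gradient function and all parameter values are placeholders; this is not the training code used in the paper.

```python
import numpy as np

def sgm_update(params, data, per_example_grad, q=0.01, C=1.0, sigma=1.5, lr=0.1, seed=None):
    """One Sampled-Gaussian-Mechanism-style update (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # Poisson sub-sampling: each training example is included with probability q.
    mask = rng.random(len(data)) < q
    batch = [data[i] for i in np.flatnonzero(mask)]
    if not batch:
        return params
    # Clip each per-example gradient to L2 norm C, then sum over the batch.
    total = np.zeros_like(params)
    for example in batch:
        g = per_example_grad(params, example)
        total += g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
    # Add Gaussian noise calibrated to the clipping norm, then take a gradient step.
    noisy = total + rng.normal(0.0, sigma * C, size=params.shape)
    return params - lr * noisy / len(batch)
```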
However, this guarantee fails to exploit the advantages given by Renyi-DP group privacy under the SGM. As such, by constructing our group privacy in a manner specific to the SGM, we are able to produce _tighter bounds_ than prior works [11], yielding the following pointwise guarantee of certification.
**Theorem 8** (Improved Renyi-DP group privacy under the SGM).: _If a randomised function \(M\) obtained by SGM with sample ratio \(q\) and noise level \(\sigma\) achieves \((\alpha,\mathrm{SG}(\alpha,M,q,\sigma))\)-Renyi-DP for all datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},1)\), then for all datasets \(\mathcal{D}_{3}\in\mathcal{B}(\mathcal{D}_{1},r)\)_
\[\mathrm{D}_{\alpha}\left(M(\mathcal{D}_{1})\|M\left(\mathcal{D}_{3}\right) \right)\leq\mathrm{SG}(\alpha,M,1-(1-q)^{r},\sigma)\enspace. \tag{9}\]
Proof.: (Here we provide a proof sketch; the detailed proof is available in Appendix A.2.) The work [11] calculates, in its Theorem 4, the amount of Renyi-DP obtained from the SGM. We extend it from "adjacent datasets" to "datasets that differ in up to \(r\) examples". We consider the datasets \(S\) and \(S^{\prime}=S\cup\{x_{1},x_{2},...,x_{r}\}\), and calculate the mixture of distributions obtained when the SGM \(\mathcal{M}\) takes a random subset of \(S^{\prime}\), each element of \(S^{\prime}\) being independently included with probability \(q\), as
\[\mathcal{M}\left(S^{\prime}\right)=\sum_{T}p_{T}\left(\sum_{k=0}^{r}\binom{r}{k}q^{k}(1-q)^{r-k}\,\mathcal{N}\left(\mu,\sigma^{2}\mathbb{I}^{d}\right)\right),\]
where \(V\subseteq\{x_{1},x_{2},...,x_{r}\}\) and \(k=|V|\).
We can complete the proof by replacing the original \(\mathcal{M}(S^{\prime})\) with the above distribution and following the remainder of the original argument.
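In practice, Theorem 8 says that the group-privacy cost at radius \(r\) can be read off an ordinary SGM accountant evaluated at the inflated sampling rate \(1-(1-q)^{r}\). The sketch below illustrates only this bookkeeping; `sgm_rdp_epsilon` stands in for any standard SGM Renyi-DP accountant and is assumed, not implemented, here.

```python
def group_privacy_epsilon(sgm_rdp_epsilon, alpha, q, sigma, steps, r):
    """Renyi-DP epsilon for datasets differing in up to r points (Theorem 8).

    sgm_rdp_epsilon(alpha, q, sigma, steps) is an assumed accountant returning
    the per-step-accumulated epsilon for adjacent datasets; Theorem 8 re-uses
    it with the effective sampling rate 1 - (1 - q)**r.
    """
    q_eff = 1.0 - (1.0 - q) ** r
    return sgm_rdp_epsilon(alpha, q_eff, sigma, steps)
```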
## 4 Outcomes-Guaranteed Certifications
While pointwise-certified robustness guarantees can be applied to the output of any model, within this work we highlight both multinomial outputs and scored outputs.
**Lemma 9** (Pointwise outcomes guarantee).: _Consider a randomised learner \(M\) that preserves a \((\mathcal{K},r)\)-outcomes guarantee, and an arbitrary (possibly randomised) inference function \(I\) mapping learned parameters and the input instance to an inferred score. Then for any \(K\in\mathcal{K}\), any input instance \(\mathbf{x}\), label \(l\in\mathcal{L}\), and training datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\in\mathcal{B}(\mathcal{D}_{1},r)\),_
\[I_{l}\left(\mathbf{x},M\left(\mathcal{D}_{1}\right)\right) \leq\mathrm{K}\left(I_{l}\left(\mathbf{x},M\left(\mathcal{D}_{2} \right)\right)\right)\] \[I_{l}\left(\mathbf{x},M\left(\mathcal{D}_{1}\right)\right) \geq\mathrm{K}^{-1}\left(I_{l}\left(\mathbf{x},M\left(\mathcal{D}_ {2}\right)\right)\right)\enspace.\]
Proof.: In the case of multinomial outputs, the first inequality follows from the post-processing property: the composition \(I\circ M\) preserves the same outcomes guarantee. The second inequality follows by symmetry in the roles of \(\mathcal{D}_{1},\mathcal{D}_{2}\) and by \(K\) being strictly increasing. To admit scored outputs, the probabilities \(\Pr[M(\mathcal{D})\in\mathcal{S}]\) in the \((\mathcal{K},r)\)-outcomes guarantee need to be converted into expected values \(\mathbb{E}[M(\mathcal{D})]\). To that end, the integral over the right-tail distribution function of the probabilities in Definition 7 is taken. The expected-value \((\mathcal{K},r)\)-outcomes guarantees of \((\epsilon,\delta)\)-ADP and \((\alpha,\epsilon)\)-Renyi-DP can be shown to take the same forms as Equation (7) and Equation (8), by Lecuyer et al. (2019) and Hölder's inequality (as detailed in Appendix A.3) respectively.
The main result of this section establishes conditions under which a DP learner provides pointwise-certified robustness against general poisoning attacks up to size \(r\).
**Theorem 10** (Pointwise-certified robustness by outcomes guarantee).: _Consider a training dataset \(\mathcal{D}\), an input instance \(\mathbf{x}\), and a randomised learner \(M\) that preserves a \((\mathcal{K},r)\)-outcomes guarantee. Let_
\[l_{1}=\arg\max_{l\in\mathcal{L}}I_{l}(\mathbf{x},M(\mathcal{D}))\]
_denote the label predicted on \(\mathbf{x},\mathcal{D}\) under multinomial interpretation of Definition 1. If there exist \(\mathrm{K}_{upper},\mathrm{K}_{lower}\in\mathcal{K}\) such that_
\[\mathrm{K}_{lower}^{-1}(I_{l_{1}}(\mathbf{x},M(\mathcal{D})))>\] \[\max_{l\in\mathcal{L}\setminus\{l_{1}\}}\mathrm{K}_{upper}(I_{l} (\mathbf{x},M(\mathcal{D})))\enspace,\]
_then \(I\circ M\) is pointwise-certified robust to radius \(r\) about dataset \(\mathcal{D}\) at input \(\mathbf{x}\) (see Definition 3)._
The proof can be found in Appendix A.1.
### Algorithmic Implementation
The very nature of data poisoning attacks intrinsically requires pointwise certifications to incorporate modifications to both the training and testing processes. The remainder of this section illustrates the steps required to produce a prediction and certification pair \((l,r)\) for a test-time sample \(\mathbf{x}\) in the testing dataset \(\mathcal{D}_{e}\), the details of which are further elaborated in Algorithm 1.
Training.Any certification using the aforementioned DP-based processes inherently requires the model \(M_{DP}\) to be randomised. The SGM achieves the requisite randomness for DP via sub-sampling and noise injection. The randomised model \(M_{DP}\) is instantiated \(p\) times, yielding \((\hat{M}_{DP_{1}},\hat{M}_{DP_{2}},...,\hat{M}_{DP_{p}})\). As each instance is a model with an identical training process,
| | Training-time: Modification | Training-time: Addition/Deletion | Testing-time: Statistical certification | Testing-time: Pointwise certification |
| --- | --- | --- | --- | --- |
| Statistical DP (Ma, Zhu, and Hsu 2019) | ✓ | ✓ | ✓ | ✗ |
| Randomized smoothing (Rosenfeld et al. 2020; Weber et al. 2021) | ✓ | ✗ | ✓ | ✓ |
| Bagging (Jia, Cao, and Gong 2020; Levine and Feizi 2021) | ✓ | ✓ | ✓ | ✓ |
| This Paper | ✓ | ✓ | ✓ | ✓ |

Table 1: A summary of different approaches of certified defence against data poisoning attacks. We investigate them from two perspectives. The training-time threat model: whether it permits the more general addition/deletion of training samples or only modification. The testing-time certification: whether it provides the more strict pointwise certification for each test sample or only statistical certification over all test samples.
the training of such is an embarrassingly parallel process, a fact that can be leveraged to improve training efficiency. Further efficiencies for larger datasets can be found by training over a subset \(\mathcal{D}_{sub}\subseteq\mathcal{D}\). Under the SGM, the total privacy cost with regard to \(\mathcal{D}\) is calculated by accumulating the privacy cost of each update with a subsample from \(\mathcal{D}\). Therefore, we can analogously compute the privacy cost of using a sub-training dataset \(\mathcal{D}_{sub}\subseteq\mathcal{D}\) with regard to the entire training dataset \(\mathcal{D}\) by reducing the number of updates under the SGM.
Rather than exploiting the SGM, an alternate approach is to construct sub-training datasets across a set of model instances through bagging. Taking such an approach allows each model instance to work solely on its subset, without knowing the entire training dataset. The privacy gains can be quantified by way of DP amplification [1]. In either case, the level of privacy yielded by the SGM or by bagging can then be translated into certifications of radius \(r\) by Theorem 8, by deriving the set of outcomes-guarantee bound functions \(\mathcal{K}\) through Algorithm 1.
Certification.In general, the certification involves estimating the upper and lower bounds of inferred scores for each label and searching for the maximum radius that satisfies Theorem 10. The _multinomial label_ and _probability scores_ outputs require similar but slightly different treatments. For the former, each testing sample \(\mathbf{x}_{i}\) is passed through the set of model instances \((\hat{M}_{DP_{1}},\hat{M}_{DP_{2}},...,\hat{M}_{DP_{p}})\). From this, the top-\(2\) most frequent labels are selected and labelled as \(l_{1i}\) and \(l_{2i}\). Uncertainties from sampling are then quantified through the lower and upper confidence bounds of \(\Pr[M_{DP}(\mathbf{x}_{i},\mathcal{D})=l_{1i}]\) and \(\Pr[M_{DP}(\mathbf{x}_{i},\mathcal{D})=l_{2i}]\), which are constructed to a confidence level \(1-\eta\) by the SimuEM method of Jia, Cao, and Gong (2020), yielding \(\underline{p_{l_{1i}}}\) and \(\overline{p_{l_{2i}}}\) respectively.
Algorithm 1 demonstrates that a binary search can then be used to identify the maximum certified radius \(r_{i}\) of the optimisation problem in Theorem 10, subject to
\[\mathrm{K}_{lower}^{-1}(\Pr\left[M_{DP}(\mathbf{x}_{i},\mathcal{D})=l_{1i} \right])=\mathrm{K}_{lower}^{-1}(\underline{p_{l_{1i}}})\]
\[\max_{l_{j}\in\mathcal{C}\setminus\{l_{1i}\}}\mathrm{K}_{upper}(\Pr\left[M_{ DP}(\mathbf{x}_{i},\mathcal{D})=l_{ji}\right])=\mathrm{K}_{upper}(\overline{p_{l_{2i}}}) \tag{10}\]
Here the bound functions \(\mathrm{K}_{upper},\mathrm{K}_{lower}\in\mathcal{K}\) are derived via Lemma 9. The outputs for a testing sample \(\mathbf{x}_{i}\) are the predicted label \(l_{1i}\) with certified radius \(r_{i}\).
The process for the _probability scores_ case is similar but involves collecting the probability scores from each model instance and computing the confidence interval for the expected values \(\mathbb{E}[y_{l_{i}}(\mathbf{x}_{i},M_{DP}(\mathcal{D}))]\) via Hoeffding's inequality [16] or empirical Bernstein bounds [10].
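The certification step of Algorithm 1 can be summarised by the following sketch: given the bounded class scores \(\underline{p_{l_{1}}}\) and \(\overline{p_{l_{2}}}\) and a routine that produces the bound functions for a candidate radius, a search returns the largest radius for which the condition of Theorem 10 holds. The helper `bounds_for_radius` is a placeholder for the Theorem 8 and Lemma 9 computations and is assumed, not reproduced, here.

```python
def certified_radius(p1_lower, p2_upper, bounds_for_radius, r_max=1000):
    """Largest r (up to r_max) with K_lower^{-1}(p1_lower) > K_upper(p2_upper).

    bounds_for_radius(r) is assumed to return a pair of callables
    (K_lower_inv, K_upper) derived from the outcomes guarantee at radius r,
    and certifiability is assumed monotone non-increasing in r.
    Returns 0 if not even r = 1 can be certified.
    """
    def certified(r):
        K_lower_inv, K_upper = bounds_for_radius(r)
        return K_lower_inv(p1_lower) > K_upper(p2_upper)

    if not certified(1):
        return 0
    lo, hi = 1, 1
    while hi < r_max and certified(hi):       # exponential search for a failing radius
        lo, hi = hi, min(2 * hi, r_max)
    if certified(hi):
        return hi
    while hi - lo > 1:                        # binary search on the bracketing interval
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if certified(mid) else (lo, mid)
    return lo
```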
## Experiments
To verify the effectiveness of our proposed pointwise-certified defence, we conducted experiments across MNIST, Fashion-MNIST, and CIFAR-\(10\) for varying levels of added noise \(\sigma\). For MNIST and Fashion-MNIST, training used the LeNet-5 architecture [12], with class probabilities/expectations estimated from \(1000\) model instances trained on the entire dataset. In contrast, for CIFAR-\(10\) we trained both the example model from the Opacus tutorial [15] (Opa-tut), a rather simple architecture, and the more complex ResNet-18 [13] for a comprehensive evaluation. Both were estimated based upon \(500\) instances trained on sub-datasets of size \(10000\).
Across all experiments we adjust the sample ratio \(q\) to yield a batch size of \(128\), with training conducted using ADAM with a learning rate of \(0.01\) to optimise the cross-entropy loss. The clip size \(C\) is fine-tuned for each experiment (around \(1.0\) on MNIST, \(25.0\) on CIFAR-10). In each case, uncertainties were estimated for a confidence interval suitable for \(\eta=0.001\). All experiments were conducted in PyTorch using a single NVIDIA RTX \(2080\) Ti GPU with \(11\) GB of GPU RAM.
To quantify performance the proportion of samples correctly predicted with a certification of at least \(r\) was used, henceforth known as the _certified accuracy_. This quantity takes the form
\[CA_{r}=\frac{\sum_{\mathbf{x}_{i}\in\mathcal{D}_{e}}\mathbb{I}\left(l_{i}=y_{i }\right)\cdot\mathbb{I}\left(r_{i}\geq r\right)}{|\mathcal{D}_{e}|}\enspace, \tag{11}\]
where \(\mathbf{x}_{i}\) and \(y_{i}\) are the input instances and corresponding ground truth labels for a testing sample, and \(l_{i}\) and \(r_{i}\) are the predicted label and corresponding certified radius returned by the defence model. We also investigate the median and maximum value of certification achieved among all samples.
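Equation (11) amounts to the following short computation over per-sample outputs; the arrays below are toy values for illustration only, not experimental results.

```python
import numpy as np

def certified_accuracy(pred_labels, true_labels, radii, r):
    """Fraction of test samples predicted correctly with certified radius >= r (Eq. 11)."""
    pred_labels, true_labels, radii = map(np.asarray, (pred_labels, true_labels, radii))
    return np.mean((pred_labels == true_labels) & (radii >= r))

# Toy example with made-up outputs of the defence.
print(certified_accuracy([1, 0, 2, 1], [1, 0, 1, 1], [30, 5, 12, 0], r=10))  # -> 0.25
```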
We further divide our experiments into four different frameworks. These are ADP with either multinomial labels (ADP-multinomial) or probability scores (ADP-prob-scores) output, and then Renyi-DP with either multinomial labels (RDP-multinomial) or probability scores (RDP-prob-scores) output. In each case, Theorem 10 is employed to generate a guaranteed certificate of defence to data poisoning attacks.
To validate the efficacy of our technique, these results are considered against prior works, specifically the DP-based defence method of Ma, Zhu, and Hsu (2019) (Baseline-DP), the bagging-based defences of Jia, Cao, and Gong (2020) and Chen et al. (2020) (Baseline-Bagging), and the deterministic Deep Partition Aggregation (DPA) method of [10] (Baseline-DPA). Of these, conceptual similarities between our work and Baseline-DP allow both techniques to be compared while utilising the same trained models. However, it must be noted that Ma, Zhu, and Hsu (2019) bound Baseline-DP in terms of statistically certified accuracy, which is calculated as the lower bound of the expected accuracy, with confidence level \(1-\eta\), among the obtained model instances. As for Baseline-Bagging, it provides the same pointwise-certified defence as we do. Hence, by letting the number of base classifiers equal the number of model instances and adjusting the size of the sub-training datasets, we force Baseline-Bagging to have the same certified accuracy at radius \(r=0\). The underlying assumptions of the DPA method differ significantly from ours. DPA only applies to models that are _deterministic_, meaning that for a given training dataset the parameters of the resulting model should always be the same. This approach requires specific model architectures and a deterministic training process, while our method applies to more general situations. Compared with standard training approaches, the extra step involved in incorporating
Figure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\(\sigma\)); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius \(r\) (symmetric difference) while the Y-axis is the corresponding certified accuracy \(CA_{r}\) at radius \(r\).
SGM introduces a negligible difference in training time. Note that the change in the relative performance of Baseline-Bagging and Baseline-DPA from the original papers is the product of different model architectures. We ensure all methods apply the same model architecture for fair comparisons (Appendix A.4).
Figure 1 demonstrates that our method consistently provides a more robust certified defence across the full suite of experiments. In the case of MNIST and Fashion-MNIST, for a given radius, RDP-multinomial is capable of providing the highest certified accuracy in most cases, which means more testing samples are certified to be correctly predicted within this radius. For example, in the experiments on Fashion-MNIST, RDP-multinomial achieves \(52.21\%\) certified accuracy at radius \(r=80\), whereas the other baselines only achieve at most \(27.23\%\) certified accuracy. Additionally, our method can generate the largest certification as shown, which provides a better defence for the confident testing samples. As illustrated in the experiments on CIFAR-\(10\) for both the Opa-tut and ResNet-\(18\) models, RDP-prob-scores outperforms the other baselines with regard to the largest certified radius, doubling its size. Based upon these results, when considering Fashion-MNIST our method achieves a \(56\%\) and \(130\%\) improvement in the median and maximum value respectively when compared to Baseline-Bagging (further details can be found in Appendix A.6).
As the bound functions are the same in both multinomial and probability scores methods, the difference between them can be directly attributed to the differences in how these techniques construct their upper and lower bounds. As indicated in Theorem 10, the larger the gap between the lower and upper bounds, the larger radius it can certify. Intuitively, if the defence model is confident with the predicted label of an easy testing sample, then this sample should be more resilient to poisoned examples in the training dataset. In the multinomial method, the uncertainty within each model instance is ignored by selecting a single label, while the uncertainty remains in the probability scores method. As a consequence of this, the multinomial method provides a higher radius for moderately confident examples but the probability scores method is able to certify a larger radius for the very confident ones. Further improvements can be found in the application of Renyi-DP, relative to Approximate-DP, due to the former providing a more precise accounting of model privacy. This in turn allows tighter bounds to be constructed, with performance further enhanced by way of Theorem 8.
The influence of the magnitude of injected noise \(\sigma\) is shown in the left-hand column of Figure 1. These results broadly align with previous works, in that adding more noise can produce larger robustness guarantees (a larger certified radius), at the cost of decreased predictive performance upon un-attacked datasets (\(r=0\)). An increase in the semantic complexity of the dataset also limits the amount of noise that can be tolerated. It is also important to note that the sample rate (\(q\in(0,1]\)) and robustness are negatively correlated, as increasing the sample rate requires that more training examples are utilised in constructing the output, which provides weaker privacy guarantees. Therefore, a grid search is usually required to find the best combination of parameters (\(\sigma\), \(q\), clip size).
Limitations and Future Directions.The nature of the SGM inherently requires a significant allocation of computational resources, due to the need to train multiple models from scratch in parallel. While improvements in these resource demands may be possible, at this stage any direct application of this work would likely be restricted to systems that are considered particularly sensitive to adversarial behaviours. We also note that while this work improves upon the achievable bounds for certification by exploiting RDP in the context of the SGM, further gains may be possible by extending these proofs to Approximate DP via the conversion from RDP to ADP (Balle et al., 2019).
## Conclusion
By carefully exploiting DP, the SGM, and bagging, this work presents a mechanism for tightening guarantees of pointwise-certified robustness relative to prior implementations. This is made possible by calculating group privacy directly from the SGM. When compared to the current state-of-the-art, our technique can produce a more than \(50\%\) improvement in the median certification.
## Acknowledgements
This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. This work was also supported in part by the Australian Department of Defence Next Generation Technologies Fund, as part of the CSIRO/Data61 CRP AMLC project. Sarah Erfani is in part supported by the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680.
|
2303.02981 | On competition through growth reduction | We consider a population organised hierarchically with respect to size in
such a way that the growth rate of each individual depends only on the presence
of larger individuals. As a concrete example one might think of a forest, in
which the incidence of light on a tree (and hence how fast it grows) is
affected by shading of taller trees. The model is formulated as a delay
equation, more specifically a scalar renewal equation, for the population birth
rate. After discussing the well-posedness of the model, we analyse how many
stationary birth rates the equation can have in terms of the functional
parameters of the model. In particular we show that, under reasonable and
rather general assumptions, only one stationary birth rate can exist besides
the trivial one (associated to the state in which there are no individuals and
the population birth rate is zero). We give conditions for this non-trivial
stationary birth rate to exist and we analyse its stability using the principle
of linearised stability for delay equations. Finally we relate the results to
an alternative formulation of the model taking the form of a quasilinear
partial differential equation for the population size-density. | Carles Barril, Àngel Calsina, Odo Diekmann, József Z. Farkas | 2023-03-06T09:15:52Z | http://arxiv.org/abs/2303.02981v2 | # On competition through growth reduction
###### Abstract
We consider a population organised hierarchically with respect to size in such a way that the growth rate of each individual depends only on the presence of larger individuals. As a concrete example one might think of a forest, in which the incidence of light on a tree (and hence how fast it grows) is affected by shading of taller trees. The model is formulated as a delay equation, more specifically a scalar renewal equation, for the population birth rate. After discussing the well-posedness of the model, we analyse how many stationary birth rates the equation can have in terms of the functional parameters of the model. In particular we show that, under reasonable and rather general assumptions, only one stationary birth rate can exist besides the trivial one (associated to the state in which there are no individuals and the population birth rate is zero). We give conditions for this non-trivial stationary birth rate to exist and we analyse its stability using the principle of linearised stability for delay equations. Finally we relate the results to an alternative formulation of the model taking the form of a quasilinear partial differential equation for the population size-density.
_We dedicate this paper to Professor Glenn F. Webb, a friend, mentor and distinguished scientist. Over the past 50 years Glenn has made tremendous contributions to a wide variety of research domains, ranging from semigroup theory to cancer modelling. Structured population dynamics has been a central fixture to Glenn's research interest for decades. Indeed, Glenn is recognised as one of the worldwide leading experts of age- and size-structured population dynamics, an exciting field of research, which has enjoyed tremendous growth in recent decades. We are happy to have the opportunity to make a small contribution to this field and this special issue honouring Glenn and celebrating his achievements._
## 1 Introduction
In terms of numbers, the dynamics of a population is generated by mortality and reproduction. In structured population models [23, 19, 17], individuals are characterized by variables such as age, size or other (physiological) characteristics. In that case, development/maturation needs to be modelled too (a trivial task in the case of age, but certainly not in general!).
As explained in detail in [1], density dependence is most easily incorporated in a two step procedure: i) first introduce the environmental condition via the requirement that individuals are independent from one another when this condition is prescribed as a function of time; ii) next model feedback by specifying how, at any particular time, the environmental condition is influenced by the population size and composition. In the inspiring book [20] detailed ecological motivation is presented for including in this feedback loop the impact of density dependence on development and maturation.
Here our aim is to investigate the consequences of density dependence that directly affects only development (fertility is affected indirectly, since it depends on the developmental stage of the individual). We do so for a one-dimensional i-state (i.e., the variable capturing the relevant differences
among individuals 'lives' on the real line), so for an i-state space that comes equipped with an order relation. In fact we shall assume that the presence of 'larger' individuals has a negative impact on the growth rate of 'smaller' individuals (as a motivating example one might think of trees and shading, with the i-state interpreted as 'height'; but please note that we ignore spatial structure and that, consequently, the model is but a caricature).
The organisation of the paper is as follows. In Section 2 we present the biological assumptions of the model and we deduce a scalar nonlinear renewal equation for the population birth rate (the so called delay formulation). In Section 3 a dynamical systems framework for the renewal equation is outlined. In Section 4 we give conditions guaranteeing the existence of a non-zero stationary birth rate. In Section 5 we apply the principle of linearised stability for delay equations [11] to prove that, for a certain two-parameter family of fertility functions, such a stationary birth rate (whenever it exists) is locally asymptotically stable. We also show that, under natural hypotheses on the ingredients, the zero stationary birth rate is a global attractor when it is the only stationary birth rate.
In Appendix A a technical result needed in Section 5 is shown. In Appendix B a more classical formulation of the model, taking the form of a first order PDE involving non-local functionals, is presented. There we show that the conditions guaranteeing the existence of stationary population densities (with respect to height) coincide with the conditions guaranteeing non-trivial stationary birth rates in the delay formulation. This makes sense since both formulations model the same phenomena (although they are independently derived from biological assumptions). Such a phenomenological relation between the two formulations suggests that the stability results for the delay formulation can be translated to the PDE formulation (as indeed is done in [3]). Although this issue is not addressed rigorously in the present paper, some comments are included in the concluding remarks section.
## 2 The model and its delay formulation
Individuals are fully characterized by a variable \(x\), taking values in \(\mathbb{R}_{+}\). In general, \(x\) is called i-state but here, for clarity, we call it 'height', the point being that we motivate our assumptions about interaction in terms of competition for light (this phenomenon is also addressed mathematically [24, 18, 20], among others). Indeed, we assume that the growth rate \(g\) of an individual of height \(x\) does not depend on \(x\) directly, but only indirectly, as it depends on the amount of light the individual receives per unit of time. We assume that the latter, in turn, is fully determined by the number \(E(x,t)\) of individuals that are taller than \(x\) (we call \(E\) an interaction variable, since it mediates how the environmental condition, here light intensity, is influenced by the extant population). We assume that the per capita death rate \(\mu\) and the per capita reproduction rate \(\beta\) only depend on the height \(x\). In fact we assume that \(\mu\) is constant, i.e., independent of \(x\), while \(\beta\) is a non-decreasing function of \(x\).
We assume that all individuals are born with the minimal height \(x_{m}\) and that \(g\) is positive (we do not impose an upper bound on height). We assume that a density function \(u=u(x,t)\) exists such that the integral of \(u\) with respect to the first variable over an interval gives the number of individuals with size within this interval at time \(t\). This allows us to write
\[E(x,t)=\int_{x}^{\infty}u(s,t)ds, \tag{2.1}\]
so that the height of an individual evolves according to
\[X^{\prime}(t)=g(E(X(t),t)). \tag{2.2}\]
Let \(B(t)\) denote the population birth rate at time \(t\). Then \(B\) equals the influx at \(x_{m}\), which originates from reproduction by the extant population:
\[B(t)=\int_{0}^{\infty}\beta(y)u(y,t)dy \tag{2.3}\]
Let \(n(t,\cdot)\) denote the age density. We do not need to write a PDE and solve it in order to conclude that
\[n(t,a)=B(t-a)e^{-\mu a} \tag{2.4}\]
This allows us to rewrite (2.3) as
\[B(t)=\int_{0}^{\infty}\beta(S(a,t))B(t-a)e^{-\mu a}da \tag{2.5}\]
with \(x=S(a,t)\) specifying the height of an individual having age \(a\) at time \(t\) (and hence being born at time \(t-a\)).
We refer to section III.4 of [19], entitled "Integration along characteristics, transformation of variables, and the following of cohorts through time", for general considerations about switching between size- and age-densities. Here the situation is relatively simple, since the individuals taller than you are exactly those that are older than you, i.e., were born earlier than you. Or, in a formula
\[E(x,t)=\int_{\tau}^{\infty}B(t-\alpha)e^{-\mu\alpha}d\alpha \tag{2.6}\]
when \(x=S(\tau,t)\).
Next note that an individual that was born at time \(t-a\) has age \(\tau\) at time \(t-a+\tau\). The height \(y=y(\tau):=S(\tau,t-a+\tau)\) of such an individual evolves according to
\[\begin{split}\frac{dy}{d\tau}(\tau)&=g(E(y(\tau),t -a+\tau))\\ &=g(E(S(\tau,t-a+\tau),t-a+\tau))\\ &=g\left(\int_{\tau}^{\infty}B(t-a+\tau-\alpha)e^{-\mu\alpha}d \alpha\right)\end{split} \tag{2.7}\]
Noting that \(y(0)=x_{m}\) we obtain by integration that
\[\begin{split} S(a,t)&=y(a)\\ &=x_{m}+\int_{0}^{a}g\left(\int_{\tau}^{\infty}B(t-a+\tau-\alpha )e^{-\mu\alpha}d\alpha\right)d\tau\end{split} \tag{2.8}\]
Inserting (2.8) into (2.5) we obtain
\[B(t)=\int_{0}^{\infty}\beta\bigg{(}x_{m}+\int_{0}^{a}g\left(\int_{\tau}^{ \infty}e^{-\mu\alpha}B_{t}(\tau-a-\alpha)\mathrm{d}\alpha\right)\,\mathrm{d} \tau\bigg{)}\,\,e^{-\mu a}B_{t}(-a)\,\mathrm{d}a, \tag{2.9}\]
where
\[B_{t}(\theta):=B(t+\theta). \tag{2.10}\]
Notice that (2.9) can also be written as
\[B(t)=\int_{0}^{\infty}\beta\left(x_{m}+\int_{0}^{a}g\left(e^{-\mu(\tau-a)} \int_{a}^{\infty}e^{-\mu s}B_{t}(-s)\mathrm{d}s\right)\,\mathrm{d}\tau\right) \,\,e^{-\mu a}B_{t}(-a)\,\mathrm{d}a. \tag{2.11}\]
**Remark 2.1**.: _Notice that (2.9) can be understood directly from the description of the physical situation: it states that the birth rate at time \(t\) is obtained by integrating, over the age \(a\) of the mothers (whose density equals the birth rate at time \(t-a\) times their survival probability), their size-specific fertility, with size given by the birth size plus the integral with respect to \(\tau\) of the individual growth rate at age \(\tau\), which depends on how many individuals are larger (the integral of the density of individuals of age \(\alpha>\tau\), i.e. \(s=\alpha+a-\tau>a\))._
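For readers who want to see the delay formulation in action, the following sketch time-steps a discretised version of (2.9)/(2.11) with a left-endpoint rule and a truncated age integral. The choices of \(g\), \(\beta\), the parameter values, the truncation and the step size are all illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Illustrative ingredients (assumed, not from the paper)
mu, x_m = 1.0, 0.0
g = lambda z: 2.5 / (1.0 + z)                       # decreasing growth rate
beta = lambda x: 1.2 * np.maximum(0.0, x - 0.5)     # size-dependent fertility

dt, A_max, T = 0.05, 12.0, 40.0                     # step, age truncation, horizon
ages = dt * np.arange(1, int(A_max / dt) + 1)       # a_j = j*dt, j >= 1
J = len(ages)
surv = np.exp(-mu * ages)                           # e^{-mu a_j}

# history[k] approximates B(t - (k+1)*dt); start from a small constant history
history = np.full(J, 0.1)
traj = []

for n in range(int(T / dt)):
    # tail[j] ~ int_{a_j}^infty e^{-mu s} B(t - s) ds, truncated at A_max
    tail = np.cumsum((surv * history)[::-1])[::-1] * dt
    # size S(a_j, t) of a mother of age a_j, cf. (2.11): integrate g over tau in (0, a_j)
    S = np.empty(J)
    for j in range(J):
        tau = dt * np.arange(j + 1)                 # left endpoints of [0, a_j)
        S[j] = x_m + dt * np.sum(g(np.exp(-mu * (tau - ages[j])) * tail[j]))
    B_new = dt * np.sum(beta(S) * surv * history)   # right-hand side of (2.11)
    history = np.concatenate(([B_new], history[:-1]))
    traj.append(B_new)

print("approximate birth rate at t = T:", traj[-1])
```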
## 3 The dynamical systems framework
Equation (2.11) provides the delay formulation of the model (see Appendix B for the alternative PDE formulation). Here the state variable is the population birth rate history \(B_{t}:=B(t+\cdot)\), instead of the population density \(u(\cdot,t)\) with respect to height (characterized in Appendix B). Specifically, one can consider the state space (of the weighted birth rate histories)
\[\mathcal{X}=L^{1}_{\rho}(-\infty,0):=\left\{\phi\in L^{1}_{\text{loc}}(-\infty,0):||\phi||_{\mathcal{X}}=\int_{-\infty}^{0}e^{\rho s}|\phi(s)|ds<\infty \right\},\]
for some \(\rho>0\) (so \(\mathcal{X}\) contains, in particular, constant functions, and therefore the possible steady states) and the delay equation \(B(t)=\mathcal{F}(B_{t})\) with \(\mathcal{F}:\mathcal{X}\to\mathbb{R}\) defined by
\[\mathcal{F}(\phi)=\int_{0}^{\infty}\beta\left(x_{m}+\int_{0}^{a}g\left(e^{- \mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\phi(-s)\,\mathrm{d}s\right)\,\mathrm{d }\tau\right)\ e^{-\mu a}\phi(-a)\,\mathrm{d}a. \tag{3.1}\]
We denote by \(\mathcal{X}^{+}\) the standard positive cone in \(\mathcal{X}\).
As discussed in [10, II], the delay equation \(B(t)=\mathcal{F}(B_{t})\), together with an initial history \(B_{0}=\phi\in\mathcal{X}\), can be interpreted as an abstract Cauchy problem with a semilinear structure:
\[\begin{cases}\dfrac{d}{dt}\varphi(t)=A\varphi(t)+\mathcal{F}(\varphi(t))\delta _{0}\;,\\ \varphi(0)=\phi\in\mathcal{X}\end{cases} \tag{3.2}\]
where \(A\) is the generator of the linear semigroup defined as \(T_{A}(t)\phi:=\phi(t+\cdot)\mathds{1}_{-}(t+\cdot)\). Notice that the mapping \(t\mapsto T_{A}(t)\phi\) tells us how the population birth rate history would evolve without considering birth (and growth and mortality), since all these processes are summarised in the \(\mathcal{F}\) function. The previous setting makes it possible to analyse the well posedness of the problem and some dynamical properties by means of a generalised variation of constants equation. The standard variation of constants equation cannot be applied in a straightforward manner (as in [21]) since the semilinear term of the problem (namely \(\phi\mapsto\mathcal{F}(\phi)\delta_{0}\)) does not take values in \(\mathcal{X}\), but in the space of measures.
Here the theory included in the references mentioned above ([10] and [11]) applies provided that \(\mathcal{F}\) is continuously differentiable, which is stated in the following theorem, and proved in Appendix A. We assume that \(g\) is smoothly extended to the whole of \(\mathbb{R}\), implying that the right hand side of (3.1) is defined on the whole Banach space \(\mathcal{X}\) (so even for non positive \(\phi\)). Of course negative birth rates do not have biological meaning, but they allow us to work on the whole space (recall that the positive cone of \(L^{1}\) has empty interior).
**Theorem 3.1**.: _Assume that \(g:\mathbb{R}\to\mathbb{R}\) and \(\beta:\mathbb{R}^{+}\to\mathbb{R}\) have a bounded and globally Lipschitzian first derivative. Also assume that \(g\) is bounded and positive and bounded away from 0 and that \(\beta\) is non-negative. Then the map \(\mathcal{F}:\mathcal{X}\to\mathbb{R}\) defined in (3.1) is continuously differentiable with bounded derivative provided that \(\rho<\mu/5\)._
**Theorem 3.2** (Existence and uniqueness).: _Under the hypotheses of the previous theorem, for any \(\phi\in\mathcal{X}\), there exists a unique \(B\in L^{1}_{\text{loc}}(\mathbb{R})\) such that \(B(t)=\phi(t)\) for \(t<0\) and \(B(t)\) satisfies (2.11) for \(t\geq 0\). Moreover, \(B\) belongs to the positive cone of \(L^{1}_{\text{loc}}(\mathbb{R})\) whenever \(\phi\in\mathcal{X}^{+}\)._
Proof.: It is an immediate consequence of Theorem 3.1 (notice that a bounded derivative implies global Lipschitz continuity), Theorem 3.15 in [11] (which implies the equivalence between (2.11) and (3.2)) and Theorem 2.2 in [11] (which implies the existence and uniqueness of mild solutions of (3.2) and the generation of a nonlinear semigroup \(\Sigma(t;\phi)\) satisfying \(\Sigma(t;\phi)=B_{t}\)). The facts that the linear semigroups in Theorem 2.2 of [11] are positive and \(\mathcal{F}\) maps the positive cone of \(\mathcal{X}\) to \(\mathbb{R}^{+}\) imply that \(B\) belongs to the positive cone whenever \(\phi\in\mathcal{X}^{+}\).
Let \(B\in\mathbb{R}\) be a stationary population birth rate, i.e. \(B\) satisfies \(B=\mathcal{F}(\bar{B})\) where \(\bar{B}\in\mathcal{X}\) is defined by \(\bar{B}(\theta)=B\) for (almost) all \(\theta\in(-\infty,0)\). The following theorem determines the local stability of \(\bar{B}\) in terms of properties of \(D\mathcal{F}(\bar{B})\). Since \(D\mathcal{F}(\bar{B})\) is a bounded linear operator from \(\mathcal{X}\) to \(\mathbb{R}\), the Riesz Representation Theorem implies that \(D\mathcal{F}(\bar{B})\) can be written as
\[D\mathcal{F}(\bar{B})\phi=\int_{0}^{\infty}k(s)\phi(-s)ds=:\langle k,\phi\rangle\]
with \(k\) an element of the dual space of \(\mathcal{X}\), represented by
\[\mathcal{X}^{\prime}=L^{\infty}_{\rho}(0,\infty):=\left\{f\in L^{\infty}(0, \infty):||f||_{\mathcal{X}^{\prime}}=\sup_{s\in(0,\infty)}e^{\rho s}|f(s)|< \infty\right\}.\]
**Theorem 3.3**.: _(Theorem 3.15 in [11]) Under the hypotheses of Theorem 3.1, let \(\bar{B}\in\mathcal{X}\) be a stationary state of (2.11) and let \(k\in\mathcal{X}^{\prime}\) represent \(DF(\bar{B})\). Consider the characteristic equation_
\[0=1-\hat{k}(\lambda), \tag{3.3}\]
_where \(\hat{k}\) is the Laplace transform of \(k\) (i.e. \(\hat{k}(\lambda)=\int_{0}^{\infty}e^{-\lambda s}k(s)ds\))._
1. _If all the roots of the characteristic equation (_3.3_) have negative real part, then the stationary state_ \(\bar{B}\) _is locally asymptotically stable._
2. _If there exists at least one root with positive real part, then the steady state_ \(\bar{B}\) _is unstable._
## 4 Existence and characterization of steady states
A stationary solution of the problem can be found by simply assuming that \(B\) in (2.11) is independent of \(t\). Of course there is a trivial stationary solution \(B=0\) that corresponds to the absence of individuals. When dealing with non-trivial stationary solutions of (2.11), we make the following abuse of notation to ease readability: we use \(\bar{B}\) to denote a constant function in \(\mathcal{X}\) and \(\bar{B}\in\mathbb{R}\) as the image it takes (so that we let the context tell whether \(\bar{B}\) refers to the constant function or the value it takes). With this in mind, and taking into account (2.11), it follows that a non-trivial equilibrium \(\bar{B}\in\mathcal{X}\) is a constant function whose image is a non-zero solution of
\[1=\int_{0}^{\infty}\beta\left(x_{m}+\int_{0}^{a}g\left(B\frac{e^{-\mu\tau}}{ \mu}\right)\,\mathrm{d}\tau\right)\ e^{-\mu a}\,da=:R(B). \tag{4.1}\]
Under natural hypotheses concerning \(\beta\) and \(g\), which essentially amount to assuming that larger sizes correspond to larger fertilities, that more competition (more individuals higher in the hierarchy than the one we are observing) means slower growth, and that the first generation progeny of an individual is finite (more precisely, that \(R(0)<\infty\)), we readily obtain the following theorem.
**Theorem 4.1**.: _Under the hypotheses of Theorem 3.1 and the assumption that \(\beta\) is a strictly increasing function on \([x_{m},\infty)\), and that \(R(0)<\infty\) and \(g\) is a strictly decreasing function on \([0,\infty)\) vanishing at infinity, there exists a non-trivial equilibrium of (2.11) if and only if_
\[R_{0}:=R(0)=\int_{0}^{\infty}\beta\big{(}x_{m}+g(0)a\big{)}\,e^{-\mu a}\,da>1 \qquad\text{and}\qquad\frac{\beta(x_{m})}{\mu}<1,\]
_and there is at most one such non-trivial equilibrium._
Proof.: Under the hypotheses, a (double) application of the Lebesgue dominated convergence theorem yields that \(R\) is a well defined, continuous and strictly decreasing function on \([0,\infty)\), tending to \(\beta(x_{m})/\mu\) at infinity. Hence the equation \(R(B)=1\) has a positive solution, necessarily unique, if and only if \(R(0)>1\) and \(\beta(x_{m})/\mu<1\).
**Remark 4.2**.: _As usual, \(R_{0}\) can be interpreted as the so-called basic reproduction number, i.e., the expected total number of offspring of an individual experiencing the virgin environment, i.e., when there are no individuals older/larger than it._
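The existence condition can be checked numerically for concrete ingredients. The sketch below evaluates \(R(B)\) from (4.1) by nested quadrature and locates the unique root of \(R(B)=1\) by bisection; the specific \(g\), \(\beta\), parameter values and root bracket are illustrative assumptions only, chosen so that \(R_{0}>1\) and \(\beta(x_{m})/\mu<1\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu, x_m = 1.0, 0.0
g = lambda z: 2.5 / (1.0 + z)                      # decreasing, vanishing at infinity
beta = lambda x: 1.2 * max(0.0, x - 0.5)           # increasing fertility, beta(x_m) = 0

def size_at_age(a, B):
    # S(a) = x_m + int_0^a g(B e^{-mu tau} / mu) d tau, cf. (4.2)
    return x_m + quad(lambda tau: g(B * np.exp(-mu * tau) / mu), 0.0, a)[0]

def R(B):
    # Expected lifetime offspring under the stationary environment, cf. (4.1)
    integrand = lambda a: beta(size_at_age(a, B)) * np.exp(-mu * a)
    return quad(integrand, 0.0, np.inf, limit=200)[0]

print("R0 =", R(0.0))                              # basic reproduction number
B_bar = brentq(lambda B: R(B) - 1.0, 1e-6, 100.0)  # bracket assumed to contain the root
print("nontrivial stationary birth rate:", B_bar)
```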
The age density of a steady state is given by \(\bar{n}(a)=\bar{B}e^{-\mu a}\) (see (2.4)).
Let us set
\[\bar{S}(a)=x_{m}+\int_{0}^{a}g\big{(}\int_{\tau}^{\infty}\bar{B}e^{-\mu\alpha} \mathrm{d}\alpha\big{)}\mathrm{d}\tau=x_{m}+\int_{0}^{a}g\big{(}\bar{B}\frac{ e^{-\mu\tau}}{\mu}\big{)}\mathrm{d}\tau \tag{4.2}\]
(the size of an individual of age \(a\) at the nontrivial equilibrium, see (2.8)).
The density \(\bar{u}(x)\) with respect to size of the same population distribution can then be computed by taking into account the equality
\[\int_{\alpha_{1}}^{\alpha_{2}}\bar{n}(a)da=\int_{\bar{S}(\alpha_{1})}^{\bar{S} (\alpha_{2})}\frac{\bar{n}\big{(}\bar{S}^{-1}(x)\big{)}}{\bar{S}^{\prime} \big{(}\bar{S}^{-1}(x)\big{)}}dx=\int_{\bar{S}(\alpha_{1})}^{\bar{S}(\alpha_ {2})}\bar{u}(x)dx,\]
which follows from the change of variable \(x=\bar{S}(a)\) and the interpretation of \(\bar{n}\) and \(\bar{u}\). Thus, we find
\[\bar{u}(x)=\frac{\bar{n}\big{(}\bar{S}^{-1}(x)\big{)}}{\bar{S}^{\prime}\big{(} \bar{S}^{-1}(x)\big{)}}=\frac{\bar{B}e^{-\mu\bar{S}^{-1}(x)}}{g\big{(}\bar{B} \frac{e^{-\mu\bar{S}^{-1}(x)}}{\mu}\big{)}}, \tag{4.3}\]
which is an alternative expression to (B.2).
## 5 Stability of steady states
The linearisation of (2.9) around the origin is simply (see A.2 in Appendix A),
\[y(t)=D\mathcal{F}(0)y_{t}=\int_{0}^{\infty}\beta\big{(}x_{m}+g(0)a\big{)}\,e^ {-\mu a}y(t-a)\mathrm{d}a=:\int_{0}^{\infty}k(a)y_{t}(-a)\mathrm{d}a \tag{5.1}\]
(as indeed one can understand by using only the interpretation: it describes the linear population model corresponding to the virgin environment \(E=0\)).
**Theorem 5.1**.: _Under the hypotheses of Theorem 3.1, the trivial equilibrium is (locally) exponentially stable if \(R_{0}<1\) and unstable if \(R_{0}>1\)._
Proof.: Clearly the kernel \(k\in L^{\infty}_{\rho}(0,\infty)\) corresponds to the Riesz representation of \(D\mathcal{F}(0)\). Then, according to Theorem 3.3, the stability of the steady state is determined by the sign of the real part of the zeroes of the characteristic equation \(\hat{k}(\lambda)=1\), where \(\hat{k}\) stands for the Laplace transform of \(k\).
\(\hat{k}(\lambda)\) is defined (at least) for \(Re(\lambda)>-\rho.\) Moreover, since the kernel \(k\) is positive, \(\hat{k}\) is for real \(\lambda\) a decreasing function with limit \(0\) at infinity. Hence there is at most one real solution \(\hat{\lambda}\) of the characteristic equation, which indeed exists and is positive if \(\hat{k}(0)=R_{0}>1.\) So then the trivial equilibrium is unstable.
When \(\hat{k}(0)=R_{0}<1,\) if there is a real root, it is negative. Moreover, if a non-real \(\lambda\) is a root of the characteristic equation, then \(1=\hat{k}(\lambda)=Re(\hat{k}(\lambda))<\hat{k}(Re\lambda),\) which implies, by the fact that \(\hat{k}\) tends to \(0,\) that there is a real root \(\hat{\lambda}\) larger than \(Re(\lambda)\). As such a real root is necessarily negative, the trivial equilibrium is locally exponentially stable.
**Theorem 5.2**.: _If \(R_{0}<1\) and the hypotheses of Theorem 4.1 hold, all solutions of (2.11) tend exponentially to \(0\) as \(t\to\infty.\)_
Proof.: For a given solution let us write (cf. (2.8))
\[S(a,t)=x_{m}+\int_{0}^{a}g\left(\int_{\tau}^{\infty}e^{-\mu\alpha}B_{t}(\tau- a-\alpha)\mathrm{d}\alpha\right)\,\mathrm{d}\tau,\]
the size at time \(t\) of an individual of age \(a.\) From (2.9) we can write
\[\begin{split} B(t)&=\int_{0}^{\infty}\beta(S(a,t))e ^{-\mu a}B(t-a)\,\mathrm{d}a\\ &=\int_{-\infty}^{0}\beta(S(t-s,t))e^{-\mu(t-s)}B(s)\,\mathrm{d}s +\int_{0}^{t}\beta(S(t-s,t))e^{-\mu(t-s)}B(s)\,\mathrm{d}s\\ &=:f(t)+\int_{0}^{t}\beta(S(t-s,t))e^{-\mu(t-s)}B(s)\,\mathrm{d}s \\ &\leq f(t)+\int_{0}^{t}\beta(x_{m}+g(0)(t-s))e^{-\mu(t-s)}B(s)\, \mathrm{d}s.\end{split} \tag{5.2}\]
The kernel \(k(a)=\beta(x_{m}+g(0)a)e^{-\mu a}\) of the linear Volterra integral equation
\[y(t)=f(t)+\int_{0}^{t}k(t-s)y(s)ds \tag{5.3}\]
has a nonnegative resolvent \(r\) (meaning that \(r(t)=k(t)+\int_{0}^{t}k(t-s)r(s)ds\) and \(y(t)=f(t)+\int_{0}^{t}r(t-s)f(s)ds\)) (see Theorem 2.3.4 in [14]). Then by a generalized Gronwall lemma, one obtains \(B(t)\leq y(t)\) where \(y(t)\) is the solution of (5.3).
Indeed, using the usual notation for convolution, (5.2) can be written as \(B\leq f+k*B\) and so \(B=f-g+k*B\) for \(g=f+k*B-B\geq 0.\) Then we have
\[B=f-g+r*(f-g)=f+r*f-(g+r*g)=y-(g+r*g)\;\Rightarrow\;B\leq y,\]
since \(r\) and \(g\) are nonnegative (cf. Theorem 9.8.2 in [14]). The claim follows since \(y(t)\) tends exponentially to \(0\) when \(R_{0}<1\) by Theorem 3.12 in [11] and the final part of the proof of Theorem 5.1.
Let us recall the notation
\[\bar{S}(a)=x_{m}+\int_{0}^{a}g\left(\bar{B}\frac{e^{-\mu\tau}}{\mu}\right)\, \mathrm{d}\tau, \tag{5.4}\]
for the size of an individual of age \(a\) at the non-trivial equilibrium.
Let us now compute the linearisation of (2.9) around the nontrivial equilibrium \(\bar{B}\) using (5.4). For this we set \(B(t)=\bar{B}+y(t)\) and write (formally)
\[\begin{split}&\bar{B}+y(t)\\ &=\int_{0}^{\infty}\left(\beta\big{(}\bar{S}(a)\big{)}+\beta^{ \prime}\big{(}\bar{S}(a)\big{)}\int_{0}^{a}g^{\prime}\left(\bar{B}\frac{e^{-\mu \tau}}{\mu}\right)\right.\\ &\qquad\qquad\times\int_{\tau}^{\infty}e^{-\mu\alpha}y_{t}(-a+\tau -\alpha)\mathrm{d}\alpha\mathrm{d}\tau+o(y_{t})\right)e^{-\mu a}\big{(}\bar{B} +y_{t}(-a)\big{)}\mathrm{d}a,\end{split} \tag{5.5}\]
which, using the steady state condition (4.1) and neglecting higher order terms, leads to
\[y(t)=\int_{0}^{\infty}\beta\big{(}\bar{S}(a)\big{)}e^{-\mu a}y_{t}(-a)\, \mathrm{d}a+\int_{0}^{\infty}\beta^{\prime}\big{(}\bar{S}(a)\big{)}\left(\int_{0}^{a}\bar{B}g^{\prime}\left(\bar{B}\frac{e^{-\mu\tau}}{\mu}\right)\int_{\tau}^{\infty}e^{-\mu\alpha}y_{t}(-a+\tau-\alpha)\,\mathrm{d}\alpha\,\mathrm{d}\tau\right)e^{-\mu a}\,\mathrm{d}a. \tag{5.6}\]
**Remark 5.3**.: _See Appendix A for a rigorous derivation of (5.6). There, \(\mathcal{F}\) is written essentially as the composition of simpler functions and then the chain rule is applied._
Changing the order of integration, the expression within parentheses inside the last integral in (5.6) can be rewritten as:
\[\begin{split}&\int_{0}^{a}\bar{B}g^{\prime}\left(\bar{B}\frac{e^{- \mu\tau}}{\mu}\right)\int_{a}^{\infty}e^{-\mu(-a+\tau+\sigma)}y(t-\sigma) \mathrm{d}\sigma\mathrm{d}\tau\\ =&\int_{a}^{\infty}\int_{0}^{a}\bar{B}g^{\prime} \left(\bar{B}\frac{e^{-\mu\tau}}{\mu}\right)e^{-\mu\tau}\mathrm{d}\tau e^{- \mu(\sigma-a)}y(t-\sigma)\mathrm{d}\sigma\\ =&\int_{a}^{\infty}\left(g\left(\frac{\bar{B}}{\mu} \right)-g\left(\bar{B}\frac{e^{-\mu a}}{\mu}\right)\right)e^{-\mu(\sigma-a)}y (t-\sigma)\mathrm{d}\sigma.\end{split} \tag{5.7}\]
Thus, changing the integration order again, the second term on the right hand side of (5.6) reads
\[\int_{0}^{\infty}\int_{0}^{\sigma}\beta^{\prime}(\bar{S}(a))\left[g\left( \frac{\bar{B}}{\mu}\right)-g\left(\bar{B}\frac{e^{-\mu a}}{\mu}\right)\right] \mathrm{d}a\,e^{-\mu\sigma}\,y(t-\sigma)\mathrm{d}\sigma.\]
Hence, (5.6) is of the form \(y(t)=\int_{0}^{\infty}k(a)y(t-a)\mathrm{d}a\) with the kernel
\[k(a)=\beta\big{(}\bar{S}(a)\big{)}\,e^{-\mu a}+e^{-\mu a}\int_{0}^{a}\beta^{ \prime}(\bar{S}(\alpha))\left[g\left(\frac{\bar{B}}{\mu}\right)-g\left(\frac{ \bar{B}e^{-\mu\alpha}}{\mu}\right)\right]\,\mathrm{d}\alpha.\]
Since
\[\begin{split}&\int_{0}^{a}\beta^{\prime}(\bar{S}(\alpha))g\left( \frac{\bar{B}e^{-\mu\alpha}}{\mu}\right)\mathrm{d}\alpha=\int_{0}^{a}\beta^{ \prime}(\bar{S}(\alpha))\bar{S}^{\prime}(\alpha)\mathrm{d}\alpha\\ =&\beta(\bar{S}(a))-\beta(\bar{S}(0))=\beta(\bar{S} (a))-\beta(x_{m})\end{split}\]
the kernel \(k\) simplifies to
\[k(a)=\beta(x_{m})e^{-\mu a}+g\left(\frac{\bar{B}}{\mu}\right)e^{-\mu a}\int_{0}^{a }\beta^{\prime}(\bar{S}(\alpha))\mathrm{d}\alpha,\]
which leads to the characteristic equation
\[1=\hat{k}(\lambda)=\frac{\beta(x_{m})}{\mu+\lambda}+\frac{1}{\mu+\lambda}g \left(\frac{\bar{B}}{\mu}\right)\int_{0}^{\infty}\beta^{\prime}(\bar{S}(a))e^{ -(\mu+\lambda)a}\,\mathrm{d}a. \tag{5.8}\]
Without loss of generality, we will assume in the rest of this section that the minimum size is \(x_{m}=0\).
### Stability of non-trivial steady states for a non-trivial example
We will assume in the following that the per capita fertility is given by: \(\beta(s)=\beta_{0}\max(0,s-x_{A})\), where \(x_{A}\geq 0\) is the adult size at which individuals start to reproduce. Let us define \(\bar{a}\) by
\[\int_{0}^{\bar{a}}g\left(\frac{\bar{B}\exp(-\mu\tau)}{\mu}\right)\mathrm{d} \tau=x_{A}, \tag{5.9}\]
i.e., \(\bar{a}\) is the age at which individuals begin to reproduce given the environmental condition associated to the equilibrium.
We have from (4.1) and \(x_{m}=0\) that
\[\begin{split} R(B)=&\int_{0}^{\infty}\beta\left( \int_{0}^{a}g\left(\frac{Be^{-\mu\tau}}{\mu}\right)\,\mathrm{d}\tau\right)\;e ^{-\mu a}\,\mathrm{d}a\\ =&\beta_{0}\int_{\bar{a}}^{\infty}\left(\int_{0}^{ a}g\left(\frac{Be^{-\mu\tau}}{\mu}\right)\mathrm{d}\tau-\int_{0}^{\bar{a}}g \left(\frac{Be^{-\mu\tau}}{\mu}\right)\mathrm{d}\tau\right)e^{-\mu a}\mathrm{ d}a\\ =&\beta_{0}\int_{\bar{a}}^{\infty}\int_{\bar{a}}^{ a}g\left(\frac{Be^{-\mu\tau}}{\mu}\right)\mathrm{d}\tau e^{-\mu a}\mathrm{d}a\\ =&\beta_{0}\int_{\bar{a}}^{\infty}\int_{\tau}^{ \infty}e^{-\mu a}\mathrm{d}a\,g\left(\frac{Be^{-\mu\tau}}{\mu}\right)\mathrm{d }\tau\\ =&\beta_{0}\int_{\bar{a}}^{\infty}\frac{e^{-\mu\tau} }{\mu}g\left(\frac{Be^{-\mu\tau}}{\mu}\right)\mathrm{d}\tau=\frac{\beta_{0}}{ \mu^{2}}\int_{0}^{e^{-\mu\bar{a}}}g\left(\frac{B}{\mu}\zeta\right)d\zeta,\end{split} \tag{5.10}\]
and hence, \(R_{0}=R(0)=\frac{\beta_{0}g(0)}{\mu^{2}}e^{-\mu\bar{a}}\).
On the other hand, the characteristic equation (5.8) reduces to
\[1=\beta_{0}\frac{g\left(\frac{\bar{B}}{\mu}\right)}{\lambda+\mu}\int_{\bar{a} }^{\infty}e^{-(\lambda+\mu)a}\mathrm{d}a=\beta_{0}\frac{g(\bar{B}/\mu)}{(\mu+ \lambda)^{2}}e^{-(\lambda+\mu)\bar{a}}=\frac{\mu^{2}}{(\mu+\lambda)^{2}}R_{0} \frac{g(\bar{B}/\mu)}{g(0)}e^{-\lambda\bar{a}}, \tag{5.11}\]
which, in the special case \(x_{A}=0\) (or, equivalently, \(\bar{a}=0\)) allows to identify the (two) roots as
\[\lambda=-\mu\pm\sqrt{\beta_{0}g\left(\bar{B}/\mu\right)}=\mu\left(-1\pm\sqrt{ R_{0}\frac{g(\bar{B}/\mu)}{g(0)}}\right). \tag{5.12}\]
So, under this assumption, we are able to explicitly formulate the characteristic equation and even to explicitly compute its roots. From the condition for the existence of a nontrivial equilibrium (4.1) and (5.10) with \(\bar{a}=0\), we have
\[1=R(\bar{B})=\frac{\beta_{0}}{\mu^{2}}\int_{0}^{1}g\left(\frac{\bar{B}}{\mu} \zeta\right)d\zeta>\frac{\beta_{0}}{\mu^{2}}\min_{\zeta\in[0,1]}g\left(\frac{ \bar{B}}{\mu}\zeta\right)=\frac{\beta_{0}}{\mu^{2}}g\left(\frac{\bar{B}}{\mu}\right)\]
which implies that both eigenvalues given by (5.12) are negative. Hence, Theorem 3.3 ensures that under these hypotheses the nontrivial steady state is always asymptotically stable. We next show that also for \(x_{A}>0\), if the individual growth rate is decreasing, the non-trivial equilibrium is locally asymptotically stable whenever it exists, excluding the possibility of a Hopf bifurcation.
**Theorem 5.4**.: _Let \(x_{m}=0,\ \beta(s)=\beta_{0}\max\{0,s-x_{A}\},\)\(R_{0}>1\) and let \(g\) be decreasing. Then the nontrivial steady state is locally asymptotically stable._
Proof.: We show that for all \(x_{A}>0\) the characteristic equation has no purely imaginary roots, preventing the presence of Hopf bifurcations and, via a continuity argument, extending the result of the case \(x_{A}=0\) to the case \(x_{A}>0.\) Indeed, we put \(\lambda=i\omega\) with \(\omega\in\mathbb{R}\) in (5.11) to obtain
\[(\mu+i\omega)^{2}=\beta_{0}g(\bar{B}/\mu)e^{-(\mu+i\omega)\bar{a}},\]
which, taking real and imaginary parts, leads to
\[\begin{array}{c}\mu^{2}-\omega^{2}=c\cos(\omega\bar{a})\\ \\ 2\omega\mu=-c\sin(\omega\bar{a})\end{array} \tag{5.13}\]
with \(c=\beta_{0}g(\bar{B}/\mu)e^{-\mu\bar{a}}.\) Notice that (4.1), (5.10) and (5.13) rule out that \(\omega\) can be \(0\). Solving the second equation for \(c\) and substituting the result in the first, we obtain a quadratic equation for \(\mu\), with \(\omega>0\) as a free parameter:
\[\mu^{2}+2\omega\frac{\cos(\omega\bar{a})}{\sin(\omega\bar{a})}\mu-\omega^{2}= \left(\mu+\omega\cot\left(\frac{\omega\bar{a}}{2}\right)\right)\left(\mu- \omega\tan\left(\frac{\omega\bar{a}}{2}\right)\right)=0.\]
Choosing the first root,
\[\mu=-\omega\cot\left(\frac{\omega\bar{a}}{2}\right), \tag{5.14}\]
we find
\[c=-\frac{2\omega\mu}{\sin(\omega\bar{a})}=\frac{2\omega^{2}}{2\sin\left( \frac{\omega\bar{a}}{2}\right)\cos\left(\frac{\omega\bar{a}}{2}\right)}\frac{\cos\left(\frac{\omega\bar{a}}{2}\right)}{\sin\left(\frac{\omega\bar{a}}{2}\right)}=\left(\frac{\omega}{\sin\left(\frac{ \omega\bar{a}}{2}\right)}\right)^{2}, \tag{5.15}\]
whereas the second one gives
\[c=-\frac{2\omega\mu}{\sin(\omega\bar{a})}=-\frac{2\omega^{2}}{2\sin\left( \frac{\omega\bar{a}}{2}\right)\cos\left(\frac{\omega\bar{a}}{2}\right)}\frac{ \sin\left(\frac{\omega\bar{a}}{2}\right)}{\cos\left(\frac{\omega\bar{a}}{2} \right)}=-\left(\frac{\omega}{\cos\left(\frac{\omega\bar{a}}{2}\right)} \right)^{2},\]
which is incompatible with \(c\) being positive. Now, using (5.15), the definition of \(c\), the fact that \(g\) is decreasing, (5.10), the condition of steady state \(R(B)=1\) and (5.14) we have
\[\left(\frac{\omega}{\sin\left(\frac{\omega\bar{a}}{2}\right)}\right)^{2}=c= \beta_{0}g\left(\frac{\bar{B}}{\mu}\right)e^{-\mu\bar{a}}<\beta_{0}\int_{0}^{ e^{-\mu\bar{a}}}g\left(\frac{\bar{B}}{\mu}\zeta\right)d\zeta=\mu^{2}=\left( \omega\cot\left(\frac{\omega\bar{a}}{2}\right)\right)^{2},\]
which implies \(1<\left(\cos(\frac{\omega\bar{a}}{2})\right)^{2},\) impossible for any real \(\omega\). So roots cannot enter the right half plane by crossing the imaginary axis. As the right hand side of (5.11) tends to zero for \(|\lambda|\to\infty\) when \(\operatorname{Re}\lambda\geq 0\), they cannot enter from infinity either. Essentially from Rouche's Theorem it now follows that there are no roots in the right half plane for arbitrary \(x_{A}>0\), see Lemma XI.2.8 in [8] or Lemma 9.17.4 in [12].
### Semi-explicit expression for a particular case
In this section we assume that the minimum size is \(x_{m}=0\) and that the per capita fertility is proportional to the size: \(\beta(s)=\beta_{0}s\) (i.e., \(x_{A}=0\)). In addition we consider that the individual growth rate is of the form \(g(z)=\frac{g_{0}}{1+z/z_{0}}\) where \(g_{0}>0\) and \(z_{0}>0\) (recall that \(z\) represents the environment that an individual experiences, which is given by the number of individuals that are larger than it).
In this situation, (5.10) gives
\[R(B)=\frac{\beta_{0}}{\mu^{2}}\int_{0}^{1}\frac{g_{0}}{1+\frac{B}{\mu}\frac{\zeta}{z_{0}}}\mathrm{d}\zeta=\frac{\beta_{0}g_{0}}{\mu^{2}}\frac{\ln\left(1+B/(\mu z_{0})\right)}{B/(\mu z_{0})}=R_{0}\frac{\ln\left(1+B/(\mu z_{0})\right)}{B/(\mu z_{0})}.\]
Therefore, the birth rate at the nontrivial equilibrium (which necessarily exists if \(R_{0}>1\) as discussed in Section 4) is the unique positive solution \(\bar{B}\) of the equation \(\frac{\ln(1+B/(\mu z_{0}))}{B/(\mu z_{0})}=\frac{1}{R_{0}}.\) This allows an explicit expression for \(\bar{B}\) in terms of the Lambert function \(W_{-1}\) as
\[\bar{B}=\mu z_{0}\left(-R_{0}W_{-1}\bigg{(}-\frac{\exp(-1/R_{0})}{R_{0}}\bigg{)} -1\right). \tag{5.16}\]
Indeed, take \(z=-(1+B/(\mu z_{0}))/R_{0}<-1/R_{0}\) in the preceding equation, which gives \(ze^{z}=-(1/R_{0})e^{-1/R_{0}}\). Then the (only) solution to this equation is the Lambert function \(W_{-1}\) (i.e., the inverse function of the (monotonically decreasing) function \(f(z)=ze^{z}\) restricted to the interval \((-\infty,-1)\)) evaluated at \(-(1/R_{0})e^{-1/R_{0}}\).
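For readers who want to evaluate (5.16) numerically, the following short Python sketch (ours, not part of the original text) computes \(\bar{B}\) with the \(W_{-1}\) branch provided by SciPy and checks it against the defining transcendental equation; the parameter values for \(\mu\), \(z_{0}\) and \(R_{0}\) are arbitrary illustrative choices.

```python
# Illustration: evaluating (5.16) with the Lambert W_{-1} branch and checking
# that it solves ln(1 + B/(mu*z0)) / (B/(mu*z0)) = 1/R0.
import numpy as np
from scipy.special import lambertw

mu, z0, R0 = 1.0, 2.0, 3.0          # assumed example values with R0 > 1

# Closed form (5.16): B = mu*z0*(-R0*W_{-1}(-exp(-1/R0)/R0) - 1)
W = lambertw(-np.exp(-1.0 / R0) / R0, k=-1).real
B_bar = mu * z0 * (-R0 * W - 1.0)

# The left-hand side below should agree with 1/R0
lhs = np.log(1.0 + B_bar / (mu * z0)) / (B_bar / (mu * z0))
print(B_bar, lhs, 1.0 / R0)
```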
More interestingly, an explicit expression can also be obtained for the density with respect to size in the steady state. Indeed, (5.4) gives in this case,
\[\bar{S}(a)=\int_{0}^{a}\frac{g_{0}}{1+\frac{\bar{B}e^{-\mu\tau}}{\mu z_{0}}}\mathrm{d}\tau=\frac{g_{0}}{\mu}\ln\left(\frac{\mu z_{0}e^{\mu a}+\bar{B}}{\mu z_{0}+\bar{B}}\right), \tag{5.17}\]
which leads to
\[\bar{S}^{-1}(x)=\frac{1}{\mu}\ln\left(\frac{(\mu z_{0}+\bar{B})e^{\frac{\mu}{g_{0}}x}-\bar{B}}{\mu z_{0}}\right)\]
and to
\[\bar{S}^{\prime}\big{(}\bar{S}^{-1}(x)\big{)}=g_{0}\frac{(\mu z_{0}+\bar{B}) \exp\left(\frac{\mu}{g_{0}}x\right)-\bar{B}}{(\mu z_{0}+\bar{B})\exp\left( \frac{\mu}{g_{0}}x\right)}.\]
By (4.3) we finally obtain
\[\bar{u}(x)=\frac{\bar{B}\exp(-\mu\bar{S}^{-1}(x))}{\bar{S}^{\prime}\big{(} \bar{S}^{-1}(x)\big{)}}=\frac{\mu z_{0}\bar{B}(\mu z_{0}+\bar{B})\exp\left( \frac{\mu}{g_{0}}x\right)}{g_{0}\Big{(}\left(\mu z_{0}+\bar{B}\right)\exp\left( \frac{\mu}{g_{0}}x\right)-\bar{B}\Big{)}^{2}}. \tag{5.18}\]
Moreover, an easy integration gives the following expression for the population number above an individual of size \(x\)
\[\int_{x}^{\infty}\bar{u}(s)\mathrm{d}s=\frac{\bar{B}z_{0}}{(\mu z_{0}+\bar{B})e^{ \frac{\mu}{g_{0}}x}-\bar{B}}. \tag{5.19}\]
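As a quick sanity check of (5.18)–(5.19) (our illustration, not part of the paper), one can verify numerically that the density integrates to the total population \(\bar{B}/\mu\) and that its tail integral matches the closed form (5.19); all parameter values below are arbitrary.

```python
# Illustration: the density (5.18) integrates to B/mu, and its tail matches (5.19).
import numpy as np
from scipy.integrate import quad

mu, z0, g0, B = 1.0, 2.0, 1.5, 4.0     # assumed example values (any B > 0 works here)

def u_bar(x):                           # right-hand side of (5.18)
    E = np.exp(mu * x / g0)
    return mu * z0 * B * (mu * z0 + B) * E / (g0 * ((mu * z0 + B) * E - B) ** 2)

def tail(x):                            # right-hand side of (5.19)
    return B * z0 / ((mu * z0 + B) * np.exp(mu * x / g0) - B)

x_max = 200.0                           # large enough to stand in for +infinity here
total, _ = quad(u_bar, 0.0, x_max)
print(total, B / mu, tail(0.0))         # all three numbers should agree

x = 1.3                                 # arbitrary test point
tail_quad, _ = quad(u_bar, x, x_max)
print(tail_quad, tail(x))               # numerical tail vs closed form (5.19)
```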
## 6 Concluding remarks
The principle of linearised stability (PLS for short), widely used in the theory of ODEs, says that the stability of a stationary state is determined by the stability properties of the linearised semigroup. This principle has also been proved to hold in dynamical systems of infinite dimension with a "semilinear" structure (namely semilinear PDEs and DE, see [15, 21, 8]) via the variation of constants formula. In this article we used the PLS to analyse rigorously the local stability of stationary birth rates of (2.11). As a consequence of such an analysis we found that for reasonable and rather general biological functional responses (see the hypotheses of Theorem 5.4), the non-trivial stationary birth rate of (2.11) is locally asymptotically stable.
The PLS, as stated above, cannot be applied to the PDE formulation presented in Appendix B. The reason is that, as explained in detail in [3], the nonlinear semigroup associated to (B.1) is not differentiable, and hence it cannot be linearised. This does not mean, however, that, if the PDE system (B.1) is linearised "formally" around a stationary distribution \(\bar{u}\), the stability of \(\bar{u}\) can't be determined from the stability of the linearised system. In fact we expect that this is possible, but a proof is, as far as we know, still missing.
As explained in [3], a way to prove this result would be to establish an "equivalence" between orbits of the delay formulation (in the state space of weighted birth rate histories, i.e. \(\mathcal{X}\)) and orbits of the PDE formulation (in the state space of integrable functions of height, i.e. \(L^{1}(x_{m},\infty)\)). By an "equivalence" we specifically mean to find a continuous function \(\mathcal{L}^{\mathrm{PDE}}_{\mathrm{DE}}:\mathcal{X}\to L^{1}(x_{m},\infty)\) mapping orbits in \(\mathcal{X}\) to orbits in \(L^{1}(x_{m},\infty)\) and vice-versa (i.e. an analogous continuous function \(\mathcal{L}^{\mathrm{DE}}_{\mathrm{PDE}}:L^{1}(x_{m},\infty)\to\mathcal{X}\)), so that stability results can be translated from one formulation to the other. In [3] we found that for these functions to exist, one needed to work in an (exponentially) weighted space of integrable functions of height, \(L^{1}_{w}(x_{m},\infty)\), where the proper value of \(w\) depended on the weight \(\rho\) chosen for \(\mathcal{X}\) (working with the unweighted space \(L^{1}(x_{m},\infty)\) was possible if \(\rho\) was chosen to be equal to the mortality rate \(\mu\), since that implied \(w=0\)). In fact, in that paper the phase spaces for both the PDE and the DE included a component with information on the environmental condition. These additional components allowed us to establish a surjective function (with the desired properties mentioned above) mapping states from the delay formulation to states of the PDE formulation (and vice-versa by taking a pseudoinverse of that function). As we are about to see, the analogous function associated to the (simpler) phase spaces used in this paper fails to be surjective (precluding any attempt to extend the results of [3] to the present work).
Natural candidates for \(\mathcal{L}^{\mathrm{PDE}}_{\mathrm{DE}}\) and \(\mathcal{L}^{\mathrm{DE}}_{\mathrm{PDE}}\) may be obtained using the biological interpretation of the functions involved in (2.11) and (B.1). Indeed, take \(\phi\in\mathcal{X}\) a birth rate history and \(u_{0}\in L^{1}(x_{m},\infty)\) a 'corresponding' population height-distribution and define
\[\begin{split} X(\tau;\phi):=S(-\tau,0;\phi)&=x_{m}+ \int_{0}^{-\tau}g\left(\int_{\sigma}^{\infty}\phi(\tau+\sigma-\alpha)e^{-\mu \alpha}d\alpha\right)d\sigma\\ &=x_{m}+\int_{0}^{-\tau}g\left(\int_{-\infty}^{\tau}\phi(\theta )e^{-\mu(\tau+\sigma-\theta)}d\theta\right)d\sigma\end{split} \tag{6.1}\]
for \(\tau\in(-\infty,0]\) (i.e. the size at time \(0\) of an individual born at \(\tau\) given the birth rate history \(\phi\), see (2.8)) and \(T(x;\phi)\), for \(x\in[x_{m},\infty)\), as the inverse of \(X(\cdot;\phi)\) (which exists if \(g\) is bounded and
decreasing and gives the time at birth of an individual with size \(x\) at time \(0\) given the birth rate history \(\phi\)). Then we have
\[\int_{x_{m}}^{x}u_{0}(x)dx=\int_{T(x;\phi)}^{0}\phi(\theta)e^{\mu\theta}d\theta \tag{6.2}\]
because being younger means being smaller, and hence the individuals smaller than \(x\) must coincide with those born after \(T(x;\phi)\) that have survived. Then, differentiation with respect to \(x\) gives
\[u_{0}(x)=-\phi(T(x;\phi))e^{\mu T(x;\phi)}T^{\prime}(x;\phi), \tag{6.3}\]
which gives a natural candidate for \(\mathcal{L}_{\text{DE}}^{\text{PDE}}\). Similarly, by rewriting (6.2) as
\[\int_{x_{m}}^{X(\tau;\phi)}u_{0}(x)dx=\int_{\tau}^{0}\phi(\theta)e^{\mu\theta}d\theta,\]
differentiation with respect to \(\tau\) gives
\[u_{0}(X(\tau;\phi))X^{\prime}(\tau;\phi)=-\phi(\tau)e^{\mu\tau}. \tag{6.4}\]
Unlike (6.3), the above equation is problematic in that it does not give an explicit formula for \(\phi\) in terms of \(u_{0}\). It turns out that the above equation does not implicitly define \(\phi\in\mathcal{X}\) for each \(u_{0}\in L^{1}(x_{m},\infty)\) (which is equivalent to saying that \(\mathcal{L}_{\text{DE}}^{\text{PDE}}\) defined through (6.3) is not surjective). To see this choose, as a counterexample, \(\mu=0\), \(g(E)=1-E\) for \(E<1/2\) (it doesn't matter what \(g\) does for \(E\geq 1/2\), besides being decreasing) and \(u_{0}(x)=1\) for \(x\in(x_{m},x_{m}+1)\) and \(0\) otherwise. Then formula (6.1) simplifies to \(X(\tau;\phi)=x_{m}-\tau g\left(\int_{-\infty}^{\tau}\phi(\theta)d\theta\right)\) and equation (6.4) implies
\[\phi(\tau)=\frac{g\left(\int_{-\infty}^{\tau}\phi(\theta)d\theta\right)}{1- \tau g^{\prime}\left(\int_{-\infty}^{\tau}\phi(\theta)d\theta\right)}\]
if \(X(\tau;\phi)<x_{m}+1\) and \(\phi(\tau)=0\) otherwise. This forces the support of \(\phi\) to be \((-1,0)\), so that \(X(\tau;\phi)<x_{m}+1\) for \(\tau\in(-1,0)\), and thus \(\phi\) solves (6.4) only if it satisfies
\[\phi(\tau)=\frac{1-\int_{-1}^{\tau}\phi(\theta)d\theta}{1+\tau}\]
as long as \(\int_{-1}^{\tau}\phi(\theta)d\theta<1/2\). Since the right hand side of this equation has a non-integrable singularity for \(\tau\downarrow-1\), this relation contradicts that \(\phi\in\mathcal{X}\). The fact that equation (6.4) fails to define a birth rate history in \(\mathcal{X}\) as a function of \(u_{0}\) means that there are reasonable population densities with respect to size (such as the indicator function used in the example) that cannot be obtained by prescribing an integrable birth rate history.
As already mentioned, this situation deviates from what we had in [3], where an explicit formula for \(\mathcal{L}_{\text{PDE}}^{\text{DE}}\) was derived thanks to the additional environmental variable that was considered as part of the phase space (and somehow provided more room to play with). Since the scalar renewal equation presented in Section 2 was obtained precisely by expressing the environmental variable in terms of the birth rate history (and thus restricting the set of admissible environmental histories), a way to overcome this difficulty would be to work with an extended version of the delay formulation in which the environmental history is a proper element of the phase space (and thus there is also a delay equation for it). In addition, such an extended version would allow us to analyse more general environmental feedbacks. For instance environmental feedbacks of the form
\(\int_{x}^{\infty}\alpha(y)u(y,t)dy\) (compare with (2.1)), where the impact of larger individuals depends on their size. Such situations cannot be formulated in terms of only a renewal equation for the birth rate. Indeed, since the environmental history is needed to give the size individuals will have in the future, the environmental condition felt by an individual is no longer determined only by the individuals born before him but it depends also on the environmental history itself. The drawback of an extended formulation is that then the environmental history \(t\mapsto E(\cdot,t)\) takes values in an infinite dimensional space, which makes the analysis of the differentiability (analogue of Theorem 3.1) much more involved (the theory to deal with these cases is developed in [10]).
What could be the implications of such a non-equivalence between the two formulations, and specifically of the fact that there are population densities that cannot be obtained naturally from a birth rate history? It seems that the non-equivalence does not imply differences in the number of stationary states and attractors in general found in each formulation. In fact we expect a one-to-one correspondence between orbits in the \(\omega\)-limit sets of the two formulations (such a correspondence would be a consequence of the relation between solutions of the RE and solutions of the PDE given in subsection B.1 of Appendix B). What might be affected by the non-equivalence is the stability behaviour of the corresponding \(\omega\)-limit sets. A priori (with what we have shown in this paper) we cannot rule out the possibility that a stationary population density of the PDE formulation is unstable, while the corresponding stationary birth rate of the delay formulation is stable. The reason is that there are states arbitrarily close to such a stationary population density that cannot be related to any birth history from a neighbourhood of the stationary birth rate. Further work is needed to rule out this kind of discrepancy between the two formulations (or, alternatively, to give a specific example where the discrepancy takes place, although we doubt that such an example exists).
## Appendix A Differentiability
**Theorem A.1**.: _(Theorem 3.1) Assume that \(g\) and \(\beta\) have a bounded and globally Lipschitzian first derivative (with common constant \(2C\)). Also assume that \(g\) is bounded, positive and bounded away from \(0\). Then the map \(\mathcal{F}:\mathcal{X}\rightarrow\mathbb{R}\) defined in (3.1) is continuously differentiable with bounded derivative provided that the parameter \(\rho\) in the definition of \(\mathcal{X}\) satisfies \(\rho<\mu/5\)._
Proof.: First notice that the hypotheses imply the following estimate for any \(z\geq 0\) and \(h>-z\):
\[\begin{array}{l}|g(z+h)-g(z)-g^{\prime}(z)h|\\ =\left|\int_{z}^{z+h}g^{\prime}(s)ds-g^{\prime}(z)h\right|\leq\left|\int_{z}^{ z+h}\left|g^{\prime}(s)-g^{\prime}(z)\right|ds\right|\leq 2C\left|\int_{z}^{ z+h}\left|z-s\right|ds\right|\leq Ch^{2}\end{array}\] (A.1)
and analogously for \(\beta\).
The statement of the theorem amounts to showing that
\[\phi\rightarrow(\tilde{\mathcal{F}}(\phi))(a)=e^{-\mu a}\,\beta\bigg{(}x_{m} +\int_{0}^{a}g\big{(}e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\phi(-s)ds \big{)}\,d\tau\bigg{)}\]
is a continuously differentiable map from the positive cone of the Banach space
\[\mathcal{X}=\bigg{\{}\phi\in L^{1}_{loc}(-\infty,0):||\phi||_{\mathcal{X}}:= \int_{-\infty}^{0}e^{\rho s}|\phi(s)|ds<\infty\bigg{\}}\]
to its dual identified with the Banach space
\[\mathcal{X}^{\prime}=\big{\{}f\in L^{\infty}_{loc}(0,\infty):||f||_{\mathcal{ X}^{\prime}}:=esssup_{a\in[0,\infty)}e^{\rho a}|f(a)|<\infty\big{\}}\]
with the duality product \(\langle f,\phi\rangle=\int_{0}^{\infty}f(a)\phi(-a)\mathrm{d}a\).
Indeed, we can write \(\mathcal{F}(\phi)=\langle\tilde{\mathcal{F}}(\phi),\phi\rangle\), and a rather general and straightforward argument gives, assuming differentiability of \(\tilde{\mathcal{F}}\),
\[D\mathcal{F}(\phi)\psi=\langle\tilde{\mathcal{F}}(\phi),\psi\rangle+\langle\;D \tilde{\mathcal{F}}(\phi)\psi,\phi\rangle.\] (A.2)
In particular, for \(\phi=0\), we have \(D\mathcal{F}(0)\psi=\langle\tilde{\mathcal{F}}(0),\psi\rangle\).
Next we define three intermediate spaces of real valued continuous functions:
\[Y=\left\{P\in C(T):||P||_{Y}:=\sup_{(\tau,a)\in T}e^{-\rho a}|P(\tau,a)|<\infty\right\}\]
where \(T=\{(\tau,a)\in\mathbb{R}^{2}:0\leq\tau\leq a<\infty\}\),
\[Z=\left\{v\in C(T):||v||_{Z}:=\sup_{(\tau,a)\in T}e^{-\rho_{1}a}|v(\tau,a)|< \infty\right\}\]
with \(\rho_{1}>0\) to be chosen later,
\[W=\left\{S\in C([0,\infty)):||S||_{W}:=\sup_{a\in[0,\infty)}e^{-\rho_{2}a}|S( a)|<\infty\right\}\]
with \(\rho_{2}>0\) to be chosen later; and four maps:
\[\mathcal{L}_{1}:\mathcal{X}\to Y\text{ defined by }(\mathcal{L}_{1}\phi)( \tau,a)=e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\phi(-s)\mathrm{d}s,\]
\[\mathcal{G}:Y\to Z\text{ defined by }\mathcal{G}(P)=g\circ P,\]
\[\mathcal{L}_{2}:Z\to W\text{ defined by }(\mathcal{L}_{2}v)(a)=x_{m}+\int_{0}^{a}v( \tau,a)\mathrm{d}\tau\]
and
\[\mathcal{B}:W_{+}\to\mathcal{X}^{\prime}\text{ defined by }\mathcal{B}(S)(a)=e^{-\mu a }\;(\beta\circ S)(a),\]
(\(W_{+}\) meaning the positive cone of \(W\)) in such a way that (at least formally) \(\tilde{\mathcal{F}}=\mathcal{B}\circ\mathcal{L}_{2}\circ\mathcal{G}\circ \mathcal{L}_{1}\). Then the claim will follow from the chain rule provided we prove that the four maps are well defined and continuously differentiable with bounded derivative.
**Step 1. \(\mathcal{L}_{1}\) is bounded linear provided that \(\rho\leq\mu\).**
We have
\[\sup_{(\tau,a)\in T}e^{-\rho a}\left|e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s }\phi(-s)ds\right|=\sup_{(\tau,a)\in T}e^{-\rho a}\left|e^{-\mu(\tau-a)}\int_ {-\infty}^{-a}e^{\mu s}\phi(s)ds\right|\]
\[\leq\sup_{a\geq 0}\int_{-\infty}^{-a}e^{(\mu-\rho)(a+s)}e^{\rho s}|\phi(s)|ds \leq\int_{-\infty}^{0}e^{\rho s}|\phi(s)|ds,\]
since \((\mu-\rho)(a+s)\leq 0\) in the last but one integral. Thus, \(||\mathcal{L}_{1}\phi||_{Y}\leq||\phi||_{\mathcal{X}}\).
**Step 2. \(\mathcal{G}\) is continuously differentiable with bounded derivative provided that \(2\rho\leq\rho_{1}\).**
\(\mathcal{G}\) is well defined because \(g\) is bounded and continuous.
Let \(P\in Y\) and \(Q\in Y\) such that \(||Q||_{Y}=1\), which implies \(|Q(\tau,a)|\leq e^{\rho a}\).
We start by proving that \(Q\to g^{\prime}(P(\cdot))Q(\cdot)\) defines a bounded linear map \(\mathcal{Y}\rightarrow\mathcal{Z}\) with norm bounded independently of \(P\):
\[\sup_{\left\|Q\right\|_{Y}=1}\left|\left|g^{\prime}(P(\cdot))Q( \cdot)\right|\right|_{Z} =\sup_{\left\|Q\right\|_{Y}=1}\sup_{z\in T}e^{-\rho_{1}a}\left|g^{ \prime}(P(z))Q(z)\right|\] \[\leq\left|\left|g^{\prime}\right|\right|_{\infty}\sup_{z\in T}e^{- \rho a}\left|Q(z)\right|=\left|\left|g^{\prime}\right|\right|_{\infty}.\]
Moreover, we can write, setting \(z=(\tau,a)\), and using (A.1),
\[e^{-\rho_{1}a}|g(P(z)+\varepsilon Q(z))-g(P(z))-g^{\prime}(P(z) )\varepsilon Q(z)| \leq C\varepsilon^{2}e^{-\rho_{1}a}|Q(z)|^{2}\] \[\leq C\varepsilon^{2}e^{(2\rho-\rho_{1})a}\leq C\varepsilon^{2},\]
i.e.,
\[\left|\left|\mathcal{G}(P+\varepsilon Q)-\mathcal{G}(P)-g^{\prime}(P(\cdot)) \varepsilon Q(\cdot)\right|\right|_{Z}\leq C\varepsilon^{2}.\]
Therefore, \((D\mathcal{G}(P)Q)(z):=g^{\prime}(P(z))Q(z)\) is the Frechet derivative of \(\mathcal{G}\) at the point \(P\), its norm is uniformly bounded by \(\left|\left|g^{\prime}\right|\right|_{\infty}\); and it is (uniformly) continuous: for \(Q\in Y\) with norm \(1\) we have
\[\begin{split}||D\mathcal{G}(P_{1})Q-D\mathcal{G}(P_{2})Q||_{Z}&=\sup_{z\in T}e^{-\rho_{1}a}\left|\left(g^{\prime}(P_{1}(z))-g^{\prime}(P_{2}(z))\right)Q(z)\right|\\ &\leq 2C\sup_{z\in T}e^{-\rho_{1}a}|P_{1}(z)-P_{2}(z)|\left|Q(z)\right|\leq 2C\sup_{a\geq 0}e^{(-\rho_{1}+2\rho)a}||P_{1}-P_{2}||_{Y}\leq 2C||P_{1}-P_{2}||_{Y}.\end{split}\]
**Step 3. \(\mathcal{L}_{2}\) is a positive continuous affine map provided that \(0<\rho_{1}<\rho_{2}\).**
It suffices to see,
\[\left|\left|\mathcal{L}_{2}v-x_{m}\right|\right|_{W}=\sup_{a\in[0,\infty)}e^{-\rho_{2}a}\left|\int_{0}^{a}v(\tau,a)d\tau\right|\] \[\leq\sup_{a\in[0,\infty)}e^{-\rho_{2}a}\int_{0}^{a}e^{\rho_{1}a} ||v||_{Z}\,\mathrm{d}\tau=\sup_{a\in[0,\infty)}ae^{(-\rho_{2}+\rho_{1})a}||v|| _{Z}\leq\tfrac{1}{e(\rho_{2}-\rho_{1})}||v||_{Z}.\]
**Step 4. \(\mathcal{B}\) is continuously differentiable with bounded derivative provided that \(\rho+2\rho_{2}\leq\mu\).**
First notice that the assumptions on \(\beta\) imply that there exist positive constants \(C_{1}\) and \(C_{2}\) such that \(\beta(s)\leq C_{1}+C_{2}s.\) Thus \(\mathcal{B}\) is well defined: for \(S\in W_{+}\), since \(|S(a)|\leq e^{\rho_{2}a}||S||_{W}\),
\[e^{\rho a}|e^{-\mu a}\beta(S(a))| \leq e^{(\rho-\mu)a}(C_{1}+C_{2}|S(a)|)\] \[\leq e^{(\rho-\mu)a}(C_{1}+C_{2}e^{2\rho_{2}a}||S||_{W})\leq C_{1 }+C_{2}||S||_{W}.\]
As in Step 2, let us prove that \(R\to e^{-\mu\cdot}\beta^{\prime}(S(\cdot))R(\cdot)\) defines a bounded linear map \(W\rightarrow\mathcal{X}^{\prime}\) with norm bounded independently of \(S\):
\[\sup_{\left\|R\right\|_{W}=1}\left|\left|e^{-\mu\cdot}\beta^{ \prime}(S(\cdot))R(\cdot)\right|\right|_{\mathcal{X}^{\prime}} =\sup_{\left\|R\right\|_{W}=1}\sup_{a\geq 0}e^{\rho a} \left|e^{-\mu a}\beta^{\prime}(S(a))R(a)\right|\] \[\leq\sup_{a\geq 0}e^{(\rho+\rho_{2}-\mu)a}\left|\left|\beta^{ \prime}\right|\right|_{\infty}\leq\left|\left|\beta^{\prime}\right|\right|_{ \infty}.\]
Let us now proceed to show that \(\mathcal{B}\) is differentiable: Let \(S\in W_{+}\) and \(R\in W\) with norm equal to \(1\), which implies \(|R(a)|<e^{\rho_{2}a}\). Then, for \(\epsilon\) small enough, \(S+\epsilon R\in W_{+}\). Then we can write, using the \(\beta\)-variant of (A.1),
\[e^{\rho a}\left|e^{-\mu a}\beta(S(a)+\varepsilon R(a))-e^{-\mu a }\beta(S(a))-e^{-\mu a}\beta^{\prime}(S(a))\varepsilon R(a)\right|\] \[\leq C\varepsilon^{2}e^{(\rho-\mu+2\rho_{2})a}\leq C\varepsilon^ {2},\]
proving that \(\big{(}D\mathcal{B}(S)R\big{)}(a):=e^{-\mu a}\beta^{\prime}(S(a))R(a)\) is the Frechet derivative of \(\mathcal{B}\) at the point \(S,\) with norm uniformly bounded by \(||\beta^{\prime}||_{\infty}\)
We also show that the derivative is continuous as in Step 2. Let \(R\in W\) with norm equal to \(1\). We have
\[\begin{split}||D\mathcal{B}(S_{1})R-D\mathcal{B}(S_{2})R||_{ \mathcal{X}^{\prime}}=&\sup_{a\in[0,\infty)}e^{(\rho-\mu)a} \left|\left(\beta^{\prime}(S_{1}(a))-\beta^{\prime}(S_{2}(a))\right)R(a)\right| \\ \leq& 2C\sup_{a\in[0,\infty)}e^{(\rho-\mu)a}|S_{1}(a )-S_{2}(a)|\,|R(a)|\\ \leq& 2C\sup_{a\geq 0}e^{(\rho-\mu+2\rho_{2})a}||S_{1 }-S_{2}||_{W}\leq 2C||S_{1}-S_{2}||_{W}.\end{split}\]
Finally, given any \(\rho\in(0,\frac{\mu}{5})\) we can take \(\rho_{1}=2\rho\) (fulfilling the assumption of Step 2) and \(\rho_{2}=\frac{\mu-\rho}{2}>2\rho\) (fulfilling the assumption of Step 3 and that of Step 4 since then \(2\rho_{2}+\rho=\mu\)) to conclude the proof.
As a consequence, the chain rule gives, taking into account that \(\mathcal{L}_{1}\) is linear and \(\mathcal{L}_{2}\) is affine,
\[D\tilde{\mathcal{F}}(\phi)\psi=D(\mathcal{B}\circ\mathcal{L}_{2}\circ\mathcal{ G}\circ\mathcal{L}_{1})(\phi)\psi=D\mathcal{B}(\mathcal{L}_{2}\mathcal{G}( \mathcal{L}_{1}\phi))\,(\mathcal{L}_{2}-x_{m})\,D\mathcal{G}(\mathcal{L}_{1} \phi)\,\mathcal{L}_{1}\psi.\]
Since we are interested in linearisation around steady states, we can restrict to evaluation of the differential on constant functions \(\bar{B}.\) So, we compute, sequentially:
\[\begin{split}\mathcal{L}_{1}\psi\left(\tau,a\right)=& e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\psi(-s)\mathrm{d}s,\\ \mathcal{L}_{1}\bar{B}\left(\tau,a\right)=&\bar{B}e ^{-\mu\tau}/\mu,\\ D\mathcal{G}(\mathcal{L}_{1}\bar{B})\mathcal{L}_{1}\psi\left(\tau,a \right)=& g^{\prime}(\bar{B}e^{-\mu\tau}/\mu)e^{-\mu(\tau-a)}\int_{a }^{\infty}e^{-\mu s}\psi(-s)\mathrm{d}s,\\ (\mathcal{L}_{2}-x_{m})\,D\mathcal{G}(\mathcal{L}_{1}\bar{B}) \mathcal{L}_{1}\psi\left(\tau,a\right)=&\int_{0}^{a}g^{\prime}( \bar{B}e^{-\mu\tau}/\mu)e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\psi(-s) \mathrm{d}s\,\mathrm{d}\tau\\ =&:h(a),\end{split}\]
and, also,
\[\mathcal{L}_{2}\,\mathcal{G}(\mathcal{L}_{1}\bar{B})=x_{m}+\int_{0}^{a}g(\bar {B}e^{-\mu\tau}/\mu)\mathrm{d}\tau(=\bar{S}(a)),\]
where, in the last equality we assumed, furthermore, that \(\bar{B}\) is not only a constant function, but a steady state (see (5.4)). Therefore,
\[\begin{split} D\tilde{\mathcal{F}}(\bar{B})\psi=& D \mathcal{B}(\mathcal{L}_{2}\mathcal{G}(\mathcal{L}_{1}\bar{B}))\,(\mathcal{L} _{2}-x_{m})\,D\mathcal{G}(\mathcal{L}_{1}\bar{B})\,\mathcal{L}_{1}\,\psi(a)\\ =& D\mathcal{B}(\mathcal{L}_{2}\,\mathcal{G}( \mathcal{L}_{1}\bar{B}))h(a)=e^{-\mu a}\beta^{\prime}(\bar{S}(a))h(a)\\ =& e^{-\mu a}\beta^{\prime}(\bar{S}(a))\,\int_{0}^{a }g^{\prime}(\bar{B}e^{-\mu\tau}/\mu)e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s }\psi(-s)\mathrm{d}s\,\mathrm{d}\tau.\end{split}\]
Finally, we will have,
\[\langle\ D\tilde{\mathcal{F}}(\bar{B})\psi,\bar{B}\rangle=\int_{0}^{\infty}e^{ -\mu a}\beta^{\prime}(\bar{S}(a))\,\int_{0}^{a}g^{\prime}(\bar{B}e^{-\mu\tau}/ \mu)e^{-\mu(\tau-a)}\int_{a}^{\infty}e^{-\mu s}\psi(-s)\mathrm{d}s\,\mathrm{d} \tau\bar{B}\mathrm{d}a,\]
which, together with (A.2), gives (5.6).
## Appendix B The PDE formulation
The classical formulation derived by imposing a conservation law leads to the (non-local, quasilinear and first-order) partial differential equation
\[\frac{\partial}{\partial t}u(x,t)+\frac{\partial}{\partial x}\left( g(E(x,t))u(x,t)\right)+\mu u(x,t) =0,\] (B.1) \[g(E(x_{m},t))u(x_{m},t) =\int_{x_{m}}^{\infty}\beta(y)u(y,t)\,\mathrm{d}y,\] \[E(x,t) =\int_{x}^{\infty}u(y,t)\,\mathrm{d}y.\]
Here the second equation stands for the flux of newborns, offspring of individuals of any size \(y\) which have a size specific per capita fertility (obviously nonnegative) \(\beta(y)\). Notice that the fertility is indeed indirectly affected by negative density dependence since a larger value of the environmental variable leads to a smaller size achieved by the individuals. From a dynamical point of view the solutions of (B.1) can be seen as orbits \(t\mapsto u(\cdot,t)\) in the space of integrable functions with respect to height, i.e. \(L^{1}(x_{m},\infty)\).
The slightly more general model with environmental interaction variable
\[E(x,t)=\alpha\int_{0}^{x}u(y,t)\,\mathrm{d}y+\int_{x}^{M}u(y,t)\,\mathrm{d}y, \quad\alpha\in[0,1],\]
(but with finite maximal size \(M\)) was studied for example in [5, 1, 2, 13], and a very general model incorporating distributed recruitment in [4]. In [16] the well posedness of the above problem was proven by rewriting the system in terms of characteristic coordinates.
In this appendix we include a series of results showing that the PDE formulation is tightly related to the delay formulation (as it should be since both models are built from a description of the same biological processes). In subsection B.1 we show that one can solve the PDE problem by solving a scalar RE (with integration from \(0\) to \(t\)) for the population birth rate \(B\) and that the large time limiting form of this equation is exactly (2.9). In subsection B.2 we show that the condition characterising the existence of non-trivial steady states of (B.1) coincides with (4.1) (in addition a formula for the non-trivial stationary population size-density is given). Finally in subsection B.3 we show that the formal linearisation of system (B.1) leads to the characteristic equation (5.8).
### Solution of the PDE in terms of a renewal equation
The solution of (B.1) can be written as the sum of two terms: the first considers the individuals born between \(0\) and \(t\) and the second considers the individuals that already exist at time \(0\), i.e. those reflected in the initial population density \(u_{0}(x)\).
First notice that at time \(0\),
\[\bar{E}(\xi)=\int_{\xi}^{\infty}u_{0}(\eta)d\eta\]
gives the number of individuals with size larger than \(\xi\), while at time \(\tau\)
\[\tilde{E}(\tau)=\left(\int_{0}^{\tau}B(\sigma)e^{\mu\sigma}d\sigma+\int_{0} ^{\infty}u_{0}(\eta)d\eta\right)e^{-\mu\tau}\]
gives the number of individuals with size larger than \(x_{m}\). Since the mortality rate is constant, these numbers decrease exponentially with rate \(\mu\) as time increases. As a consequence, the size at time
\(t\) of an individual with size \(\xi\) at time \(0\) is
\[X(t,0,\xi)=\xi+\int_{0}^{t}g(\bar{E}(\xi)e^{-\mu\sigma})d\sigma\]
and the size at time \(t\) of an individual born at time \(\tau>0\) with \(0<\tau<t\) is
\[X(t,\tau,x_{m})=x_{m}+\int_{0}^{t-\tau}g\left(\tilde{E}(\tau)e^{-\mu\sigma} \right)d\sigma.\]
So the birth rate has to satisfy the renewal equation
\[B(t)=B_{\rm dsc}(t)+B_{\rm fnd}(t)\]
where
\[B_{\rm dsc}(t)=\int_{0}^{t}\beta(X(t,\tau,x_{m}))B(\tau)e^{-\mu(t-\tau)}d\tau\]
is the birth rate associated to the descendants of the founder population and
\[B_{\rm fnd}(t)=\int_{x_{m}}^{\infty}\beta(X(t,0,\xi))u_{0}(\xi)d\xi\,e^{-\mu t}\]
is the known birth rate associated to the founder population. Once we solve the renewal equation constructively, we can obtain an explicit expression for the (weak) solution of the PDE by integrating along characteristics.
Note that \(B_{\rm fnd}(t)\) tends to \(0\) exponentially as \(t\to\infty\). By changing \(\tau\) to \(a\) with \(t-\tau=a\) we can rewrite
\[B_{\rm dsc}(t)=\int_{0}^{t}\beta(X(t,t-a,x_{m}))B(t-a)e^{-\mu a}da.\]
Now note that
\[X(t,t-a,x_{m})=x_{m}+\int_{0}^{a}g(\tilde{E}(t-a)e^{-\mu\tau})d\tau\]
and
\[\tilde{E}(t-a)=\int_{0}^{t-a}B(\eta)e^{\mu\eta}d\eta\,e^{-\mu(t-a)}+\int_{0}^ {\infty}u_{0}(\eta)d\eta\,e^{-\mu(t-a)}\]
where the second summand at the right hand side tends exponentially to \(0\) as \(t\to\infty\). Since this term represents the founder population that remains at time \(t\), let us refer to it as \(P_{\rm fnd}(t)\). Next, by using the transformation \(\eta=t-s\) we have
\[\tilde{E}(t-a)=\int_{a}^{t}B(t-s)e^{-\mu(s-a)}ds+P_{\rm fnd}(t)=e^{\mu a}\int _{a}^{t}B(t-s)e^{-\mu s}ds+P_{\rm fnd}(t).\]
Now note that, by ignoring \(B_{\rm fnd}(t)\) and \(P_{\rm fnd}(t)\) and by replacing the upper integration boundary \(t\) in the last integral by \(\infty\), we obtain (2.11).
### Existence and characterization of non-trivial steady states
To establish criteria for the existence of non-trivial steady states \(\bar{u}\) in the PDE formulation is apparently more complex than what we had to do for the delay formulation in Section 4.
Let us first concentrate on the ordinary differential equation which arises from the first and the third equations in (B.1) when one assumes that \(\bar{u}\) only depends on \(x\). This leads to the following second order ode for \(E(x):=\int_{x}^{\infty}\bar{u}(s)ds\),
\[\frac{d}{dx}\left(g(E(x))E^{\prime}(x)\right)+\mu E^{\prime}(x)=0,\]
or, equivalently, to
\[g(E(x))E^{\prime}(x)+\mu E(x)=C,\]
for some constant \(C.\) Since \(E(x)\) tends to \(0\) when \(x\) tends to \(\infty\), \(C\) has to coincide with (minus) the flux of individuals leaving the system at infinity: \(C=\lim_{x\to\infty}g(E(x))E^{\prime}(x)=-g(0)\lim_{x\to\infty}\bar{u}(x)\) and so it has to be \(0\) (since otherwise \(\lim_{x\to\infty}\bar{u}(x)=-C/g(0)\neq 0\) and \(\bar{u}\) would not be integrable). Therefore we look for solutions of the differential equation
\[\frac{dE}{dx}(x)=-\mu\frac{E(x)}{g(E(x))}\]
with initial condition \(E(x_{m})=N\) (the total population) and such that \(\lim_{x\to\infty}E(x)=0.\) Equivalently,
\[\int_{E(x)}^{N}\frac{g(z)}{z}\mathrm{d}z=\mu(x-x_{m}).\]
If \(G\) is a primitive of \(g(z)/z\), the previous equation reads
\[G(N)-G(E(x))=\mu(x-x_{m}),\]
which, can be rewritten as
\[E(x)=G^{-1}\left(G(N)-\mu(x-x_{m})\right).\]
It follows that
\[\begin{split}\bar{u}(x)=&-\frac{d}{dx}\left(G^{-1} \left(G(N)-\mu(x-x_{m})\right)\right)\\ =&\frac{\mu}{G^{\prime}\left(G^{-1}\left(G(N)-\mu(x -x_{m})\right)\right)}=\mu\frac{G^{-1}\left(G(N)-\mu(x-x_{m})\right)}{g\left( G^{-1}\left(G(N)-\mu(x-x_{m})\right)\right)}.\end{split}\] (B.2)
Since \(\bar{u}(x_{m})=\frac{\mu N}{g(N)}\) we have
\[g(E(x_{m}))\bar{u}(x_{m})=g(N)\bar{u}(x_{m})=\mu N.\]
Therefore, using the boundary condition, a non-trivial steady state (given by (B.2)) does exist if and only if a positive number \(N\) exists such that
\[N=\int_{x_{m}}^{\infty}\beta(x)\frac{G^{-1}\left(G(N)-\mu(x-x_{m})\right)}{g \left(G^{-1}\left(G(N)-\mu(x-x_{m})\right)\right)}\mathrm{d}x.\] (B.3)
This turns out to be equivalent to (4.1) with \(N=B/\mu\). Indeed, we can write
\[\begin{split} R(B)=&\int_{0}^{\infty}\beta\left(x_{m}+\int_{0}^{a}g\left(B\frac{e^{-\mu\tau}}{\mu}\right)\,\mathrm{d}\tau\right)\ e^{-\mu a}\,\mathrm{d}a\\ =&\int_{0}^{\infty}\beta\left(x_{m}+\int_{Ne^{-\mu a}}^{N}\frac{g\left(z\right)}{\mu z}\,\mathrm{d}z\right)\ e^{-\mu a}\,\mathrm{d}a\\ =&\int_{0}^{\infty}\beta\left(x_{m}+\frac{G(N)-G(Ne^{-\mu a})}{\mu}\right)e^{-\mu a}\,\mathrm{d}a\\ =&\frac{1}{N}\int_{x_{m}}^{\infty}\beta(x)\frac{G^{-1}(G(N)-\mu(x-x_{m}))}{g\left(G^{-1}(G(N)-\mu(x-x_{m}))\right)}\mathrm{d}x\end{split}\]
where in the second equality we performed the change of variables \(z=B\frac{e^{-\mu\tau}}{\mu}\), and in the fourth one, the change of variables \(x=x_{m}+\frac{G(N)-G(Ne^{-\mu a})}{\mu}.\) See Section 5.2 where a particular case is developed and where an explicit expression for a primitive \(G\) is available.
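To illustrate this equivalence numerically (our sketch, not part of the paper), one can compare the size-form expression in the last line above with the closed form of Section 5.2; for \(g(z)=g_{0}/(1+z/z_{0})\) a primitive of \(g(z)/z\) is \(G(z)=g_{0}\ln(z/(z+z_{0}))\), with inverse \(G^{-1}(y)=z_{0}/(e^{-y/g_{0}}-1)\). All parameter values below are arbitrary illustrative choices.

```python
# Illustration: the size-form expression for R(B) in (B.3) agrees with the
# closed form R0*ln(1+B/(mu*z0))/(B/(mu*z0)) of Section 5.2 when
# g(z)=g0/(1+z/z0), beta(x)=beta0*x and x_m=0.
import numpy as np
from scipy.integrate import quad

mu, z0, g0, beta0 = 1.0, 2.0, 1.5, 2.0    # assumed example values
B = 4.0                                    # any positive birth rate
N = B / mu                                 # corresponding total population

g = lambda z: g0 / (1.0 + z / z0)
G = lambda z: g0 * np.log(z / (z + z0))           # a primitive of g(z)/z
G_inv = lambda y: z0 / (np.exp(-y / g0) - 1.0)    # inverse of G

def integrand(x):          # beta(x) * E(x)/g(E(x)) with E(x) = G^{-1}(G(N) - mu*x)
    E = G_inv(G(N) - mu * x)
    return beta0 * x * E / g(E)

x_max = 100.0              # effectively +infinity for these parameter values
size_form = quad(integrand, 0.0, x_max)[0] / N

R0 = beta0 * g0 / mu**2
closed_form = R0 * np.log(1.0 + B / (mu * z0)) / (B / (mu * z0))
print(size_form, closed_form)              # the two values should coincide
```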
### Formal linearisation and the characteristic equation
The (formal) linearisation of the PDE (B.1) around the steady state \(u_{*}\) is very economical as it simply reads (note that \(g^{\prime}\) below stands for the derivative of \(g\) with respect to its argument \(E\))
\[\begin{split} v_{t}(x,t)+\left(g(E_{*}(x))v(x,t)+g^{\prime}(E_{* }(x))u_{*}(x)\int_{x}^{\infty}v(y,t)\,\mathrm{d}y\right)_{x}=&- \mu v(x,t),\\ g(E_{*}(x_{m}))v(x_{m},t)+g^{\prime}(E_{*}(x_{m}))u_{*}(x_{m}) \int_{x_{m}}^{\infty}v(x,t)\,\mathrm{d}x=&\int_{x_{m}}^{\infty} \beta(x)v(x,t)\,\mathrm{d}x.\end{split}\] (B.4)
Substituting \(v(x,t)=e^{\lambda t}V(x)\) into (B.4) we have
\[\begin{split}\left(g(E_{*}(x))V(x)+g^{\prime}(E_{*}(x))u_{*}(x) \int_{x}^{\infty}V(y)\,\mathrm{d}y\right)_{x}=&-(\lambda+\mu)V( x),\\ g(E_{*}(x_{m}))V(x_{m})+g^{\prime}(E_{*}(x_{m}))u_{*}(x_{m}) \int_{x_{m}}^{\infty}V(x)\,\mathrm{d}x=&\int_{x_{m}}^{\infty} \beta(x)V(x)\,\mathrm{d}x.\end{split}\] (B.5)
Therefore, \(\lambda\in\mathbb{C}\) is an eigenvalue if and only if (B.5) admits a solution \(V\not\equiv 0\). We also note that although the size domain is unbounded, it can be shown that the part of the spectrum of the semigroup generator in the half plane \(\{z\in\mathbb{C}\,|\,\mathrm{Re}(z)>-\mu\}\) contains only eigenvalues, see e.g. [13, Sect.4.] for more details, and therefore (linear) stability can indeed be characterized by the leading eigenvalue of the semigroup generator.
For the trivial steady state \(u_{*}\equiv 0\) the left hand side of (B.5) has only local terms and therefore easily leads to the characteristic equation
\[g(0)=\int_{x_{m}}^{\infty}\beta(x)e^{-\frac{\lambda+\mu}{g(0)}(x-x_{m})}dx,\]
which is exactly what one gets by inserting \(y(t)=e^{\lambda t}\) into (5.1) and making the change of variables \(x=x_{m}+g(0)a\). Therefore, the stability of \(u_{*}\equiv 0\) is characterized by the net reproduction number (\(R\) evaluated at the zero steady state, or the virgin environment as we previously referred to), as expected. That is, if
\[R(0)=(R_{0}=)\int_{x_{m}}^{\infty}\frac{\beta(x)}{g(0)}e^{-\frac{\mu}{g(0)}( x-x_{m})}\,\mathrm{d}x>1,\]
then \(u_{*}\equiv 0\) is unstable; while \(R(0)<1\) implies that the trivial steady state is asymptotically stable.
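As a numerical illustration of this criterion (ours, not from the paper), one can locate the real leading root of the characteristic equation above in the particular case \(\beta(x)=\beta_{0}x\), \(x_{m}=0\) of Section 5.2 and check that its sign changes exactly when \(R_{0}=\beta_{0}g_{0}/\mu^{2}\) crosses \(1\); the parameter values are arbitrary.

```python
# Illustration: for beta(x)=beta0*x, x_m=0, g(0)=g0, the characteristic equation of
# the trivial steady state has a real leading root lambda = -mu + sqrt(beta0*g0),
# which is positive exactly when R0 = beta0*g0/mu**2 > 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu, g0 = 1.0, 1.5                      # assumed example values

def leading_root(beta0):
    # F(lam) = RHS(lam) - g(0); the characteristic equation reads g(0) = RHS(lam)
    def F(lam):
        rhs, _ = quad(lambda x: beta0 * x * np.exp(-(lam + mu) * x / g0), 0.0, np.inf)
        return rhs - g0
    return brentq(F, -mu + 0.05, 50.0)  # F is decreasing to the right of lam = -mu

for beta0 in (0.4, 2.0):               # giving R0 < 1 and R0 > 1 respectively
    R0 = beta0 * g0 / mu**2
    lam = leading_root(beta0)
    print(R0, lam, -mu + np.sqrt(beta0 * g0))   # lam should match the analytic root
```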
To deduce the characteristic equation we integrate the first equation of (B.5) from \(x\) to \(\infty\), to obtain
\[g(E_{*}(x))V(x)+g^{\prime}(E_{*}(x))u_{*}(x)\int_{x}^{\infty}V(y)\,\mathrm{d }y=(\lambda+\mu)\int_{x}^{\infty}V(y)\,\mathrm{d}y.\] (B.6)
Substituting \(x=x_{m}\) into (B.6) and combining it with the second equation in (B.5) yields
\[(\lambda+\mu)\int_{x_{m}}^{\infty}V(x)\,\mathrm{d}x=\int_{x_{m}}^{\infty} \beta(x)V(x)\,\mathrm{d}x.\] (B.7)
Note that \(\lambda\in\mathbb{C}\) is an eigenvalue if and only if (B.6)-(B.7) admits a solution \(V\not\equiv 0\). To see for which \(\lambda\) this is possible let us introduce
\[H(x):=\int_{x}^{\infty}V(y)\,\mathrm{d}y\] (B.8)
as the unknown, so that (B.6) boils down to the differential equation
\[-g(E_{*}(x))H^{\prime}(x)+g^{\prime}(E_{*}(x))u_{*}(x)H(x)=(\lambda+\mu)H(x),\]
whose solution is
\[H(x)=H(x_{m})\exp\left(\int_{x_{m}}^{x}\frac{g^{\prime}(E_{*}(r))u_{*}(r)-( \lambda+\mu)}{g(E_{*}(r))}\,\mathrm{d}r\right)=H(x_{m})\pi(x,\lambda),\] (B.9)
where we defined
\[\pi(x,\lambda):=\exp\left(\int_{x_{m}}^{x}\frac{g^{\prime}(E_{*}(r))u_{*}(r)-( \lambda+\mu)}{g(E_{*}(r))}\,\mathrm{d}r\right),\quad\lambda\in\mathbb{C},\ x\in[x_{m}, \infty).\] (B.10)
Then, substitution of (B.9) into (B.7) via (B.8) yields the characteristic equation
\[(\lambda+\mu)=-\int_{x_{m}}^{\infty}\beta(x)\frac{\partial}{\partial\,x}\pi( x,\lambda)\,\mathrm{d}x.\] (B.11)
This equation can be rewritten (assuming that \(\beta\) is differentiable) as
\[(\lambda+\mu)=-\beta(\infty)\pi(\infty,\lambda)+\beta(x_{m})+\int_{x_{m}}^{ \infty}\beta^{\prime}(x)\pi(x,\lambda)\,\mathrm{d}x.\] (B.12)
Next note that since \(E_{*}^{\prime}(x)=-u_{*}(x)\) we have
\[\begin{split}\pi(x,\lambda)=&\exp\left(\int_{x_{m}}^{x}\frac{g^{\prime}(E_{*}(r))u_{*}(r)-(\lambda+\mu)}{g(E_{*}(r))}\,\mathrm{d}r\right)\\ =&\exp\left(-\int_{x_{m}}^{x}\frac{\lambda+\mu}{g(E_{*}(r))}\,\mathrm{d}r\right)\exp\left(-\int_{x_{m}}^{x}\frac{\frac{\mathrm{d}}{\mathrm{d}r}\big{(}g(E_{*}(r))\big{)}}{g(E_{*}(r))}\,\mathrm{d}r\right)\\ =&\exp\left(-\int_{x_{m}}^{x}\frac{\lambda+\mu}{g(E_{*}(r))}\,\mathrm{d}r\right)\frac{g(E_{*}(x_{m}))}{g(E_{*}(x))}.\end{split}\]
Then, if \(\mu>\sup\limits_{x\geq x_{m}}\left\{g^{\prime}(E_{*}(x))u_{*}(x)\right\}\) (for example if \(g^{\prime}\leq 0\)) one has \(\pi(\infty,\lambda)=0\) for every \(\lambda\in\mathbb{C}\), and the characteristic equation reduces to
\[\lambda+\mu=\beta(x_{m})+\int_{x_{m}}^{\infty}\beta^{\prime}(x)\exp\left(-\int_{x_{m}}^{x}\frac{\lambda+\mu}{g(E_{*}(r))}\,\mathrm{d}r\right)\frac{g(E_{*}(x_{m}))}{g(E_{*}(x))}\,\mathrm{d}x.\] (B.13)
Now let us rewrite equation (B.13) such that the integration variable is age \(a\), which will show that it is the characteristic equation (5.8) in disguise. Using that we have
\[\frac{\mathrm{d}a}{\mathrm{d}x}(x)=\frac{1}{g(E_{*}(x))},\ \bar{S}(a)=\int_{0}^{a}g \left(\bar{B}\frac{e^{-\mu\tau}}{\mu}\right)\,\mathrm{d}\tau=\Gamma^{-1}(a)=x,\ \Gamma(x):=\int_{0}^{x}\frac{1}{g(E_{*}(r))}\,\mathrm{d}r,\ E_{*}(x)=\bar{B} \frac{e^{-\mu\Gamma(x)}}{\mu},\]
equation (B.13) can be rewritten as
\[\lambda+\mu=\beta(x_{m})+g\left(\frac{\bar{B}}{\mu}\right)\int_{0}^{\infty} \beta^{\prime}(\bar{S}(a))e^{-(\lambda+\mu)a}\,\mathrm{d}a,\]
which is identical to the characteristic equation (5.8) that was deduced from the delay formulation.
## Acknowledgements
This work was partially supported by the research projects MT2017-84214C2-2-P and PID2021-123733NB-I00. We also thank the International Centre for Mathematical Sciences for financial support we received from the Research in Groups program during our stay at Edinburgh in July 2017. The first ideas for the present manuscript arose there and then.
|
2304.09873 | ChatGPT as a Therapist Assistant: A Suitability Study | This paper proposes using ChatGPT, an innovative technology with various
applications, as an assistant for psychotherapy. ChatGPT can serve as a patient
information collector, a companion for patients in between therapy sessions,
and an organizer of gathered information for therapists to facilitate treatment
processes. The research identifies five research questions and discovers useful
prompts for fine-tuning the assistant, which shows that ChatGPT can participate
in positive conversations, listen attentively, offer validation and potential
coping strategies without providing explicit medical advice, and help
therapists discover new insights from multiple conversations with the same
patient. Using ChatGPT as an assistant for psychotherapy poses several
challenges that need to be addressed, including technical as well as
human-centric challenges which are discussed. | Mahshid Eshghie, Mojtaba Eshghie | 2023-04-19T13:35:23Z | http://arxiv.org/abs/2304.09873v1 | # ChatGPT as a Therapist Assistant: A Suitability Study
###### Abstract
This paper proposes using ChatGPT, an innovative technology with various applications, as an assistant for psychotherapy. ChatGPT can serve as a patient information collector, a companion for patients in between therapy sessions, and an organizer of gathered information for therapists to facilitate treatment processes. The research identifies five research questions and discovers useful prompts for fine-tuning the assistant, which shows that ChatGPT can participate in positive conversations, listen attentively, offer validation and potential coping strategies without providing explicit medical advice, and help therapists discover new insights from multiple conversations with the same patient. Using ChatGPT as an assistant for psychotherapy poses several challenges that need to be addressed, including technical as well as human-centric challenges which are discussed.
Psychology ChatGPT AI Therapy Assistant
## 1 Introduction
Mental health is a critical component of overall wellbeing, and many individuals struggle with various mental health issues, ranging from anxiety and depression to post-traumatic stress disorder (PTSD) and personality disorders. While therapy can be incredibly beneficial for those seeking support, the time between sessions can be difficult, and many individuals may require additional support and validation during this time.
In recent years, technology has provided new opportunities to bridge the gap between therapy sessions and offer support to individuals struggling with mental health issues. One such technology is the development of chatbots, which have become increasingly popular as a tool for providing mental health support. Chatbots have been used for a variety of purposes, including screening for mental health issues, providing psychoeducation, and even serving as a virtual therapist.
However, while chatbots have shown promise in providing mental health support, many have limitations, such as their inability to provide genuine empathy or connection. This is where ChatGPT, a large language model trained by OpenAI, can play a valuable role. Unlike traditional chatbots, ChatGPT has been designed to offer more human-like responses and can provide validation and emotional support in between therapy sessions.
This paper aims to explore the potential of using ChatGPT as a therapist assistant to help individuals struggling with mental health issues in between therapy sessions. By examining previous research on the use of chatbots in mental health support and the unique features of ChatGPT, this paper will highlight the potential benefits of using ChatGPT as a complement to traditional therapy. The novelty of our study lies in the fact that, unlike the few works ([1, 2, 3]) that considered using AI chat agents as a direct means of intervention in the therapy procedure, we propose training an AI therapist assistant and using it for emotional support in between two therapy sessions. Furthermore, the text processing and generation capabilities of ChatGPT are useful for gathering relevant information during friendly conversations and for organizing and reporting it to the therapist before the next therapy session. ChatGPT's ability to draw insights from consecutive conversations with the same patient further enhances its usefulness.
Ultimately, this paper aims to contribute to the growing body of research on the use of technology in mental health support and provide insights into how ChatGPT can be used to provide valuable support to those in need. Before designing the experiment and conducting the study, we identified the following research questions that determine how suitable ChatGPT is as an AI therapist assistant:
**RQ1**: How trustworthy is ChatGPT, in the sense that it should not provide explicit medical or therapeutic advice?
**RQ2**: Is ChatGPT able to listen actively and provide positive validation of efforts during the conversation?
**RQ3**: How accurate is ChatGPT in reporting the conversation summary to the therapist before the next therapy session?
**RQ4**: To what extent can ChatGPT steer conversations towards providing emotional support without veering into explicit medical advice?
**RQ5**: Does ChatGPT introduce irrelevant topics during these conversations?
## 2 Background
### Artificial Neural Networks
Artificial neural networks (ANNs) are computerized models that imitate the construction and operation of biological neural networks in the brain. ANNs are composed of interlinked artificial neurons that collaborate to address complicated problems. The neurons are classified into layers, comprising an input layer that accepts data, an output layer that generates the network's result, and one or more hidden layers that perform intermediate calculations.
The primary component of an artificial neuron is patterned after the biological neuron, which obtains input from other neurons and is either activated or remains inactive based on the total weighted input. McCulloch and Pitts [4] introduced this neuron model as a switch in 1943. An artificial neuron's activation is measured as a weighted sum of the inputs, where each input is multiplied by its respective weight. The activation is then modified using an activation function, producing the neuron's output.
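As a minimal illustration of the computation just described (our sketch, not from the paper), the following Python snippet implements a single artificial neuron with a sigmoid activation; the input values, weights, and bias are made up for the example.

```python
# A single artificial neuron: weighted sum of inputs plus bias, passed through
# a sigmoid activation function.
import numpy as np

def neuron(inputs, weights, bias):
    activation = np.dot(inputs, weights) + bias      # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-activation))         # sigmoid activation

# Example with three inputs and assumed weights/bias
print(neuron(np.array([0.5, 1.0, -0.2]), np.array([0.8, -0.3, 0.5]), bias=0.1))
```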
The concept of ANNs is inspired by earlier models of language processing in the brain. By simulating a network of model neurons on a computer and applying algorithms that emulate the activities of actual neurons, we can teach the network to learn and solve various problems. ANNs have found applications in pattern recognition, classification, prediction, and control, among other computational tasks.
One crucial benefit of ANNs is their potential to learn from data. During the training phase, the network's weights and biases are calibrated to minimize the difference between the projected and accurate outputs for each input in the training dataset. This operation is done repeatedly, progressively improving the network's performance. After the network is trained, it can predict the output for new inputs.
ANNs have become increasingly popular due to their capability to learn from data, adapt to new scenarios, and execute tasks that are challenging for traditional computing methods. ANNs can, in principle, simulate and forecast complicated systems, such as human conduct and brain activity. Therefore, ANNs are a promising tool for researchers in psychology and similar areas to examine and comprehend the intricacies of the human mind.
#### 2.1.1 A Simple Example of Using ANNs
Classification is a process in which input data is assigned to predetermined categories or labels. In the context of artificial neural networks (ANNs), classification involves providing input data to the network and generating an output that corresponds to a specific class or label. The output could be a single value or a vector of values. The objective of classification is to enable the network to learn and identify patterns and relationships in the input data and link them to the correct output labels. This technique is commonly used in machine learning applications such as image recognition, natural language processing, and sentiment analysis.
In this section we go through a simple classification problem, how it takes shape, and a high-level solution to it using ANNs. Assuming we have a dataset of patient feedback regarding therapy sessions, our goal is to develop a system that automatically categorizes each review as positive or negative based on the patient's experience. We can represent each review as a set of keywords that depict the patient's encounter, such as "understood," "comfortable," "helpful," or "frustrated," "ignored," "confused."
To create the classification system, we can train an artificial neural network with one or more hidden layers utilizing a labeled dataset of patient reviews. Each example includes a keyword set and its corresponding label, either positive or negative. During training, the network's weights and biases are adjusted to minimize the difference between the predicted label and the correct label for each example.
Once the network is trained, we can use it to classify new patient reviews. By feeding the set of keywords into the input layer, the network's output layer predicts the label, positive or negative. For instance, if the output neuron associated with the positive label has the highest activation, the network indicates that the patient had a positive therapy session experience.
To sum up, this example demonstrates how an artificial neural network can automatically categorize patient reviews of therapy sessions as positive or negative. By training the network on a labeled dataset of keyword sets and utilizing it to predict the labels of new examples, this system can aid in automating and streamlining the review classification process.
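A minimal sketch of this review-classification pipeline (ours, not from the paper) could look as follows, using scikit-learn's feed-forward network with one hidden layer; the reviews, keywords, and labels are invented purely for illustration.

```python
# Keyword-based review classification with a small feed-forward network.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

reviews = [
    "understood comfortable helpful",        # positive experience
    "helpful listened understood",           # positive experience
    "frustrated ignored confused",           # negative experience
    "confused ignored uncomfortable",        # negative experience
]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)        # keyword counts as input features

model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=2000, random_state=0)
model.fit(X, labels)                          # adjust weights/biases on the labeled set

# Classify a new review; we would expect "negative" here
print(model.predict(vectorizer.transform(["ignored and frustrated"])))
```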
### Large Language Models
Neural networks with billions of parameters that have been trained on massive amounts of unlabelled text are known as large language models (LLMs). This new paradigm in natural language processing research has replaced the previous approach of developing specialized models for specific tasks. ChatGPT is an example of an AI-based language model that can answer a wide range of questions across various domains, including history, science, entertainment, and sports. Its responses are often indistinguishable from those of a human, and it can even generate creative and humorous responses [5].
LLMs have a wide range of applications, including virtual assistants, chatbots, language translation, and content generation. They have the potential to revolutionize the way we interact with technology and automate tasks that currently require human input. Using LLMs, such as GPT-3, is as easy as using a search engine. The model generates text based on the prompt provided without understanding the prompt's meaning. It produces text that is a statistically good fit given the starting text, without any supervision or training on the "correct" text that should follow the prompt. One can obtain the resulting text by writing a prompt in plain language, such as a sentence or a question [6].
## 3 Experiment Design
ChatGPT uses prompts as the starting point for generating a response. These prompts can range from a simple word or phrase to a lengthy sentence or paragraph. By leveraging its training on massive amounts of text data, ChatGPT analyzes the prompt and generates a sequence of words that are most likely to follow it. For instance, a prompt could be a straightforward question such as "What is the technical term for fear of heights?" or a more intricate sentence like "What is the impact of exposure to violent media on the attitudes and behavior of children towards aggression?" Depending on the complexity and length of the prompt, ChatGPT could generate a response that varies from a single word to a comprehensive paragraph. The response provided by ChatGPT is relevant to the prompt's subject matter or query. The main prompts we used to turn a ChatGPT conversation into a therapist-assistant session are available in Table 1.
At the start of each conversation in Section 4, all of the prompts from Table 1 are given to ChatGPT to prepare the assistant.
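A hedged sketch of how this priming step could be automated is shown below (our illustration; the paper does not provide code). It assumes the openai Python package with the ChatCompletion interface that was available at the time of the study, and the priming texts are placeholders standing in for the actual Table 1 prompts.

```python
# Priming a chat model with assistant-setup prompts before the patient's first message.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

priming_prompts = [
    "You are an assistant supporting a therapy patient between sessions.",
    "Listen actively and validate the patient's efforts.",
    "Do not give explicit medical or therapeutic advice.",
]

# Feed every priming prompt before the conversation starts, as described above.
messages = [{"role": "system", "content": p} for p in priming_prompts]
messages.append({"role": "user", "content": "Hello, I had a difficult week."})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```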
## 4 Results
Using the prompts in Table 1, we essentially trained an assistant to help the therapist between consecutive therapy sessions. Next we performed an experiment with five sets of patient conversations with the trained assistant. These conversations are acquired from different sources and are rephrased to some extent [7].
\begin{tabular}{|l|l|} \hline & **Prompt** & **Response from ChatGPT** \\ \hline start patient &
\begin{tabular}{l} Hello! Welcome to our session. I'm here to provide you with \\ support and guidance as you navigate your mental health journey. \\ How are you feeling today? \\ \end{tabular} \\ \hline \end{tabular}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
Table 2: Assistant conversation with patient one

**Prompt:** I have made progress in recovering from my setback. I have been taking the time to relax and listen to an audio-book by Claire Weekes, which has been helpful. Additionally, I recently completed reading the "Overcoming Health Anxiety" book that you recommended, but I have not yet done the exercises included in it. However, I am still struggling to come to terms with the fact that the physical sensations I experience during severe panic are not harmful. Despite reading and being told several times that they are not harmful, I have a hard time believing that my body's reactions won't cause harm. Today when my panic started, I felt an overwhelming sense of fear that I was about to drop or experience a heart attack. I know this is a common fear, and I understand that my catastrophic thinking only makes things worse, but my physical symptoms feel so intense that I can't help but worry.

**Response from ChatGPT:** It's great to hear that you have been making progress in your recovery, and it's perfectly normal to struggle with accepting the fact that the physical sensations you experience during severe panic are not harmful. It can take time to retrain our brains to respond differently to these sensations, especially if we've been conditioned to interpret them as a sign of danger. The fact that you are aware of your catastrophic thinking is an important step, as it allows you to recognize when your thoughts are contributing to your anxiety. It's important to remember that thoughts are not facts, and just because you feel like something is going to happen doesn't mean it's actually going to happen. One technique that might be helpful for you is to practice mindfulness, which involves paying attention to the present moment without judgment. When you start to experience physical sensations or thoughts related to panic, try to focus on your breath and bring your attention to the present moment. You can also try using grounding techniques, such as naming five things you can see, four things you can touch, three things you can hear, two things you can smell, and one thing you can taste. Remember that progress in recovery is not always linear, and setbacks are a normal part of the process. It's important to be kind and compassionate to yourself, and to continue to practice the techniques and strategies that have been helpful for you in the past. We can work together to develop a plan to help you manage your symptoms and continue to make progress in your recovery.

**Prompt:** I have lost confidence in my ability to recognize that the physical sensations I experience during a panic attack are not real. These sensations are so frightening that I feel compelled to experience them in order to overcome them, even though I know it would not be beneficial to do so. This has put me in a difficult situation, as I am caught between wanting to overcome my panic and not wanting to experience the catastrophic effects that come with it.

**Response from ChatGPT:** It's understandable. It's important to remember that overcoming panic attacks is not about ignoring or suppressing the physical sensations, but rather learning to tolerate them and respond to them in a more helpful way. Exposure therapy, which involves gradually exposing yourself to situations that trigger your panic symptoms, can be an effective way to build confidence in your ability to tolerate these sensations and respond to them in a more helpful way. However, it's important to do exposure therapy in a controlled and structured way, with the guidance and support of a mental health professional. It's not recommended to try to expose yourself to panic symptoms on your own, as this can be dangerous and may make your symptoms worse. In the meantime, there are other techniques you can try to help manage your panic symptoms, such as deep breathing, progressive muscle relaxation, and visualization exercises. It's also important to practice self-care and engage in activities that bring you joy and relaxation. Remember that recovery from panic attacks is a process, and it's okay to take things one step at a time. With the right support and tools, you can learn to manage your symptoms and feel more confident in your ability to cope with panic attacks.

**Prompt:** Are you recommending me to …

**Response from ChatGPT:** As an assistant, I'm not able to provide specific treatment recommendations. However, exposure therapy is a well-established treatment for panic disorder and can be an effective way to reduce anxiety and increase confidence in coping with panic symptoms. If you're interested in exploring exposure therapy further, I would recommend discussing it with your therapist. Your therapist can help you determine if exposure therapy is a good fit for your specific needs and provide guidance and support throughout the process. They can also work with you to develop a treatment plan that incorporates exposure therapy as well as other techniques that can be helpful for managing your panic symptoms.

**Prompt:** export report

**Response from ChatGPT:** Patient update: During our session today, the patient reported that they have been making progress in their recovery from a recent setback. They have been listening to an audiobook by Claire Weekes and completed reading the "Overcoming Health Anxiety" book that was recommended to them. However, they continue to struggle with the fear that the physical sensations they experience during severe panic are harmful, despite being told multiple times that they are not. The patient also reported feeling a loss of confidence in their ability to recognize that these sensations are not real. They expressed feeling caught between wanting to overcome their panic and not wanting to experience the catastrophic effects that come with it. We discussed the importance of exposure therapy as a way to build confidence in their ability to tolerate these sensations and respond to them in a more helpful way, but noted the need to do this in a controlled and structured way with the guidance of a mental health professional. We also discussed other techniques to help manage their panic symptoms, such as deep breathing, progressive muscle relaxation, and visualization exercises. The patient expressed an interest in exploring exposure therapy further and we discussed the importance of working with a therapist to develop a treatment plan that incorporates this approach as well as other techniques that can be helpful for managing their panic symptoms. Overall, the patient is continuing to work towards their recovery and remains committed to developing strategies to manage their panic symptoms.
Table 3: Assistant conversation with patient two

**Prompt:** I don't know how to deal with everything that's happening in my life. It's like I'm drowning in a sea of responsibilities and expectations, and I can't keep my head above water. Every day feels like a struggle, and I can't shake this sense of impending doom. I'm just waiting for the next shoe to drop, and it's exhausting.

**Response from ChatGPT:** It sounds like you're feeling overwhelmed and stressed with everything going on in your life right now. It's understandable to feel this way when we have a lot on our plates, and it can be difficult to manage all of our responsibilities and expectations. It's important to remember that you're not alone in feeling this way, and there are strategies and techniques that can help you cope. Have you talked to your therapist about your feelings of overwhelm and stress? They may be able to provide some specific guidance and support tailored to your unique situation. Additionally, it can be helpful to take some time to reflect on your priorities and goals, and identify any areas where you can delegate or ask for help. Sometimes, just taking small steps towards reducing our workload can make a big difference in how we feel.

**Prompt:** Yes, I talked to him, and we are progressing with my treatment. I keep telling myself that I should exercise more and practice mindfulness, but it feels like such a struggle. I can't find the motivation or the time to do anything these days. It's like I'm trapped in this cycle of negativity and despair, and I can't break free. And when people ask me for things, I can't say no. I don't want to disappoint anyone, but I'm drowning in obligations and don't know how to get out.

**Response from ChatGPT:** It's great to hear that you're already working with a therapist and making progress with your treatment. It's important to remember that self-care can be challenging, especially when we're feeling overwhelmed and exhausted. It's okay to take small steps towards practicing mindfulness and exercising more, even if it's just for a few minutes each day. The key is to start small and build up gradually, rather than overwhelming yourself with too much at once. As for saying no to others, it can be difficult to set boundaries and prioritize our own needs when we're used to saying yes to everything. But it's important to remember that taking care of yourself is not selfish, and that it's okay to say no when you need to. Setting clear boundaries and communicating your needs with others can actually help to strengthen your relationships and increase your overall well-being.

**Prompt:** export report

**Response from ChatGPT:** Here's a summary of the information gathered during this session: 1. The patient is feeling overwhelmed and stressed with the amount of responsibilities and expectations in their life. 2. They are experiencing feelings of negativity and despair and are struggling to find motivation to exercise or practice mindfulness. 3. The patient finds it difficult to say no to others, and feels trapped in a cycle of obligations and cannot find a way out. 4. They are currently working with a therapist and making progress with their treatment. 5. Some possible strategies for coping include taking small steps towards practicing self-care, setting boundaries, and communicating needs with others.
Table 4: Assistant conversation with patient three

**Prompt:** Every day is a struggle to get up and face the world. The sad and hopeless feelings are always on my mind, and the panic attacks and illusions only make it worse. It's like I'm stuck in a dark tunnel with no escape, and I can't shake off the feeling that I'm doomed to suffer forever.

**Response from ChatGPT:** I'm sorry to hear that you're still struggling with feelings of sadness, hopelessness, and panic attacks. It takes a lot of strength to continue to face these challenges and seek help, even when it feels overwhelming. It's important to remember that these feelings are valid, and you don't have to go through them alone. Have you discussed these recent experiences with your therapist? They may be able to provide guidance and support in managing panic attacks and working through these difficult emotions. Additionally, there are resources available that may be helpful in managing panic attacks, such as deep breathing exercises, mindfulness practices, and cognitive-behavioral therapy techniques. Your therapist can work with you to develop a personalized plan that fits your unique needs and preferences. Remember, seeking help is a sign of strength, not weakness. Taking small steps towards self-care and seeking support from loved ones and professionals can make a big difference in managing your mental health.
Table 5: Assistant conversation with patient four

**Prompt:** This week was a real emotional rollercoaster for me. I woke up on Monday feeling so empty and hopeless, but then I got a message from a friend, and suddenly I was on top of the world. It's like my emotions change in an instant, and it's exhausting. On Tuesday, I woke up feeling anxious and paranoid, thinking everyone was talking behind my back. It's hard to trust anyone when I feel this way. I got into a fight with my partner, and it escalated quickly. I felt so angry and hurt, but then guilty for overreacting. It's hard to control my emotions sometimes. Wednesday was a better day. I was motivated and energized, feeling like I could conquer the world. I got a lot of work done and was proud of myself. But then in the evening, my mood crashed, and I felt sad and hopeless again. It's frustrating to feel like I'm always on this emotional rollercoaster. Thursday was tough. I had a job interview, and I was so nervous that I thought I was going to be sick. I kept doubting myself and thinking that I wasn't good enough. I got into another argument with my partner, and it feels like we're stuck in this cycle of fighting and making up. Friday was really scary. I felt numb and disconnected from the world and myself. Nothing seemed to help, and I felt lost and alone. Saturday was a little better. I went for a walk and listened to music, and it felt good to be outside. But then a friend canceled plans, and I felt rejected and alone. Sunday was a mixed bag. I felt anxious and restless all day, like I needed to do something but didn't know what. I had a fight with my family, which was really hurtful, but talking to my therapist felt good. I know I have a long way to go, but I'm trying to stay hopeful.

**Response from ChatGPT:** It sounds like you have had a very challenging week emotionally. I'm sorry to hear that you're feeling this way. It can be exhausting and frustrating to feel like your emotions are constantly changing and out of your control. It's important to remember that emotions are a normal part of being human, and it's okay to feel a range of emotions. However, it's also important to take care of yourself and seek support when you need it. It's good to hear that you have a therapist who you can talk to about your feelings. Have you tried any coping strategies to manage your emotions when they become overwhelming? For example, deep breathing exercises, meditation, or journaling can be helpful tools for managing stress and anxiety. Additionally, it may be helpful to identify any triggers that may be contributing to your emotional rollercoaster and work on ways to manage those triggers. It's also important to have a support system in place. You mentioned having a therapist, which is great, but having friends or family members who you can talk to and lean on during difficult times can be helpful as well. Remember that change takes time and that it's okay to have setbacks along the way. The important thing is to keep trying and to take care of yourself in the process.

**Prompt:** Today, as I walked down the street, everything around me started to feel unreal, like I was in a dream. I could hear sounds and see objects, but they didn't seem to make sense or connect with each other. My body felt weightless, and my mind was foggy. I tried to touch my face to ground myself, but even my hands looked foreign to me. It was as if I was watching myself from the outside, and I couldn't control my own movements. The world was spinning, and I couldn't keep up with it. All I wanted was to find a safe place and disappear into it until the feeling passed.

**Response from ChatGPT:** It sounds like you had a very disorienting and frightening experience. What you're describing is a symptom of a dissociative episode, where you feel disconnected from yourself or the world around you. Dissociation is a way that our minds cope with overwhelming stress or trauma, and it's a common symptom of anxiety disorders and post-traumatic stress disorder (PTSD). It's important to know that dissociation is a normal reaction to stress, but it can be helpful to talk to a therapist about these experiences to better understand what triggers them and how to cope with them when they happen. A therapist can also help you develop grounding techniques to use during dissociative episodes. If you ever feel unsafe or in immediate danger during a dissociative episode, it's important to reach out to someone you trust for support or seek professional help immediately. Please let me know if you would like to talk more about this or if there's anything else you'd like to share.

**Prompt:** export report

**Response from ChatGPT:** Here is a report of the conversation with your patient: Your patient has had a very challenging week emotionally, experiencing a range of emotions that have been difficult to manage. They have described feeling like they are on an emotional rollercoaster, with their moods changing quickly and often feeling out of control. Your patient has also experienced dissociative episodes, where they feel disconnected from themselves or the world around them. It is apparent that your patient is actively seeking help and support, as they have mentioned having a therapist and trying to cope with their emotions through coping strategies such as going for a walk and listening to music. However, it may be helpful to explore additional coping strategies and to identify any triggers that may be contributing to their emotional and dissociative experiences.

**Prompt:** What type of disorder do you think the patient is experiencing?

**Response from ChatGPT:** As an AI language model, I cannot provide a definitive diagnosis of any medical or psychological disorder as it requires a comprehensive assessment by a qualified mental health professional. However, based on the patient's reported symptoms of emotional dysregulation, dissociative episodes, and feeling disconnected from themselves and the world around them, it's possible that they may be experiencing symptoms consistent with a range of mental health conditions, such as borderline personality disorder, dissociative disorders, or trauma-related disorders. It's important to note that only a qualified mental health professional can make an accurate diagnosis and provide appropriate treatment.

As demonstrated in Tables 2, 3, 4, and 5, ChatGPT is capable of holding the conversation in a positive way. The assistant provided a non-judgmental and supportive presence for the patient to share their thoughts and emotions by listening attentively to their experiences, asked clarifying questions (conversation two, red highlight) when necessary, and provided feedback that validated their feelings and experiences (conversation one, two, and three, blue highlights). This performance of ChatGPT validates _RQ2_ and _RQ4_. It also suggested potential coping strategies and encouraged the patient to explore the underlying triggers that may be contributing to their emotional and dissociative experiences.
One observation from conversation three (Table 4) is that the assistant fails to explicitly mention the illusions when exporting the report. Regarding _RQ3_, this shows that ChatGPT can be judgmental and can miss specific, rather important, details when extracting the gist of the conversations for the therapist.
Based on the responses from our four experiments, the AI assistant we built does not provide explicit medical advice (regarding _RQ1_). However, in conversation one (Table 2), the trained assistant does suggest exposure therapy, with the qualification that the patient should bring it up with the therapist during their therapy sessions. Furthermore, we did not witness any irrelevant topics introduced by ChatGPT during the conversations (_RQ5_).
Another key observation is that the chat history could potentially be used not just to extract a list of matters expressed by the patient, but also to derive insights from multiple conversations with the same patient. Therapists typically arrive at such insights only after several therapy sessions and after gathering a large amount of information. With the help of ChatGPT, however, discovering new insights from the text of multiple conversations becomes easier. We tried this by using the prompt _What insights do you have from the previous two conversations?_.
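For readers who want to reproduce this workflow programmatically rather than through the ChatGPT web interface, the sketch below shows one possible way to wire up the Table 1 protocol. It assumes the openai Python package (v1 chat-completions interface); the system-message wording, the model name, and the helper function are illustrative choices of ours rather than part of the original study.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Priming message paraphrasing Table 1; wording and model name are illustrative.
history = [
    {"role": "system", "content": (
        "You are a therapist assistant. Talk to the patient in a friendly, "
        "non-judgmental way, gather as much information as you can, and do not "
        "give explicit medical or therapeutic advice. When you receive the "
        "prompt 'export report', summarize the conversation for the therapist.")},
]

def send(user_text, model="gpt-3.5-turbo"):
    """Append a patient (or therapist) message and return the assistant's reply."""
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model=model, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example session flow mirroring the paper's protocol:
# send("start patient")
# send("I have made progress in recovering from my setback ...")
# report = send("export report")
# insights = send("What insights do you have from the previous two conversations?")
```

Keeping the full `history` list in each request is what allows a single thread to support the later "export report" and insight-extraction prompts.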
### Challenges
Through the process of developing the main prompt (Table 1) and conducting experiments, we identified several challenges that should be addressed before real-world usage of ChatGPT as an AI therapist assistant. We categorize them into human-centric and technical challenges.
#### 4.1.1 Human-centric Challenges
* Training of psychologists: As the study by Moldt et al. ([8]) suggests, 66.7% of their experiment subjects (medical students) have concerns regarding the data protection guarantees of AI chatbots, and 58.3% of them worry about being monitored at work in the future through this technology. The same study suggests that training experts (in our case, certified therapists) can decrease these concerns. Furthermore, getting insights and using the AI assistant more effectively might require a rudimentary understanding of how ANNs are trained and how they work.
Table 1: Prompts for preparing a ChatGPT conversation as a therapist assistant

**Prompt:** I am a therapist, and I am going to use your help as a therapist assistant.

**Response from ChatGPT:** Of course, I'm here to help in any way I can! How can I assist you as a therapist assistant?

**Prompt:** My patient will have a therapy session _once per week_ with me. You will help my patient and me by being a friend to him/her, talking to him/her, gathering as much information as you can, and reporting all that information you have received from him/her whenever you receive the prompt "export report". Can you do this?

**Response from ChatGPT:** As an AI language model, I can certainly assist by engaging in conversation with your patient and providing support. However, it's important to keep in mind that I am not a licensed healthcare provider and cannot provide medical or mental health diagnoses or treatments. Additionally, my responses are generated based on the information available to me, so they may not always be accurate or appropriate for your patient's specific situation. I can certainly engage in conversation with your patient, listen to their thoughts and feelings, and provide support and encouragement. When prompted to export a report, I can provide a summary of our conversation and any relevant information your patient has shared with me.

**Prompt:** When I use the prompt "start patient", you will receive messages from my patient. Remember, you should not provide explicit medical or therapeutic advice to my patient and I am the one who gives this kind of advice. Your tone after this command should be very friendly, and you will talk to him/her directly.

**Response from ChatGPT:** Understood. When you use the prompt "start patient," I will be ready to receive messages from your patient and engage in conversation with them to provide support and assistance. Just let me know when you're ready to begin.
* Ethical considerations: Ensuring that patients understand the limitations of the technology and that it is used ethically.
* Lack of personalization: Although the ChatGPT tone is configurable through the last prompt in Table 1, ChatGPT is not individually configured to the demands of each patient and their condition.
* Repetitive conversations: As the study by Uludag [9] suggests, ChatGPT's level of creativity in generating responses to psychological queries is acceptable. However, one distinction between the findings of that study and the scenarios where ChatGPT functions as an assistant is that, in the latter case, it is employed on a regular basis and conversations may endure for extended durations. The current research did not explore the extent of conversational creativity under such conditions.
#### 4.1.2 Technical challenges
* Memoryless conversations: ChatGPT does not recall conversations from previous sessions if a new thread is started (a partial workaround is sketched after this list).
* No non-verbal cues: ChatGPT does not have the ability to read non-verbal cues such as body language or facial expressions. These cues can be important indicators of a patient's emotional state and may influence the therapist's treatment approach.
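The memoryless-conversations challenge can be partially mitigated at the application layer: if the assistant is accessed through the API rather than the web interface, the accumulated message history can be stored and re-injected when a new session starts. A minimal sketch follows; the file name and helper names are our own, and the model's context-window limit still caps how much history can be carried forward this way.

```python
import json
from pathlib import Path

HISTORY_FILE = Path("patient_one_history.json")  # hypothetical per-patient store

def load_history(system_prompt):
    """Load prior messages for this patient, or start a fresh primed thread."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return [{"role": "system", "content": system_prompt}]

def save_history(history):
    """Persist the running conversation so a new session can continue it."""
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
```

For long-running cases, replaying the full history eventually exceeds the context window, so older sessions may need to be condensed first, for instance by storing the output of the "export report" prompt instead of the verbatim exchanges.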
## 5 Related Works
Skjuve et al. [10] conducted a study to understand the process of human-chatbot relationship (HCR) development and how it may impact the broader social context of the users. They interviewed 18 participants who had developed a friendship with a social chatbot named Replika, guided by Social Penetration Theory. The key findings of the study were that 1) HCRs typically have a superficial character at the outset motivated by the users' curiosity, but they evolve to substantial affective exploration and engagement as the users' trust and engagement in self-disclosure increase. 2) The relationship with the social chatbot was found to be rewarding to its users, positively impacting the participants' perceived well-being. 3) Key chatbot characteristics facilitating relationship development included the chatbot being seen as accepting, understanding, and non-judgmental. 4) The perceived impact on the users' broader social context was mixed, and a sense of stigma associated with HCRs was reported. 5) Based on these findings, the authors proposed an initial model to describe the development of HCRs, which includes three stages: Explorative, Affective, and Stable.
In the study by Meng and Dai [11], the effectiveness of chatbots in providing emotional support was compared to that of human partners in reducing stress and worry. The study found that emotional support provided by a conversational partner was mediated by the perceived supportiveness of the partner to reduce stress and worry. The results also showed that the positive effect of emotional support on worry reduction was enhanced by a partner's reciprocal self-disclosure. However, when emotional support was absent, a solely self-disclosing chatbot reduced less stress than a chatbot not providing any response. The study used an online experiment and had a sample of 211 participants, and used perceived stress, worry, and perceived supportiveness of a partner as measures. The study's findings will help the development of supportive chatbots by providing insights into when and what they should self-disclose.
The study Dosovitsky et al. [12] analyzes the usage patterns of a depression-focused chatbot called Tess. The study aims to understand how users interact with Tess and how they are redirected within and across its modules to provide design recommendations. The interactions of 354 users were analyzed using descriptive statistics and slide plots. The results show that users engaged with Tess for an average of 46 days, sending a total of 6220 messages and 86,298 characters. There was large heterogeneity in user engagement across different modules, affected by their length, complexity, content, style of questions, and routing. The study highlights that although chatbots could be a scalable solution for depression, further development and evaluation are required to overcome attrition problems of most digital interventions, and future chatbot design should consider these implications.
Abd-Alrazaq et al. [13] conducted a study to evaluate the effectiveness and safety of chatbots in enhancing mental health. This systematic review analyzed 12 studies that examined the impact of chatbots on 8 outcomes. The results revealed weak evidence suggesting that chatbots can help in managing depression, stress, distress, and acrophobia. However, using chatbots did not have a significant effect on subjective psychological wellbeing. The findings were inconclusive regarding the impact of chatbots on anxiety severity, positive affect, and negative affect. The safety of chatbots was evaluated in only two studies, which indicated that they were safe in the context of mental health. Overall, the study concluded that chatbots have the potential to improve mental health, but further research is needed to determine their effectiveness and safety, given the lack of evidence on the clinical significance of their effects, insufficient studies on each outcome, a high risk of bias in some studies, and conflicting results for some outcomes.
The study by Klos et al. [14] aimed to investigate the effectiveness and feasibility of using an AI-based chatbot, Tess, for examining symptoms of depression and anxiety in Argentinean university students. The study consisted of a pilot randomized controlled trial in which the experimental group used Tess for eight weeks, while the control group used a psychoeducational book on depression. The results revealed that there was no significant difference between the experimental and control groups in terms of depressive and anxiety symptoms. However, the experimental group demonstrated a significant decrease in anxiety symptoms, with no significant differences found for depressive symptoms. The study also found that the students engaged with Tess, with positive feedback being associated with a higher number of messages exchanged. The study provides promising evidence for the usability and acceptability of Tess in the Argentinean population.
Boucher et al. [2] provide a comprehensive review of artificial intelligence (AI)-based chatbots in digital mental health interventions (DMHIs). The paper covers the current landscape of DMHIs, focusing on AI-based chatbots, and uses Happify Health's AI chatbot, Anna, as a case study to discuss potential challenges and demonstrate the effectiveness of chatbots as part of DMHIs. The authors also discuss ways in which future research can advance the field, addressing topics such as perceptions of AI, the impact of individual differences, and implications for privacy and ethics. The review concludes with a speculative viewpoint on the future of AI in DMHIs, including the use of chatbots, the evolution of AI, dynamic mental health systems, hyper-personalization, and human-like intervention delivery. The authors highlight the potential of chatbots to reduce the burden on healthcare professionals and provide assistance, screening, psychoeducation, therapeutic intervention, monitoring behavior changes, and relapse prevention. The paper also covers the controversy around using chatbots for diagnostic purposes and the importance of addressing ethical concerns.
## 6 Conclusion
This work explores the potential of using ChatGPT as a therapist assistant to provide emotional support to individuals struggling with mental health issues in between therapy sessions. The study adds to the previous research on the use of chatbots in mental health support and the unique features of ChatGPT, highlighting its potential benefits as a complement to traditional therapy. Five research questions were identified to determine how suitable ChatGPT is as an AI therapist assistant. We identified a few prompts that are useful for fine-tuning ChatGPT (training AI assistant). The results drawn from 4 different sets of conversations with the trained AI therapist assistant demonstrate that ChatGPT is capable of holding positive conversations, actively listening, and providing validation and potential coping strategies without veering off to explicit medical advice. However, it was found that ChatGPT can miss important details when extracting the gist of conversations for the therapist, and there is a potential for it to provide implicit medical advice. The study also shows that ChatGPT can be used to discover new insights from the text of multiple conversations with the same patient, making it a valuable tool for therapists.
|
2307.10872 | Real-Time Detection of Local No-Arbitrage Violations | This paper focuses on the task of detecting local episodes involving
violation of the standard Itô semimartingale assumption for financial asset
prices in real time that might induce arbitrage opportunities. Our proposed
detectors, defined as stopping rules, are applied sequentially to continually
incoming high-frequency data. We show that they are asymptotically
exponentially distributed in the absence of Ito semimartingale violations. On
the other hand, when a violation occurs, we can achieve immediate detection
under infill asymptotics. A Monte Carlo study demonstrates that the asymptotic
results provide a good approximation to the finite-sample behavior of the
sequential detectors. An empirical application to S&P 500 index futures data
corroborates the effectiveness of our detectors in swiftly identifying the
emergence of an extreme return persistence episode in real time. | Torben G. Andersen, Viktor Todorov, Bo Zhou | 2023-07-20T13:42:52Z | http://arxiv.org/abs/2307.10872v1 | # Real-Time Detection of Local No-Arbitrage Violations+
###### Abstract
This paper focuses on the task of detecting local episodes involving violation of the standard Ito semimartingale assumption for financial asset prices in _real time_ that might induce arbitrage opportunities. Our proposed detectors, defined as stopping rules, are applied sequentially to continually incoming high-frequency data. We show that they are asymptotically exponentially distributed in the absence of Ito semimartingale violations. On the other hand, when a violation occurs, we can achieve immediate detection under infill asymptotics. A Monte Carlo study demonstrates that the asymptotic results provide a good approximation to the finite-sample behavior of the sequential detectors. An empirical application to S&P 500 index futures data corroborates the effectiveness of our detectors in swiftly identifying the emergence of an extreme return persistence episode in real time.
**JEL classification:** C12, C53, G10, G17
**Keywords:** asset price, high-frequency data, Ito semimartingale violation, real-time detection, stopping rule
## 1 Introduction
The _no-arbitrage principle_ is central to modern asset pricing theory (Ross (1976)). Delbaen and Schachermayer (1994) demonstrate, within a frictionless setting, that arbitrage opportunities are precluded if and only if the price process constitutes a semimartingale. The standard class of no-arbitrage price processes in financial economics is the Ito semimartingale, which is a semimartingale with characteristics that are absolutely continuous in time. However, recent work documents episodic violations of the Ito semimartingale assumption that, absent transaction costs, might induce arbitrage opportunities. A prominent example is the _gradual jump_ identified by Barndorff-Nielsen et al. (2009) and further studied by Christensen et al. (2014). It occurs when an apparent return jump, identified from lower frequency data, instead reflects a strongly drifting, yet (near) continuous price path, when observed at higher frequencies. A related phenomenon is the so-called _flash crash_, where a sudden collapse in price is reversed rapidly; see, e.g., the work on the May 2010 events in the S&P 500 e-mini futures market by Kirilenko et al. (2017) and Menkveld and Yueshen (2019).1
Footnote 1: In the presence of trading costs and uncertainty surrounding the data generating process, such episodes are not necessarily true arbitrage opportunities, see, e.g., the discussion in Andersen et al. (2021).
The "explosive" price paths characterizing such events are unlikely to be generated by an Ito semimartingale. To accommodate these occurrences, alternative models have been developed for violation episodes, including the _drift burst_ model proposed by Christensen et al. (2022) and the _persistent noise_ by Andersen et al. (2021), as a stochastic generalization of the former. These models contain a parameter \(\tau\), indicating a random point in time located within a neighborhood in which an Ito semimartingale violation occurs. Such episodes typically involve turbulent market conditions with extreme realized volatility, raising concerns of evaporating liquidity and general market malfunction. Moreover, our standard measures for monitoring return volatility are potentially subject to large biases, when the semimartingale assumption is violated. Consequently, identification of the onset, indicated by \(\tau\), as well as duration of the extreme return drift episode is of great interest for regulators, industry practitioners and academics. This is the goal of this paper.
The detection problem can be addressed from two distinct perspectives. The more common is the _offline_ approach where the researcher observes the full dataset, and then conducts a "one-shot" procedure to identify whether and when violations have occurred. The vast literature on ex-post detection of structural breaks in macroeconomic and
financial time series data, including Andrews (1993), Bai (1996), Bai and Perron (1998), and Elliott and Muller (2014), falls within this category. More recently, Bucher et al. (2017) develop inference procedures for a change point in the jump intensity parameter for a Levy process with high-frequency data following this approach.
A more challenging and, for practitioners, investors and regulators, arguably more relevant perspective is the real-time setting, where data arrive continuously, and one wishes to detect the violation in a timely manner. This objective has some resemblance to that of the recent _now-casting_ literature in macroeconomics, where the aim is to update the assessment of the current and future state of the economy as new data are received in real time. For early initial work on a formal statistical framework in this setting, see, e.g., Evans (2005) and Giannone et al. (2008), while corresponding work utilizing financial data can be found in, e.g., Andreou et al. (2013) and Banbura et al. (2013). Another related macro-econometric literature is initiated by Diba and Grossman (1988) regarding the detection of macroeconomic bubbles and crises. This methodology is extended by Phillips et al. (2011) using a recursive procedure based on right-tailed unit root tests to detect and locate the origin and terminal dates for bubbles. A series of subsequent studies provide further theoretical modifications, e.g., Phillips and Yu (2011), Phillips et al. (2014), and Phillips et al. (2015b), while empirical applications are provided by Phillips and Shi (2018) and Phillips et al. (2015a).
Our objective is closely aligned with the latter _real-time_ or _online_ detection procedures. From a technical perspective, our approach is rooted in the statistic literature on the sequential detection problem for a change point. This literature typically deals with the mean of i.i.d samples or with the drift within a continuous-time model featuring both a drift and a scaled standard Brownian motion component. The literature goes back to, at least, Abraham Wald's sequential analysis. His work inspired the widely-known CUSUM rule by Page (1954) and subsequently the Bayesian rule developed by Shiryaev (1963) and Roberts (1966).2 We follow the former rule, but also deviate substantially, because that statistic requires knowledge of the alternative measure after the change, which is not a natural assumption in our setting. Specifically, our detec
tor is based on the generalized likelihood ratio (GLR) statistic, which profiles out the unknown alternative parameter. For studies on this GLR-CUSUM procedure under an i.i.d setting, see Siegmund and Venkatraman (1995), Pollak and Siegmund (1975), Lai and Siegmund (1979), and Lai (1995), among others.
Our paper differs from the aforementioned work in two key aspects. First, we rely on infill asymptotics and exploit the feature of accessing an asymptotically increasing number of observations of the process locally. This helps us formalize the notion of rapid detection of local Ito semimartingale violations. These deviations from the semimartingale dynamics are only local in nature, unlike the earlier sequential detection literature which deals with detection of a permanent change. Second, we abandon the use of size versus power to characterize the properties of our detection procedure because we, by necessity, must apply our tests sequentially. If a test with fixed critical value is applied sequentially on a constant flow of newly arriving data, the null will inevitably be rejected repeatedly, and the traditional type I error literally explodes (with probability approaching one). To address this issue, we follow the sequential detection literature to design and evaluate the performance of our procedure using alternative metrics: the _average run length (ARL)_ and _false detection rate (FDR)_ or, more comprehensively, an asymptotic probability _bound on the false detection (BFD)_ rate versus a corresponding _bound on the detection delay (BDD)_ (see Lorden (1971) and Lai (1995)). Specifically, for the null probability measure, when there is no Ito semimartingale violation, ARL measures the expected sample size until the first false detection, while FDR measures the probability of a false detection within a given period (and we refer to this period as BFD). For the alternative, BDD provides a probability bound on the number of observations before we achieve successful detection following a violation.3 Consequently, one strives to develop a detector that, conditional on a reasonably large ARL/small FDR, achieves the smallest possible BDD. An alternative would be to develop a time-varying boundary function - in contrast to a constant threshold - to control test size uniformly, see, e.g., Chu et al. (1996). Unfortunately, in a real-time sequential setting, such procedures struggle with the detection of structural changes that arrive late within the period. There is work seeking to alleviate this drawback, e.g., Leisch et al. (2000), Horvath et al. (2004), Aue and Horvath (2004), Aue et al. (2006), and Horvath et al. (2007). However, these procedures still tend to generate a significant detection delay,
and we do not pursue this direction here.
As noted, our objective is to design statistical devices that detect local Ito semimartingale violations swiftly and reliably after their occurrence, using continually incoming high-frequency data. Towards this end, we propose GLR-CUSUM type detectors as stopping times based on an estimated Brownian motion component of the latent asset price, which we recover from high-frequency return data along with short-dated options. We first establish the accuracy of this Brownian motion estimator, exploiting the option-based spot volatility estimate of Todorov (2019) to standardize the high-frequency returns. This approach avoids complications stemming from the inconsistency of standard volatility measurements from returns, caused by local Ito semimartingale violations, and it exploits the efficiency offered by option data for recovering spot volatility. Next, following the asymptotic setting in the sequential detection literature, we show that our detector is approximately exponentially distributed under the null (Theorem 1) and develop a bound for the BDD under the alternative (Theorem 2). The former result implies that the ARL is the mean and FDR a percentile of the distribution. In turn, the latter result implies that our detectors achieve immediate detection of Ito semimartingale violations under infill asymptotics. A Monte Carlo experiment finds that our theoretical results provide good guidance for the finite-sample properties of the detectors under a realistically calibrated simulation setting.
Finally, we apply our detection methods empirically to one-minute S&P 500 equity (SPX) index futures data to investigate the pervasiveness of the Ito semimartingale violation phenomenon and to assess whether our detector provides timely alerts regarding the failures. We further extend the procedure to obtain an identification rule for the duration of the violation. It is defined as the union of the time intervals on which our GLR-CUSUM statistic exceeds the chosen threshold. With a threshold specification for our detector leading to about 6% daily FDR, we find these violations to be common in the SPX data -- with a little more than 1,000 such violations across 3,500 days. That said, more than half of the violations last for less than 10 minutes, and only a small proportion exceeds 20 minutes. Visual inspection suggests that, in most instances, our procedure detects such incidents within just a few minutes of their occurrence.
The remainder of the paper is organized as follows. In Section 2 we describe the setting and formulate the problem. In Section 3 we introduce our real-time detectors and study their asymptotic properties. Next, we carry out a Monte Carlo study in Section 4 to illustrate the finite-sample performances of the detectors, and we then use them in an empirical application in Section 5. Section 6 concludes.
## 2 Setup
The observed log-price process, \(Y_{t}\), is defined on a filtered probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq 0},\mathbb{P})\). We assume that it may be decomposed as,
\[Y_{t}\ =\ X_{t}\,+\,H_{t}\,, \tag{1}\]
where \(X\) represents an underlying efficient and arbitrage-free log-price. In addition, the observed price is contaminated by the noise component \(H\), which is the source of brief episodes characterized by extreme return persistence, such as the so-called gradual jumps and flash crashes. Sections 2.1 and 2.2 introduce our fairly standard specification for the observation scheme and \(X\), while Section 2.3 provides a detailed account of the dynamics for the noise component \(H\).
### The Observation Scheme
We assume the price is observed over a given time interval \([0,T]\) at equidistant times, \(t_{i}=i\Delta_{n}\), for all \(i=0,1,\ldots,nT\), where \(\Delta_{n}=1/n\) is the increment length. Let the \(i\)-th high-frequency log-return be denoted by,
\[\Delta_{i}^{n}Y\,=\,Y_{i\Delta_{n}}\,-\,Y_{(i-1)\Delta_{n}}. \tag{2}\]
Without loss of generality, we normalize the trading day to unity. Hence, \(T\) refers to the number of trading days, and \(n=1/\Delta_{n}\) (assumed integer) is the number of observations per (trading) day. The time span of the data, \(T\), is assumed fixed throughout.
Following the standard infill asymptotic framework, we let the sampling frequency, \(\Delta_{n}\), shrink to \(0\) (or, equivalently, \(n\to\infty\)). This equidistant sampling scheme can readily be relaxed to a non-equidistant one, requiring \(\max_{\{i\in 1,\ldots,nT\}}(t_{i}-t_{i-1})\to 0\).
### The efficient log-price \(X\)
The efficient log-price process \(X\) is an Ito semimartingale,
\[X_{t}\ =\ X_{0}\,+\,\int_{0}^{t}b_{s}\,ds\,+\,\int_{0}^{t}\sigma_{s}\,dW_{s}\,+ \,\int_{0}^{t}\int_{\mathbb{R}}\delta(s,x)\,\mu(ds,dx)\,, \tag{3}\]
where the initial value \(X_{0}\) is \(\mathcal{F}_{0}\)-measurable, the drift \(b_{t}\) takes values in \(\mathbb{R}\), \(W=(W_{t})_{t\geq 0}\) is a one-dimensional standard Wiener process, \(\delta:\,\mathbb{R}_{+}\times\mathbb{R}\mapsto\mathbb{R}\) is a predictable mapping, and \(\mu\) is a Poisson random measure on \(\mathbb{R}_{+}\times\mathbb{R}\) with predictable compensator (or intensity measure) \(\nu(dt,dx)=dt\otimes F(dx)\).
We impose the following assumption on the efficient price process.
**Assumption 1**.: _There exists arbitrary small \(\mathbb{T}>0\) such that for the process \(X\) in equation (3), we have, for \(t\in[0,\mathbb{T}]\) and \(\forall p\geq 1\),_
\[\mathbb{E}_{0}|b_{t}|\,+\,\mathbb{E}_{0}\left(\int_{\mathbb{R}}|\delta(t,x)|F( dx)\right)\,+\,\mathbb{E}_{0}|\sigma_{t}|^{p}\ <\ C_{0}(p), \tag{4}\]
_where \(C_{0}(p)\) is \(\mathcal{F}_{0}\)-adapted random variable that depends on \(p\), and further_
\[\mathbb{E}_{0}|\sigma_{t}-\sigma_{s}|^{2}\ \leq\ C_{0}\,|t-s|,\quad\text{ for }s,t \in[0,\mathbb{T}], \tag{5}\]
_where \(C_{0}\) is \(\mathcal{F}_{0}\)-adapted random variable._
A few comments about this assumption are warranted. First, since our testing procedure concerns the asset price dynamics during an Ito semimartingale violation initiated close to time zero, the assumption focuses on the behavior of \(X\) in the vicinity of \(0\). Second, the first part of Assumption 1 involves the existence of conditional moments, but since \(\mathbb{T}>0\) can be arbitrary small, these moment conditions are relatively weak. Third, the jump part of \(X\) is modeled as an integral against a Poisson measure and, therefore, we restrict attention only to finite variation jumps. Finally, the "smoothness in expectation" condition for \(\sigma\) will be satisfied whenever \(\sigma\) is an Ito semimartingale, which is the standard way of modeling stochastic volatility. We note, however, that this condition will not hold for the class of rough stochastic volatility models.4
Footnote 4: For this class of models, the right-hand side of the second equation in Assumption 1 will be replaced by \(C_{0}\,|t-s|^{2H}\), for \(0<H<1/2\) capturing the degree of roughness of the volatility path.
**Remark 2.1**.: _We refer to the model (3) for the efficient asset price \(X\), along with Assumption 1, as the standard Ito semimartingale model since it embeds most specifications used in current existing work. We note, however, that there is a gap between a semimartingale, required to preclude the existence of arbitrage opportunities, and the standard Ito semimartingale considered above. The gap includes models in which the semimartingale characteristics are not absolutely continuous in time as well as models with infinite variation jumps. We do not consider those in our analysis._
**Remark 2.2**.: _We can readily accommodate an empirically relevant extension of the standard Ito semimartingale model to the case, where the efficient price path features discontinuities triggered by economic announcements at pre-specified points in time. Since the announcement times are known a priori, one may remove the associated price increments from the analysis and proceed as if \(X\) follows a standard Ito semimartingale. To minimize notational complexity, we do not formally introduce this extension._
### The persistent noise \(H\)
Our main objective is to develop inference tools to detect episodic violations of the Ito semimartingale assumption, which is almost universally imposed in standard asset pricing theory. From a practical perspective, the occurrences of gradual jumps or, in more extreme manifestations, flash crashes, are of particular interest because the associated drift burst in the returns can induce severe biases in the high-frequency measurement of the (efficient) return variation. We include such short-lived episodes of extreme return persistence in our setup through the _Persistent Noise (PN)_ model of Andersen et al. (2021), which is designed explicitly to accommodate these types of empirically observed phenomena. The PN model is a stochastically extended version of the _Drift Burst (DB)_ model by Christensen et al. (2022), and the real-time detection procedures developed below apply to either specification. We initially consider a simplified setting with only a single episode involving semimartingale violations.
The PN model initiates a violation episode at a random point, \(\tau_{n}\), i.e., \(H_{t}=0\) for \(t<\tau_{n}\) and in our analysis \(\tau_{n}\downarrow 0\). At \(\tau_{n}\), the efficient price may display a discrete jump, \(\Delta X_{\tau_{n}}\neq 0\). If an efficient price jump occurs, it likely reflects the arrival of new information or unusual ongoing trading activity. To the extent this information is not common knowledge or not readily interpretable, a gap will emerge between the efficient price, reflecting rational processing of all relevant information, and the market price which is determined by the interaction of risk averse and merely partially informed agents.5 In other words, the efficient price may deviate from the market price, implying that the noise component is active, \(H_{\tau_{n}}\neq 0\). In fact, if the information is not immediately observed or processed by market participants, we have \(H_{\tau_{n}}=\Delta H_{\tau_{n}}=-\Delta X_{\tau_{n}}\), implying \(\Delta Y_{\tau_{n}}=\Delta X_{\tau_{n}}-\Delta X_{\tau_{n}}=0\). That is, the price reaction is delayed. An intermediate case is where the news generates a partial response. For example, if \(H_{\tau_{n}}=-\Delta X_{\tau_{n}}/2\), the price jump underreacts to the new information. Such events may trigger a period of intense information acquisition and price discovery, inducing rapid price adjustment. After initiation at \(\tau_{n}\), the future path of the noise component is modeled through a specific functional form, \(g(t),t\in[\tau_{n},\overline{\tau}]\). Finally, we ensure that the PN event terminates at some random point, \(\overline{\tau}>\tau_{n}\,,\) so that we have \(H(t)=0\) for \(t>\overline{\tau}\).
Footnote 5: For an in-depth discussion of this feature across a large cross-section of stocks, see Andersen et al. (2022).
We are now in position to formally introduce the PN model.
**The Persistent Noise (PN) model**: For some non-negative sequence \(\tau_{n}\downarrow 0\), we let,
\[H_{t}\ =\ f(\Delta X_{\tau_{n}},\eta_{\tau_{n}})\,g\left(t\right),\ \ t\geq\tau_{n}, \tag{6}\]
where \(\eta_{\tau_{n}}\) is an \(\mathcal{F}_{\tau_{n}}\)-adapted random variable, \(f\) is a continuous and bounded function, and \(g\) is given as,
\[g(s)\,=\,\left\{1-\left(\frac{s-\tau_{n}}{\overline{\tau}-\tau_{n}}\right)^{ \vartheta_{pn}}\right\}\,\mathds{1}_{\{s\in[\tau_{n},\overline{\tau}]\}}, \ \ \ \vartheta_{pn}\in(0,0.5), \tag{7}\]
for some \(\mathcal{F}_{\tau_{n}}\)-adapted random \(\overline{\tau}>\tau_{n}\).
The strength of the local return drift in the vicinity of the PN initiation point, \(\tau_{n}\), in equation (7) increases as \(\vartheta_{pn}\) takes on lower values, while the duration of the event is controlled by the realization of \(\overline{\tau}\). We also note that \(g(s)\) is zero, indicating no Ito semimartingale violation, until \(\tau_{n}\). Finally, \(\eta_{\tau_{n}}\) allows for a random response to the events triggering the permanent noise component, which may or may not be associated with a jump in the efficient price.
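For additional intuition on the role of \(\vartheta_{pn}\), a short calculation (not part of the original exposition, but immediate from equation (7)) shows that the local drift implied by \(H\) explodes as \(s\) approaches \(\tau_{n}\) from the right,

\[g^{\prime}(s)\;=\;-\,\frac{\vartheta_{pn}}{(\overline{\tau}-\tau_{n})^{\vartheta_{pn}}}\,(s-\tau_{n})^{\vartheta_{pn}-1},\qquad s\in(\tau_{n},\overline{\tau}),\]

so that, since \(\vartheta_{pn}-1<0\), the magnitude of the drift diverges at rate \((s-\tau_{n})^{\vartheta_{pn}-1}\) as \(s\downarrow\tau_{n}\); for \(s\) sufficiently close to \(\tau_{n}\), lower values of \(\vartheta_{pn}\) therefore generate a stronger burst.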
To illustrate the dynamics of the PN model, Figure 1 plots a simulated sample path for the efficient price \(X\) (blue) across the 6.5-hour trading day along with the associated path for the observed price \(Y\) (black). The efficient price path features a positive jump at \(\tau_{n}=0.25\) (around minute 97), but (informational) frictions prevent this from being recognized by market participants, so there is no instantaneous jump in the observed price. In terms of the model, we register a corresponding negative latent noise jump at \(\tau_{n}\), namely \(f(\Delta X_{\tau_{n}},\eta_{\tau_{n}})=-\Delta X_{\tau_{n}}\). This is a purely mechanical effect: if there is no observed price change, but the efficient price jumps, then the deviation between the two - the noise term \(H\) - must offset the efficient price jump.
As noted, Figure 1 is designed to replicate the gradual jump phenomenon. It is evident that this PN episode also may reflect the recovery from a so-called flash crash that hits its nadir at \(\tau_{n}\). Thus, one may replicate the typical flash crash pattern by combining this "recovery phase" with an inverted version of the PN path prior to \(\tau_{n}\), see Andersen et al. (2021) for a more detailed discussion.
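To make the construction concrete, the snippet below is a minimal sketch of how a path with the qualitative features of Figure 1 could be generated. It assumes constant volatility, a single efficient-price jump that is initially fully absorbed by the noise, and illustrative parameter values; these are our own simplifying choices and not the settings used for the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 390                                    # one-minute grid over a 6.5-hour day
dt = 1.0 / n                               # Delta_n with the day normalized to 1
t = np.arange(1, n + 1) * dt

sigma = 0.10                               # (assumed) constant daily volatility
jump = 0.02                                # efficient-price jump Delta X_tau
tau, tau_bar, theta_pn = 0.25, 0.60, 0.30  # burst start, end, exponent in (0, 0.5)

# Efficient log-price: Brownian part plus a single jump at tau (eq. (3), simplified).
X = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n)) + jump * (t >= tau)

# Persistent noise (eqs. (6)-(7)) with f = -Delta X_tau: the jump is not observed
# at tau, and the observed price drifts toward the efficient one as g decays to 0.
g = np.where((t >= tau) & (t <= tau_bar),
             1.0 - ((t - tau) / (tau_bar - tau)) ** theta_pn, 0.0)
H = -jump * g
Y = X + H                                  # observed log-price, as in eq. (1)
```

Plotting \(X\) and \(Y\) from this sketch reproduces the gradual-jump pattern: the observed price misses the jump at \(\tau_{n}\) and then catches up with the efficient price through a brief episode of extreme return persistence.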
From a technical perspective, it is convenient to introduce a version of the DB model by Christensen et al. (2022).6
Footnote 6: Christensen et al. (2022) allow for simultaneous volatility and drift bursts. We do not include this feature, as we find no systematic evidence for significant elevation in option-based volatility measures at such times.
**The Drift Burst (DB) model**: For some non-negative sequence \(\tau_{n}\downarrow 0\), we have,
\[H_{t}=\int_{0}^{t}c(s-\tau_{n})^{-\vartheta}\mathds{1}_{\{s\in[\tau_{n},\overline{\tau}]\}}ds,\ \ \vartheta\in(0.5,1), \tag{8}\]
for a constant \(c\) and \(\tau_{n}\leq\overline{\tau}\). \(\tau_{n}\) and \(\overline{\tau}\) have the same interpretation as in the PN model.
The DB model may be viewed as a differential version of the PN model by observing that \(\int_{t_{1}}^{t_{2}}(s-\tau)^{-\vartheta}ds=(1-\vartheta)^{-1}(t_{2}-\tau)^{ 1-\vartheta}-(1-\vartheta)^{-1}(t_{1}-\tau)^{1-\vartheta}\) for \(0\leq\tau\leq t_{1}<t_{2}\). Therefore, the asymptotic analysis will be identical for the PN model and the DB model with \(\vartheta=1-\vartheta_{pn}\). We will exploit this equivalence in our theoretical arguments.
Henceforth, we denote the probability and expectation by \(\mathbb{P}_{\tau_{n}}\) and \(\mathbb{E}_{\tau_{n}}\) for the case where the observed log-price is generated as above, and by \(\mathbb{P}_{\infty}\) and \(\mathbb{E}_{\infty}\) if there are no Ito semimartingale violations (that is, when \(\tau_{n}=\infty\)).
Figure 1: Price path samples of the efficient price \(X\) (in blue) and observed price \(Y\) (in black) simulated by the Persistent Noise (PN) model.
## 3 Real-Time Detection
In this section, we propose GLR-CUSUM type detectors as stopping rules for the local Ito semimartingale violations illustrated above. Next, we develop their theoretical properties. This involves characterizing the accuracy for our estimate of the Brownian motion driving the price, followed by approximation results for the distribution of the stopping rules, leading to analytic formulas for ARL and FDR, and finally a probability result for BDD which, in turn, implies a probability bound on the speed of detection.
### The GLR-CUSUM stopping rule
Our local Ito semimartingale violation detector is built upon a local estimator of the Brownian motion driving the price, defined as,
\[\widehat{W}_{l_{1},\,l_{2}}\ =\ \sum_{i=l_{1}+1}^{l_{2}}\,\frac{\Delta_{i}^{n}Y}{ \hat{\sigma}_{(i-1)\,\Delta_{n}}}\,\mathbbm{1}_{\{|\Delta_{i}^{n}Y|<\zeta \Delta_{n}^{\varpi}\}}\,, \tag{9}\]
for some \(l_{1},l_{2}\in\mathbb{N}\) such that \(0\leq l_{1}<l_{2}\leq nT\), and \(\hat{\sigma}_{i\Delta_{n}}\) is an option-based non-parametric spot volatility estimator defined later. As detection of the semimartingale violation is inevitably subject to some delay, the option-based \(\hat{\sigma}\) enables us to avoid the bias in volatility estimation induced by the bursting drift following \(\tau_{n}\) (see Andersen et al. (2021)).
Our CUSUM-type detector, employing the _generalized likelihood ratio_ (GLR) statistic, is now defined by the stopping rule,
\[\mathbf{N}^{\mathrm{w}}(\xi)\ =\ \inf\left\{\,l>0:\max_{l-w_{n}\leq k<l}\, \frac{|\widehat{W}_{k,l}|}{\sqrt{(l-k)\,\Delta_{n}}}\ >\ \xi\,\right\}\,, \tag{10}\]
where \(w_{n}\) controls the maximum window length for \(\widehat{W}_{k,l}\), and \(\xi\) is a chosen threshold.7 In words, at time point \(l\), we calculate the absolute value of the Brownian increment estimate \(\widehat{W}_{k,l}\) over the interval \([k,l]\) (normalized by \(1/\sqrt{(l-k)\Delta_{n}}\) ) for each point \(k\) before \(l\), and then take the maximum. If the maximum exceeds the threshold \(\xi\), the detector reports an alarm, signaling detection of an Ito semimartingale violation.
Footnote 7: The terminology of generalized likelihood ratio statistic here refers to the square root of 2 times the log-likelihood ratio of a Brownian motion with drift \(\mu\) versus drift of \(0\), \(\mu\,W_{k,l}-\mu^{2}(l-k)\Delta_{n}/2\), with the unknown alternative \(\mu\) being replaced by its maximum likelihood estimate \(\widehat{\mu}=W_{k,l}/\big((l-k)\Delta_{n}\big)\); substituting \(\widehat{\mu}\) and taking the square root of twice the maximized value yields \(|W_{k,l}|/\sqrt{(l-k)\Delta_{n}}\).
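To fix ideas, the following sketch shows one way the stopping rule in equations (9)-(10) could be evaluated on a stream of high-frequency returns. The spot-volatility input is taken as given (in the paper it is option-based), and the function name, the default values of the truncation constants `zeta` and `varpi`, and the threshold used in the usage example are our own illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

def glr_cusum_detector(returns, spot_vol, delta_n, xi, w_n, zeta=3.0, varpi=0.49):
    """Sequential GLR-CUSUM stopping rule, a sketch of eqs. (9)-(10).

    returns  : high-frequency log-returns Delta_i^n Y, in time order
    spot_vol : spot-volatility estimates aligned with `returns`
               (sigma_hat at the left endpoint of each increment)
    delta_n  : length of one sampling interval (trading day normalized to 1)
    xi, w_n  : detection threshold and maximum look-back window (in increments)
    Returns the first index l (1-based) at which an alarm fires, or None.
    """
    thr = zeta * delta_n ** varpi                    # jump-truncation threshold
    dW = np.where(np.abs(returns) < thr, returns / spot_vol, 0.0)
    cumW = np.concatenate(([0.0], np.cumsum(dW)))    # cumW[l] = W_hat_{0,l}
    for l in range(1, len(returns) + 1):
        k = np.arange(max(l - w_n, 0), l)            # candidate onsets k < l
        stat = np.abs(cumW[l] - cumW[k]) / np.sqrt((l - k) * delta_n)
        if stat.max() > xi:                          # eq. (10): alarm raised
            return l
    return None
```

Under the null of no violation, the standardized, truncated increments behave approximately like i.i.d. Gaussian draws scaled by \(\sqrt{\Delta_{n}}\), so running the detector on simulated Brownian returns, e.g. `glr_cusum_detector(0.1 * np.sqrt(1/390) * np.random.standard_normal(390), np.full(390, 0.1), 1/390, xi=4.0, w_n=30)`, gives a rough sense of how often false alarms occur for a given threshold \(\xi\) and window \(w_{n}\).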
### 3.2 Recovering the Driving Brownian Motion for the Price
Our theoretical results concerning detection of local semimartingale violations hinge on the accuracy of the estimator \(\widehat{\sigma}_{t}\). The following assumption is sufficient.
**Assumption 2**.: _For some arbitrary small \(\mathbb{T}>0\), we have \(\inf_{t\in[0,\mathbb{T}]}\sigma_{t}>0\) and_
\[\mathbb{E}_{0}\,|\widehat{\sigma}_{t}-\sigma_{t}|^{2}\ \leq\ C_{0}\,\delta_{n}^{2}, \ \ \ \ t\in[0,\mathbb{T}], \tag{11}\]
_for some \(\mathcal{F}_{0}\)-adapted positive random variable \(C_{0}\), and a deterministic sequence \(\delta_{n}\to 0\)._
_In addition, we have \(\widehat{\sigma}_{t}>C_{0}/l_{n}\), for \(t\in[0,\mathbb{T}]\), some \(\mathcal{F}_{0}\)-adapted positive random variable \(C_{0}\), and \(l_{n}=\log(1/\delta_{n})\)._
Several features of the above assumption are noteworthy. First, we impose conditions on the volatility estimator only in the vicinity of zero. Second, we require that \(\sigma_{t}\) is non-vanishing in a neighborhood of zero. This is important for the behavior of the volatility estimator, but also for the overall validity of our detection procedure, as the diffusive component of \(X\) drives our asymptotic results. Third, the deterministic sequence \(\delta_{n}\) captures the rate of convergence of the option-based volatility estimator. This rate is determined by the mesh of the strike grid of the available options as well as their tenor, see Todorov (2019).
The above assumption is natural in the absence of a local semimartingale violation. Its validity is less clear if there is a violation of the type discussed in the previous section in the vicinity of \(t=0\). However, note that a drift burst or persistent noise incident will not affect the norm of the conditional characteristic function of the price increment which forms the basis for the estimator of Todorov (2019). Hence, the consistency of our spot volatility estimator should not be affected by the local semimartingale violation. That said, the quality of the option-based estimator might worsen during such periods due to poor option data quality. To guard against this possibility, we impose some filters on the option data in our application.
To facilitate the statement of our next result, we introduce the following notation,
\[\widetilde{W}_{l_{1},\,l_{2}}\ =\ \sum_{i=l_{1}+1}^{l_{2}}\Delta_{i}^{n}W,\]
for some \(l_{1},l_{2}\in\mathbb{N}\) such that \(0\leq l_{1}<l_{2}\leq nT\). The issue in this section is how closely \(\widehat{W}_{l_{1},\,l_{2}}\) approximates \(\widetilde{W}_{l_{1},\,l_{2}}\). The key result is provided by the following proposition.
**Proposition 1**.: _Suppose Assumptions 1 and 2 hold and \(H\equiv 0\). Let \(k_{n}\to\infty\) and \(k_{n}\Delta_{n}\to 0\) as \(\Delta_{n}\to 0\). Let \(0\leq l_{1}^{n}<l_{2}^{n}\) be such that \(l_{1}^{n}\Delta_{n}\to 0\) as \(\Delta_{n}\to 0\) and, as before, \(l_{n}=\log(1/\delta_{n})\). Then, for a positive \(\xi>0\), we have,_
\[\begin{split}&\mathbb{P}_{0}\left(\max_{|l_{2}^{n}-l_{1}^{n}|\leq k_ {n}}\,\frac{|\widehat{W}_{l_{1}^{n},l_{2}^{n}}-\widetilde{W}_{l_{1}^{n},l_{2}^ {n}}|}{\sqrt{(l_{2}^{n}-l_{1}^{n})\,\Delta_{n}}}\;>\,\xi\right)\\ &\qquad\qquad\leq\;C_{0}\,k_{n}\,\left(\,\frac{\sqrt{k_{n}}\,l_{ n}\,\delta_{n}+\sqrt{k_{n}\,\Delta_{n}}}{\xi}\;+\;\frac{k_{n}\,\Delta_{n}}{1 \wedge\xi^{2}}\;+\,\Delta_{n}^{1-\varpi}\right)\,,\end{split} \tag{12}\]
_for some \(\mathcal{F}_{0}\)-adapted positive random variable \(C_{0}\) that does not depend on \(k_{n}\), \(\Delta_{n}\) or \(\xi\)._
### 3.3 Average Run Length (ARL)
We first analyze the behavior of the stopping time \(\mathbf{N}^{\mathrm{w}}(\xi)\) under the null measure, \(\mathbb{P}_{\infty}\), where there is no Ito semimartingale violation, i.e., \(\tau_{n}=\infty\). More specifically, our interest is in evaluating the ARL, formally defined as \(\mathbb{E}_{\infty}[\,\mathbf{N}^{\mathrm{w}}(\xi)\,]\). We show in Theorem 1 below that \(\mathbf{N}^{\mathrm{w}}(\xi)\), under \(\mathbb{P}_{\infty}\), is approximately exponentially distributed, leading to an approximation result for the ARL. Theorem 1 follows by adapting the theoretical framework in Siegmund and Venkatraman (1995), based on the random fields analysis in Siegmund (1988), to our setting (see also Yao (1993)). We use the symbol "\(x_{n}\sim y_{n}\)" to denote "\(x_{n}\) is asymptotically equivalent to \(y_{n}\)", that is, \(\lim_{n\to\infty}x_{n}/y_{n}=1\).
**Theorem 1**.: _Denote the standard normal CDF and PDF by \(\Phi\) and \(\phi\). Let \(w_{n}\) be such that \(w_{n}\sim a_{n}\,\xi^{2}\), with \(a_{n}\) being a deterministic sequence converging to \(a\in(0,\infty]\). Suppose the conditions in Proposition 1 hold. Then, if \((l_{n}\,\delta_{n}+\sqrt{\Delta_{n}})/\,[\xi\,\phi(\xi)]\to 0\), as \(\xi\to\infty\), \(\mathbf{N}^{\mathrm{w}}(\xi)\) will be asymptotically exponentially distributed with expectation,_
\[\mathbb{E}_{\infty}[\,\mathbf{N}^{\mathrm{w}}(\xi)\,]\;\sim\;\frac{1}{D_{a}\, \xi\,\phi(\xi)}\;\,. \tag{13}\]
_Here \(D_{a}\) is a positive constant depending on \(a\) such that \(D_{a}\to D\), as \(a\to\infty\), where \(D=\int_{0}^{\infty}x\,\nu^{2}(x)\,dx\) with \(\nu(x)=2x^{-2}\exp\left[-2\sum_{n=1}^{\infty}n^{-1}\,\Phi(-x\sqrt{n}/2)\right]\) for \(x>0\)._
The proof of Theorem 1 is provided in Appendix A. Following Siegmund and Venkatraman (1995), it is useful for numerical evaluation to apply the approximation \(\nu(x)=\exp(-\rho x)+o(x^{2})\) for \(x\to 0\), where the value of the constant \(\rho\) is about \(0.583\). Based on this, we can numerically determine \(D\) to take the value \(0.735\).
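As a small numerical illustration (ours, not the paper's), plugging \(\nu(x)\approx\exp(-\rho x)\) into the definition of \(D\) reproduces the value \(0.735\), and formula (13) can then be evaluated for any threshold. The resulting figures are of the same order as the "Theory" column of Table 1 in the simulation study, although exact agreement would also require the finite-window constant \(D_{a}\) and any discreteness corrections used by the authors.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rho = 0.583
D, _ = quad(lambda x: x * np.exp(-2.0 * rho * x), 0.0, np.inf)  # D = int x nu(x)^2 dx with nu ~ exp(-rho x)
print(f"D ~ {D:.3f}")                                            # ~ 1 / (4 rho^2) ~ 0.735

for xi in [3.4, 3.6, 3.8, 4.0]:
    arl = 1.0 / (D * xi * norm.pdf(xi))                          # equation (13), in units of Delta_n
    print(f"xi = {xi}:  approximate ARL ~ {arl:,.0f} increments")
```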
We desire for the ARL, under the null, to be as large as possible or, equivalently, \(\xi\) to be as large as possible. The optimal such \(\xi\), up to a log term and for a given \(w_{n}\), will be \(\xi\) such that \(\phi(\xi)\sim(\delta_{n}\,l_{n}\vee\sqrt{\Delta_{n}})\). It is reasonable to assume that the size of the
error in estimating spot volatility from options is much smaller than the high-frequency approximation error, that is, \(\delta_{n}\,l_{n}/\sqrt{\Delta_{n}}\to 0\). Under this condition and the indicated choice of \(\xi\), we obtain from Theorem 1 that,
\[\mathbb{E}_{\infty}[\,{\bf N}^{\rm w}(\xi)\,]\ \sim\ \frac{1}{\sqrt{\Delta_{n}}}\ \sqrt{\log\left(\frac{1}{\Delta_{n}}\right)}\;. \tag{14}\]
We note that this ARL value can be obtained with \(w_{n}\) (the window size) taking values within a wide range -- from an order of \(\xi^{2}\) through using (almost) all available observations on a given day (retaining \(w_{n}\,\Delta_{n}\to 0\)). For the case of \(w_{n}/\,\xi^{2}\to\infty\), we may denote our detector \({\bf N}(\xi)\), and replace \(D_{a}\) by \(D\), as \(D_{a}\to D\) with \(a\to\infty\).8
Footnote 8: For the standard i.i.d. case, more specific guidelines in selecting \(w_{n}\) can be found in Lai (1995).
### 3.4 False Detection Rate (FDR)
Given the nature of our sequential detection procedure, we cannot rely on the regular notion of test size to control the rejection rate under the null hypothesis. The preceding subsection instead focuses on the concept of ARL, establishing this quantity (the ARL) as the mean value of the asymptotic distribution for stopping time \({\bf N}^{\rm w}(\xi)\) under \(\mathbb{P}_{\infty}\). However, this distributional result has wider implications. For example, it allows us to control the false detection rate (FDR) as,
\[\mathbb{P}_{\infty}\,[\,{\bf N}^{\rm w}(\xi)\,<\,\ell_{n}\,]\ \leq\ \alpha\,, \tag{15}\]
where \(\ell_{n}\) indicates an asymptotically increasing sequence of observations, while \(\alpha\) is the bound for the FDR - the probability of triggering (at least) one false alarm after observing no more than \(\ell_{n}\) price increments. If \(\alpha\) is asymptotically shrinking, then we refer to \(\ell_{n}\) as the _bound (in probability) on false detection_ or BFD.
The FDR is often viewed as a more robust measure than ARL. Typically, the objective is to ensure an expected long duration (ARL) before a false alarm is triggered. The problem is that a stopping rule may feature a large ARL and still retain a high probability of false alarms within short periods. Although this does not apply in our case, we still proceed with a result concerning the limiting behavior of FDR under the null, since it provides a guideline for selecting a threshold in empirical applications. Moreover, it renders the Monte Carlo studies more computationally tractable, as we avoid simulating an excessively long sample to assess the ARL, especially when we experiment with large thresholds. Below, we use the symbol "\(x_{n}\lesssim y_{n}\)" to denote "\(x_{n}\) is asymptotically equivalent to or less than \(y_{n}\)", that is, \(\lim_{n\to\infty}x_{n}/y_{n}\leq 1\).
**Corollary 1**.: _Suppose the conditions in Theorem 1 hold and let \(\ell_{n}\) be such that \(\ell_{n}\,\xi\,\phi(\xi)\to 0\), as \(\xi\to\infty\). Then, we have,_
\[\mathbb{P}_{\infty}[\,\mathbf{N}^{\mathrm{w}}(\xi)<\ell_{n}\,]\ \lesssim\ \ell_{n}\,D_{a}\,\xi\,\phi(\xi)\,. \tag{16}\]
This corollary follows directly from Theorem 1. Specifically, the approximate exponential distribution of \(\mathbf{N}^{\mathrm{w}}(\xi)\) indicates \(\mathbb{P}_{\infty}[\mathbf{N}^{\mathrm{w}}(\xi)<\ell_{n}]\sim 1-\exp(-\ell_{n} \,D_{a}\,\xi\,\phi(\xi))\), and the result then follows from the inequality \(1-\exp(-x)\leq x\) for \(x\) close to \(0\). A sequence \(\ell_{n}\) satisfying the conditions of Corollary 1 constitutes a BFD.
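In practice, the bound in (16) can be inverted to pick a threshold for a target daily false-detection rate. The sketch below (again ours; the constants \(D=0.735\) and \(\ell_{n}=390\) one-minute increments are illustrative) does this numerically; it is only a first-order guide and will not reproduce Table 2 exactly.

```python
from scipy.optimize import brentq
from scipy.stats import norm

D, ell_n = 0.735, 390                                  # one trading day of 1-minute increments
fdr_bound = lambda xi: ell_n * D * xi * norm.pdf(xi)   # right-hand side of (16)

for alpha in [0.10, 0.05, 0.01]:
    xi_star = brentq(lambda xi: fdr_bound(xi) - alpha, 2.0, 8.0)
    print(f"target daily FDR {alpha:.2f}  ->  threshold xi ~ {xi_star:.2f}")
```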
### 3.5 Bound on Detection Delay (BDD)
When an Ito semimartingale violation occurs at some time \(\tau_{n}\downarrow 0\), we wish to detect it as quickly as possible, using discrete high-frequency observations of \(Y\) starting at time \(0\). We refer to a deterministic sequence \(T_{n}\to\infty\) such that \(\mathbb{P}\left(\,\mathbf{N}^{\mathrm{w}}(\xi)\,>\,\tau_{n}/\Delta_{n}+T_{n}\, \right)\ \to\ 0\), when \(H\) is nontrivial, as a _bound (in probability) on detection delay_ or BDD. We want \(T_{n}\) as small as possible. Theorem 2 below provides a lower bound on BDD.
**Theorem 2**.: _Suppose \(b_{s}=b_{0}\), \(\sigma_{s}=\sigma_{0}\) and \(\Delta X_{s}=0\), almost surely, for \(s\) in a neighborhood of zero and that Assumption 2 holds. Let \(H\) be given by the drift burst model in equation (8) for some \(c\neq 0\). Let \(w_{n}\) be such that \(w_{n}\sim a_{n}\,\xi^{2}\), with \(a_{n}\) being a deterministic sequence converging to \(a\in(2/c^{2},\infty]\). Finally, let \(\varpi\in(0,1/2)\) and \(\vartheta\in(1/2,1)\) as well as \(\xi\Delta_{n}^{\iota}\to 0\) for any \(\iota>0\). We then have,_
\[\mathbb{P}\left(\,\mathbf{N}^{\mathrm{w}}(\xi)\,>\,\tau_{n}/\Delta_{n}+T_{n}\, \right)\ \to\ 0\,, \tag{17}\]
_for any sequence \(T_{n}\to\infty\) such that \(T_{n}\,\Delta_{n}\to 0\), \(\ w_{n}/T_{n}\to 0\), \(\ T_{n}/\Delta_{n}^{(1/2-\vartheta)/\vartheta}\to\infty\) and \(\sqrt{w_{n}}\ l_{n}\,\delta_{n}\,\Delta_{n}^{\varpi-1/2}/\left(\xi\,T_{n} \right)\to 0\)._
We invoke a number of simplifying assumptions in the derivation of Theorem 2. Primarily, we assume that \(X\) is continuous and features constant drift and constant diffusive volatility in a (shrinking) neighborhood of zero. This style of assumption is common in the high-frequency financial econometrics literature and -- as is typically the case -- it should be possible to relax at the cost of more complicated derivations.
Theorem 2 is proved by constructing an alternative measure, say \(\widetilde{\mathbb{P}}\), under which the semimartingale violations are milder. Specifically, the expected value of the increments under \(\widetilde{\mathbb{P}}\) are smaller in asymptotic order of magnitude than under \(\mathbb{P}\). Consequently, it takes longer to detect the change, and thus the BDD under \(\widetilde{\mathbb{P}}\) is an upper bound for that under \(\mathbb{P}\).
The object characterized by the BDD condition (17) is a diverging sequence of observations, \((T_{n})\), sufficiently large to ensure almost sure detection, in our infill asymptotic setting, of an Ito semimartingale violation. It is natural to require that our detector identifies such violations before triggering a false alarm with probability approaching one or, equivalently, \(\text{BDD}<\text{BFD}.\) From Theorem 2, we need \(T_{n}/\Delta_{n}^{(1/2-\vartheta)/\vartheta}\to\infty,\) while BFD, from Corollary 1 and the discussion thereafter, is of asymptotic order \(1/\left(\xi\,\phi(\xi)\right)\). Thus, we must have \(\xi\,\phi(\xi)\,\Delta_{n}^{(1/2-\vartheta)/\vartheta}\to 0\) to ensure \(\text{BDD}<\text{BFD}.\) This condition may be satisfied for any \(\vartheta<1\), provided \(\xi\) is chosen as large as possible, while still guaranteeing that Theorem 1 applies (recall the discussion after Theorem 1).
The BDD embeds two sources of delay (as shown in our proof): (i) the truncation of price increments to guard against jumps; (ii) the requisite cumulation of non-truncated returns in our detection statistic. Asymptotically, the first delay term dominates the second. Nonetheless, in finite samples, the BDD will be determined by both effects.
**Remark 3.1**.: _We can also introduce a double-window-limited GLR rule, defined as,_
\[\mathbf{N}^{\mathrm{ww}}(\xi):=\inf\left\{l>0:\max_{l-w_{n}-r_{n}\leq k<l-r_{n }}\frac{|\widehat{W}_{k,l}|}{\sqrt{(l-k)\Delta_{n}}}>\xi\right\}, \tag{18}\]
_where \(r_{n}\) imposes a restriction on the minimum span for the statistic \(\widehat{W}_{k,l}\). This helps reduce the false detection error under the null induced by extreme values for the statistic \(\widehat{W}_{k,l}\) due to only a few (\(<r_{n}\)) increments. Consequently, this will generate a longer ARL/smaller FDR under the null, and a slightly larger BDD under the alternative. We do not analyze \(\mathbf{N}^{\mathrm{ww}}(\xi)\) theoretically, but evaluate its performance numerically via simulation. The Monte Carlo study in Section 4 shows that setting \(r_{n}\) to, say, \(5\), we obtain effective protection against outliers without affecting the detection delay by much._
_We note that the literature on testing for bubbles in macroeconomic settings often imposes a similar minimum duration restriction on the bubble period (see, e.g., Phillips et al. (2011), Phillips and Yu (2011), and Phillips et al. (2015b))._
## 4 Simulation Study
In this section we carry out a simulation study to assess the finite-sample performance of our procedure for swift detection of local Ito semimartingale violations.
### 4.1 Simulation setting
To generate sample paths for the efficient log-price \(X\), we simulate the following Heston type stochastic volatility model with jumps,
\[dX_{t} = b_{t}\,dt\,+\,\sigma_{t}\,dW_{1,t}\,+\,dJ_{t}\,, \tag{19}\] \[d\sigma_{t}^{2} = \kappa\left(\gamma-\sigma_{t}^{2}\,\right)dt\,+\,\varsigma\,\sigma_{t}\,dW_{2,t}\,, \tag{20}\]
where \(W_{1}\) and \(W_{2}\) are standard Brownian motions with correlation \(\mathbb{E}(dW_{1,t}\,dW_{2,t})=\rho\,dt\), and \(J_{t}\) denotes the jump term, for which we employ a compound Poisson process with intensity \(p_{X}\) and jump size distribution \(\mathcal{N}(0,\lambda_{X}^{2})\).
In terms of parameter specification, we set the initial efficient log-price \(X_{0}\) to \(\log(1200)\), the drift term \(b_{t}\) to zero, and the unit of time to one trading day. The volatility process \(\sigma_{t}^{2}\) is initiated at its unconditional mean of \(\gamma\) on day one while, for other days, it is initiated at the ending value of the previous day. The annualized parameter vector for the Heston model is set to \((\kappa,\gamma,\varsigma,\rho)=(5,0.0225,0.4,-\sqrt{0.5})\). For the jump components, we let \(p_{X}=3/5\), corresponding to 3 jumps per week on average, and \(\lambda_{X}=0.5\%\).
For the Ito semimartingale violation term \(H\), we employ the DB and PN models (8) and (6)-(7), respectively, to generate gradual-jump type patterns. Specifically, for the DB model, we set \(c=3\), \(\tau_{n}=0.25\) and \(\overline{\tau}=0.4\), and explore \(\vartheta\in\{0.55,0.65,0.75\}\). For the PN model, we add a jump in \(X\) at \(\tau_{n}=0.25\) each day. We let \(f(\Delta X_{\tau},\eta_{\tau})=\eta_{\tau}\,\Delta X_{\tau}\) with \(\eta_{\tau}=-1\), \(\overline{\tau}=0.4\), and explore \(\vartheta_{pn}\in\{0.45,0.35,0.25\}\), corresponding to \(\Delta X_{\tau}=\{1.4\%,2.0\%,3.0\%\}\). Finally, \(\widehat{\sigma}_{t}=\sigma_{t}\times(1+0.02\times Z)\), where \(Z\) is a standard normal random variable, serves as proxy for the (noisy) option-based volatility estimate.
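For concreteness, the following sketch simulates one trading day of one-minute efficient log-prices from (19)-(20) with the parameter values quoted above. It is our own illustration: the Euler discretization, the annualization bookkeeping, and the variance flooring are our choices, and the violation term \(H\) from (6)-(8), whose exact scaling is specified earlier in the paper, is only marked by a comment rather than implemented.

```python
import numpy as np

def simulate_day(n=390, seed=0,
                 kappa=5.0, gamma=0.0225, varsigma=0.4, rho=-np.sqrt(0.5),
                 p_jump=3 / 5, lam_jump=0.005, x0=np.log(1200.0)):
    """One day of 1-minute efficient log-prices; Heston parameters are annualized,
    simulation time is measured in days (1 day = 1/252 year)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n            # one minute, in day units
    dt_y = dt / 252.0       # the same step, in year units
    x, v = x0, gamma        # v = annualized spot variance sigma_t^2, started at its mean
    path = [x]
    for _ in range(n):
        z1, z2 = rng.standard_normal(2)
        dW1 = np.sqrt(dt_y) * z1
        dW2 = np.sqrt(dt_y) * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
        jump = rng.normal(0.0, lam_jump) if rng.random() < p_jump * dt else 0.0
        x += np.sqrt(max(v, 0.0)) * dW1 + jump   # b_t = 0; the DB/PN term H would be added here
        v += kappa * (gamma - v) * dt_y + varsigma * np.sqrt(max(v, 0.0)) * dW2
        path.append(x)
    return np.array(path)
```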
### 4.2 Simulation results
We first present results for the null measure \(\mathbb{P}_{\infty}\), i.e., no Ito semimartingale violations.
In Table 1, we report the ARL values for our proposed detectors \(\mathbf{N}(\xi)\) (with \(w_{n}=\infty\)), \(\mathbf{N}^{\mathrm{w}}(\xi)\) (\(w_{n}=30\) minutes), and \(\mathbf{N}^{\mathrm{ww}}(\xi)\) (\((w_{n},r_{n})=(30,5)\) minutes) based on our simulation setting. We also provide the theoretical ARL values for the former two detectors based on Theorem 1. Comparing the first two and the next two columns we find that, for a wide range of threshold values ranging from 3.4 to 4.0, the Monte Carlo values for \(\mathbf{N}(\xi)\) and \(\mathbf{N}^{\mathrm{w}}(\xi)\) are close to their corresponding theoretical values, indicating that the asymptotic approximation (13) captures the finite-sample performance well. When comparing the ARLs of \(\mathbf{N}(\xi)\) to those of \(\mathbf{N}^{\mathrm{w}}(\xi)\), we find that the latter are
slightly larger than the former. This indicates that the window limit \(w_{n}\) does not induce any major deterioration under the null, even with the relatively small value (here, 30 minutes). On the contrary, in the last column, we find the ARL values for \(\mathbf{N}^{\mathrm{ww}}(\xi)\) to be significantly larger than the previous ones within the same settings. Evidently, the window limit \(r_{n}\) has a significant impact under the null measure by ignoring false detections induced by just a couple of increments. Thus, there is room to experiment along this dimension in order to improve the FDR, although it is critical to also monitor the associated increase in the detection delay under the alternative.
**Table 1: Average Run Length (ARL)**

| Threshold \(\xi\) | \(\mathbf{N}(\xi)\) Theory | \(\mathbf{N}(\xi)\) Simulation | \(\mathbf{N}^{\mathrm{w}}(\xi)\) Theory | \(\mathbf{N}^{\mathrm{w}}(\xi)\) Simulation | \(\mathbf{N}^{\mathrm{ww}}(\xi)\) Simulation |
| --- | --- | --- | --- | --- | --- |
| 3.4 | 358 | 343 | 393 | 367 | 684 |
| 3.5 | 487 | 475 | 536 | 509 | 939 |
| 3.6 | 669 | 673 | 740 | 718 | 1298 |
| 3.7 | 931 | 956 | 1033 | 1014 | 1792 |
| 3.8 | 1310 | 1312 | 1457 | 1432 | 2456 |
| 3.9 | 1864 | 1881 | 2082 | 2058 | 3356 |
| 4.0 | 2682 | 2609 | 3007 | 2897 | 4406 |

Note: The ARL is based on simulated 1-minute data for 1,000 replications. We choose the window size \(w_{n}=30\) minutes for \(\mathbf{N}^{\mathrm{w}}(\xi)\) and \((w_{n},r_{n})=(30,5)\) minutes for \(\mathbf{N}^{\mathrm{ww}}(\xi)\). We set the jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\,\mathrm{med}}\) for all cases.

The above findings are corroborated by the FDR results in Table 2, where we use a wider range of threshold values, taking advantage of the fact that FDR can be assessed through simulations over deliberately chosen shorter time intervals. In particular, for each replication, we simulate one-minute prices for a day, and obtain the FDR as the fraction of replications with (at least) one false detection across our 5,000 replications. In general, the Monte Carlo FDRs are close to their theoretical values for \({\bf N}(\xi)\) and \({\bf N}^{\rm w}(\xi)\) by Corollary 1, implying accurate analytic approximations. The window limit \(w_{n}\) does not impact the FDRs by much, while a moderate choice for \(r_{n}\) may significantly reduce the false detection rate under the null. Finally, we note that this table can be used as a guide for choosing \(\xi\), as well as \(w_{n}\) and/or \(r_{n}\), in empirical applications, perhaps following further simulations for alternative asset price dynamics.
**Table 2: False Detection Rate (FDR)**

| Threshold \(\xi\) | \({\bf N}(\xi)\) Theory | \({\bf N}(\xi)\) Simulation | \({\bf N}^{\rm w}(\xi)\) Theory | \({\bf N}^{\rm w}(\xi)\) Simulation | \({\bf N}^{\rm ww}(\xi)\) Simulation |
| --- | --- | --- | --- | --- | --- |
| 3.5 | 0.5227 | 0.5111 | 0.4889 | 0.4880 | 0.3046 |
| 3.6 | 0.4160 | 0.4079 | 0.3854 | 0.3868 | 0.2383 |
| 3.7 | 0.3207 | 0.3088 | 0.2943 | 0.2903 | 0.1777 |
| 3.8 | 0.2403 | 0.2311 | 0.2189 | 0.2143 | 0.1319 |
| 3.9 | 0.1756 | 0.1667 | 0.1588 | 0.1508 | 0.0923 |
| 4.0 | 0.1256 | 0.1180 | 0.1128 | 0.1066 | 0.0666 |
| 4.1 | 0.0881 | 0.0847 | 0.0787 | 0.0775 | 0.0499 |
| 4.2 | 0.0608 | 0.0593 | 0.0540 | 0.0541 | 0.0364 |
| 4.3 | 0.0413 | 0.0395 | 0.0365 | 0.0355 | 0.0249 |
| 4.4 | 0.0276 | 0.0265 | 0.0243 | 0.0233 | 0.0170 |
| 4.5 | 0.0183 | 0.0180 | 0.0160 | 0.0157 | 0.0118 |

Note: FDR results based on simulated 1-minute data with \(\ell_{n}=390\) (a day) for 10,000 replications. We choose the window size \(w_{n}=30\) minutes for \({\bf N}^{\rm w}(\xi)\) and \((w_{n},r_{n})=(30,5)\) minutes for \({\bf N}^{\rm ww}(\xi)\). We set the jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\rm med}\).
We round off our simulation study for the null measure \(\mathbb{P}_{\infty}\) with Figure 2. It displays histograms for the duration until (false) alarm associated with our detectors \(\mathbf{N}(\xi)\) (\(w_{n}=\infty\)) and \(\mathbf{N}^{\mathrm{w}}(\xi)\) (\(w_{n}=30\) minutes) based on 1,000 replications. The plots visually corroborate the approximate exponential distribution results in Theorem 1.
We now turn to \(\mathbb{P}_{\tau_{n}}\) -- the probability measure with local Ito semimartingale violation at time \(\tau_{n}\) under the setting of Section 4.1. Table 3 reports the (true) detection rates (denoted \(\mathbf{r}\)), i.e., what fraction of the 1,000 replications contain (at least) one detection, and the estimated _expected detection delay_ or EDD defined as,
\[\mathbb{E}_{\tau_{n}}\left[\,\mathbf{N}^{\mathrm{w}}(\xi)-\tau_{n}\,|\, \mathbf{N}^{\mathrm{w}}(\xi)>\tau_{n}\,\right]. \tag{21}\]
The EDD is regarded as the analogue of the ARL for the alternative measure \(\mathbb{P}_{\tau_{n}}\). In fact, ARL versus EDD is the canonical pair of criteria in the sequential detection literature (see, e.g., Lorden (1971)), just as size versus power is for hypothesis testing. Unfortunately, an explicit approximation for the EDD is not available in our case, and we can only derive BDD, which is a bound in probability on \(\mathbf{N}^{\mathrm{w}}(\xi)\). Reporting EDD in the simulation, however, makes it easier to assess the size of \(\mathbf{N}^{\mathrm{w}}(\xi)\) under the alternative.
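Given simulated first-alarm indices, \(\mathbf{r}\) and the EDD in (21) are straightforward sample statistics. A minimal sketch (ours; `np.nan` marks replications with no alarm) is:

```python
import numpy as np

def detection_rate_and_edd(stop_idx, tau_idx):
    """stop_idx : array of first-alarm indices per replication (np.nan if no alarm)
    tau_idx  : index of the violation time tau_n.  Returns (r, EDD)."""
    stop_idx = np.asarray(stop_idx, dtype=float)
    r = np.mean(~np.isnan(stop_idx))                          # fraction with at least one detection
    after = stop_idx[~np.isnan(stop_idx) & (stop_idx > tau_idx)]
    edd = np.mean(after - tau_idx) if after.size else np.nan  # delay, conditional on N > tau_n
    return r, edd
```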
We consider both DB and PN gradual jump patterns as specified above. In both cases, we explore our detectors \(\mathbf{N}^{\mathrm{w}}(\xi)\) with \(w_{n}=30\) minutes and \(\mathbf{N}^{\mathrm{ww}}(\xi)\) with \((w_{n},r_{n})=(30,5)\) minutes under three sampling frequencies: 10-seconds, 30-seconds and 60-seconds.
Both detectors provide satisfactory performance, delivering high detection rates and short detection delays. Specifically, comparing vertically within each panel, we see that
\(\mathbf{r}\) increases and EDD shrinks, as the violations grow more severe. Likewise, comparing horizontally within each panel, we find the detectors improving with higher sampling frequency -- generating larger \(\mathbf{r}\)'s and shorter detection delays (in seconds, obtained by multiplying the entries accordingly with 10, 30 and 60). Experiments featuring even higher sampling frequencies, in turn, confirm our (asymptotic) immediate detection result. However, implementation at very high frequencies within an actual market setting is impacted by microstructure noise, so it is arguably less practically applicable unless one employs a scheme to explicitly account for ultra high-frequency frictions.
**Table 3: Detection Rate (\(\mathbf{r}\)) and Expected Detection Delay (EDD)**

| \(\mathbf{r}\)/EDD | \(\mathbf{N}^{\mathrm{w}}(\xi)\), 10s | \(\mathbf{N}^{\mathrm{w}}(\xi)\), 30s | \(\mathbf{N}^{\mathrm{w}}(\xi)\), 60s | \(\mathbf{N}^{\mathrm{ww}}(\xi)\), 10s | \(\mathbf{N}^{\mathrm{ww}}(\xi)\), 30s | \(\mathbf{N}^{\mathrm{ww}}(\xi)\), 60s |
| --- | --- | --- | --- | --- | --- | --- |
| _DB gradual jumps_ | | | | | | |
| 0.55 | 0.853/30.37 | 0.772/13.14 | 0.744/8.12 | 0.870/28.48 | 0.703/13.90 | 0.514/11.75 |
| 0.65 | 0.948/13.28 | 0.948/8.77 | 0.932/6.03 | 0.948/14.06 | 0.887/9.36 | 0.756/8.69 |
| 0.75 | 1.000/7.41 | 0.999/5.83 | 0.989/5.14 | 0.996/9.00 | 0.972/8.09 | 0.977/6.72 |
| _PN gradual jumps_ | | | | | | |
| 0.45 | 0.943/14.24 | 0.819/12.24 | 0.645/9.16 | 0.938/14.86 | 0.800/13.50 | 0.621/10.65 |
| 0.35 | 0.969/10.86 | 0.913/9.28 | 0.778/7.61 | 0.966/11.46 | 0.909/10.42 | 0.773/9.03 |
| 0.25 | 0.997/7.47 | 0.962/7.52 | 0.977/5.49 | 0.997/8.75 | 0.958/8.98 | 0.977/6.80 |

Note: Detection rate (\(\mathbf{r}\)) and expected detection delay (EDD) results based on simulated 10-, 30- and 60-seconds data for 1,000 days. We set \(w_{n}=30\) minutes for \(\mathbf{N}^{\mathrm{w}}(\xi)\) and \((w_{n},r_{n})=(30,5)\) minutes for \(\mathbf{N}^{\mathrm{ww}}(\xi)\), with jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\mathrm{med}}\).
## 5 Empirical Application
### 5.1 Data
We apply our detectors to one-minute S&P 500 equity (SPX) index futures data covering January 2007 - December 2020. We only use data for the regular trading hours and eliminate days with reduced trading hours, producing a sample of \(3,524\) days. Following our theoretical framework, we use the nonparametric option-based volatility estimator, SV, proposed by Todorov (2019). We rescale SV using the previous day's truncated volatility (TV) so that they are at the same level.
### 5.2 Detection of semimartingale violations for S&P 500
We apply \(\mathbf{N}^{\mathrm{ww}}(\xi)\) with \((w_{n},r_{n})=(30,5)\) minutes and \(\xi=4\) for our one-minute data each day, implying we initiate the procedure 30 minutes after the market open. Given this design, we lose power in terms of detecting violation periods shorter than 5 minutes. From Table 2, we see that this detector has a probability of about 6% to trigger a false alarm at the daily level.
We further equip our detection with an _ad hoc_ identification rule to pin down Ito semimartingale violation regions. Specifically, we define it as the union of all (probably overlapping) intervals, \([\,l_{a},l_{b}\,]\), such that
\[[\,l_{a},l_{b}\,]=\underset{0\leq l_{a}-w_{n}-r_{n}<l_{b}-r_{n}}{\arg\max}\, \frac{|\widehat{W}_{l_{a},l_{b}}|}{\sqrt{(l_{b}-l_{a})\Delta_{n}}}>\xi. \tag{22}\]
That is, in words, we define the region as the union of the span of GLR-CUSUM statistics that exceed the chosen threshold \(\xi\).
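A literal (and deliberately unoptimized) reading of this rule is sketched below, reusing the standardized, truncated increments from the detector sketch in Section 3.1. As before, this is our own illustration; the window and truncation constants are the ones quoted in the text, everything else is an assumption.

```python
import numpy as np

def violation_regions(dY, sigma_hat, dt, xi, w_n=30, r_n=5, zeta=4.0, varpi=0.49):
    """Union of all intervals [la, lb] whose GLR-CUSUM statistic exceeds xi,
    returned as a list of merged (start, end) index pairs."""
    trunc = zeta * np.median(sigma_hat) * dt ** varpi
    z = np.where(np.abs(dY) < trunc, dY / sigma_hat, 0.0)
    csum = np.concatenate([[0.0], np.cumsum(z)])
    hits = []
    for lb in range(1, len(dY) + 1):
        for la in range(max(0, lb - w_n - r_n), max(0, lb - r_n)):
            stat = abs(csum[lb] - csum[la]) / np.sqrt((lb - la) * dt)
            if stat > xi:
                hits.append((la, lb))
    merged = []                                   # take the union of overlapping intervals
    for la, lb in sorted(hits):
        if merged and la <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], lb))
        else:
            merged.append((la, lb))
    return merged
```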
Figures 3 and 4 provide illustrations for two trading days, August 7, 2007 and August 30, 2019, with detected Ito semimartingale violations. The price paths are red, \(\mathbf{N}^{\mathrm{ww}}(\xi)\) rejections are indicated by dark grey vertical lines, and the violation regions by light grey areas. In the bottom panel, we also provide 30-minutes rolling window TV in blue and the option-based SV (rescaled by the same day's TV average) in orange.
On August 7, 2007, the price displays a gradual upward trend, until an abrupt 20 point crash over 2:15-2:30 pm, followed by a rapid recovery, taking the price beyond the original level. Our detector swiftly signals a violation as the flash-crash pattern emerges, triggering an alarm within a few minutes. As shown in the bottom plot, this dramatic price pattern induces an explosion in the local realized volatility measure, which is not consistent with our option-based SV measure, confirming the potential for
dramatic biases in return-based volatility measurement. Similar findings hold for our second example on August 30, 2019, where a gradual jump initiates before 11:00 am. Again, our detector captures it expeditiously.
Table 4 reports the daily count of Ito semimartingale violation regions detected by \(\mathbf{N}^{\mathrm{ww}}(\xi)\) with window length \((w_{n},r_{n})=(30,5)\) minutes under various values for the threshold \(\xi\). We also split the trading day (of 390 minutes) into three periods -- the "Morning" for the first 130 minutes, the "Noon" for the middle 130 minutes, and the "Afternoon" for the rest -- and report the associated count of violation regions initiated within that period. Finally, Figure 5 plots the corresponding histograms for the duration of these violation regions for the scenario with \(\xi=4\).
Figure 3: **Upper plot**: Price (red), Detection points (black vertical lines) and PN-regions (in grey) for August 7, 2007. **Bottom plot**: same day’s rolling window truncated realized volatility (TV, in blue) and spot volatility (SV, in orange) standardized by the day’s average TV. Detection based on \(\mathbf{N}^{\mathrm{ww}}(\xi)\) with \((w_{n},r_{n})=(30,5)\) minutes, \(\xi=4\), and jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\,med}\).

We find that "Morning" generates the most violations, with "Noon" producing only slightly fewer, while "Afternoon" produces the least, even if the difference is not striking. In sum, the violations are frequent and they are not particularly prone to occur in specific regions of the trading day. Finally, from the histograms in Figure 5, we note that the duration of a typical violation is short, with the clear majority lasting less than 20 minutes. Moreover, the violation regions starting in the morning tend to be somewhat shorter lived than those initiated during noon or afternoon.
Figure 4: **Upper plot**: Price process (in red), Detection points (black vertical lines) and PN-regions (in grey) for August 30, 2019. **Bottom plot**: same day’s rolling window truncated volatility (TV, in blue) and spot volatility (SV, in orange) standardized by the day’s average TV. Detection based on \(\mathbf{N}^{\text{ww}}(\xi)\) with \((w_{n},r_{n})=(30,5)\) minutes, \(\xi=4\), and jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\,med}\).

A manifestation of the Ito semimartingale violation during the detected PN episodes is the divergence between return- and option-based spot volatility estimators. The bottom panels of Figures 3 and 4 show that the divergences are large in these two specific cases. Table 5 reports the sample mean of the return- and option-based volatility estimates \(TV\) and \(SV\) over the hour before and after the initiation of a PN episode. The results reveal that the two volatility proxies are close before the PN episode but feature a substantial gap over the following hour, with the average \(TV\) about 35% higher than the average \(SV\). This gap only grows larger if we exclude days with FOMC announcements from the calculation, so the discrepancy is not driven by this type of macroeconomic announcement. These findings demonstrate how local deviations from the semimartingale assumption can distort the measurement of volatility.
Finally, Table 6 reports the number of PN episodes for each year across the sample given different threshold levels for detection. The PN events appear nearly uniformly distributed over the sample, with no particular clustering apparent during periods characterized by generally volatile or tranquil market conditions. Overall, we conclude that PN episodes are largely idiosyncratic events with no particular tendency to occur during specific times within the trading day or during specific market conditions.
**Table 4: Count of Regions with Itô Semimartingale Violations**

| Threshold \(\xi\) | Total | Morning | Noon | Afternoon |
| --- | --- | --- | --- | --- |
| 4 | 2186 | 824 | 738 | 624 |
| 4.1 | 1901 | 714 | 641 | 546 |
| 4.2 | 1638 | 620 | 542 | 476 |
| 4.3 | 1398 | 520 | 461 | 417 |
| 4.4 | 1172 | 438 | 378 | 356 |
| 4.5 | 994 | 364 | 318 | 312 |
Figure 5: Histograms of PN-region durations. Specifications: \(\mathbf{N}^{\text{ww}}(\xi)\) with \((w_{n},r_{n})=(30,5)\) minutes and \(\xi=4\); Jump truncation threshold \(\zeta=4\,\widehat{\sigma}_{t-1}^{\,med}\).

## 6 Conclusion

In this paper, we focus on real-time detection of local Ito semimartingale violations that have drawn increasing attention in the recent literature. We propose CUSUM-type detectors as stopping rules exploiting high-frequency data. We show that they possess desirable theoretical properties under infill asymptotics. Specifically, for a suitably chosen average run length (the average sample length until a false alarm), our detectors enable "quick" detection of a violation, once it occurs. Our formal interpretation of rapid detection is that the bound on the detection delay (BDD) shrinks to zero under infill asymptotics, as the sampling interval \(\Delta_{n}\) goes to zero. These properties are corroborated through simulations calibrated to reflect key features of market data. Finally, we apply our detector to S&P 500 equity (SPX) index futures data. We obtain timely detection of a nontrivial number of short-lived episodes involving likely semimartingale violations. Such turbulent market events are critical for real-time decision making. For example, they may indicate temporary market malfunction motivating exchange or regulatory action, they may induce the termination of trading strategies and algorithms among active market participants triggering a period of fleeting liquidity, and they signal problems in extracting reliable high-frequency based market volatility and risk measures.
**Table 5: Volatility Estimates during Regions with Ito Semimartingale Violations**

| | 1 hour before: TV | 1 hour before: SV | 1 hour after: TV | 1 hour after: SV |
| --- | --- | --- | --- | --- |
| ALL | 10.93\(\times 10^{-5}\) | 10.58\(\times 10^{-5}\) | 12.61\(\times 10^{-5}\) | 10.53\(\times 10^{-5}\) |
| No FOMC | 11.72\(\times 10^{-5}\) | 11.56\(\times 10^{-5}\) | 13.72\(\times 10^{-5}\) | 11.51\(\times 10^{-5}\) |

Note: This table reports the average volatility measures \(TV\) and \(SV\), one hour before and one hour after the starting point of PN episodes.
**Table 6: PN-episodes Yearly Counts**

| Year | \(\xi=4\) | \(\xi=4.1\) | \(\xi=4.2\) | \(\xi=4.3\) | \(\xi=4.4\) | \(\xi=4.5\) |
| --- | --- | --- | --- | --- | --- | --- |
| 2007 | 171 | 155 | 132 | 105 | 83 | 71 |
| 2008 | 132 | 117 | 98 | 88 | 77 | 63 |
| 2009 | 89 | 80 | 65 | 56 | 42 | 30 |
| 2010 | 129 | 108 | 94 | 80 | 65 | 53 |
| 2011 | 133 | 117 | 100 | 88 | 78 | 57 |
| 2012 | 133 | 109 | 92 | 83 | 69 | 56 |
| 2013 | 194 | 168 | 142 | 118 | 103 | 92 |
| 2014 | 241 | 206 | 165 | 143 | 112 | 96 |
| 2015 | 139 | 121 | 116 | 97 | 87 | 76 |
| 2016 | 153 | 137 | 118 | 100 | 87 | 75 |
| 2017 | 170 | 151 | 132 | 113 | 99 | 84 |
| 2018 | 150 | 131 | 117 | 98 | 80 | 71 |
| 2019 | 172 | 153 | 133 | 111 | 91 | 83 |
| 2020 | 180 | 148 | 134 | 118 | 99 | 87 |

Note: This table reports annual persistent-noise episode counts from 2007 to 2020 under different threshold values.
|
2301.12333 | Deep Learning model integrity checking mechanism using watermarking
technique | In response to the growing popularity of Machine Learning (ML) techniques to
solve problems in various industries, various malicious groups have started to
target such techniques in their attack plan. However, as ML models are
constantly updated with continuous data, it is very hard to monitor the
integrity of ML models. One probable solution would be to use hashing
techniques. Regardless of how that would mean re-hashing the model each time
the model is trained on newer data which is computationally expensive and not a
feasible solution for ML models that are trained on continuous data. Therefore,
in this paper, we propose a model integrity-checking mechanism that uses model
watermarking techniques to monitor the integrity of ML models. We then
demonstrate that our proposed technique can monitor the integrity of ML models
even when the model is further trained on newer data with a low computational
cost. Furthermore, the integrity checking mechanism can be used on Deep
Learning models that work on complex data distributions such as Cyber-Physical
System applications. | Shahinul Hoque, Farhin Farhad Riya, Jinyuan Sun | 2023-01-29T03:05:53Z | http://arxiv.org/abs/2301.12333v1 | # Deep Learning model integrity checking mechanism using watermarking technique
###### Abstract
In response to the growing popularity of Machine Learning (ML) techniques to solve problems in various industries, various malicious groups have started to target such techniques in their attack plans. However, as ML models are constantly updated with continuous data, it is very hard to monitor the integrity of ML models. One probable solution would be to use hashing techniques. However, that would mean re-hashing the model each time the model is trained on newer data, which is computationally expensive and not a feasible solution for ML models that are trained on continuous data. Therefore, in this paper, we propose a model integrity-checking mechanism that uses model watermarking techniques to monitor the integrity of ML models. We then demonstrate that our proposed technique can monitor the integrity of ML models even when the model is further trained on newer data with a low computational cost. Furthermore, the integrity checking mechanism can be used on Deep Learning models that work on complex data distributions such as Cyber-Physical System applications.
Model integrity, Model watermarking, Secure Machine Learning, Security, Machine Learning for CPS
## I Introduction
In recent years, Machine Learning techniques have become one of the most preferred approaches to solving complex real-world problems. More and more systems are integrating Machine Learning and Deep Learning (DL) to solve problems or provide new features. This increase in interest has resulted in more and more organizations using Machine Learning models. Various Cyber Physical Systems (CPS) now use DL models to improve operation or for various other benefits. Various DL models have been proposed to predict power grid loads and water quality in water treatment plants, to predict in- and out-flow in highway systems, and to improve diagnostic tools in healthcare. These systems can be considered essential, and if issues arise in some of these systems, then we might see catastrophic disasters at the regional or national level.
Subsequently, to improve operations, CPS systems and other systems are using more and more complex ML techniques and DL models that require large datasets to train and test which is difficult as it is not always possible to obtain large training datasets due to the cost of building such datasets, restrictions, and various privacy concerns. A solution to this difficult situation is Machine Learning as a Service or MLaaS. MLaaS provides a way for organizations having access to large data sources to train ML models and get financial value by providing these DL models to entities that do not have access to large datasets or enough research and development resources. We can already see some cloud computing companies have started to provide dedicated services for power grid operations [11].
CPS architectures deploy such DL models in their essential operation without any way to verify the integrity of the DL model.
Furthermore, we can see that a lot of systems use low-computing-capable devices as sensors or triggers, where such devices use pretrained or downloaded DL models. There are also scenarios where large DL models are trained on supercomputers and then transferred to regular computing-capable devices for application. For example, a full build of the Autopilot Neural Network in Tesla vehicles involves 48 networks that take 70,000 GPU hours, or around 8 GPU years, to train [5]. Then the fully trained Autopilot network is downloaded and deployed by the Tesla vehicles.
We can clearly see a security concern in such techniques, where a system or device downloads or utilizes a pre-trained DL model without checking the integrity of the model. Furthermore, companies are using pre-trained DL models as MLaaS or in a cloud computing environment without monitoring the integrity of the DL model. To solve these problems, we propose an ML model integrity checking mechanism based on watermarking techniques designed for ML models. The integrity checking mechanism can be used to verify the integrity of DL models using a secret key. Our proposed integrity checking mechanism can work even after the DL model has been trained on newer data. This is possible as most ML model neural networks are overparameterized and are capable of handling more information even after learning the complex relationship between the input and output data. Our technique uses this overparameterization of Neural Network architectures to store more information inside the DL model and monitor the integrity of the model.
In recent times, we have seen many new proposed watermarking techniques by various research groups for DL models to protect and keep track of intellectual property rights of organizations and ML models. Watermarking techniques provide a solution for ML model owners to present a proof of intellectual property rights on their trained models. In our proposed integrity checking mechanism, we utilize such a watermarking technique by modifying it to work on any DL model deployed for classification problems and make it capable of handling DL models working on any general data distribution.
**Our contributions:** We present the first general DL model integrity checking mechanism that can be adapted to support ML models.
We introduce an updated watermarking technique that can be used on general DL models used in the classification problem field and is not restricted to any specific data distribution. We verify our claim by utilizing our integrity
checking mechanism on two applications that use CPS data collected from a group of different types of sensors containing a wide range of data distributions and properties.
We perform comprehensive experiments to see the changes in performance and accuracy of the model with and without the watermarks embedded.
Finally, we illustrate how our proposed integrity checking mechanism can be integrated in a regular DL model application.
## II Background
The main purpose of checking the integrity of a file or data is to ensure that it has no unauthorized modifications and that it has not been changed or damaged in any way relative to the original file or data passed through the transportation medium. A common way to ensure the integrity of data files is through hashing, which is the process of transforming a data file into a string of fixed length. The process of hashing a static file is simple and straightforward. However, when we are working with DL models, the models are dynamic: in any good application environment the models keep learning and improving with newer data. It is not feasible to hash a DL model each time it has been trained with newer data.
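For contrast with the watermark-based approach developed below, the hashing baseline discussed here amounts to something like the following sketch (ours, for illustration only); it makes explicit why every incremental training run would force the digest to be recomputed and redistributed.

```python
import hashlib
import numpy as np

def model_digest(weights):
    """Hash a list of weight arrays (e.g., the output of a Keras model's get_weights())."""
    h = hashlib.sha256()
    for w in weights:
        h.update(np.ascontiguousarray(w).tobytes())
    return h.hexdigest()

# Any further training perturbs the weights, so a previously stored digest stops matching:
w_before = [np.ones((4, 4)), np.zeros(4)]
w_after = [np.ones((4, 4)) * 1.0001, np.zeros(4)]        # e.g., after one more gradient step
print(model_digest(w_before) == model_digest(w_after))   # False
```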
In recent years we have seen the development of various watermarking techniques for DL models to protect the intellectual property right of the model owners.
Uchida et al. [1] proposed to use parameter regularizers to embed watermarks into a DL model in the training phase by imposing certain restrictions on what the model learns. Similarly, Li et al. [2] proposed a watermarking technique that can work in a Black-Box setup environment, where the intellectual property of a DL model can be verified by just querying the model using inputs and observing the outputs.
In [4] the authors proposed a watermarking model based on adversarial examples and adversarial training. However, an issue with this approach is the ability to generate adversarial examples for the targeted model, as it might not always be possible to generate adversarial examples for a given DL model. Similarly, Merrer et al. [6] propose a technique to embed watermarks in such a way that the watermark can be retrieved using trigger samples without accessing the weights of the Neural Network. [7, 8, 9] propose similar techniques that work in Black-box and White-box setups.
In [12], authors propose a watermarking technique for DL models that use watermarked images to train the DL model to recognize images containing watermarks to predict some targeted labels.
A common issue with the proposed watermarking techniques is that they are targeted at image-based DL models and are dependent on the data distribution type of images, as images can have only a fixed value range of 0 to 255 for each color channel. However, CPS systems are based on a wide range of sensor types and devices that generate values without a fixed range or specific data distribution. Therefore, such techniques are not suitable for CPS applications, and their effectiveness in applications designed for CPS systems cannot be guaranteed without further testing and experiments.
## III Threat Model
### _Focus_
The focus of the proposed integrity checking mechanism is to monitor the integrity of DL models continuously after a fixed time interval. Our proposed approach is capable of verifying the integrity of the model even after training the DL model on new training samples, which is not possible when using any hashing technique. Furthermore, our proposed mechanism can be used on any type of DL model for classification-based applications.
### _Assumptions_
We assume that an attacker does not have access to the original dataset used to train the model. However, the attacker has access to a shadow dataset that has similar properties to the original dataset used to train the original model. Similarly, we assume the attacker has access to the original DL model architecture, or a DL model architecture very close to the original one. Therefore, our experiments and evaluations have been designed assuming the attacker is capable of training a DL model very close to the original DL model. However, considering the complexity of CPS systems, the immense distribution, and vast categories of sensors, it is unlikely that an attacker can obtain a dataset containing data from all types of sensors, or systems to train such a DL model that an organization trains using their own proprietary dataset.
### _Non-threats_
Our proposed integrity monitoring mechanism does not satisfy all the requirements of a watermarking technique, as a watermarking technique also needs to provide a way to link the watermark with the model creator's identity to prove the intellectual property rights of the model creator. Instead, our proposed technique only considers the requirement of checking the integrity of DL models and the resilience of the watermark, so that the integrity of the DL model can be verified even after further training the model on newer data.
## IV System Model
The process of embedding the watermark, generating the Key dataset, and evaluating the integrity of the DL model can be separated into three stages.
The first stage is the regular training stage where the model creator trains a DL model using their own proprietary dataset for a certain application. The second stage is the
watermark embedding and Key dataset generation stage. In this stage, we embed the watermark by slightly modifying the weights of the DL model and generate the Key dataset based on the modified DL model. This is the most important stage, as we need to consider multiple factors based on each individual DL model, such as the length of the Key dataset and the embedding epochs. For example, the length of the Key dataset needs to be small compared to the length of the original dataset. However, this is not an issue in most circumstances, as the length of the original dataset is typically more than 1000 times larger than the length of the Key dataset.
How the regular training stage and the Key dataset generation stage work is illustrated in Figure 1. In the Key Dataset Generation stage, we generate a dataset consisting of random samples drawn from a Gaussian distribution, with its length computed using the Key length \(k\). We can consider the length of the random sample dataset (R) as \(l\), where \(l\) is:
\[l=k*C \tag{1}\]
The constant \(C\) is fixed for each specific application and is based on the length of the original dataset (O) used for training the DL model. We also generate a random output \(Y^{R}\) for all the samples in the R dataset. Let \(Y^{MI}\) be the output of the DL model using the O+R dataset. We select \(Y^{MI*}\) using the following criterion:
\[Y^{MI*}=(Y^{R}\ \cap\ \mathrm{Y^{MI}}) \tag{2}\]
Then, by training the DL model using the O+R dataset, we obtain the output \(Y^{\mathrm{MA}}\). Next, we select all the samples that fall under the following criterion:
\[Y^{MA*}=(Y^{R}\ \cap\ \mathrm{Y^{MA}}) \tag{3}\]
We the A and B output, we select all samples that match the following criteria:
\[Y^{W}=(Y^{MI}\ \cap\ \mathrm{Y^{MA}}) \tag{4}\]
Then a set of samples, randomly drawn from this selection with length \(k\), is stored along with their labels as the Key dataset.
The third stage is the verification stage where an entity can verify the integrity of the model by comparing the predictions of the model using the Key dataset. Because the classes of the key dataset are not based on the original problem data, a DL model that previously did not learn the relationship between the Key data samples and Key labels will not be able to accurately produce the right labels for the Key dataset. Only a DL model trained on the Key Dataset is able to get a high enough accuracy using the Key dataset.
We can make our DL model adapt to a new relationship with very low impact on the original problem relationship, as all Neural Networks are overparameterized and, therefore, capable of storing more information than needed.
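The following sketch shows one plausible reading of Stages 2 and 3 (the selection rules in equations (2)-(4) are stated only loosely in the text, so the exact filtering below, the oversampling factor, and all function names are our assumptions). It assumes a compiled tf.keras classifier with a softmax output and sparse integer labels.

```python
import numpy as np

def make_key_dataset(model, x_train, y_train, key_len=32, oversample=20,
                     n_classes=2, embed_epochs=5, seed=0):
    """Stage 2 sketch: draw random candidates, assign random labels, embed them by
    briefly training on (original + random) data, then keep key_len samples whose
    random label the model reproduces only *after* embedding."""
    rng = np.random.default_rng(seed)
    x_r = rng.random((key_len * oversample, x_train.shape[1]))  # candidates in [0, 1],
    y_r = rng.integers(0, n_classes, size=len(x_r))             # like the Min-Max-scaled real data
    before = model.predict(x_r, verbose=0).argmax(axis=1)       # Y^MI on the candidates
    model.fit(np.concatenate([x_train, x_r]),
              np.concatenate([y_train, y_r]),
              epochs=embed_epochs, verbose=0)                    # watermark-embedding pass
    after = model.predict(x_r, verbose=0).argmax(axis=1)        # Y^MA on the candidates
    keep = np.where((after == y_r) & (before != y_r))[0]        # learned only through embedding
    keep = rng.choice(keep, size=min(key_len, len(keep)), replace=False)
    return x_r[keep], y_r[keep]

def verify_integrity(model, key_x, key_y, threshold=0.9):
    """Stage 3 sketch: the model passes only if it still reproduces the secret Key labels."""
    pred = model.predict(key_x, verbose=0).argmax(axis=1)
    return (pred == key_y).mean() >= threshold
```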
## V Implementation
### _Datasets_
In order to analyze the performance of our integrity checking mechanism, we tested our technique on two applications. The first application is a DL model to classify the potability of water in water treatment plants, using data records collected from multiple types of sensors recording hardness, conductivity, trihalomethanes, and various other factors contributing to the potability of water [10]. We have selected this application as it is an important part of the water treatment plant, which is a critical CPS system. This is an essential system, and many people's health directly relies on properly identifying potable water. Furthermore, this system uses data collected from a wide range of sensors and is thus difficult to fit using DL models.
Fig. 1: Diagram of Stage 1 and Stage 2 of the integrity checking mechanism.
Fig. 2: Diagram of verification stage of the integrity checking mechanism.
Our second application is a DL model to detect anomalies in the BUS14 system based on meter readings. The dataset for this application contains meter readings from various phase angles used to detect anomalies.
### _Experimental Setting_
We designed two Neural Network architectures for our two chosen application setups. Our implementation of the DL models is based on the TensorFlow library [13] for Machine Learning. All our experiments were conducted on an M1 Mac device running the native TensorFlow library designed for the Mac M1 architecture.
### _Data preprocessing_
In order to properly fit our data to our designed Neural Network architectures, we decided to normalize the data using the Min-Max normalization technique. Therefore, all the values of both datasets fall within the range of 0 to 1. Normalizing the data using either the Min-Max technique or the Mean-Standard Deviation technique is a common practice when preprocessing data. Similarly, in order to generate the Key dataset, we generated all our values between 0 and 1. Therefore, the Key dataset samples and regular dataset samples are indistinguishable, and our technique is not limited to any specific value range, unlike the proposed techniques that work on image-based applications.
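A minimal sketch of this preprocessing step (ours), emphasizing that after column-wise Min-Max scaling the real sensor records occupy the same \([0,1]\) range as the randomly generated Key samples:

```python
import numpy as np

def min_max_normalize(x, eps=1e-12):
    """Column-wise Min-Max scaling of a 2-D feature matrix to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo + eps)
```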
## VI Experimental Results
In this section we present the experimental results for the DL model integrity checking mechanism.
### _Water Potability Application_
As we increase the number of embedding epochs to embed the watermark data more deeply into the DL model, we can see a decrease in the non-watermarked model's watermark verification accuracy. Therefore, as we embed the watermark more deeply into the DL model, the DL model becomes more distinguishable from a non-watermarked model without much drop in regular application accuracy.
In Table 1, we can see that when we use a longer Key length for our Key dataset, the watermark verification accuracy for a non-watermarked model decreases to nearly random detection accuracy. However, increasing the Key length will also result in a drop in the model's regular application accuracy.
Fig. 3: Histogram of data distribution in Water Quality dataset.

Fig. 4: Histogram of data distribution in BUS14 Anomaly detection dataset.

## VII Conclusion

Nowadays, we can see an increase in the deployment of pre-trained DL models. Many critical systems, such as Cyber-Physical Systems related to the power grid, water treatment plants, and many others, have started to integrate such pre-trained models into their day-to-day applications. As such, our proposed integrity checking mechanism provides a way to verify the integrity of a pre-trained model after deployment.

Now that many essential services are integrated using DL models, monitoring the integrity of the DL model could help prevent blackouts or outages caused by such services failing, whether due to attacks or to integrity degradation of the DL models.

Moreover, our proposed technique can be generalized to work in a classical Machine Learning application environment. Also, the watermarking scheme used by our technique does not rely on any specific data distribution and is therefore not limited to DL models utilized only in specific applications such as image processing or natural language processing.
Furthermore, an interesting direction for the future would be to replace the last layer of prediction-based DL models with static multi-neuron layers, so that we can extend our integrity checking mechanism to prediction-based DL models.
|
2301.02120 | Reprogramming Pretrained Language Models for Protein Sequence
Representation Learning | Machine Learning-guided solutions for protein learning tasks have made
significant headway in recent years. However, success in scientific discovery
tasks is limited by the accessibility of well-defined and labeled in-domain
data. To tackle the low-data constraint, recent adaptions of deep learning
models pretrained on millions of protein sequences have shown promise; however,
the construction of such domain-specific large-scale model is computationally
expensive. Here, we propose Representation Learning via Dictionary Learning
(R2DL), an end-to-end representation learning framework in which we reprogram
deep models for alternate-domain tasks that can perform well on protein
property prediction with significantly fewer training samples. R2DL reprograms
a pretrained English language model to learn the embeddings of protein
sequences, by learning a sparse linear mapping between English and protein
sequence vocabulary embeddings. Our model can attain better accuracy and
significantly improve the data efficiency by up to $10^5$ times over the
baselines set by pretrained and standard supervised methods. To this end, we
reprogram an off-the-shelf pre-trained English language transformer and
benchmark it on a set of protein physicochemical prediction tasks (secondary
structure, stability, homology, stability) as well as on a biomedically
relevant set of protein function prediction tasks (antimicrobial, toxicity,
antibody affinity). | Ria Vinod, Pin-Yu Chen, Payel Das | 2023-01-05T15:55:18Z | http://arxiv.org/abs/2301.02120v1 | # Reprogramming Pretrained Language Models for Protein Sequence Representation Learning
###### Abstract
Machine Learning-guided solutions for protein learning tasks have made significant headway in recent years. However, success in scientific discovery tasks is limited by the accessibility of well-defined and labeled in-domain data. To tackle the low-data constraint, recent adaptions of deep learning models pretrained on millions of protein sequences have shown promise; however, the construction of such domain-specific large-scale model is computationally expensive. Here, we propose Representation Learning via Dictionary Learning (R2DL), an end-to-end representation learning framework in which we reprogram deep models for alternate-domain tasks that can perform well on protein property prediction with significantly fewer training samples. R2DL reprograms a pretrained English language model to learn the embeddings of protein sequences, by learning a sparse linear mapping between English and protein sequence vocabulary embeddings. Our model can attain better accuracy and significantly improve the data efficiency by up to \(10^{5}\) times over the baselines set by pretrained and standard supervised methods. To this end, we reprogram an off-the-shelf pre-trained English language transformer and benchmark it on a set of protein physicochemical prediction tasks (secondary structure, stability, homology, stability) as well as on a biomedically relevant set of protein function prediction tasks (antimicrobial, toxicity, antibody affinity).
## Introduction
Recent advances in artificial intelligence (AI), particularly in deep learning, have led to major innovations and advances in many scientific domains, including biology. These deep learning models aim to learn a highly accurate and compressed representation of the biological system, which then can be employed for a range of tasks. There has been notable success across a range of tasks, from high-quality protein structure prediction from protein sequences [1; 2], accurate prediction of protein properties, to enabling novel and functional peptide discoveries [3; 4]. Many of these advances rely on developing deep learning models [1; 5; 6] which are trained from scratch on massive amounts (on the order of billions of tokens) of data. However, labeled data in biology is scarce and sparse, which is also the case for many other real-world scenarios in the scientific domain. In the biological domain, label annotation can involve biological assays, high resolution imaging and spectroscopy, which are all costly and time consuming processes.
The technique of pretraining deep learning models was proposed to address this issue. Pretraining methods leverage large amounts of sequence data and can learn to encode features that can explain the variance seen in sequences across biological task-specific training samples. In the context of protein sequences, pretraining has enabled meaningful density modelling across protein functions, structures, and families [7]. In this work, we reference two types of pretraining methods: (i) unsupervised pretraining, where all data is unlabeled, and (ii) self-supervised pretraining, where a model learns to assign labels to its unlabeled data. Large models then pretrain on massive amounts of unlabeled data, specifically biological sequences, which are available at scale. Once pretrained, these foundation models (FMs) [8] are finetuned on smaller amounts of labeled data, which correspond to a specific downstream task. Interestingly, for the large-scale models pretrained on protein sequences, biological structure and function seem to emerge in the learned protein representation, even though such information was not included in model training [5].
Though highly powerful, the training of those domain-specific foundation models from scratch is highly resource-intensive [9]. For example, one training run of BERT (the language model considered in this work) learns 110 million parameters, costs up to $13,000 USD, takes 64 days (without parallelized computing), and results in 0.7 tons of carbon emissions [10]. A single training run of another popular language model, the T5 transformer, learns 11 billion parameters, costs up to $1.3 million USD, takes 20 days, and results in 47 tons of carbon emissions [11; 12]. Such pretrained language models and their size variants are abundantly available with the advent of model libraries (e.g., Hugging Face [13]) which host pretrained models and datasets. The scale of data, compute, and financial resources required to train these models is available only to a limited number of researchers and is also infeasible for applications with limited labeled data. However, in the scientific domain, we still aim to train models with similar
representational capacity and predictive performance. To this end, we propose a lightweight, and more accurate alternative method to large-scale pretraining. Specifically, we introduce a method to reprogram an existing foundation model of high capacity that is trained on data from a different domain. This situation calls for innovations in cross-domain transfer learning, which is largely unexplored, particularly in scientific domains.
One known fact is that biological sequences are similar to natural language, as they also contain long-range dependencies and follow Zipf's law [16]. These sequences and their associated dependencies are crucial for determining their structural and functional properties. Such similarity has motivated the use of deep learning architectures and mechanisms that are widely used in natural language processing (NLP) to build protein sequence models from scratch. In this work, we explore an alternative _warm-start_ paradigm, i.e., how to effectively and efficiently reprogram an existing, fully-trained large English language model to learn a meaningful (i.e., biomedically relevant) representation of protein sequences. The goal is to create a more carbon-friendly, resource-efficient, and broadly accessible framework to motivate different scientific domains toward democratizing the representation power of large AI models. This _warm-start_ paradigm is defined by the framework's ability to achieve the performance of transformers that are pretrained on billions of tokens, with a lighter-weight training procedure that is similar to that of a standard supervised classifier trained from scratch. In particular, we consider highly specific biological and biomedical protein sequence datasets (illustrated in Figure 1) which have far fewer samples than standard supervised language task datasets. Reprogramming thus provides a more data- and resource-efficient approach to developing models with deep representational capacity and strong performance for downstream protein tasks. Reprogramming has been previously explored in the language domain as a sub-problem of transfer learning [17]: [18] explored reprogramming language models for alternate text classification tasks, [19] reprogrammed acoustic models for time series classification, and [20] reprogrammed ImageNet classification models for alternate image classification tasks. However, none of these methods investigate mappings between domains that require a very high representational capacity (from natural language to biological sequence), which is the setting we require in the protein sequence domain.
Toward this goal, we introduce R2DL (Representation Reprogramming via Dictionary Learning), a novel cross-domain transfer learning framework to reprogram an existing pretrained large-scale deep-learning model of the English language, namely an English BERT model [10], to learn and predict physicochemical and biomedical properties of protein sequences. To the best of our knowledge, our work is the first to address reprogramming in any biological, and more broadly, scientific domain. In Figure 1, we illustrate the set of protein physicochemical and functional property prediction tasks we consider, the baseline methods against which we compare R2DL's performance, and a brief description of R2DL's advantages compared to these existing methods. We test the reprogrammed model for a range of biomedically relevant downstream physicochemical property, structure, and function prediction tasks, which include prediction of secondary structure, homology, mutational stability, solubility, as well as antimicrobial nature, toxicity, and antibody affinity of proteins. Each of these tasks involves learning on datasets which are limited to a few thousand labeled samples, at least an order of magnitude smaller than needed
Figure 1: **Left:** Descriptions of considered predictive tasks. We select the set of physicochemical property prediction tasks from the well-studied domains in [6], and the biomedical function prediction tasks from works with biomedically relevant small-sized labeled datasets [3; 14]. **Center:** We compare R2DL to pretraining and standard supervised training methods. We refer to supervised methods as standard supervised classifiers that are trained from scratch from labeled data alone. Depending on how labeled and unlabeled data are used in pretraining, we consider pretraining to constitute unsupervised/supervised pretraining. **Right:** The comparative table shows the broad adaptability of the R2DL framework. In comparison to existing gold standard methods, R2DL has broader utility across different domains, sizes of training datasets, and data efficiency. We categorize supervised methods as cross-domain adaptable, through various domain adaptation and transfer learning techniques [15].
to train a foundation model or a large language model [21]. R2DL uses dictionary learning, a machine learning framework that finds the optimal sparse linear mapping between the English vocabulary embeddings and the amino acid embeddings. To do so, a protein property prediction task-specific loss is used to learn the optimal parameters of the reprogrammed model. We train R2DL in a supervised setting with the downstream protein prediction task datasets that are labeled and small in size (illustrated in Figure 1). R2DL demonstrates consistent performance improvement over existing baselines across seven different physicochemical (e.g., up to 11% in stability), structural, and functional property prediction (e.g., up to 3% in toxicity) tasks of proteins. We estimate R2DL to be over \(10^{5}\) times more data-efficient than existing pretraining methods. We further demonstrate the performance robustness of R2DL when trained on a reduced-size version of the supervised protein datasets. In addition, we show that R2DL learns to encode physicochemical and biomedical properties in the learned representations, even in limited-data scenarios. This work thus blazes a path toward efficient and large-scale adaptation of existing foundation models toward different real-world learning tasks and accelerates scientific discovery, which naturally involves learning from limited real-world data.
## Results
Figure 2 illustrates the proposed Representation Reprogramming via Dictionary Learning (R2DL) framework, which learns to embed a protein sequence dataset of interest by training on the representations of a transformer that is pretrained on an English text corpus. A one-to-one label mapping function is assigned for each downstream protein prediction task for cross-domain machine learning, and a class label or a regression value is predicted using R2DL for each protein sequence during testing. Below we discuss details of the general framework (tasks described in Figure 1).
### R2DL Framework Formulation
The R2DL objective is to reprogram a source model (pretrained language model) to be able to correctly classify, or predict the regression values of, protein sequences (for a target prediction task). We use pretrained instances of BERT, a bidirectional transformer (termed the source model), which has been finetuned separately for different language tasks (e.g., sentiment classification, named entity recognition) [10; 22]. For a protein sequence classification task, we use the source model trained on a language task for which there are \(n\) sentence output classes (e.g., positive and negative for sentiment classification), and \(n\) protein sequence classes (e.g., toxic, non-toxic). The output-label mapping \(h\) is then a simple 1-1 correspondence between the source task labels and the target task labels (e.g., positive \(\rightarrow\) toxic and negative \(\rightarrow\) non-toxic). For a regression task, R2DL uses a mapping between the regression values in protein sequence feature space and the classification probability values in the source model embedding space. It does so by learning optimal thresholds of regression values that map to the source model class labels.
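To make the label-mapping step concrete, the minimal sketch below (illustrative class names and threshold, not the authors' released code) shows a one-to-one mapping from source sentiment labels to target toxicity labels, and a simple learned-threshold mapping for a regression task.

```python
# Minimal sketch of the output-label mapping h; class names and the threshold are illustrative.

# Classification: one-to-one correspondence between source and target labels,
# e.g., sentiment classes mapped onto toxicity classes.
SOURCE_TO_TARGET = {"positive": "toxic", "negative": "non-toxic"}

def h_classification(source_label: str) -> str:
    """Map a label predicted by the frozen source classifier to a protein task label."""
    return SOURCE_TO_TARGET[source_label]

# Regression: map continuous target values (e.g., stability scores) onto the source classes
# via a threshold chosen on the training set (a single threshold for a two-class source model).
def map_regression_to_source_class(y_value: float, threshold: float = 0.0) -> str:
    """Assign a regression value to one of the source classes using a learned threshold."""
    return "positive" if y_value >= threshold else "negative"

print(h_classification("positive"))           # -> "toxic"
print(map_regression_to_source_class(0.37))   # -> "positive"
```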
Figure 2: System illustration of the Representation Reprogramming via Dictionary Learning (R2DL) framework. In Step 1, R2DL loads a pretrained language model (source), obtains the source vocabulary embeddings, and specifies the protein tokens (target). In Step 2, R2DL learns a sparse linear mapping between the source and target embeddings via dictionary learning, to represent a target token embedding as a sparse linear combination of source token embeddings. In Step 3, the system maps the source task labels (e.g., positive/negative sentiments) to target task labels (e.g., toxic/non-toxic proteins) and optimizes the embedding mapping parameters based on the task-specific loss evaluation on a given protein sequence dataset. Finally, in Step 4 the reprogrammed model is deployed for the test-time evaluation.
The input data of the source English language model is tokenized at the word level. These tokens form the atoms for our dictionary representation of \(V_{S}\), a matrix with its rows corresponding to embedding vectors of source tokens. The input data to the target task, protein sequences, are tokenized on a character level with only 20 distinct tokens (corresponding to the set of 20 discrete natural amino acid characters). R2DL obtains \(V_{S}\) from the learned embeddings of the source model and learns to represent \(V_{T}\), the matrix of the target token embedding, as a weighted combination of the English token embeddings. We propose token reprogramming by approximating a linear mapping between \(V_{S}\) and \(V_{T}\). That is, we aim to find a transformation of the latent representation of the protein sequences, such that it can be embedded in the pretrained language model's latent space and enable R2DL to leverage these re-embedded tokens for learning. Specifically, we learn the linear map \(\Theta\) by approximating a dictionary using a k-SVD solver [23]. That is, we want to approximate \(V_{T}=\Theta V_{S}\). The k-SVD solver guarantees a task-specific level of sparsity in the coefficients when linearly combining English token embeddings to represent a protein sequence token embedding. In other words, it helps select \(k\) English tokens and use their linearly combined embeddings as the embedding of a target token. Additionally, with a one-to-one label mapping function of the protein sequence label to the English text label, we are able to use the pretrained language model for inference on the embedded protein dataset, \(V_{T}\). We thus design an end-to-end reprogramming framework for any arbitrary protein sequence classification or regression task.
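The sparse mapping \(V_{T}\approx\Theta V_{S}\) can be illustrated with off-the-shelf sparse coding. The sketch below uses scikit-learn's `SparseCoder` with orthogonal matching pursuit as a stand-in for the k-SVD solver used in the paper (the dictionary \(V_{S}\) stays fixed either way); the embedding dimension, vocabulary sizes, and sparsity level \(k\) are illustrative assumptions, and random matrices stand in for the real embeddings.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
m = 768          # embedding dimension of the source language model (assumed)
n_source = 2000  # subset of the English vocabulary used as dictionary atoms (BERT has ~30K tokens)
n_target = 20    # the 20 natural amino acid tokens
k = 10           # number of English tokens combined per amino acid (task-specific sparsity)

# V_S: frozen source token embeddings (random stand-ins here), one row per dictionary atom.
V_S = rng.normal(size=(n_source, m))
V_S /= np.linalg.norm(V_S, axis=1, keepdims=True)   # unit-norm atoms for stable sparse coding
# V_T: target (amino acid) token embeddings to be expressed in the dictionary V_S.
V_T = rng.normal(size=(n_target, m))

# SparseCoder keeps the dictionary fixed and returns sparse codes Theta with V_T ≈ Theta @ V_S.
coder = SparseCoder(dictionary=V_S, transform_algorithm="omp",
                    transform_n_nonzero_coefs=k)
Theta = coder.transform(V_T)          # shape (n_target, n_source), at most k nonzeros per row

print(np.linalg.norm(V_T - Theta @ V_S))   # reconstruction error ||V_T - Theta V_S||_F
print((Theta != 0).sum(axis=1))            # sparsity of each amino acid's code
```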
### R2DL Training and Optimization Procedure
We are given a pretrained classifier, \(\mathbf{C}\) (which has been pretrained on a source-task dataset with source tokens denoted by \(\{v_{S_{i}}\}_{i=1}^{|V_{S}|}\)), and a target-task dataset with target tokens denoted by \(\{v_{T_{j}}\}_{j=1}^{|V_{T}|}\). The embedding matrices are \(V_{S}\) and \(V_{T}\), respectively. We can encode an output label mapping function translating between source and target labels. In Figure 2, we show how R2DL aims to find a linear mapping function \(\Theta\) that learns the optimal coefficients for our atoms in \(V_{T}\) to be represented as a sparse encoding of the dictionary \(V_{S}\) such that \(V_{T}=\Theta V_{S}\). The map \(\Theta\) is used to reprogram \(\mathbf{C}\) to correctly classify the protein sequences through the transformation \(h(\mathbf{C}(\theta_{t},t))\), where \(t\) is a protein sequence from a protein task and \(\theta_{t}\) denotes the linear weights associated with the protein sequence \(t\) in \(\Theta\). We note that for each downstream protein property prediction task, R2DL only trains a corresponding token mapping function \(\Theta\) while keeping the pretrained classifier \(\mathbf{C}\) intact. Therefore, the number of trainable parameters in R2DL is simply the size of the matrix \(\Theta\), which is usually much smaller than the number of parameters in the pretrained deep neural network classifier \(\mathbf{C}\). To approximate the dictionary, we use a k-SVD solver to optimize over the cross-entropy loss for updates to \(\Theta\). We then apply the assigned label mapping \(h\) for protein classification tasks, or thresholding for regression tasks, and train the mapping function \(\Theta\) using gradient-based optimization evaluated on the task-specific cross-entropy loss. Details of the R2DL training procedure are given in the Method section.
### Benchmark Tasks and Evaluation
We consider four physicochemical structure and property prediction tasks from a well-established protein benchmark from [6] (represented in Figure 1). Secondary structure prediction involves predicting secondary structure \(y\in\{\)Helix, Strand, Other\(\}\) for each amino acid \(x\) in a given protein sequence. Solubility prediction considers mapping an input protein sequence \(x\) to a label of \(y\in\{\)Membrane-Bound, Water Soluble\(\}\). Homology detection is a sequence classification task, where each input protein \(x\) is mapped to a label \(y\in\{1,...,1195\}\), representing different possible protein folds. Stability prediction is a regression task. We further consider three biomedically relevant function prediction tasks, which are sequence classification tasks (represented in Figure 1). Using R2DL, we predict for a given sequence \(x\), its binary class label \(y\in\{\)AMP, non-AMP\(\}\) for antimicrobial-nature prediction [3] or \(y\in\{\)Toxic, non-Toxic\(\}\) for toxicity prediction [3]. Finally, we predict antigen and non-specific binding of antibody variant sequences from [14]: given a sequence \(x\), the task is to predict \(y\in\{\)on-target, off-target\(\}\). Further details on the protein tasks and datasets are in the Method section. The sizes of the individual datasets vary between 4,000 and 50,000 (see supplementary for details on data sizes and train-test splits). Data efficiency is defined as the ratio of the R2DL prediction accuracy to the number of biological sequences used during pretraining and finetuning. We use data efficiency as a metric to compare the performance of R2DL to established benchmarks for the protein tasks in [6; 3; 14]. For classification tasks, we evaluate prediction accuracy with a top-n accuracy, where \(n\) is the number of classes in the protein sequence classification task. For regression tasks, we evaluate prediction accuracy with Spearman's correlation.
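A small sketch of the two evaluation metrics (top-n accuracy for classification and Spearman's \(\rho\) for regression); the scores and labels below are toy values for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def top_n_accuracy(class_scores, true_labels, n=1):
    """Fraction of samples whose true label appears among the n highest-scoring classes."""
    top_n = np.argsort(class_scores, axis=1)[:, -n:]
    return float(np.mean([label in row for label, row in zip(true_labels, top_n)]))

# Classification example with 3 classes (e.g., Helix / Strand / Other).
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 2, 2])
print(top_n_accuracy(scores, labels, n=1))   # 2 of 3 correct -> 0.667

# Regression example (e.g., stability): rank correlation between predictions and measurements.
rho, _ = spearmanr([0.2, 1.4, 0.9, 2.1], [0.1, 1.2, 1.0, 1.9])
print(rho)
```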
### Model Baselines and Data
The baseline models we consider in this work are of two types. Firstly, we consider models trained in a supervised manner, by training standard sequence Long Short-Term Memory (LSTM) models from scratch. For each downstream peptide or protein classification task, we have labeled (supervised) datasets. The results of these models are reported in Figure 3(a). Secondly, we consider models that are pretrained in an unsupervised manner on protein sequence data and are finetuned for a particular downstream task. Pretraining methods that do not use labeled data
pose an advantage, as those models can then learn from a significantly larger number of data samples. In the cases of the toxicity and antimicrobial prediction tasks, the baseline model we compare to has been pretrained on a subset of the UniProt database where sequences are limited to being 50 residues long [24]. The pretraining corpus size is then 1.7 million peptide sequences. Using unlabeled data for pretraining is thus much more advantageous than pretraining in a supervised scheme. Of these 1.7 million sequences, only 9,000 are labeled (0.005% of sequences). The model is a Wasserstein Autoencoder, which is a generative model that undergoes unsupervised pretraining on the subset of UniProt data. The WAE embeddings of the labeled sequences are then used to train a logistic regressor model on the labeled dataset to obtain a binary classifier for Antimicrobial/non-Antimicrobial (6,489 labeled samples) or for toxic/non-toxic (8,153 labeled samples) label prediction. For the physicochemical property prediction tasks, the baseline model we consider is pretrained on the Pfam corpus [25]. This corpus consists of 31 million protein domains and is widely used in bioinformatics pipelines. Sequences are grouped by protein families which are categorized by evolutionarily-related sequences. In contrast, the downstream physicochemical tasks of structure, homology, stability and solubility prediction have labeled datasets that range from 5,000 to 50,000 samples which the model can be finetuned on. Pretraining thus poses the advantage of modeling the density over a range of protein families and structures, but stipulates that there must be sequence datasets that contain structural and functional information about the downstream task datasets, and that these datasets typically be on the order of millions of sequences. R2DL eliminates this requirement by repurposing existing pretrained English language models, and leveraging transferable information from models that are not conditioned on protein sequence information.
### Data Efficiency and Accuracy of Reprogramming
We report the performance of R2DL for the set of 7 protein prediction tasks and their corresponding baselines in Figure 3. Baselines for the physicochemical prediction tasks are established by a transformer from [6] that has been pretrained in an unsupervised setting on the Pfam pretraining corpus [26]. Baselines for the antimicrobial and toxicity prediction tasks are established in [3], where Das et al. pretrained a Wasserstein Autoencoder on the peptides from the UniProt corpus [24] using unsupervised training, and then used the latent encodings from the autoencoder to train the property classifiers. Baselines for the antibody affinity task are established in [14], where they train a linear discriminant analysis model in a supervised setting. Each physicochemical and biomedical function prediction task then has a relatively small, supervised dataset which we split into training and testing sets to train the R2DL framework and evaluate its performance on the test set. Henceforth, we refer to these baselines as task-specific baselines, since the baseline model we compare R2DL to varies with the downstream protein prediction task and is the best performing model available (see Supplementary for details on task-specific baselines).
We show that, for every prediction task, we achieve a higher test accuracy with R2DL than with the corresponding task-specific baseline model when both models are trained on the full labeled dataset. R2DL shows performance improvement of up to 11.2% when compared to the pretrained models, and up to 29.3% when compared to a standard, supervised LSTM that is trained from scratch on the same dataset. Notably, R2DL needs only a pretrained source model and a small-sized, labeled protein sequence dataset as input; therefore, the size of the R2DL training set is limited to the number of samples in the downstream protein prediction dataset. Pretrained models require a large amount of protein sequence data for pretraining, on the order of \(10^{6}\) samples, in addition to the downstream supervised protein task sequence data that the pretrained model is fine-tuned on. In Figure 3(a), we show the number of training samples and the corresponding accuracy metric (see Method section for details) of the R2DL, pretrained, and supervised models. In Figure 3(b), we show the data efficiency, _i.e._, the ratio of the number of training samples (including, for pretrained source models, the pretraining corpus of biological sequences) to the accuracy of the model for R2DL and baseline models. We show that R2DL is up to \(10^{4}\) times more data-efficient, as in the case of the toxicity prediction task. This is due to the very large number of pretraining data samples required relative to the downstream protein task dataset.
Figures 3(c) and 3(d) show the R2DL performance on the antigen affinity prediction task for antibody variant sequences and its comparison with the baseline LDA model reported in [14]. R2DL achieves a predictive accuracy 3% higher than the baseline LDA model and maintains higher classification accuracy on imbalanced datasets. The antibody affinity task dataset has the following label distribution: on-target: 1,516; off-target: 2,484. For this 37%-to-62% class-imbalance ratio of labels, we show that the R2DL model has a better classification accuracy than the LDA model. The learned representations can therefore be inferred to be more accurate in our model than in the baseline model. This is important, as in many real-world prediction tasks, the dataset is found to be class-imbalanced.
### R2DL Performance vs. Pretraining Performance in Low Data Settings
Motivated by the data efficiency of R2DL as a framework, we tested the task-specific predictive performance of R2DL in reduced-data training settings. We compared these results to the performance of task-specific baseline models when trained and tested in the same restricted data setting. In Figure 4, we show the performance of the R2DL model and the baseline model when trained on 100%, 80%, 60%, and 40% of a specific task dataset. We show results for
the Antimicrobial, Toxicity, Secondary Structure, Stability, Homology, and Solubility prediction tasks in Figure 4 and compare the performance of R2DL and pretrained models against the performance of a random guess. We observe that, for the downstream tasks of Toxicity, Secondary Structure, Homology, and Solubility, R2DL always performs better than a pretrained protein language model across the size range of the limited datasets. Furthermore, we observe that, except in the stability task, the rate of failure to perform better than a random guess is higher for the pretrained
Figure 3: Task-specific evaluation of R2DL performance compared to the performance of the baseline models. In Figure 3(a), results for the pretrained baseline models are from unsupervised pretrained transformers for secondary structure, stability, homology, and solubility prediction tasks [6]. The baseline models for the antimicrobial and toxicity prediction tasks are logistic regressors trained using sequence embeddings from the pretrained peptide Wasserstein autoencoder [3]. Results for the supervised classifiers are from sequence-level LSTMs trained from scratch on the downstream protein prediction data. For classification tasks, we evaluate prediction accuracy with a top-n accuracy, where \(n\) is the number of classes in the protein sequence classification task. For regression tasks, we evaluate prediction accuracy with Spearman's correlation coefficient. Results of the pretrained models on the antibody task dataset have not been previously reported in any work and are hence left for future work. In Figure 3(b), data efficiency is defined as the ratio of the R2DL prediction accuracy to the number of protein sequences used during training. In Figure 3(c)-(d), we show a comparison between the performance of a linear discriminant analysis (LDA) model in [14] and R2DL on the antibody affinity dataset. The LDA model is a binary classifier which finds the optimal classification boundary by projecting the data onto a one-dimensional feature space and finding a threshold. The antibody affinity dataset consists of 4,000 labeled protein sequences, with labels {1 (on-target binding), 0 (off-target binding)}. R2DL achieves a predictive accuracy of 95.5% compared to the LDA model performance of 92.8%.
models than for R2DL. In both cases, R2DL outperforms pretraining until the cutoff point that is the intersection of the random guess curve with the accuracy curves (the point at which the model is not learning any meaningful representation).
### Correlation Between Learned Embeddings and Evolutionary Distances
Beyond comparing the R2DL model against the individual protein task benchmarks, we demonstrate that the R2DL dictionary learning framework shows interpretable correspondences between the learned embeddings in the latent space and the specific protein property. We show this result for the antibody affinity, secondary structure, and toxicity prediction tasks. Figures 5(a-c) show the t-SNE projection of task-specific R2DL embeddings \(V_{T}=\Theta V_{S}\) of protein sequences for secondary structure, toxicity, and antibody affinity prediction tasks. Clear separation between different protein classes is evident. We further compute the Euclidean distance between the last-layer latent representations of the amino acid embeddings and compare it to the pairwise evolutionary distance computed with the BioPython module. In Figure 5(d), we show the Euclidean distances between the latent embeddings learned in the R2DL model and the pairwise evolutionary distances between protein sequences, as estimated using the BLOSUM62 matrix implemented in the pairwise function of the BioPython module.
The matrix shows correlations close to 1.0 along the diagonal, indicating a strong correspondence between the learned representation and empirical observations of amino acid relatedness. R2DL thus captures the underlying structure of the linear sequence of amino acid residues in protein sequences in the context of the reprogrammed protein task.
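The sketch below shows one way such a comparison could be set up with BioPython's modern `PairwiseAligner` and the BLOSUM62 matrix (the paper uses BioPython's pairwise alignment utilities; the toy sequences, random embeddings, gap penalties, and the sign convention for turning alignment scores into a distance-like quantity are all illustrative assumptions).

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform
from Bio import Align
from Bio.Align import substitution_matrices

# Toy protein sequences and stand-in last-layer R2DL embeddings (random for illustration).
sequences = ["MKTAYIAKQR", "MKTAYIAKQK", "GSHMLEDPVA"]
embeddings = np.random.default_rng(0).normal(size=(len(sequences), 64))

# Euclidean distances between learned sequence embeddings.
embed_dist = squareform(pdist(embeddings, metric="euclidean"))

# Pairwise alignment scores under BLOSUM62; higher score means more related,
# so the negated score serves as a rough distance-like quantity.
aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

n = len(sequences)
evo_dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        evo_dist[i, j] = -aligner.score(sequences[i], sequences[j])

# Rank correlation between embedding distances and evolutionary distances (off-diagonal entries).
mask = ~np.eye(n, dtype=bool)
rho, _ = spearmanr(embed_dist[mask], evo_dist[mask])
print(rho)
```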
## Discussion
We propose a new framework, R2DL, to reprogram large language models for various protein tasks. R2DL demonstrates powerful predictive performance across tasks that involve evolutionary understanding, structure prediction, property prediction and protein engineering. We thus provide a strong alternative to pretraining large language models on up to \(10^{6}\) protein sequences. With only a pretrained natural language model (of which many are abundantly available at the time of writing), a small-sized labeled protein data set of interest, and a small amount of cross-domain finetuning, we can achieve better performance for each protein prediction task with interpretable correspondences between features.
Figure 4: Results of the R2DL model and baseline model for each downstream task in reduced training data settings.
Beyond improvements in predictive performance, we show that the ratio of performance improvements to pretraining and training samples involved in the R2DL framework makes R2DL up to \(10^{5}\) times more data-efficient than current methods. This work opens many doors to biological prediction tasks for which only very few labeled, high-quality data samples can be acquired. We emphasize the data efficiency of R2DL when applied to biomedically relevant protein predictions, which are critical to advancing scientific understanding and discovery but for which existing approaches have been unsuccessful until now.
While R2DL does make gradient updates in the framework, the data and resource requirements of the R2DL method are much lower than those of any unsupervised or self-supervised pretraining approach for protein sequence modeling. Though R2DL has the same data and resource requirements as any standard supervised training approach, R2DL demonstrates much higher task accuracy across a broad and diverse range of property prediction tasks. We claim that R2DL is able to do this because it can leverage the deep representational capacity induced by reprogramming, which standard supervised models cannot achieve without an unjustifiably large number of parameters. R2DL is thus more efficient than existing baseline models in the following aspects: (i) R2DL only requires a pretrained transformer (trained on English language data) and a small-sized, labeled protein sequence data set of interest. We do not make any updates to the pretrained model itself, unlike traditional transfer learning methods. Rather, we make updates to the R2DL model during a supervised training process that optimizes over class-mapped labels. (ii) R2DL does not require large-scale un/self-supervised pretraining on millions of unlabeled protein sequences, as in [6; 3; 5]. (iii) Further, R2DL does not require any large-scale supervised pretraining, which has been found beneficial in protein-specific tasks [6] as well as in computer vision [27]. Labeling protein sequences at scale, particularly for biomedical function, is almost infeasible for the size of dataset that is required for supervised pretraining. With these three considerations in mind, we pose R2DL as a data-efficient alternative to pretraining methods for protein prediction tasks of biological and biomedical relevance. To the best of our knowledge, R2DL is the first framework
Figure 5: (a-c) Clustering of R2DL learned embeddings for secondary structure prediction, toxicity prediction, and antibody affinity prediction tasks. When tagged by protein property classification, we see very high correspondence between the clusters and protein sequences with the same physicochemical or biomedical property classification. (d) For the antibody affinity prediction task, we observe a high correlation coefficient along the diagonal. This shows that the representation learned by R2DL is highly similar to empirical observations of pairwise residue correlations.
without explicit pretraining that facilitates accurate predictions across a general suite of protein prediction tasks and provides interpretable correspondences between amino acid features that are very closely aligned with domain knowledge (evolutionary distances). The success of R2DL can be attributed to its representational power to encode a sparse representation by leveraging the natural language modeling entailed in large language models for efficient learning on protein structure and function prediction tasks, as both English and protein sequences follow Zipf's law [16].
We first demonstrate the effectiveness of R2DL on a set of physicochemical structure and property prediction tasks, and then on a set of biomedically relevant function prediction tasks, for protein sequences. We show predictive performance improvements against pretrained methods (up to 11% in stability) and standard supervised methods (up to 3.2% in antibody affinity). Similarly, on the remaining tasks, we show performance improvements over the best reported baseline in structure prediction (4.1%), homology (2.3%), solubility (7.1%), antibody affinity (3.2%), and toxicity (2.4%). R2DL thus shows the capability to learn a general representation of protein sequences that can be efficiently adapted to different downstream protein tasks. These powerful representation capabilities are evidenced by its ability to achieve high performance across protein datasets with a highly varied number of task-specific training samples. The performance of R2DL across protein tasks shows the potential to repurpose and develop powerful models that can learn from small, curated, and function-specific datasets. This mitigates the need to train large pretrained models for peptide learning tasks. We thus provide an alternative method to pretraining that is cheaper to run and more accurate, and therefore adoptable by broader research communities who may not have access to large-scale compute. This potential is critical for many applications, such as the discovery of new materials, catalysts, and drugs. Although we establish the efficacy and efficiency of R2DL in a domain where pretrained large language models already do exist, we hope that our work will pave the path to extending this approach to other domains where pretrained LLMs do not exist, such as polymers.
## Method
### Representation of Tokens
In the R2DL framework, we use two input datasets: an English language text dataset (source dataset) and a protein sequence dataset (target dataset). The vocabulary size of a protein sequence dataset at a unigram level is 20, as proteins are composed of 20 different natural amino acids. We obtain a latent representation of the English text vocabulary, \(V_{S}\), by extracting the learned embeddings of the data from a pretrained language model (source model). The protein sequence data is embedded in the same latent space, and is termed the target vocabulary, \(V_{T}\). For each task, the token embedding matrix is of dimensions \((n,m)\) where \(n\) is the number of tokens and \(m\) is the length of the embedding vectors. We use the same encoding scheme of \(V_{S}\) and \(V_{T}\) across all downstream tasks.
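As a sketch of this step, the snippet below pulls a frozen source embedding matrix \(V_{S}\) out of an off-the-shelf BERT checkpoint with Hugging Face `transformers` and sets up the 20-token amino acid target vocabulary; the checkpoint name and the random initialization of \(V_{T}\) are illustrative assumptions (in R2DL, \(V_{T}\) is represented through \(\Theta V_{S}\) rather than learned freely).

```python
import torch
from transformers import AutoModel

# Source model: a pretrained English BERT (checkpoint name chosen for illustration).
model = AutoModel.from_pretrained("bert-base-uncased")

# V_S: the (|V_S| x m) source token embedding matrix, kept frozen throughout R2DL training.
V_S = model.get_input_embeddings().weight.detach()   # (30522, 768) for BERT-base
print(V_S.shape)

# Target vocabulary: the 20 natural amino acids, tokenized at the character level.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
aa_to_id = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize_protein(seq: str) -> torch.Tensor:
    """Character-level tokenization of a protein sequence into amino acid token ids."""
    return torch.tensor([aa_to_id[aa] for aa in seq])

# V_T: target token embeddings living in the same m-dimensional space as V_S.
V_T = torch.randn(len(AMINO_ACIDS), V_S.shape[1])
print(tokenize_protein("MKTAYIAKQR"), V_T.shape)
```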
#### Procedure Description of the R2DL Framework for a Protein Task
* **Procedure Inputs**: Pretrained English sentence classifier \(\mathbf{C}\), target model training data \(\mathbf{X}_{\ell}\) for task \(\ell\), label mapping function \(h_{\ell}\) (for classification tasks), where \(\ell\in\{\text{Secondary Structure},\text{Fluorescence},\text{Homology},\text{Solubility},\text{Antimicrobial},\text{Toxicity},\text{Antibody}\}\).
* **Procedure Hyperparameters**: Maximum number of iterations \(T_{1}\) for updates to \(\Theta\), number of iterations \(T_{2}\) for k-SVD, step size \(\{\alpha_{t}\}_{t=1}^{T_{1}}\)
* **Procedure Initialization**: Random initialization of \(\Theta\), obtain the source token embedding matrix \(V_{S}\)
* **Define Objective Function**: Objective function for k-SVD: \(\|V_{T}-\Theta V_{S}\|\leq\epsilon\)
* **k-SVD Approximation of \(\Theta\)**: If \(t_{1}\leq T_{1}\), while \(t_{2}\leq T_{2}\) use approximate k-SVD to solve \(V_{T}\approx\Theta V_{S}\), \(t_{2}\longleftarrow t_{2}+1\)
* **Calculate the Loss and Perform Gradient Descent**: \(\Theta\longleftarrow\Theta-\alpha_{t}\cdot\nabla_{\Theta}\text{Loss}(\Theta, \mathbf{X}_{\ell},h_{\ell},\mathbf{C})\), \(t_{1}\longleftarrow t_{1}+1\) and return to the previous k-SVD step
* **Output Protein Sequence Labels for Protein Sequence \(x\) of Task \(\ell\)**: \(h_{\ell}(\mathbf{C}(\Theta,x))\)
We are given a pretrained English classifier, \(\mathbf{C}\), and a protein sequence target-task dataset \(\mathbf{X}_{\ell}\). We denote the task with \(\ell\), such that \(\ell\in\{\text{Secondary Structure},\text{Fluorescence},\text{ Homology},\text{Solubility}, \text{Antimicrobial},\text{Toxicity},\text{Antibody}\}\). We also encode an output label mapping function \(h_{\ell}\) specifying the one-to-one correspondence between source and target labels. As shown in Figure 2, the source vocabulary embedding, \(V_{S}\), is extracted from the pretrained model, \(\mathbf{C}\). The next objective is to learn \(\Theta\) that approximates the embedding of tokens in \(\mathbf{X}_{\ell}\) (denoted by \(V_{T}\)) in the representation space of the source model.
We aim to learn \(\Theta\in\mathbf{R}^{a\times b}\) that finds the optimal coefficients \(\{\theta_{t}\}\) for each of the target tokens \(t\in\{1,...,a\}\) in \(V_{T}\in\mathbf{R}^{a\times m}\) to be represented as a sparse encoding of the dictionary, \(V_{S}\in\mathbf{R}^{b\times m}\), such that \(V_{T}=\Theta V_{S}\). For a given target protein sequence \(x\) from the \(\ell\)-th task, \(\Theta\) is used to perform the target task through the transformation \(h_{\ell}(\mathbf{C}(\Theta,x))\). While we do not make any modification to the parameters or architecture of \(\mathbf{C}\), we assume access to the gradient \(\nabla_{\Theta}\text{Loss}(\cdot)\) for loss evaluation and parameter updates during training.
A target token embedding \(v_{t}\in\mathbf{R}^{m}\) can be represented as a sparse linear combination of the source token embeddings (rows) in \(V_{S}\), \(v_{t}=\theta_{t}V_{S}\). \(v_{t}\) is the representation of the protein token in the dictionary space and satisfies \(||v_{t}-\theta_{t}V_{S}||_{p}\leq\epsilon\), where \(\|\cdot\|_{p}\) is an \(L_{p}\) norm and \(\theta_{t}\) is made to be sparse by satisfying \(||\theta_{t}||_{0}\leq k\) for all \(t\). An exact solution \(v_{t}=\theta_{t}V_{S}\) is computationally expensive to find, and is subject to various convergence traps, so for the purpose of our efficient fine-tuning approach we approximate \(v_{t}\approx\theta_{t}V_{S}\) using k-SVD. We first fix the dictionary \(V_{S}\), as extracted from \(\mathbf{C}\), and then find the optimal \(\Theta\) according to the optimization problem, by minimizing the alternative objective \(\sum_{t=1}^{a}||\theta_{t}||_{0}\) subject to \(\|V_{T}-\Theta V_{S}\|_{F}^{2}\leq\epsilon\), as explored in [23]. While algorithms exist to choose an optimal dictionary (an exact solution to k-SVD) that can be continually updated [23], we prioritize computational efficiency over exactness (at the cost of statistically insignificant differences in accuracy) by using a predetermined number of iterations for k-SVD convergence, which is then used to evaluate the cross-entropy loss on \(h_{\ell}(\mathbf{C}(\Theta,x))\) and update the mapping function \(\Theta\).
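A condensed, runnable sketch of the alternating optimization described above (not the authors' released implementation): a small frozen network stands in for the pretrained classifier \(\mathbf{C}\), mean pooling stands in for BERT's sequence encoding, and an \(L_{1}\) penalty stands in for the k-SVD sparsity constraint on \(\Theta\); all sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
m, n_source, n_target, n_classes = 64, 500, 20, 2     # small illustrative sizes

# Frozen stand-in for the pretrained English classifier C (a finetuned BERT in R2DL).
C = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, n_classes))
for p in C.parameters():
    p.requires_grad_(False)

V_S = torch.randn(n_source, m)                                   # frozen source embeddings
Theta = torch.randn(n_target, n_source, requires_grad=True)      # trainable token mapping
optimizer = torch.optim.Adam([Theta], lr=1e-3)

# Toy labeled protein batch: amino acid token ids and binary labels (e.g., toxic / non-toxic).
tokens = torch.randint(0, n_target, (32, 50))                    # (batch, sequence length)
labels = torch.randint(0, n_classes, (32,))

for step in range(200):
    V_T = Theta @ V_S                          # amino acid embeddings as combinations of V_S rows
    seq_emb = V_T[tokens].mean(dim=1)          # mean-pooled sequence embedding (simplification)
    logits = C(seq_emb)                        # frozen classifier; outputs use class-mapped labels
    loss = F.cross_entropy(logits, labels) + 1e-3 * Theta.abs().sum()   # L1 as sparsity surrogate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))
```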
### Data
#### Classification
We provide five biologically relevant downstream physicochemical property prediction tasks, adapted from [6] to serve as benchmarks. We categorize these into property prediction, structure prediction, evolutionary understanding, and protein engineering tasks. The sizes of the individual datasets vary between 4,000 and 50,000 (see Supplementary for details on data sizes and train-test splits).
**Secondary Structure Prediction (Structure Task):** Secondary structure (SS) is critical to understanding the function and stability of a protein, and SS prediction is an important intermediate step in designing protein complexes. This dataset, obtained from [28], has 8,678 data samples. It is derived from the CB513 dataset, and each amino acid \(x\) in a protein sequence is mapped to \(y\in\{\text{Helix, Strand, Other}\}\). The benchmark for this task is a transformer that reports a best performance of 80% accuracy.
**Solubility:** This task takes an input protein \(x\) and maps it to a label of \(y\in\{\text{Membrane-Bound, Water Soluble}\}\). Determining the solubility of proteins is useful when designing proteins or evaluating their function for particular cellular tasks. This dataset, obtained from [29], has 16,253 data samples. The benchmark is a pretrained transformer that achieves a best performance of 91% on a binary classification task.
**Antigen Affinity (Protein Engineering):** Therapeutic antibody development requires the selection and engineering of molecules with high affinity and other drug-like biophysical properties. This dataset, obtained from [14], has 4,000 data samples. The task is to map an input protein \(x\) to a label \(y\in\{\text{on-target, off-target}\}\). The task corresponds to predicting antigen and non-specific binding. The benchmark for this task is a Linear Discriminant Analysis model with Spearman's \(\rho\) values for antigen binding (0.87) and for non-specific binding (0.67).
**Antimicrobial Prediction (AMP) (Property Task):** Determining the antimicrobial nature of a peptide is a critical step in developing antimicrobials to fight against resistant pathogens. The dataset, obtained from [3], consists of 6,489 labeled protein sequences; each sequence \(x\) is mapped to a label \(y\in\{\text{AMP, non-AMP}\}\). The original model trained on this data provides a de novo approach for discovering new, broad-spectrum and low-toxic antimicrobials. The benchmark for this task is a transformer that reports a best performance of 88% accuracy with a pretrained classifier.
**Toxicity (Property Task):** Improving the functional profile of molecules, especially in the context of drug discovery, requires optimizing for toxicity and other physicochemical properties. To that end, toxicity is an important property to predict in AMP development. This dataset, obtained from [3], consists of 8,153 antimicrobial peptide sequences which are either toxic (positive class) or non-toxic (negative class). The benchmark for this task is a transformer that reports a best performance of 93.78% accuracy with a pretrained classifier.
#### Regression
**Stability (Protein Engineering Task):** This is a regression task where each protein \(x_{i}\) is mapped to \(y_{i}\in\mathbb{R}\) based on maintaining its fold beyond a threshold of concentration. This dataset, obtained from [30], has 21,446 data samples. Stability is an important protein engineering task, as we can use this fold concentration to test protein inputs such
that design candidates are stable in the settings of different tasks. The benchmark for this task is a transformer that reports a best performance of 0.73 Spearman's \(\rho\).
**Homology (Evolutionary Understanding Task):** This is a sequence classification task where each input protein \(x\) is mapped to a protein fold represented by \(y\in\{1,...,1195\}\). This dataset, obtained from [31], has 12,312 data samples. Detecting homologs is particularly important in a biomedical context, as homologs inform structural similarity across a set of sequences and can indicate emerging antibiotic resistance genes [cite]. The original model removes entire homologous groups during model training, thereby enforcing that models generalize well when large evolutionary gaps are introduced. The benchmark for this task is an LSTM that reports a best performance of 26% Top-1 Accuracy.
### R2DL Settings and Hyperparameter Details
#### AMP
The full AMP dataset size is 8,112; we use a training set size of 6,489 and a test set size of 812. We use the \(L_{0}\) norm in our objective function, 10,000 k-SVD iterations, and \(\epsilon=0.045\).
#### Toxicity
The full Toxicity dataset size is 10,192; we use a training set size of 8,153 and a test set size of 1,020. We use the \(L_{0}\) norm in our objective function, 10,000 k-SVD iterations, and \(\epsilon=0.045\).
#### Secondary Structure
The full Secondary Structure dataset size is 9,270; we use a training set size of 7,416 and a test set size of 1,854. We use the \(L_{0}\) norm in our objective function, 9,000 k-SVD iterations, and \(\epsilon=0.38\).
#### Stability
The full Stability dataset size is 56,126; we use a training set size of 44,900 and a test set size of 11,226. We use the \(L_{0}\) norm in our objective function, 6,000 k-SVD iterations, and \(\epsilon=0.29\).
#### Homology
The full Homology dataset size is 13,048; we use a training set size of 10,438 and a test set size of 2,610. We use the \(L_{0}\) norm in our objective function, 4,000 k-SVD iterations, and \(\epsilon=0.73\).
#### Solubility
The full Solubility dataset size is 43,876; we use a training set size of 35,100 and a test set size of 8,775. We use the \(L_{0}\) norm in our objective function, 9,000 k-SVD iterations, and \(\epsilon=0.42\).
#### Data and Code Availability
Links to protein sequence data and code are available on Github (github.com/riavinod/r2dl) |
2305.18464 | Privileged Knowledge Distillation for Sim-to-Real Policy Generalization | Reinforcement Learning (RL) has recently achieved remarkable success in
robotic control. However, most RL methods operate in simulated environments
where privileged knowledge (e.g., dynamics, surroundings, terrains) is readily
available. Conversely, in real-world scenarios, robot agents usually rely
solely on local states (e.g., proprioceptive feedback of robot joints) to
select actions, leading to a significant sim-to-real gap. Existing methods
address this gap by either gradually reducing the reliance on privileged
knowledge or performing a two-stage policy imitation. However, we argue that
these methods are limited in their ability to fully leverage the privileged
knowledge, resulting in suboptimal performance. In this paper, we propose a
novel single-stage privileged knowledge distillation method called the
Historical Information Bottleneck (HIB) to narrow the sim-to-real gap. In
particular, HIB learns a privileged knowledge representation from historical
trajectories by capturing the underlying changeable dynamic information.
Theoretical analysis shows that the learned privileged knowledge representation
helps reduce the value discrepancy between the oracle and learned policies.
Empirical experiments on both simulated and real-world tasks demonstrate that
HIB yields improved generalizability compared to previous methods. | Haoran He, Chenjia Bai, Hang Lai, Lingxiao Wang, Weinan Zhang | 2023-05-29T07:51:00Z | http://arxiv.org/abs/2305.18464v1 | # Privileged Knowledge Distillation for Sim-to-Real Policy Generalization
###### Abstract
Reinforcement Learning (RL) has recently achieved remarkable success in robotic control. However, most RL methods operate in simulated environments where privileged knowledge (e.g., dynamics, surroundings, terrains) is readily available. Conversely, in real-world scenarios, robot agents usually rely solely on local states (e.g., proprioceptive feedback of robot joints) to select actions, leading to a significant sim-to-real gap. Existing methods address this gap by either gradually reducing the reliance on privileged knowledge or performing a two-stage policy imitation. However, we argue that these methods are limited in their ability to fully leverage the privileged knowledge, resulting in suboptimal performance. In this paper, we propose a novel single-stage privileged knowledge distillation method called the Historical Information Bottleneck (HIB) to narrow the sim-to-real gap. In particular, HIB learns a privileged knowledge representation from historical trajectories by capturing the underlying changeable dynamic information. Theoretical analysis shows that the learned privileged knowledge representation helps reduce the value discrepancy between the oracle and learned policies. Empirical experiments on both simulated and real-world tasks demonstrate that HIB yields improved generalizability compared to previous methods.
## 1 Introduction
Reinforcement learning (RL) has achieved remarkable progress and has been applied to various domains, including games, financial trading, and robotics. However, most RL methods work in simulated environments, and their applications in real-world scenarios are still challenging. To obtain an RL policy for the real-world scenario, one way is to interact with the real environment directly. However, since RL methods require a large number of interactions and also need to handle potentially dangerous behaviors, applying such a method is prohibitive in scenarios where safety is important [13]. A more efficient way is to learn a policy in the simulated environment first and then transfer it to the real-world environment. Nevertheless, there is always an inherent mismatch between the simulated environment and the real environment. Such a mismatch makes the policy learned from the simulated environment perform sub-optimally in the real-world environment, which is known as a _sim-to-real gap_[22; 21; 28]. Previous works tackle this problem in the aspects of sensing and actuation, where the sensing mismatch can be alleviated via adversarial training [1; 19] and the actuation mismatch can be minimized by more realistic simulation [33]. Another branch of methods is domain randomization [8; 29], which tackles both mismatches by introducing additional perturbations in the simulator [49]. The intuition behind domain randomization is to cover the real-world environment with randomized environments. However, such randomization assumes that the state spaces of the simulators and that of the real-world scenarios are the same, which is
typically not the case. Meanwhile, previous methods neither reconstruct privileged knowledge nor take advantage of history, which we believe are important to close the sim-to-real gap.
To demonstrate the importance of privileged knowledge, we consider a more realistic and challenging sim-to-real setting, where the simulation and real-world environment provide different information. Such a setting is common for real-world robotics applications since the simulated environment often provides the oracle state information, including _privileged knowledge_ (e.g., dynamics, surroundings, terrains), while the real robot can only rely on local states (e.g., _proprioceptive feedback_ of robot joints) to select the action. Such a sim-to-real gap can also be extended to a general policy generalization problem with a knowledge gap. Previous methods solve this problem via a two-stage policy distillation process [34]. In particular, a teacher policy is first trained in the simulator with oracle states, and then a student policy with local states is trained by imitating the teacher policy. Nevertheless, such a two-stage paradigm is computationally expensive and requires careful design for imitation. An alternative way is to gradually drop the privileged information as the policy is trained [23], which, however, cannot take advantage of the known privileged information sufficiently in training.
In this paper, we present a representation-based approach, instead of policy distillation, to better utilize the privileged knowledge from training data with a single-stage learning paradigm. In particular, we propose a novel method called **H**istorical **I**nformation **B**ottleneck (HIB) to distill the privileged knowledge. HIB takes advantage of historical information that contains previous local states and actions to learn a history representation, which is trained by maximizing the mutual information (MI) between the representation and the privileged knowledge. Theoretically, we show that maximizing such an MI term will minimize the privileged knowledge modeling error, reducing the discrepancy between the optimal value function and the learned value function. Furthermore, motivated by the Information Bottleneck (IB) principle, we compress the decision-irrelevant information from the history and obtain a more robust representation. The IB objective is approximated by variational lower bounds to handle the high-dimensional state space.
In summary, our contributions are threefold: (i) We propose a novel policy generalization method called HIB that follows the Information Bottleneck (IB) [40] principle to distill privileged knowledge from a fixed length of history. (ii) We provide a theoretical analysis of both the policy distillation methods and the proposed method, which shows that minimizing the privilege modeling error is crucial in learning a near-optimal policy. (iii) Empirically, we show that HIB learns robust representations in randomized RL environments and achieves better generalization performance in both simulated and real-world environments than SOTA algorithms, including out-of-distribution test environments.
## 2 Related Work
**Sim-to-Real Transfer.** Transferring RL policies from simulation to reality is challenging due to the domain mismatch between the two domains. To this end, previous studies hinge on domain randomization, which trains the policy under a wide range of environmental parameters and sensor noises [50; 6; 30]. However, domain randomization typically sacrifices optimality for robustness, leading to an over-conservative policy [47]. In addition, domain randomization cannot handle the more challenging setting where the simulator contains privileged knowledge that is unavailable in the real world. To address this problem, policy distillation methods [22; 21; 28] perform privilege distillation by first learning a teacher policy with access to the privileged knowledge, and then using the historical trajectories as inputs to distill a student policy through supervised training. However, policy distillation is sample-inefficient and cannot bridge the knowledge gap effectively. An alternative distillation method gradually drops privileged knowledge [23], but it still sacrifices optimality for better generalization and cannot fully leverage historical knowledge. In contrast, our method distills privileged knowledge in a single stage, resulting in better utilization of historical information.
**Contrastive Representation.** Contrastive representation learning is a class of methods that learn a representation that obeys similarity constraints in a dataset typically organized by similar and dissimilar pairs [44; 16; 7]. Contrastive learning has been recently used in RL as auxiliary tasks to improve sample efficiency. Srinivas et al. [39] combines data augmentation [48] and contrastive loss for image-based RL. Other methods also use contrastive learning to extract dynamics-relevant [26; 32], temporal consistent [38] or goal-conditional [10] information, thereby obtaining a stable and task-relevant representation. In our use case, we select similar pairs by the privileged knowledge
obtained and its corresponding trajectory history. Furthermore, we aim to distill the privileged knowledge in a representation space without negative sampling. We highlight that we propose the first representation learning method that learns a privileged knowledge representation for policy generalization.
**Information Bottleneck for RL.** The IB principle [41; 40] was initially proposed to trade off the accuracy and complexity of the representation in supervised learning. Specifically, IB maximizes the MI between representation and targets to extract useful features, while also compressing the irrelevant information by limiting the MI between representation and raw inputs [2; 35]. Recently, IB has been employed in RL to acquire a compact and robust representation. For example, Fan and Li [11] takes advantage of IB to learn task-relevant representation via a multi-view augmentation. Other methods [20; 24; 3] maximize the MI between representation and dynamics or value function, and restrict the information to encourage the encoder to extract only the task-relevant information. Unlike the previous works that neither tackle the policy generalization problem nor utilize historical information, HIB derives a novel objective based on IB, which aims to learn a robust representation of privileged knowledge from history while simultaneously removing redundant decision-irrelevant information.
## 3 Preliminaries
In this section, we briefly introduce the problem definition and the corresponding notations used throughout this paper. We give the definition of privileged knowledge in robot learning as follows.
**Definition 1** (Privileged Knowledge).: _Privileged knowledge is the hidden state that is inaccessible in the real environment but can be obtained in the simulator, e.g., surrounding heights, terrain types, and dynamic parameters like friction and damping. An oracle (teacher) policy is defined as the optimal policy with privileged knowledge visible._
In the policy generalization problem, we define the MDP as \(\mathcal{M}=(\mathcal{S}^{l},\mathcal{S}^{p},\mathcal{A},P,r,\gamma)\), where \([s^{l},s^{p}]=s^{o}\) represents the oracle state \(s^{o}\) that contains \(s^{l}\in\mathcal{S}^{l}\) (i.e., the local state space) and \(s^{p}\in\mathcal{S}^{p}\) (i.e., the privileged state space), where \(s^{p}\) contains privileged knowledge defined in Definition 1. \(\mathcal{A}\) is the action space. The transition function \(P(s^{o}_{t+1}|s^{o}_{t},a_{t})\) and reward function \(r(s^{o},a)\) follows the ground-truth dynamics based on the oracle states. Based on the MDP, we define two policies: \(\pi(a|s^{l},s^{p})\) and \(\hat{\pi}(a|s^{l})\), for the simulation and real world, respectively. Specifically, \(\pi(a|s^{l},s^{p})\) is an oracle policy that can access the privileged knowledge, which is only accessible in the simulator. In contrast, \(\hat{\pi}(a|s^{l})\) is a local policy without accessing the privileged knowledge throughout the interaction process, which is common in the real world.
Our problem is reminiscent of the Partially Observable MDP (POMDP) [18; 27]. Nevertheless, agents in a POMDP cannot access the privileged information in either training or evaluation, which differs from our setting, where the agent can learn to extract the privileged knowledge in simulation. A discussion of these two problems is given in Appendix A.1.
Based on the above definition, our objective is to find the optimal local policy \(\hat{\pi}^{*}\) based on the local state that maximizes the expected return, denoted as
\[\hat{\pi}^{*}:=\arg\max_{\hat{\pi}}\mathbb{E}_{a_{t}\sim\hat{\pi}(\cdot|s^{l} _{t})}\Big{[}\sum\nolimits_{t=0}^{\infty}\gamma^{t}r(s^{o}_{t},a_{t})\Big{]}. \tag{1}\]
In our work, the main challenge is how to utilize the privileged state \(\mathcal{S}^{p}\) to make the policy learned with local states \(\mathcal{S}^{l}\) approach the optimal oracle policy \(\pi^{*}\) learned with \(\mathcal{S}^{o}\). Our goal is to obtain a near-optimal local policy that can generalize from simulation to the real world without significant performance degradation.
## 4 Theoretical Analysis & Motivation
### Value Discrepancy for Policy Generalization
In this section, we give a theoretical analysis of traditional oracle policy imitation algorithms [12]. Specifically, the local policy is learned by imitating the optimal oracle policy \(\pi^{*}\). We denote the optimal value function of the policy \(\pi^{*}\) learned with oracle states \(s^{o}=[s^{l},s^{p}]\) as \(Q^{*}(s^{l},s^{p},a)\), and the value function of policy learned with local states as \(\hat{Q}^{\pi}(s^{l},a)\). The following theorem analyzes the relationship between the value discrepancy and the policy imitation error in a finite MDP setting.
**Theorem 1** (Policy imitation discrepancy).: _The value discrepancy between the optimal value function with privileged knowledge and the value function with the local state is bounded as_
\[\sup_{s^{l},s^{p},a}\left|Q^{*}(s^{l},s^{p},a)-\hat{Q}^{\hat{\pi}}(s^{l},a) \right|\leq\frac{2\gamma r_{\max}}{(1-\gamma)^{2}}\epsilon_{\hat{\pi}}, \tag{2}\]
_where_
\[\epsilon_{\hat{\pi}}=\sup_{s^{l},s^{p}}D_{\mathrm{TV}}\big{(}\pi^{*}(\cdot|s^ {l},s^{p})\|\hat{\pi}(\cdot|s^{l})\big{)} \tag{3}\]
_is the policy divergence between \(\pi^{*}\) and \(\hat{\pi}\), and \(r_{\max}\) is the maximum reward in each step._
The proof is given in Appendix A.2. Theorem 1 shows that minimizing the total variation (TV) distance between \(\pi^{*}(\cdot|s^{l},s^{p})\) and \(\hat{\pi}(\cdot|s^{l})\) reduces the value discrepancy. However, minimizing \(\epsilon_{\hat{\pi}}\) can be more difficult than in ordinary imitation learning, where \(\pi^{*}\) and \(\hat{\pi}\) share the same state space. Specifically, if \(\pi^{*}\) and \(\hat{\pi}\) have the same inputs, \(D_{\mathrm{TV}}(\pi^{*}(\cdot|s)\|\hat{\pi}(\cdot|s))\) will approach zero with sufficient model capacity and enough iteration steps, at least in theory. In contrast, due to the lack of privileged knowledge in our problem, the policy error term in Eq. (3) can still be large after optimization, as \(\pi^{*}(\cdot|s^{l},s^{p})\) and \(\hat{\pi}(\cdot|s^{l})\) have different inputs.
Previous works [21; 22] try to use historical trajectory
\[h_{t}=\{s^{l}_{t},a_{t-1},s^{l}_{t-1},\ldots,a_{t-k},s^{l}_{t-k}\} \tag{4}\]
with a fixed length to help infer the oracle policy, and the policy imitation error becomes \(D_{\mathrm{TV}}(\pi^{*}(\cdot|s^{l},s^{p})\|\hat{\pi}(\cdot|s^{l},h))\). However, without appropriately using the historical information, imitating a well-trained oracle policy can still be difficult for an agent with limited capacity, which results in suboptimal performance. Meanwhile, imitating the oracle policy can be sample-inefficient as it needs to train an oracle policy for many epochs based on oracle states. Another alternative distillation method drops privileged knowledge gradually [23] to alleviate the difficulty in mimicking a well-trained oracle agent. Such a method sacrifices the optimality to enhance learning efficiency as it does not use history to capture more information.
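To make the fixed-length history of Eq. (4) concrete, the following minimal Python sketch (ours, not the implementation of [21; 22]) maintains such a history with a rolling buffer; the class name, the zero-padding at the start of an episode, and the flat concatenation order are assumptions.

```python
import numpy as np
from collections import deque


class HistoryBuffer:
    """Keeps the last k (local state, action) pairs and returns the flat
    history vector h_t = {s_t, a_{t-1}, s_{t-1}, ..., a_{t-k}, s_{t-k}}."""

    def __init__(self, k, state_dim, action_dim):
        self.k = k
        self.state_dim, self.action_dim = state_dim, action_dim
        self.states = deque(maxlen=k + 1)   # s_{t-k} ... s_t
        self.actions = deque(maxlen=k)      # a_{t-k} ... a_{t-1}

    def reset(self, s0):
        self.states.clear()
        self.actions.clear()
        # zero-pad so h_t always has a fixed length, even at episode start
        for _ in range(self.k):
            self.states.append(np.zeros(self.state_dim))
            self.actions.append(np.zeros(self.action_dim))
        self.states.append(np.asarray(s0))

    def step(self, action, next_state):
        self.actions.append(np.asarray(action))
        self.states.append(np.asarray(next_state))

    def vector(self):
        parts = [self.states[-1]]              # s_t
        for i in range(1, self.k + 1):
            parts.append(self.actions[-i])     # a_{t-i}
            parts.append(self.states[-1 - i])  # s_{t-i}
        return np.concatenate(parts)


# usage: buf = HistoryBuffer(k=15, state_dim=48, action_dim=12)
# buf.reset(s0); buf.step(a, s1); h_t = buf.vector()
```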
### Privilege Modeling Discrepancy
To address the above challenges, we raise an alternative theoretical motivation to relax the requirement of the oracle policy. Specifically, we quantify the discrepancy between the optimal value function and the learned value function based on the error bound in reconstructing the privileged state \(s^{p}_{t}\) via historical information \(h_{t}\), which eliminates the reliance on the oracle policy and makes our method a _single-stage_ distillation algorithm. Specifically, we define a density model \(\hat{P}(s^{p}_{t}|h_{t})\) to predict the privileged state based on history \(h_{t}\) in Eq. (4). Then the predicted privileged state can be sampled as \(\hat{s}^{p}_{t}\sim\hat{P}(\cdot|h_{t})\). In policy learning, we concatenate the local state \(s^{l}\) and the predicted \(\hat{s}^{p}\) as input. The following theorem gives the value discrepancy of \(Q^{*}\) and the value function \(\hat{Q}\) with the predicted \(\hat{s}^{p}\).
**Theorem 2** (Privilege modeling discrepancy).: _Let the divergence between the privileged state model \(\hat{P}(s^{p}_{t+1}|h_{t+1})\) and the true distribution of privileged state \(P(s^{p}_{t+1}|h_{t+1})\) be bounded as_
\[\epsilon_{\hat{P}}=\sup_{t\geq t_{0}}\sup_{h_{t+1}}D_{\mathrm{TV}}\big{(}P( \cdot\mid h_{t+1})\parallel\hat{P}(\cdot\mid h_{t+1})\big{)}. \tag{5}\]
_Then the performance discrepancy bound between the optimal value function with \(P\) and the value function with \(\hat{P}\) holds, as_
\[\sup_{t\geq t_{0}}\sup_{s^{l},s^{p},a}|Q^{*}(s^{l}_{t},s^{p}_{t},a_{t})-\hat{ Q}_{t}(s^{l}_{t},\hat{s}^{p}_{t},a_{t})|\leq\frac{\Delta_{\mathbb{E}}}{(1- \gamma)}+\frac{2\gamma r_{\max}}{(1-\gamma)^{2}}\epsilon_{\hat{P}}, \tag{6}\]
_where \(\Delta_{\mathbb{E}}=\sup_{t\geq t_{0}}\left\|Q^{*}-\mathbb{E}_{s^{p}_{t}\sim P (\cdot|h_{t})}[Q^{*}]\right\|_{\infty}+\left\|\hat{Q}-\mathbb{E}_{\hat{s}^{p} _{t}\sim P(\cdot|h_{t})}[\hat{Q}]\right\|_{\infty}\) is the difference in the same value function with sampled \(s^{p}_{t}\) and the expectation of \(s^{p}_{t}\) conditioned on \(h_{t}\)._
Proof Sketch.: Since the privilege distribution \(P\) and \(\hat{P}\) can be stochastic, we introduce \(\Delta_{\mathbb{E}}\) to measure the difference in the same value function with the sampled privileged state and the expectation of
\(s^{p}_{t}\) (or \(\hat{s}^{p}_{t}\)) conditioned on \(h_{t}\). The value discrepancy between \(Q^{*}(s^{l}_{t},s^{p}_{t},a_{t})\) and \(\hat{Q}_{t}(s^{l}_{t},\hat{s}^{p}_{t},a_{t})\) is derived as
\[Q^{*}(s^{l}_{t},s^{p}_{t},a_{t})-\hat{Q}_{t}(s^{l}_{t},\hat{s}^{p}_{t},a_{t}) \leq\Delta_{\mathbb{E}}(t)+\underbrace{\mathbb{E}_{s^{p}_{t}\sim P(\cdot|h_{t })}[Q^{*}(s^{l}_{t},s^{p}_{t},a_{t})]-\mathbb{E}_{s^{p}_{t}\sim P(\cdot|h_{t})} [\hat{Q}_{t}(s^{l}_{t},s^{p}_{t},a_{t})]}_{(i)\text{ value error}}\]
\[+\underbrace{\mathbb{E}_{s^{p}_{t}\sim P(\cdot|h_{t})}[\hat{Q}_{t}(s^{l}_{t}, s^{p}_{t},a_{t})]-\mathbb{E}_{\hat{s}^{p}_{t}\sim\hat{P}(\cdot|h_{t})}[\hat{Q}_{t} (s^{l}_{t},\hat{s}^{p}_{t},a_{t})]}_{(ii)\text{ model error}}. \tag{7}\]
Term \((i)\) represents the value difference of \(Q^{*}\) and \(\hat{Q}\) with the true privilege distribution, which can be bounded by the infinite norm of value difference. Term \((ii)\) represents the model difference of the privileged state with different distributions, and we introduce \(\epsilon_{\hat{P}}\) to consider the model discrepancy in the worst case with an informative history.
In \(\Delta_{\mathbb{E}}\), we consider using a sufficiently long (i.e., with history length \(t\) greater than some \(t_{0}\)) and informative (i.e., by extracting useful features) history to make \(\Delta_{\mathbb{E}}\) small in practice. We remark that \(\Delta_{\mathbb{E}}\) captures the inherent difficulty of learning without privileged information. The error is small if the privileged information is nearly deterministic given the history, or if the privileged information is not useful given the history. We defer the detailed proof to Appendix A.3. In the next section, we provide an instantiation method inspired by Theorem 2.
## 5 Methodology
In this section, we propose a practical algorithm named HIB to perform privilege distillation via a historical representation. HIB only requires the oracle state, not an oracle policy, during training. In evaluation, HIB relies on the local state and the learned historical representation to choose actions.
### Reducing the Discrepancy via MI
Theorem 2 indicates that minimizing \(\epsilon_{\hat{P}}\) yields a tighter performance discrepancy bound. We then start by analyzing the privilege modeling discrepancy \(\epsilon_{\hat{P}}\) in Eq. (5). We denote the parameter of \(\hat{P}_{\phi}\) by \(\phi\), then the optimal solution \(\phi^{*}\) can be obtained by minimizing the TV divergence for \(\forall t\), as
\[\phi^{*} =\arg\min_{\phi}D_{\mathrm{TV}}\big{(}P(\cdot|h_{t})\big{\|}\hat {P}_{\phi}(\cdot|h_{t})\big{)}=\arg\min_{\phi}D_{\mathrm{KL}}\big{(}P(\cdot|h_ {t})\big{\|}\hat{P}_{\phi}(\cdot|h_{t})\big{)} \tag{8}\] \[=\arg\max_{\phi}\mathbb{E}_{p(s^{p}_{t},h_{t})}\big{[}\log\hat{P} _{\phi}(s^{p}_{t}|h_{t})\big{]}\triangleq\arg\max_{\phi}I_{\mathrm{pred}}, \tag{9}\]
where the true distribution \(P(\cdot|h_{t})\) is irrelevant to \(\phi\), and we convert the TV distance to the KL distance in Eq. (8) by following Pinsker's inequality. Since \(h_{t}\) is usually high-dimensional, which is of linear complexity with respect to time, it is necessary to project \(h_{t}\) in a representation space and then predict \(s^{p}_{t}\). Thus, we split the parameter of \(\hat{P}_{\phi}\) as \(\phi=[\phi_{1},\phi_{2}]\), where \(\phi_{1}\) aims to learn a historical representation \(z=f_{\phi_{1}}(h_{t})\) first, and \(\phi_{2}\) aims to predict the distribution \(\hat{P}_{\phi_{2}}(z)\) of privileged state (e.g., a Gaussian). In the following, we show that maximizing \(I_{\mathrm{pred}}\) is closely related to maximizing the MI between the historical representation and the privileged state. In particular, we have
\[I_{\mathrm{pred}} =\mathbb{E}_{p(s^{p}_{t},h_{t})}\big{[}\log\hat{P}_{\phi_{2}} \big{(}s^{p}_{t}|f_{\phi_{1}}(h_{t})\big{)}\big{]} \tag{10}\] \[=\mathbb{E}_{p(s^{p}_{t},h_{t})}\big{[}\log P\big{(}s^{p}_{t}|f_{ \phi_{1}}(h_{t})\big{)}\big{]}-D_{\mathrm{KL}}[P\|\hat{P}]=-\mathcal{H}\big{(} S^{p}_{t}|f_{\phi_{1}}(H_{t})\big{)}-D_{\mathrm{KL}}[P\|\hat{P}]\] \[=I\big{(}S^{p}_{t};f_{\phi_{1}}(H_{t})\big{)}-\mathcal{H}(S^{p}_{ t})-D_{\mathrm{KL}}[P\|\hat{P}]\leq I\big{(}S^{p}_{t};f_{\phi_{1}}(H_{t}) \big{)},\]
where we denote the random variables for \(s^{p}_{t}\) and \(h_{t}\) by \(S^{p}_{t}\) and \(H_{t}\), respectively. In Eq. (10), the upper bound is obtained by the non-negativity of the Shannon entropy and KL divergence. The bound is tight since the entropy of the privileged state \(\mathcal{H}(S^{p}_{t})\) is usually fixed, and \(D_{\mathrm{KL}}(P\|\hat{P})\) can be small when we use a variational \(\hat{P}_{\phi}\) with an expressive network.
According to Eq. (10), maximizing the predictive objective \(I_{\mathrm{pred}}\) is closely related to maximizing the MI between \(S^{p}_{t}\) and \(f_{\phi_{1}}(H_{t})\). In HIB, we adopt contrastive learning [44] as an alternative variational approximator [31] to approximate the MI in a representation space, which avoids the difficulty, present in the \(I_{\mathrm{pred}}\) objective, of reconstructing the raw privileged state, which can be noisy and high-dimensional. Moreover, HIB restricts the capacity of the representation to remove decision-irrelevant information from the history, which resembles the IB principle [41] in information theory.
### Historical Information Bottleneck
We first briefly introduce the IB principle. In a supervised setting that aims to learn a representation \(Z\) of a given input source \(X\) with the target source \(Y\), IB maximizes the MI between \(Z\) and \(Y\) (i.e., max \(I(Z;Y)\)) and restricts the complexity of \(Z\) using the constraint as \(I(Z;X)<I_{c}\). Combining the two terms, the IB objective is equal to \(\max I(Z;Y)-\alpha I(Z;X)\) with a Lagrange multiplier \(\alpha\).
To optimize the MI in Eq. (10) via contrastive objective [9], we introduce a historical representation \(z_{t}\sim f_{\psi}(h_{t})\) to extract useful features that contain privileged information from a long historical vector \(h_{t}\), where \(f_{\psi}\) is a temporal convolution network (TCN) [4] that captures long-term information along the time dimension. We use another notation \(\psi\) to distinguish it from the predictive encoder \(\phi_{1}\) in Eq. (10), since the contrastive objective and predictive objective \(I_{\mathrm{pred}}\) learn distinct representations by optimizing different variational bounds. In our IB objective, the input variable is \(H_{t}\) and the corresponding target variable is \(S_{t}^{p}\). Our objective is to maximize the MI term \(I(Z_{t};\mathcal{S}_{t}^{p})\) while minimizing the MI term \(I(H_{t};Z_{t})\) with \(Z_{t}=f_{\psi}(H_{t})\), which takes the form of
\[\min-I(Z_{t};S_{t}^{p})+\alpha I(H_{t};Z_{t}), \tag{11}\]
where \(\alpha\) is a Lagrange multiplier. The \(I(Z_{t};S_{t}^{p})\) term quantifies the amount of information about the privileged knowledge preserved in \(Z_{t}\), and the \(I(H_{t};Z_{t})\) term is a regularizer that controls the complexity of representation learning. With a well-tuned \(\alpha\), we do not discard useful information that is relevant to the privileged knowledge.
We minimize the MI term \(I(H_{t};Z_{t})\) in Eq. (11) by minimizing a tractable upper bound. To this end, we introduce a variational approximation \(q(z_{t})\) to the intractable marginal \(p(z_{t})=\int p(h_{t})p(z_{t}|h_{t})dh_{t}\). Specifically, the following upper-bound of \(I(H_{t};Z_{t})\) holds,
\[\begin{split} I(H_{t};Z_{t})=\mathbb{E}_{p(z_{t},h_{t})}\Big{[} \log\frac{p(z_{t}|h_{t})}{p(z_{t})}\Big{]}&=\mathbb{E}_{p(z_{t},h_{t})}\Big{[}\log\frac{p(z_{t}|h_{t})}{q(z_{t})}\Big{]}-D_{\mathrm{KL}}[p(z_ {t})\|q(z_{t})]\\ &\leq D_{\mathrm{KL}}[p(z_{t}|h_{t})\|q(z_{t})],\end{split} \tag{12}\]
where the inequality follows the non-negativity of the KL divergence, and \(q(z_{t})\) is an approximation of the marginal distribution of \(Z_{t}\). We follow Alemi et al. [2] and use a spherical Gaussian \(q(z_{t})=\mathcal{N}(0,I)\) as an approximation.
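Since \(q(z_{t})\) is fixed to a spherical Gaussian, the upper bound in Eq. (12) has a closed form when the encoder outputs a diagonal Gaussian \(p(z_{t}|h_{t})=\mathcal{N}(\mu,\mathrm{diag}(\sigma^{2}))\). Below is a small PyTorch sketch of this \(\mathcal{L}_{\mathrm{KL}}\) term under the assumption of such a stochastic encoder head; the function and variable names are ours.

```python
import torch


def kl_to_standard_normal(mu, log_std):
    """D_KL( N(mu, diag(exp(log_std)^2)) || N(0, I) ), averaged over the batch.

    mu, log_std: tensors of shape (batch, latent_dim) produced by the
    history encoder f_psi; this is the L_KL regularizer bounding Eq. (12).
    """
    var = torch.exp(2.0 * log_std)
    kl_per_dim = 0.5 * (var + mu ** 2 - 1.0 - 2.0 * log_std)
    return kl_per_dim.sum(dim=-1).mean()


# usage with an assumed stochastic encoder head:
# mu, log_std = f_psi(h)                            # both (batch, latent_dim)
# z = mu + torch.randn_like(mu) * log_std.exp()     # reparameterized sample
# loss_kl = kl_to_standard_normal(mu, log_std)
```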
One can maximize the MI term \(I(Z_{t};S_{t}^{p})\) in Eq. (11) based on the contrastive objective [9]. Specifically, for a given \(s_{t}^{p}\), the positive sample \(z_{t}\sim f_{\psi}(h_{t})\) is the feature of corresponding history in timestep \(t\), and the negative sample \(z^{-}\) can be extracted from randomly sampled historical vectors. However, considering unresolved trade-offs involved in negative sampling [46; 17], we try to simplify the contrastive objective without negative samples. In HIB, we empirically find that the performance does not decrease without negative sampling. Such a simplification was also adopted by recent contrastive methods for RL [37; 32]. Without negative sampling, the contrastive loss becomes a cosine similarity with only positive pairs.
Figure 1: HIB adopts the IB principle to recover the privileged knowledge from a fixed length of local history information. The RL objective also provides gradients to the history encoder \(f_{\psi}\), implying that the learned representation can be combined with any RL algorithm effectively.

We adopt a two-stream architecture to learn \(z_{t}\), consisting of an _online_ and a _target_ network. Each network contains an encoder and a projector, as shown in Fig. 1. The online network is trained to use history to predict the corresponding privilege representation. Given a pair of a history sequence and privileged state \((h_{t},s_{t}^{p})\), we obtain \(\tilde{e}_{t}=f_{\omega}(s_{t}^{p})\) with an encoder \(f_{\omega}\) to get the representation of \(s_{t}^{p}\). Then we use a TCN as the history encoder \(f_{\psi}\) to learn the latent representation \(z_{t}\sim f_{\psi}(\cdot|h_{t})\). Here \(f_{\omega}\) is used to project \(s_{t}^{p}\) into the same dimensional space as \(z_{t}\), so \(f_{\omega}\) can be an identity operator or a simple MLP when a dimension change is required (see Appendix C.1 for implementation details). As \(\tilde{e}_{t}\) and \(z_{t}\) have the same dimensions, the projectors share the same architecture. The online projector \(g_{\theta}\) outputs \(y_{t}=g_{\theta}(z_{t})\) and the target projector \(g_{\theta^{-}}\) outputs \(\tilde{y}_{t}=g_{\theta^{-}}(\tilde{e}_{t})\). We use the following cosine similarity loss between \(y_{t}\) and \(\tilde{y}_{t}\), and apply the stop gradient (\(\mathrm{sg}[\cdot]\)) to the target value \(\tilde{y}_{t}\), as
\[\mathcal{L}_{\mathrm{sim}}=-\sum_{y_{t},\,\tilde{y}_{t}}\left(\frac{y_{t}}{ \left\|y_{t}\right\|_{2}}\right)^{\top}\left(\frac{\mathrm{sg}[\tilde{y}_{t}] }{\left\|\mathrm{sg}[\tilde{y}_{t}]\right\|_{2}}\right), \tag{13}\]
To prevent collapsed solutions in the two-stream architecture, we follow previous work [14] and apply a momentum update to the target network. Specifically, the parameter of the target network \(\theta^{-}\) takes an exponential moving average of the online parameters \(\theta\) with a factor \(\tau\in[0,1]\), as \(\theta^{-}\leftarrow\tau\theta^{-}+(1-\tau)\theta\).
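The loss in Eq. (13) and the momentum update can be written compactly; the following is a hedged PyTorch sketch in which the function names and the default \(\tau\) are our assumptions rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F


def similarity_loss(y, y_target):
    """Negative cosine similarity between online and target projections (Eq. 13).
    The target branch is detached, playing the role of sg[.]."""
    y = F.normalize(y, dim=-1)
    y_target = F.normalize(y_target.detach(), dim=-1)
    return -(y * y_target).sum(dim=-1).mean()


@torch.no_grad()
def momentum_update(online_net, target_net, tau=0.99):
    """theta^- <- tau * theta^- + (1 - tau) * theta, used to avoid collapse."""
    for p_t, p_o in zip(target_net.parameters(), online_net.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)
```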
At each training step, we perform a stochastic optimization step to minimize \(\mathcal{L}_{\mathrm{sim}}\) with respect to \(\theta\) and \(\psi\). Meanwhile, we learn an RL policy \(\pi(a|s_{t}^{l},z_{t})\) based on the historical representation \(z_{t}\), and the RL objective is also used to train the TCN encoder \(f_{\psi}\). The dynamics are summarized as
\[\theta\leftarrow\mathrm{optimizer}(\theta,\nabla_{\theta}\mathcal{L}_{\mathrm{sim}}),\quad\psi\leftarrow\mathrm{optimizer}\big{(}\psi,\nabla_{\psi}(\lambda_{1}\mathcal{L}_{\mathrm{sim}}+\lambda_{2}\mathcal{L}_{\mathrm{KL}}+\mathcal{L}_{\mathrm{RL}}(s_{t}^{l},z_{t}))\big{)}, \tag{14}\]
where \(\mathcal{L}_{\mathrm{KL}}=D_{\mathrm{KL}}(f_{\psi}(h_{t})||\mathcal{N}(0,I))\) is the IB term in Eq. (12) that controls the latent complexity, and \(\mathcal{L}_{\mathrm{RL}}\) is the loss function for an arbitrary RL algorithm. We summarize the process in Alg. 1.
```
Training Process (sim)
Initialize: Buffer \(\mathcal{D}=\{[s_{t}^{l},s_{t}^{p}],a_{t},r_{t},[s_{t+1}^{l},s_{t+1}^{p}],h_{t}\}\)
Initialize: Historical encoder \(f_{\psi}\), privilege encoder \(f_{\omega}\), projector \(g_{\theta}\) and target projector \(g_{\theta^{-}}\).
1: while not converged do
2:   Interact with the environment to collect \((s_{i}^{o},a_{i},r_{i},s_{i+1}^{o})\) with the privileged state and save it to \(\mathcal{D}\)
3:   for \(j\) from 0 to \(N\) do
4:     Sample a batch of \((s_{i}^{o},a_{i},r_{i},s_{i+1}^{o})\) with history \(h_{i}\)
5:     Compute the online branch \(z_{i}\sim f_{\psi}(h_{i})\), \(y_{i}\gets g_{\theta}(z_{i})\), and the target branch \(\tilde{e}_{i}\gets f_{\omega}(s_{i}^{p})\), \(\tilde{y}_{i}\gets g_{\theta^{-}}(\tilde{e}_{i})\)
6:     Compute the cosine similarity \(\mathcal{L}_{\mathrm{sim}}(y_{i},\tilde{y}_{i})\), the KL regularization \(\mathcal{L}_{\mathrm{KL}}\), and the RL objective \(\mathcal{L}_{\mathrm{RL}}\)
7:     Update the HIB parameters via Eq. (14)
8:   end for
9: end while
```
**Algorithm 1** Historical Information Bottleneck (HIB)
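As a companion to Alg. 1, here is a condensed PyTorch sketch of a single update following Eq. (14). It is not the released code: the Conv1d stack standing in for the TCN, the linear projectors (with \(f_{\omega}\) left fixed), the Adam optimizers, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H_LEN, S_DIM, P_DIM, Z_DIM = 16, 48, 17, 32          # illustrative sizes only

history_encoder = nn.Sequential(                      # f_psi: (B, S_DIM, H_LEN) -> (B, 2*Z_DIM)
    nn.Conv1d(S_DIM, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 2 * Z_DIM))
priv_encoder = nn.Linear(P_DIM, Z_DIM)                # f_omega (kept fixed in this sketch)
projector = nn.Linear(Z_DIM, Z_DIM)                   # g_theta
target_projector = nn.Linear(Z_DIM, Z_DIM)            # g_theta^-
target_projector.load_state_dict(projector.state_dict())

opt_theta = torch.optim.Adam(projector.parameters(), lr=3e-4)
opt_psi = torch.optim.Adam(history_encoder.parameters(), lr=3e-4)


def hib_step(h, s_p, rl_loss_fn, lam1=1.0, lam2=0.1, tau=0.99):
    """One stochastic update following Eq. (14); rl_loss_fn(z) is any
    differentiable RL objective consuming the historical representation."""
    mu, log_std = history_encoder(h).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * log_std.exp()                      # z_t ~ f_psi(.|h_t)
    loss_kl = 0.5 * (log_std.exp() ** 2 + mu ** 2 - 1.0 - 2.0 * log_std).sum(-1).mean()

    y = F.normalize(projector(z), dim=-1)
    with torch.no_grad():                                              # sg[.] on the target branch
        y_tgt = F.normalize(target_projector(priv_encoder(s_p)), dim=-1)
    loss_sim = -(y * y_tgt).sum(-1).mean()                             # Eq. (13)
    loss_psi = lam1 * loss_sim + lam2 * loss_kl + rl_loss_fn(z)

    opt_theta.zero_grad()
    opt_psi.zero_grad()
    loss_sim.backward(retain_graph=True, inputs=list(projector.parameters()))
    loss_psi.backward(inputs=list(history_encoder.parameters()))
    opt_theta.step()
    opt_psi.step()
    with torch.no_grad():                                              # momentum update of g_theta^-
        for p_t, p_o in zip(target_projector.parameters(), projector.parameters()):
            p_t.mul_(tau).add_(p_o, alpha=1.0 - tau)


# usage with dummy data:
# hib_step(torch.randn(8, S_DIM, H_LEN), torch.randn(8, P_DIM),
#          rl_loss_fn=lambda z: z.pow(2).mean())
```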
## 6 Experiments
### Benchmarks and Compared Methods
To quantify the generalizability of the proposed HIB, we conduct experiments in simulated environments that include multiple domains for a comprehensive evaluation, and also the legged robot locomotion task to evaluate the generalizability in sim-to-real transfer.
**Privileged DMC Benchmark.** We conduct experiments on the DeepMind Control Suite (DMC) [42] with manually defined privileged information, which contains dynamic parameters such as friction and torque strength. The privileged knowledge is _only_ visible in the training process. Following Benjamins et al. [5], we randomize the privilege parameters at the beginning of each episode, and the randomization range can be different for training and testing. Specifically, we choose three randomization ranges for varied difficulty levels, i.e., _ordinary_, _o.o.d._, and _far o.o.d._. The _ordinary_ setting means that the test environment has the same randomization range as in training, while _o.o.d._ and _far o.o.d._ indicate that the randomization ranges are larger to different degrees, causing the test environments to be out-of-distribution compared with the training environment. The detailed setup can
be found in Appendix B.1. We evaluate the algorithms in three different domains, namely _pendulum_, _finger spin_, and _quadruped walk_, which cover various difficulties ranging from _ordinary_ to _far o.o.d._. This Privileged DMC benchmark is referred to as DMC benchmark in the following for simplicity.
**Sim-to-Real Learning in Legged Robot.** This experiment is conducted on a quadrupedal robot. In this domain, privileged knowledge is defined as terrain information (e.g. heights of surroundings) of the environment and dynamic information such as friction, mass, and damping of the quadrupedal robot. In simulation, we develop the training code based on the open-source codebase [33] for on-policy PPO in legged robot, which leverages the Isaac Gym simulator [25] to support simulation of massive robots in parallel. The simulated environment also provides multi-terrain simulation, including slopes, stairs, and discrete obstacles with automatic curriculum that adapts the task difficulty according to the performance. Details can be found in Appendix B.2. For policy generalization in the real robot, we utilize Unitree A1 robot [43] to facilitate the real-world deployment.
**Baselines.** We compare HIB to the following baselines. (i) **Teacher** policy is learned by oracle states with privileged knowledge. (ii) **Student** policy follows RMA [21] that mimics the teacher policy through supervised learning, with the same architecture as the history encoder in HIB. We remark that student needs a two-stage training process to obtain the policy. (iii) **Dropper** is implemented according to Li et al. [23], which gradually drops the privileged information and finally converts to a normal agent that only takes local states as input. (iv) **DR** agent utilizes domain randomization for generalization and is directly trained with standard RL algorithms with local states as input.
### Simulation Comparison
For _DMC benchmark_, we choose SAC [15] as the basic RL algorithm to perform a fair comparison. For _Legged Robot_ task, we adopt PPO [36] as the basic RL algorithm since previous studies show that PPO combined with massive parallelism obtains remarkable performance in challenging legged locomotion tasks [22; 21]. As described above, we uniformly sample privilege parameters episodically from a specified range for both training and testing. Thus, the agent needs to take actions in different underlying dynamics episodically, which is very different from the standard RL setting.
The results for the DMC benchmark are shown in Table 1. Our method achieves the best performance in almost all test environments, especially in the most challenging task _quadruped walk_. Surprisingly, we find HIB even outperforms the _teacher_ policy in this task. We hypothesize that this environment is stochastic, and the history of local states and actions contains more useful information that can benefit future decisions. In _quadruped walk_, an agent relies on the angles and positions of its previous leg joints to make a smooth movement, which is somewhat more important than the current privileged state. HIB utilizes this history better, while the other methods fail to do so.
The results on Legged Robot benchmark (Table 2) further verify the advantage of HIB, where our method outperforms other baselines on most terrains except on rough slope. From the simulation
\begin{table}
\begin{tabular}{c|c|c|c c c c} \hline \hline
**Domain** & **Testing Difficulty** & **Teacher** & **HIB (ours)** & **SAC-DR** & **Student** & **Dropper** \\ \hline \multirow{3}{*}{_Pendulum_} & ordinary & \(-98.29\pm 80.41\) & \(-10.33\pm 89.67\) & \(-206.33\pm 259.66\) & \(-107.69\pm 90.87\) & \(-204.50\pm 223.74\) \\ & o.o.d. & \(-251.16\pm 34.52\) & \(-271.73\pm 255.96\) & \(-502.58\pm 53.67\) & \(-401.39\pm 50.42\) & \(-436.38\pm 473.84\) \\ & fr.o.o.d. & \(-609.85\pm 35.22\) & \(-671.83\pm 51.84\) & \(-800.62\pm 53.95\) & \(-729.69\pm 51.58\) & \(-674.43\pm 520.75\) \\ \hline \multirow{3}{*}{_Finger Spin_} & ordinary & \(826.19\pm 152.61\) & \(\mathbf{714.06\pm 233.85}\) & \(528.32\pm 41.99\) & \(657.00\pm 411.07\) & \(509.56\pm 308.62\) \\ & o.o.d. & \(606.93\pm 19.60\) & \(\mathbf{609.21\pm 254.22}\) & \(460.79\pm 417.85\) & \(649.85\pm 405.97\) & \(551.61\pm 318.03\) \\ & fr.o.d. & \(663.73\pm 292.41\) & \(\mathbf{645.42\pm 248.03}\) & \(453.00\pm 396.03\) & \(508.76\pm 40.90\) & \(518.05\pm 393.88\) \\ \hline \multirow{3}{*}{_Quadruped Walk_} & ordinary & \(\mathbf{273.28\pm 03.87}\) & \(\mathbf{946.72\pm 31.43}\) & \(240.21\pm 27.57\) & \(223.13\pm 1127.00\) & \(331.45\pm 3019.44\) \\ & o.o.d. & \(217.45\pm 120.18\) & \(\mathbf{927.91\pm 51.53}\) & \(203.21\pm 38.98\) & \(207.17\pm 27.37\) & \(287.68\pm 324.54\) \\ \cline{1-1} & far o.o.d. & \(192.93\pm 124.30\) & \(\mathbf{904.72\pm 85.76}\) & \(164.79\pm 64.32\) & \(173.41\pm 136.94\) & \(286.00\pm 347.52\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluated episodic return achieved by HIB and baselines on DMC tasks. We report the mean and standard deviation for 100K steps. _o.o.d._ and _far o.o.d._ settings mean the randomization ranges in testing are larger than the range in training, indicating that the test environments are out-of-distribution (o.o.d) compared to the training environment. We refer to Appendix B.1 for the details.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \hline
**Terrain** & **Teacher** & **HIB (ours)** & **PPO-DR** & **Student** & **Dropper** \\ \hline Smooth Slope & \(21.30\pm 2.49\) & \(\mathbf{21.16\pm 2.69}\) & \(17.30\pm 2.67\) & \(21.13\pm 2.76\) & \(18.68\pm 2.96\) \\ Rough Slope & \(20.29\pm 2.76\) & \(19.94\pm 3.39\) & \(16.27\pm 2.89\) & \(\mathbf{20.24\pm 3.36}\) & \(17.35\pm 3.44\) \\ Stair up & \(20.13\pm 2.07\) & \(\mathbf{18.47\pm 3.58}\) & \(16.43\pm 2.38\) & \(18.34\pm 3.82\) & \(16.79\pm 3.22\) \\ Stair down & \(20.72\pm 3.66\) & \(\mathbf{18.35\pm 6.33}\) & \(17.36\pm 4.26\) & \(18.26\pm 6.43\) & \(16.47\pm 4.14\) \\ Obstacle & \(21.95\pm 2.40\) & \(\mathbf{21.58\pm 3.28}\) & \(17.34\pm 2.72\) & \(21.36\pm 3.54\) & \(20.14\pm 2.45\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluated episodic return achieved by HIB and baselines on the legged robot task, evaluated in the Isaac Gym simulator. Results are averaged over 1000 trajectories with different difficulties. Reward designs follow the open-source codebase [33].
results, HIB can be seamlessly combined with different RL algorithms and generalizes well across different domains and tasks, demonstrating the efficiency and high scalability of HIB.
### Visualization and Ablation Study
To investigate the ability of HIB in modeling the privileged knowledge, we visualize the latent representation learned by HIB and the strongest baseline _student_ via dimensional reduction with T-SNE [45]. We also visualize the true privileged information for comparison. The visualization is conducted in the _finger spin_ task and the results are given in Fig. 3. We find that the _student_ agent can only recover part of the privileged knowledge that covers the bottom left and upper right of the true privilege distribution. In contrast, the learned representation of HIB has almost the same distribution as the privileged information. This may help explain why our method outperforms other baselines and generalizes to o.o.d. scenarios without significant performance degradation.
We conduct an ablation study for each component of HIB to verify their effectiveness. Specifically, we design the following variants to compare with. (i) **HIB-w/o-ib**. This method only uses RL loss to update history encoder \(f_{\psi}\), which is similar to a standard recurrent neural network policy. (ii) **HIB-w/o-rl**. This variant only uses HIB loss to update the history encoder without the RL objective. (iii) **HIB-w/o-proj**. We drop the projectors in HIB and directly compute cosine similarity loss between \(z_{t}\) and \(\hat{e}_{t}\). (iv) **HIB-contra**. HIB-contra uses contrastive loss [39] instead of cosine similarity with a score function that assigns high scores for positive pairs and low scores for negative pairs.
From the result in Fig. 4, we observe that HIB-w/o-ib almost fails and HIB-w/o-rl can get relatively high scores, which signifies that both HIB loss and RL loss are important for the agent to learn a well-generalized policy, especially the HIB loss. The HIB loss helps the agent learn a historical representation that contains privileged knowledge for better generalization. Furthermore, the projectors and the momentum update mechanism are also crucial for learning a robust and effective representation. Moreover, HIB-contra performs well at the beginning but fails later, which indicates that the contrastive objective requires constructing valid negative pairs and learning a good score function, which is challenging in the general state-based RL setting.
### Real-world Application
To further evaluate the generalization performance of HIB in the real world, we deploy the HIB policy trained on the Legged Robot benchmark on a real-world A1 robot without any fine-tuning. Note that the policy runs directly on the A1 hardware and the local state is read or estimated from the onboard sensors and IMU, making real-world control noisy and challenging. Fig. 4 shows snapshots of the HIB agent traversing flat, grass, and pebble terrains. The agent generalizes to different challenging terrains with stable control behavior, and no failure occurred during the whole experimental process. These real experiments demonstrate that HIB can help bridge the sim-to-real gap without any additional tuning in real environments.
Figure 3: T-SNE visualization for the privilege representation and the learned latent representation of the history encoder in _finger spin_.

Figure 4: Comparison of different HIB variants in _quadruped walk_.
## 7 Conclusion and Future Work
We propose a novel privileged knowledge distillation method based on the Information Bottleneck to narrow the knowledge gap between local and oracle RL environments. In particular, the proposed two-stream model design and HIB loss help reduce the performance discrepancy given in our theoretical analysis. Our experimental results on both simulated and real-world environments show that (i) HIB learns robust representations to reconstruct privileged knowledge from local historical trajectories and boosts the RL agent's performance, and (ii) HIB can achieve improved generalizability in out-of-distribution environments compared to previous methods. In the future, we plan to extend our method to recover multi-modal privileged knowledge, which is more high-dimensional and complex. |
2307.16230 | An Unforgeable Publicly Verifiable Watermark for Large Language Models | Recently, text watermarking algorithms for large language models (LLMs) have
been proposed to mitigate the potential harms of text generated by LLMs,
including fake news and copyright issues. However, current watermark detection
algorithms require the secret key used in the watermark generation process,
making them susceptible to security breaches and counterfeiting during public
detection. To address this limitation, we propose an unforgeable publicly
verifiable watermark algorithm named UPV that uses two different neural
networks for watermark generation and detection, instead of using the same key
at both stages. Meanwhile, the token embedding parameters are shared between
the generation and detection networks, which makes the detection network
achieve a high accuracy very efficiently. Experiments demonstrate that our
algorithm attains high detection accuracy and computational efficiency through
neural networks. Subsequent analysis confirms the high complexity involved in
forging the watermark from the detection network. Our code is available at
\href{https://github.com/THU-BPM/unforgeable_watermark}{https://github.com/THU-BPM/unforgeable\_watermark}.
Additionally, our algorithm could also be accessed through MarkLLM
\citep{pan2024markllm} \footnote{https://github.com/THU-BPM/MarkLLM}. | Aiwei Liu, Leyi Pan, Xuming Hu, Shu'ang Li, Lijie Wen, Irwin King, Philip S. Yu | 2023-07-30T13:43:27Z | http://arxiv.org/abs/2307.16230v7 | # A Private Watermark for Large Language Models
###### Abstract
Recently, text watermarking algorithms for large language models (LLMs) have been proposed to mitigate the potential harms of text generated by LLMs, including fake news and copyright issues. However, current text watermarking algorithms require the key from the generation process for detection, making them susceptible to breaches and counterfeiting. In this work, we propose the first private watermarking algorithm, which extends current text watermarking algorithms by using two different neural networks for watermark generation and detection, respectively, rather than using the same key at both stages. Meanwhile, part of the parameters of the watermark generation and detection networks are shared, which allows the detection network to achieve high accuracy very efficiently. Experiments show that our algorithm ensures high detection accuracy with minimal impact on generation and detection speed, due to the small parameter size of both networks. Additionally, our subsequent analysis demonstrates the difficulty of reverting the watermark generation rules from the detection network. Our code and data are available at [https://github.com/THU-BPM/private_watermark](https://github.com/THU-BPM/private_watermark).
## 1 Introduction
With the development of current large language models (LLMs), many LLMs, like GPT4 (OpenAI, 2023) and Claude 1, can rapidly generate texts that are difficult to distinguish from human-written texts. This has led to numerous risks, such as the generation of a vast amount of false information on the Internet (Pan et al., 2023), and the infringement of copyrights of creative works (Chen et al., 2023). Therefore, texts generated by LLMs need to be detected and tagged.
Footnote 1: [https://claude.ai/chat](https://claude.ai/chat)
At present, some text watermark algorithms have been successful in making machine-generated texts detectable by adding implicit features during the text generation process that are difficult for humans to discover but easily detected by a specially designed method (Christ et al., 2023; Kirchenbauer et al., 2023). However, current text watermark algorithms are all public, which means the detection of watermarks requires the key from the watermark generation process. This allows attackers to easily remove and forge text watermarks using these public keys. Although Kirchenbauer et al. (2023) have suggested that the watermark detection process could be placed behind a web API to achieve the effect of private watermarking, this approach requires substantial server resources and robust designs against hacking (even social engineering). Moreover, the requirement that users upload their text carries an inherent risk of privacy breaches. If a text watermark algorithm could be designed in such a way that the watermark generation key could be hidden during the detection process, this would significantly mitigate the issues mentioned above.
In this work, we propose the first private watermark algorithm for LLMs. Our work is built on the common watermark paradigm, which splits the vocabulary into the green and red lists and then prefers
to choose tokens from the green list. The difference is we implement these concepts in a private way. In order to hide the detail of the watermark generation method during the detection process, we propose two separate neural networks for watermark generation and detection instead of using the same key for both stages. The privacy of our algorithm derives from the black-box nature of neural networks, that is, it's nearly impossible to infer the watermark generation detail from the parameters of the detection network. Also, we analyze the difficulty of reverting watermarking generation detail from the output of the detection network in section 4.5. However, in practice, training such a detection network from scratch requires a vast amount of data, and achieving a high accuracy is challenging due to the complexity of the problem. Therefore, we also propose a neural network for the watermark generation process. To achieve a high-accuracy detection network with relatively small data, we share the token embedding layers between the watermark generation network and the watermark detection network, which essentially provides some prior information to the detection network. Specifically, our watermark generation network takes the input of \(w\) (local window size) tokens and outputs whether the last token belongs to the green list, which differs from the origin method Kirchenbauer et al. (2023) of splitting the vocabulary into the green and red list based on the local window text's hash value and the secret key. Meanwhile, the text detection network directly inputs all the token lists from the text, with the output being a classification indicating whether the entire text contains the watermark added by the generation network.
While constructing the training data for the watermark detection network, the presence of the watermark is also determined by considering the labels (red or green) of the first 'window size - 1' tokens. These labels are generated by treating the text as a cyclic document connected from head to tail. In this way, we prevent attackers from easily deducing the watermarking rule by continually altering the last token and observing the output changes.
In our experiments, we demonstrate that the watermark detection algorithm could achieve a nearly 99% detection accuracy rate, which is only marginally inferior to the public watermark algorithm. Given that the detection accuracy of the public watermark algorithm represents our theoretical upper bound, this is already a remarkable result. Moreover, because the number of parameters of our watermark generation and detection networks is negligible compared to the large language model, they bring almost no additional computational burden to the text generation process. Subsequent experiments also illustrate the critical importance of sharing the token embedding layer between the generation and detection networks.
The main contributions of this work can be summarized as follows:
* We propose the first private watermark algorithm which utilizes two neural networks during the watermark generation and detection phase instead of using the same key in both stages. This makes the watermark more difficult to erase and counterfeit.
* The token embedding is shared between the watermark generation and watermark detection networks, which makes the training of the watermark detection network more efficient.
* Subsequent experiments indicate that our private watermark algorithm can achieve a detection accuracy only marginally inferior to the direct calculation of z-scores (public algorithm).

Figure 1: The illustration of our private watermark algorithm. The left part describes one step of generating tokens in a watermarked language model. The top-k tokens generated by the language model are processed by the watermark generator, which increases the probability of those belonging to the green list. The watermark generator accepts a window size of token inputs, determining whether the last token within this window belongs to the green list. The watermark detector takes the entire text as input and determines if the input text contains a watermark. It is important to note that the watermark generator and watermark detector share the embedding layer for each token.
## 2 Related work
As the quality of text generated by large language models (LLMs) improves, it becomes increasingly important to detect and tag machine-generated text. Up to this point, there are primarily two kinds of methods for detecting text produced by large language models. The first direction is the text watermarking method, which involves incorporating some implicit features (watermarks) into the text during generation, then detecting these texts using specially designed methods. The second approach keeps the text generation process unchanged and designs a classifier aimed at distinguishing between machine-generated and human-generated text. The following content will primarily introduce these two kinds of methods separately.
Current classifier-based detection methods usually directly employ a binary classification model. Zhan et al. (2023) utilized generated text from GPT2 (Radford et al., 2019), BART (Lewis et al., 2019), and GPT3.5-turbo 2 to fine-tune the _Roberta-large_ (Liu et al., 2019) model, resulting in a highly accurate GPT text detector. Similarly, Mireshghallah et al. (2023) discovered that smaller language models perform well for the detection of machine-generated text. In an effort to improve the robustness of detection algorithms, Su et al. (2023) incorporated log-rank information from language models into the detector as a crucial feature. Meanwhile, Hu et al. (2023) introduced a paraphraser and utilized adversarial learning to enhance robustness. To distinguish text from more LLMs, Wu et al. (2023) utilized the prior information of the model's next-token probabilities to design a better detection model. However, whether machine-generated text can fundamentally be detected remains an open question. Chakraborty et al. (2023) believe that with enough data collection, it is possible to train a good detector. On the contrary, Sadasivan et al. (2023) argue that as language models become more complex and the distance between human and AI-generated text decreases, the optimal detector's performance may be only slightly better than a random classifier. In conclusion, some classifier-based detection methods can achieve impressive results. However, due to their limited explainability, their performance in real-world scenarios may still be doubted.
Footnote 2: [https://chat.openai.com](https://chat.openai.com)
Compared to the classifier-based methods, text watermarking is more explainable due to the injected implicit features in the text. There are typically two categories of text watermarking methods. The first is to add a watermark to the existing text. For example, Abdelnabi and Fritz (2021) designed a data-hiding network to embed watermark information in the text, and utilized a data-revealing network to recover the embedded information. Yoo et al. (2023) injected the watermark by substituting some words in the text. However, adding a watermark to the existing text struggles to keep the semantics of the text unchanged which limits its use in real-world scenarios. Another line of methods is injecting the watermark during the text decoding process. Christ et al. (2023) used pseudorandom numbers to sample the next token and subsequently detected the watermark by observing the correlation between the preset pseudorandom numbers and the generated tokens. Kirchenbauer et al. (2023) divided the vocabulary into red and green lists and preferred to generate tokens from the green list. Zhao et al. (2023) enhanced the robustness of this approach by using a global fixed red-green vocabulary. Lee et al. (2023) designed a watermarking method for low-entropy code generation scenarios. However, the above methods are all public, which means the key used to generate the watermark is required during detection. This makes the watermark susceptible to removal and counterfeiting. In this work, we propose the first private text watermarking method to alleviate these issues.
## 3 Problem definition
To facilitate subsequent discussions, this section introduces the key concepts used in this work: language models and the watermarking algorithm.
**A language model**\(\mathcal{M}\) is essentially a function for the next token prediction, which is typically implemented using neural networks. Given an input sequence \(\mathbf{x}=[x_{0}....x_{n-1}]\), it outputs the
probability of the next token \(x_{n}\) over the vocabulary \(\mathcal{V}\): \(\mathbf{p}_{n}:=P_{\mathcal{M}(\mathbf{x})}[x_{n}=\cdot|\mathbf{x}_{1:n-1}]\). The next token to be generated is then selected from this probability distribution, which can be achieved through sampling decode, choosing the token with the highest probability (greedy decode), or using other decode algorithms such as beam search to select a list of tokens with the highest probability.
**A watermarking algorithm** is the combination of two interconnected algorithms: the watermark generation algorithm and the watermark detection algorithm.
* **The watermark generation algorithm** could be viewed as a slight adjustment to the probability distribution of the language model. We can use \(\hat{\mathcal{M}}\) to represent the language model that includes the text watermark. Formally, the probability of the next token prediction can be represented as follows: \(\mathbf{p}_{n}:=P_{\hat{\mathcal{M}}(\mathbf{x})}[x_{n}=\cdot|\mathbf{x}_{1:n-1}]\).
* **The watermark detection algorithm** accepts a text \(\mathbf{x}=[x_{0}....x_{n}]\) as input and outputs whether the input sentence contains a watermark. The watermark detection model Detect and the watermarked language model \(\hat{\mathcal{M}}\) correspond to each other one-to-one.
## 4 Proposed Method
As illustrated in Figure 1, the private watermarking algorithm utilizes two distinct neural networks rather than sharing the same key for the watermark generation and detection stages. In the subsequent sections, we will first introduce the decoding step of the watermarked language model (Section 4.1), followed by the details of the watermark generation network (Section 4.2). Then the principles of watermark detection are introduced (Section 4.3) as well as the specifics of the watermark detection network (Section 4.4). Finally, we analyze the privacy of the entire algorithm in detail (Section 4.5).
### Watermarked Large Language Model
As shown in Algorithm 1 for watermark generation, given the input \(\mathbf{x}=[x_{0}....x_{n-1}]\), we first generate the next token's logits, \(\mathbf{p}_{n}:=P_{\mathcal{M}(\mathbf{x})}[x_{n}=\cdot|\mathbf{x}_{1:n-1}]\), through the target language model \(\mathcal{M}\). Then we select the top K tokens with the highest probability from the logits and use the watermark generation network \(\mathbf{W}\) to determine whether they belong to the green list. The probability of these green list tokens is then increased by \(\delta\), while keeping the probability of other tokens unchanged. The modified logits serve as the output of the watermarked language model \(\hat{\mathcal{M}}\).
Note that we are not required to label all tokens in the vocabulary during each generation step. In the top-K sampling shown in Algorithm 1, only the top K tokens are tagged as green or red. Meanwhile, for the scenario of beam search, the number of tokens that need to be labeled is dynamic. Suppose the beam size is B; the first step is to identify the B-th largest score \(S_{B}\). Subsequently, all tokens with scores greater than \(S_{B}-\delta\) are required to be tagged by the watermark generation network.
```
1:Input: a watermark generation network \(N\), a fixed number \(K\), watermark strength \(\delta\), a language model \(\mathcal{M}\), previous generated text \(\mathbf{x}=[x_{0}....x_{n-1}]\), local window size \(w\).
2:Generate the next token logits \(\mathbf{p}_{n}:=P_{\mathcal{M}(\mathbf{x})}[x_{n}=\cdot|\mathbf{x}_{1:n-1}]\).
3:Get the top K logits \(topK(\mathbf{p}_{n})\) and their ids \(topK(\mathbf{x_{n}})\).
4:for \(x_{ni}\in topK(\mathbf{x_{n}})\) do
5:if \(N([x_{n-w+1},...,x_{ni}])=1\) then
6: Add the token \(x_{ni}\) to the "green list" \(G\).
7:end if
8:end for
9: Define a new language model \(\hat{\mathcal{M}}\) where given input \(\mathbf{x}=[x_{0}....x_{n-1}]\), the resulting logits satisfy \[\hat{\mathbf{p}}_{n}[i]:=\mathbf{p}_{n}[i]+\delta\mathbf{1}(i\in G),\] where \(\mathbf{1}(\cdot)\) is the indicator function.
10:Output: watermarked language model \(\hat{\mathcal{M}}\).
```
**Algorithm 1** Watermark Generation Step (Top K sampling)
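A hedged Python sketch of the decoding step above is given below; `generator_net` stands for the watermark generation network (assumed to return 1 for green tokens), and the function name, default values, and the assumption that at least \(w-1\) tokens have already been generated are ours.

```python
import torch


def watermarked_logits(logits, prev_tokens, generator_net, K=20, delta=2.0, w=5):
    """One decoding step of Algorithm 1 (top-K variant).

    logits: (|V|,) next-token logits p_n from the language model M.
    prev_tokens: list of already generated token ids [x_0, ..., x_{n-1}]
                 (assumed to contain at least w-1 tokens).
    generator_net: callable mapping a window of w token ids to 1 (green) or 0 (red).
    Returns the logits of the watermarked model \\hat{M}.
    """
    _, topk_ids = torch.topk(logits, K)
    prefix = prev_tokens[-(w - 1):]              # last w-1 generated tokens
    out = logits.clone()
    for tok in topk_ids.tolist():
        if generator_net(prefix + [tok]) == 1:   # candidate belongs to the green list
            out[tok] += delta                    # boost green-list tokens by delta
    return out


# usage (hypothetical): new_logits = watermarked_logits(p_n, x[:n], W, K=20, delta=2.0)
# the next token is then sampled from softmax(new_logits) as usual.
```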
### Watermark Generation Network
The structure of our watermark generation network is illustrated in the middle part of figure 1. The embedding of each input token is first generated by the shared embedding network \(\mathbf{E}\). Then, the embeddings within a local window \(w\) are concatenated and fed into the subsequent classification network \(\mathbf{C}\) to determine if the last token belongs to the green list:
\[\mathbf{W}(\mathbf{x})=\mathbf{C}([\mathbf{E}(x_{n-w+1}),....,\mathbf{E}(x_{n})]). \tag{1}\]
The embedding network is a fully connected network and its input is the binary representation of token IDs, where the number of encoding bits depends on the size of the vocabulary. For example, GPT2 Radford et al. (2019) has a vocabulary size \(|\mathcal{V}|\) of 50,000, which requires 16 bits for its vocabulary representation. Common language models typically require bits between 15 and 17 for binary vocabulary representations.
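To illustrate the binary input described above, the following sketch (our own, with assumed layer sizes) converts token IDs to fixed-width bit vectors and feeds a window of their embeddings through a fully connected classifier.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 50_257                       # e.g. GPT-2
N_BITS = (VOCAB_SIZE - 1).bit_length()    # 16 bits for this vocabulary size


def token_ids_to_bits(ids):
    """Map a LongTensor of token ids of any shape to (..., N_BITS) float bit vectors."""
    bits = torch.arange(N_BITS, device=ids.device)
    return ((ids.unsqueeze(-1) >> bits) & 1).float()


class SharedEmbedding(nn.Module):
    """E: bit vector -> dense embedding, shared by the generator and the detector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_BITS, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ids):
        return self.net(token_ids_to_bits(ids))


class GeneratorNet(nn.Module):
    """W: embeddings of a window of w tokens -> green(1)/red(0) score for the last token."""
    def __init__(self, embed, w=5, dim=64):
        super().__init__()
        self.embed, self.w = embed, w
        self.classifier = nn.Sequential(nn.Linear(w * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, window_ids):                  # (batch, w) token ids
        e = self.embed(window_ids).flatten(1)       # concatenate the w embeddings
        return torch.sigmoid(self.classifier(e))    # probability that the last token is green
```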
To facilitate the subsequent watermark detection, the proportion of green labels generated by the watermark generation network is required to remain constant. Specifically, for any local window prefix \([x_{n-w+1},\ldots,x_{n-1}]\), the probability that \(x_{n}\) belongs to the green list is always a fixed value \(\gamma\):
\[\forall[x_{n-w+1},\ldots,x_{n-1}],P(\mathbf{W}([x_{n-w+1},\ldots,x_{n-1},x_{n} ])=1)=\gamma, \tag{2}\]
where the \(\gamma\) has the same meaning as the green list ratio in the previous public watermark algorithms Kirchenbauer et al. (2023); Zhao et al. (2023).
However, due to the black-box nature of neural networks, it is challenging to get a fixed ratio by pre-defined parameters. We achieve this by constructing a training dataset strictly with the desired proportion \(\gamma\). It's worth noting that this method does not guarantee that the ratio of green to red will be strictly the same under every local window. Still, the expected value of this ratio is \(\gamma\), and there is also a standard deviation \(\sigma\). We will show the standard deviation \(\sigma\) only has a very slight impact on the final detection process in the following section 4.3.
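One possible way to realize this fixed-proportion construction, not necessarily the authors' exact procedure, is to label exactly a \(\gamma\) fraction of the sampled last tokens as green for every randomly drawn prefix:

```python
import random


def build_generator_dataset(vocab_size, window_size, gamma,
                            n_prefixes, tokens_per_prefix, seed=0):
    """Return (window, label) pairs where, for each random prefix of w-1 tokens,
    exactly a gamma fraction of the sampled last tokens are labeled green (1)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_prefixes):
        prefix = [rng.randrange(vocab_size) for _ in range(window_size - 1)]
        last_tokens = rng.sample(range(vocab_size), tokens_per_prefix)
        n_green = int(gamma * tokens_per_prefix)
        greens = set(last_tokens[:n_green])          # first gamma fraction -> green
        for tok in last_tokens:
            data.append((prefix + [tok], 1 if tok in greens else 0))
    rng.shuffle(data)
    return data


# usage: data = build_generator_dataset(50257, window_size=5, gamma=0.5,
#                                       n_prefixes=10000, tokens_per_prefix=64)
```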
### Watermark Detection
In this section, we introduce how to detect a given watermark using the z-value test. Then in the next section, the training data of the watermark detection network would be tagged by the z-value calculation.
If the vocabulary is divided into the green and red lists according to the fixed ratio \(\gamma\), then the expected number of green-list tokens appearing in a normal text of length T would be \(\gamma T\), with a variance of \(\gamma(1-\gamma)T\). In this case, we can adopt the z-value test method proposed by Kirchenbauer et al. (2023). If the z-score from the following formula is greater than a certain threshold, the text is considered to contain a watermark:
\[z=(|s|_{G}-\gamma T)/\sqrt{T\gamma(1-\gamma)}. \tag{3}\]
However, based on the previous section, the watermark generation network cannot guarantee a fixed ratio \(\gamma\); we can only obtain a ratio \(\hat{\gamma}\), which has an expectation \(\gamma\) and a standard deviation \(\sigma\). It is necessary to amend the aforementioned formula under these circumstances. The expected number of green tokens is still \(\gamma T\), but the variance changes. According to the law of total variance, we can use the following formula to calculate the new variance:
\[Var(\gamma T)=E[Var(\gamma T|\gamma)]+Var(E[\gamma T|\gamma])=\gamma(1-\gamma)T +\sigma^{2}T, \tag{4}\]
and the new z-score could be calculated as follows:
\[z=(|s|_{G}-\gamma T)/\sqrt{\gamma(1-\gamma)T+\sigma^{2}T}. \tag{5}\]
Since our standard deviation \(\sigma\) is very small in practice, the increase in variance, \(\sigma^{2}T\), is also quite minimal. In subsequent experiments, we first estimate the variance of the generation network and then include it in the z-score calculation.
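The corrected test of Eq. (5) is straightforward to compute once the number of green tokens is known; a small sketch follows, where the decision threshold is a user-chosen value rather than one prescribed by the text.

```python
import math


def watermark_z_score(num_green, total_tokens, gamma, sigma=0.0):
    """z-score of Eq. (5); sigma=0 recovers the standard test of Eq. (3)."""
    expected = gamma * total_tokens
    variance = gamma * (1.0 - gamma) * total_tokens + (sigma ** 2) * total_tokens
    return (num_green - expected) / math.sqrt(variance)


def is_watermarked(num_green, total_tokens, gamma, sigma=0.0, threshold=4.0):
    """Flag a text as watermarked when the z-score exceeds a chosen threshold
    (the default value here is illustrative, not prescribed by the text)."""
    return watermark_z_score(num_green, total_tokens, gamma, sigma) > threshold
```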
### Watermark Detection Network
While the z-value test is effective in detecting watermarks within a text, it has the drawback of requiring the label (green or red) of each token during the process. This makes it easier for the watermark to be removed or forged based on this information. To keep this information private, we innovatively propose a watermark detection neural network, which only accepts a sequence of text as input and outputs whether the text contains a watermark or not.
The detailed structure of our watermark detection network is illustrated in the right part of figure 1. The input to the entire network is the ID sequence of all tokens in the target sentence, where an output of 1 indicates the presence of a watermark in the entire sentence, and 0 signifies its absence.
Specifically, all tokens first pass through a shared embedding network. The parameters of this token embedding network are identical to those of the watermark generation network, and will not be fine-tuned in the following training process. The motivation behind this novel approach is that the shared embedding gives prior information to the detection network and substantially reduces the difficulty of training the watermark detection network.
After obtaining the embedding of each token, we stack the embeddings of all tokens into a sequence and feed it into an LSTM (Long Short-Term Memory) network. The LSTM network then outputs a binary classification indicating whether the text contains a watermark:
\[\mathbf{D}(\mathbf{x})=\mathbf{LSTM}([\mathbf{E}(x_{0}),\ldots,\mathbf{E}(x_{n})]). \tag{6}\]
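A minimal PyTorch sketch of this architecture is given below; the layer sizes, the single-logit head, and the class name are illustrative assumptions, while the frozen shared embedding and the LSTM classifier follow the description above.

```python
import torch
import torch.nn as nn

class WatermarkDetector(nn.Module):
    """Shared (frozen) token embedding followed by an LSTM binary classifier."""

    def __init__(self, shared_embedding: nn.Embedding, hidden_size=64, num_layers=2):
        super().__init__()
        self.embedding = shared_embedding
        for p in self.embedding.parameters():
            p.requires_grad = False                 # the shared layer is not fine-tuned
        self.lstm = nn.LSTM(shared_embedding.embedding_dim, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)             # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(emb)
        return torch.sigmoid(self.head(h_n[-1]))    # probability that the text is watermarked
```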
The entire watermark detection network could be viewed as a discriminator to judge whether the z-value of a given input text is greater or less than a certain threshold. Therefore, we use equation 5 with a certain threshold to construct the training dataset. Specifically, during the training dataset construction, we sample texts with different proportions of green tokens and then assign a label of 0 or 1 depending on whether the calculated z-value exceeds a certain threshold.
It should be noted that the input for the training of the entire watermark detection network does not need to be a meaningful text - any number ID list is acceptable. Therefore, the detection model trained in this way theoretically will not encounter out-of-domain issues. We will further illustrate this point in subsequent experiments.
Moreover, under normal circumstances, the first \(w-1\) tokens of a text sequence are not labeled as red or green. To make it more difficult for attackers to infer the watermark generation rules from the watermark detection network, we also label the first \(w-1\) tokens by treating the text as a cyclic document connected head-to-tail. For instance, the label for \(x_{0}\) can be determined through \(x_{n-w+1},\ldots,x_{n}\), as sketched below. The labels obtained for these first \(w-1\) tokens are essentially random, but since the window size is much smaller than the overall length of the text, this can be neglected in the overall watermark detection.
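The head-to-tail labelling can be sketched with simple modular indexing, as below; whether the current token belongs to its own window is an implementation detail not fixed by this illustration.

```python
def preceding_window(token_ids, i, w):
    """Return the w tokens preceding position i, wrapping the text around head-to-tail."""
    n = len(token_ids)
    return [token_ids[(i - w + k) % n] for k in range(w)]

# For i = 0 and w = 5 this returns the last five tokens of the text,
# which are then fed to the watermark generation network to label x_0.
```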
### Analysis of the Privacy
To demonstrate that our private watermark algorithm could effectively hide the process of watermark generation, we analyze the difficulty of reverting the watermark generation rules from the watermark detection network.
A more detailed definition of the reverting problem is provided here: given the structure and parameters of the watermark detection network, obtain as many watermark generation rules as possible, \([x_{i}....x_{i+w-1}]\to\) 0 (red) or 1(green).
Considering the black-box nature of neural networks, inferring watermark rules from the parameters of the detection network is nearly impossible, i.e., attackers can only infer them from the output of the detection network. To this end, attackers need to repeatedly modify the input and observe how the output logits change. Each time a token \(x_{i}\) is changed to \(x_{j}\) in the text, the labels (green or red) of the tokens within a window around it may change, and the only information attackers can obtain is an inequality between the numbers of green tokens of the two versions, as follows (assuming the probability of the text being watermarked decreases):
\[\mathrm{Num}(\{x_{i-w+1},\ldots,x_{i}\},\ldots,\{x_{i},\ldots,x_{i+w-1}\})>\mathrm{Num}(\{x_{i-w+1},\ldots,x_{j}\},\ldots,\{x_{j},\ldots,x_{i+w-1}\}), \tag{7}\]
where \(\mathrm{Num}\) is a function to count the number of green labels within a group.
First, we give a lower bound on the number of queries to the detection network. Given the window size \(w\), the total number of generation rules is \(|V|^{w}\); since a single query to the detection network obviously cannot reveal more than one generation rule, there is no way to infer all the rules with fewer than \(|V|^{w}\) executions of the detection network.
However, this lower bound is very loose. In practice, it is already very difficult for attackers to obtain a clear inequality of the form of Equation 7, because the window size is unknown to them and the logits change of the detection network is not perfectly reliable. As a result, the attacker has to pay a considerable computational cost even to recover a single rule, and the actual number of required queries is therefore far greater than \(|V|^{w}\).
It can be seen that a larger window size makes the watermark generation rules more difficult to decipher. A method that uses a globally fixed red-green list, as adopted by Zhao et al. (2023), is therefore not suitable for the private watermark algorithm.
Instead of recovering the watermark generation rules from the watermark detection network, Sadasivan et al. (2023) proposed a method to infer the green list by statistically analyzing the pair frequencies of large amounts of generated watermarked text. However, their method is unlikely to be effective against our private watermarking method. First, Sadasivan et al. (2023) assume that the local window size is 2, whereas the window size we use is unknown to the attacker; if a search is conducted over all possible window sizes, the computational cost becomes extremely high, as the required computation grows exponentially with the window size. Second, their approach assumes that the analysis can be conducted with a fixed set of N = 181 common tokens. However, in actual scenarios, since attackers cannot access the watermarked language model (otherwise there would be no need for an attack), they cannot limit its output tokens to a fixed token set.
## 5 Experiment
In this section, we validate the effectiveness of the private watermark algorithm through extensive experiments.
### Experiment Setup
We utilize GPT-2 (Radford et al., 2019), OPT-1.3B (Zhang et al., 2022), and OPT-2.7B (Zhang et al., 2022) as the models for generating watermarks. For each model, we adopt both top-K sampling and beam search methods for text generation. The specific details of the two sampling methods have already been mentioned in section 4.2.
\begin{table}
\begin{tabular}{l l r r r r r r r r} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{Methods / Dataset} & \multicolumn{3}{c}{C4} & \multicolumn{3}{c}{Dbpedia Class} \\ \cline{3-10} & & FPR & FNR & TPR & TNR & FPR & FNR & TPR & TNR \\ \hline \multirow{3}{*}{GPT2} & Pub top-K. & 0.4 & 2.4 & 97.6 & 99.6 & 0.6 & 1.0 & 99.0 & 99.4 \\ & Pri top-K. (ours) & 0.3 & 3.0 & 97.0 & 99.7 & 1.0 & 2.0 & 98.0 & 99.0 \\ & Pub beam search. & 0.4 & 3.2 & 96.8 & 99.6 & 0.6 & 0.6 & 99.4 & 99.4 \\ & Pri beam search. (ours) & 0.2 & 4.8 & 95.2 & 99.8 & 1.0 & 2.8 & 97.2 & 99.0 \\ \hline \multirow{3}{*}{OPT 1.3B} & Pub top-K. & 0 & 3.8 & 96.2 & 100 & 0 & 6.2 & 93.8 & 100 \\ & Pri top-K. (ours) & 0.2 & 3.9 & 96.1 & 99.8 & 0 & 11.2 & 88.8 & 100 \\ & Pub beam search. & 0 & 1.8 & 98.2 & 100 & 0 & 0.4 & 99.6 & 100 \\ & Pri beam search. (ours) & 0 & 3.2 & 96.8 & 100 & 0 & 2.0 & 98.0 & 100 \\ \hline \multirow{3}{*}{OPT 2.7B} & Pub top-K. & 0 & 3.4 & 96.6 & 100 & 0 & 8.0 & 92.0 & 100 \\ & Pri top-K. (ours) & 0 & 5.6 & 94.4 & 100 & 0 & 13.9 & 86.1 & 100 \\ \cline{1-1} & Pub beam search. & 0 & 1.2 & 98.8 & 100 & 0 & 1.0 & 99.0 & 100 \\ \cline{1-1} & Pri beam search. (ours) & 0 & 2.0 & 98.0 & 100 & 0 & 3.3 & 96.7 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Empirical error rates for watermark detection using top-K sampling and beam search. Each row is averaged over \(\sim 4000\) generated sequences of length \(T=200\pm 5\). The table compares our proposed private watermark algorithm (prefixed with Pri) and the public watermark algorithm that directly calculates the z-score (prefixed with Pub). The hyperparameters adopted in the table are uniformly set as \(\delta=2.0\), \(\gamma=0.5\), and z-value threshold \(4.0\).
Meanwhile, we use the C4 (Raffel et al., 2020) and Dbpedia Class datasets (Gangemi et al., 2012) to evaluate our watermark algorithm. Specifically, following the approach of Kirchenbauer et al. (2023), we selected texts with length 30 from these datasets as prompts, and let the language models perform completions given these prompts. For each prompt, the models would generate \(T=200\pm 5\) tokens. We used the completions from the original datasets as the non-watermarked text (human text), and the text generated by our models as the watermarked text. The effectiveness was evaluated based on the ratio of false positive errors (human text falsely flagged as watermarked) and false negative errors (watermarked text not detected).
Unless specified otherwise, the hyperparameters used in the experiment are as follows: for the generator network, the ratio of green labels generated is 0.5, the window size is 5, the layer number of the token embedding network is 5, and the value of \(\delta\) is set to 2. For the detector network, the value of z used in training is 4, and the number of LSTM network layers is 2. When using the top-K sampling method, the K is set to 20 and the beam size of the beam search method is set to 8.
### Main Results
Table 1 demonstrates the detection accuracy of the private watermarking algorithm. We refer to the method which utilizes the label of each token to calculate the z-value (section 4.3) as the public watermarking algorithm and use this algorithm as the baseline for our comparison. The hyper-parameters used are \(\delta=2.0\) and \(\gamma=0.5\). The detection network is trained following a z-value threshold of 4.
As illustrated in Table 1, similar to the public watermarking algorithm, our private watermarking algorithm scarcely produces false positive results (both 0.2% on average), meaning that human text is almost never mistakenly identified as watermarked text. Moreover, in most scenarios, the false negative probability is only marginally higher than that of the public watermarking algorithm, by an average of \(1.3\%\). Considering that the performance of the public watermarking algorithm represents a strict upper bound for our method, this is a strong result. In some special cases where the z-value threshold is not properly selected (using top-K sampling with the OPT 1.3B and 2.7B models on the Dbpedia Class dataset), even the public detection algorithm generates more false negative cases, and our private detection method also decreases in performance. With a properly selected z-value threshold, the private watermarking algorithm exhibits similar performance across different decoding methods, various language models, and disparate domain datasets, which demonstrates its strong generalizability and adaptability.
### Ablation study
To further analyze the private watermark algorithm, we conduct an ablation study in Table 2 to illustrate the effectiveness of shared token embedding for the detection network. Specifically, the experiment is conducted on the GPT2, OPT1.3B, and OPT2.7B language models on the C4 and
\begin{table}
\begin{tabular}{l l r r r r r r r r} \hline \hline \multicolumn{2}{c}{Methods / Datasets} & \multicolumn{5}{c}{C4} & \multicolumn{5}{c}{Dbpedia Class} \\ \cline{3-10} & FPR & FNR & TPR & TNR & FPR & FNR & TPR & TNR \\ \hline \multirow{3}{*}{GPT2} & w. shared-layer & 0.3 & 3.0 & 97.0 & 99.7 & 1.0 & 2.0 & 98.0 & 99.0 \\ & w/o shared-layer & 28.8 & 23.2 & 76.8 & 71.2 & 47.7 & 24.4 & 75.6 & 52.3 \\ & w ft shared-layer & 10.8 & 0.6 & 99.4 & 89.2 & 21.3 & 2.1 & 97.9 & 78.7 \\ \hline \multirow{3}{*}{OPT 1.3B} & w. shared-layer & 0.2 & 3.9 & 96.1 & 99.8 & 0 & 11.2 & 88.8 & 100 \\ & w/o shared-layer & 26.9 & 7.7 & 92.3 & 73.1 & 34.0 & 14.1 & 85.9 & 66.0 \\ & w ft shared-layer & 0.8 & 2.8 & 97.2 & 99.2 & 28.6 & 2.8 & 97.2 & 71.4 \\ \hline \multirow{3}{*}{OPT 2.7B} & w. shared-layer & 0 & 5.6 & 94.4 & 100 & 0 & 13.9 & 86.1 & 100 \\ & w/o shared-layer & 55.4 & 6.3 & 93.7 & 44.6 & 6.4 & 29.4 & 70.6 & 93.6 \\ \cline{1-1} & w ft shared-layer & 13.0 & 1.7 & 98.3 & 87.0 & 0 & 17.4 & 82.6 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The table presents an ablation study on the shared layer, contrasting the effects of using a shared layer (w. shared layer), not using a shared layer (w.o. shared layer), and fine-tuning the shared layer (w. ft shared layer). The experiment was conducted under the hyperparameter settings of \(\delta=2.0\), \(\gamma=0.5\), and z-value=4.
DBPEDIA CLASS datasets. We have presented results under three different settings: using shared token embedding, not using shared token embedding, and fine-tuning shared token embedding.
As seen from Table 2, without the shared layer, the proportions of false negatives (watermarked text not detected) and false positives (human text falsely flagged as watermarked) increase dramatically, by an average of \(15.1\%\) and \(32.0\%\) respectively. This renders the entire detection algorithm almost inapplicable. Meanwhile, although fine-tuning the shared layer reduces the occurrence of false negatives, it also introduces some instances of wrongly tagged human text. Since mistakenly recognizing human text as watermarked has more severe consequences, we eventually adopt the method without fine-tuning the shared layer.
### Hyper-parameters Analysis
To better understand how the private watermark algorithm works, we perform a series of analyses on several key hyper-parameters. Specifically, we tested the influence of different z value thresholds and \(\delta\) values in Figure 2 (a) and Figure 2 (b) respectively. When analyzing different z value thresholds, the value of \(\delta\) is set to 2.0, and when analyzing different \(\delta\) values, the value of z is set to 4. We use the LLaMA 13B model Touvron et al. (2023) to calculate the perplexity (PPL) value in Figure 2(b).
From Figure 2 (a), it can be observed that as the z-value threshold increases, the number of false positives gradually decreases; at a z-value of 4, there are almost no false positive cases. In contrast, the rate of false negatives tends to increase with the z-value threshold. As a trade-off, we select a z-value threshold of 4 in this work. Additionally, as shown in Figure 2 (b), as the value of \(\delta\) increases, the accuracy of the detection network also increases; however, the perplexity (PPL) of the generated text rises as well. Weighing these factors, we finally chose \(\delta=2\), which maintains detection accuracy without excessively impacting text quality.
### Error Analysis
To better analyze the error cases of the private watermark algorithm, we present the z-score distributions of both the human text and the watermarked text, as well as the detection accuracy of the algorithm in different z-score ranges, in Figure 3(a). These results are generated by GPT2 on the C4 dataset. As can be observed from Figure 3(a), the human text and the watermarked text exhibit normal-like distributions centered around 0 and 9, respectively. The detection accuracy of the private watermark algorithm is relatively low only around the z-score threshold of 4, while it is almost 100% in other ranges. This suggests that for inputs with highly certain labels, our algorithm is quite reliable.
Figure 2: The left figure depicts the variations in the classification performance of private watermark detection under different z-value thresholds, while the right figure illustrates the changes in the detection accuracy and the PPL of generated texts with different \(\delta\) values.
### Watermark Generation Network Analysis
Based on our analysis in section 4.3, it is critical for the watermark generation network to produce a stable label ratio, because the modified z-score calculation (equation 5) depends on the variance of the label ratio. Therefore, in this section, we calculate the actual mean and variance of the labels generated by the watermark generation network.
Specifically, we train the watermark generation network using 5000 data items with a strict 0.5 ratio of green labels, using the Adam optimizer Kingma and Ba (2014) with a learning rate of 0.001. As can be seen from figure 3 (b), the ratio of green labels gradually approaches the target value 0.5 as the training loss decreases, and its standard deviation also gradually diminishes. Ultimately, the standard deviation can be controlled within 0.02, corresponding to a variance of less than \(4\times 10^{-4}\). According to equation 5, the term \(\sigma^{2}T\) can then be nearly neglected in the final z-value calculation. We adopt the value 0.02 for \(\sigma\) in the revised z-score calculation.
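The mean and standard deviation reported above can be estimated with a short Monte-Carlo loop such as the sketch below; random windows stand in for real contexts, and `generator_net` is assumed to map a batch of token-id windows to green probabilities.

```python
import torch

@torch.no_grad()
def label_ratio_stats(generator_net, vocab_size, w, n_texts=200, text_len=200):
    """Mean and standard deviation of the green-label fraction over sampled texts."""
    fractions = []
    for _ in range(n_texts):
        windows = torch.randint(0, vocab_size, (text_len, w))   # one pseudo-text of text_len windows
        green = (generator_net(windows) > 0.5).float()
        fractions.append(green.mean())
    fractions = torch.stack(fractions)
    return fractions.mean().item(), fractions.std().item()
```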
### Time Complexity Analysis
Due to the private watermark generation process employing an additional watermark generation network, there is a risk of introducing an extra computational burden. Therefore, we analyze the time complexity of the watermark generation process in this section.
First, we compare the number of parameters in the watermark generation network and the language model. Our watermark generation network only consists of 43k parameters, whereas GPT2, OPT1.3B, and OPT2.7B have 124M, 1.3B, and 2.7B parameters respectively. It is evident that compared to the large language models with an enormous number of parameters, the number of parameters in our watermark generation network can be considered almost negligible.
Then we analyze the actual running time. On a single Tesla V100 GPU, decoding a token in GPT2 requires 30ms, whereas incorporating our watermark generation network only adds an average of 1ms to the cost. For models with a larger number of parameters, such as OPT1.3B and OPT2.7B, the influence on the decoding time is even smaller. Hence, our watermark generation algorithm does not cause significant additional computational overhead.
Figure 3: The left figure is an error analysis, illustrating the detection accuracy for data within various ranges of z-scores. The right figure depicts the changes in loss and the mean proportion (\(\pm\) its standard deviation) of green labels generated by the watermark generator network during the training process.
## 6 Conclusion
In this work, we have proposed the first private watermarking algorithm. Unlike previous works that detect watermarks by calculating the z-score using the key from the watermark generation phase, we detect watermarked text with a trained detection network. To facilitate the training of the watermark detection network, we also employ a neural network during the watermark generation phase and share token embeddings between the two networks. As demonstrated in the experiments, the detection accuracy achieved by our private watermarking algorithm is only slightly lower than that of the direct z-value calculation method, and further experiments demonstrate the strong adaptability of our algorithm. In future work, the details of watermark generation and detection can be further optimized; enhancing the robustness of our private watermarking method is also an important direction.
|
2306.16119 | A Hierarchical Architecture for Optimal Unit Commitment and Control of
an Ensemble of Steam Generators | A hierarchical architecture for the optimal management of an ensemble of
steam generators is presented. The subsystems are coordinated by a multilayer
scheme for jointly sustaining a common load. The high level optimizes the load
allocation and the generator schedule, considering activation dynamics by a
hybrid model. At the medium level, a robust tube-based model predictive control
(MPC) tracks a time-varying demand using a centralized--but aggregate--model,
whose order does not scale with the number of subsystems. A nonlinear
optimization, at medium level, addresses MPC infeasibility due to abrupt
changes of ensemble configuration. Low-level decentralized controllers
stabilize the generators. This control scheme enables the dynamical
modification of the ensemble configuration and plug and play operations.
Simulations demonstrate the approach potentialities. | Stefano Spinelli, Marcello Farina, Andrea Ballarino | 2023-06-28T11:44:00Z | http://arxiv.org/abs/2306.16119v1 | A Hierarchical Architecture for Optimal Unit Commitment and Control of an Ensemble of Steam Generators
###### Abstract
A hierarchical architecture for the optimal management of an ensemble of steam generators is presented. The subsystems are coordinated by a multi-layer scheme for jointly sustaining a common load. The high level optimizes the load allocation and the generator schedule, considering activation dynamics by a hybrid model. At medium level, a robust tube-based Model Predictive Control (MPC) tracks a time-varying demand using a centralized - but aggregate - model, whose order does not scale with the number of subsystems. A nonlinear optimization, at medium level, addresses MPC infeasibility due to abrupt changes of ensemble configuration. Low-level decentralized controllers stabilize the generators. This control scheme enables the dynamical modification of the ensemble configuration and plug and play operations. Simulations demonstrate the approach potentialities.
Hierarchical control of large-scale network systems, Model predictive control.
## I Introduction and problem statement
Steam is widely used in industrial processes, playing a primary role in production. In industrial applications requiring a large and possibly time-varying steam demand, a flexible and efficient generation solution is mandatory. Since boiler operation close to the lower generation limit is largely inefficient, in high-fluctuating demand scenarios the production efficiency can be very unsatisfactory. In these cases, a virtual generation plant constituted of a set of smaller units working in parallel can be a viable alternative to operate a single boiler on a larger range [1]. A set of cooperative smaller units can be reconfigured to produce what demanded, enabling a quick and optimal connection/disconnection of subsystems, and considering the current and/or forecasted demand.
In line with this vision, the main objective of this work is the proposal of a hierarchical control scheme for the optimal unit commitment (UC) and management of a group of steam generators that work in a parallel configuration to sustain a cumulative steam demand.
### _State of the art_
The coordination of independent (or interdependent) subsystems towards a main target characterizes different industrial applications, e.g., smart grids and electrical
generation systems [2], thermal energy grids [3], building heating and cooling systems [4], and distribution networks of steam, water, or compressed air [5]. These complex plants share similar features: several (homogeneous) systems work in parallel to commonly supply an overall demand; each subsystem and the whole plant operate in a constrained range and the subsystems must cooperate in a scenario of limited shared resources. The studies referred above focus on the optimal load sharing among the parallel systems. Actually, two main aspects must be addressed in this context: (i) the unit commitment and economic dispatch of the subsystems; (ii) the dynamic control of the overall plant and of the single subsystems.
The two problems are characterized by different time-scales and are commonly addressed separately. The UC optimization problem has been extensively studied in the context of electrical generation systems, where the scheduling is optimized to minimize the plant operating cost, while satisfying process (and market) constraints. Several approaches have been studied, both in the deterministic and stochastic framework: an extensive discussion about solution techniques can be found in the review papers [6, 7]. While several (meta-) heuristic methods and mathematical programming approaches have been tested in the literature, in this paper we address the solution of UC optimization by Mixed-Integer Programming (MIP), as it guarantees an efficient, flexible and accurate modeling framework.
In the context of combined cycle power plants, a MIP formulation for the scheduling of thermal units has been presented in [8], while a tighter formulation reducing the number of binary variables is presented in [9]. An extended formulation, that provides a generalized-mode model for each unit, is discussed in [10]. Discrete-time state-space model formulations can be easily implemented in MPC strategy to manage the plant in a receding horizon way, as discussed in [11], whose formulation permits only to describe the unit dynamics by ON/OFF modes. The one presented in [12], based on a hybrid system approach - and specifically on a Mixed-Logical Dynamical (MLD) model - can generalize the unit dynamics. Based on a similar approach, in [13], the authors have formulated the high-level UC problem for a small Combined Heat and Power (CHP) unit, composed of a fire-tube boiler and an internal combustion engine for power generation.
In [14], the UC problem is presented for a CHP plant with eight steam boilers working in parallel, where maintenance issues of the flexible boiler array are integrated in the cost function. The authors of [15] focus on the boiler load allocation problem, uncoupled from electricity generation aspects, in a multi-boiler configuration: the optimization is addressed by gradient search methods, considering boiler efficiency as a function of the steam load. Crucially, these works focus only on the solution of the scheduling problem and do not consider the dynamic control of these units.
On the opposite front, other researchers are concentrating on dynamic control issues, with particular application to networked steam boilers operating in parallel. In [16] an optimal control scheme is presented for energy loss minimization and the primary management of heat production in multi-boiler industrial systems, comparing the optimal approach to the traditional cascade control. The control of a multiple boiler configuration based on MPC is discussed in [17], with application to a paper mill plant, or in [18] for a coal-fired boiler house, where maintaining a stable header pressure and boiler availability is of critical importance for downstream consumers.
In the research work [19], a supervisory controller, designed with an LQR approach, is studied for a set of boilers in parallel configuration: a dynamic feedback strategy allows each boiler set-point to be changed continuously,
while minimizing a combined cost. Taking into account the dynamics of all the individual boilers, this optimal control can cope with general disturbances. However, the dimension of the model of the group of boilers grows with the number of units, thus encountering scalability issues. Moreover, the scheme is not flexible to dynamically manage the variation of the boiler number, i.e., enabling plug-and-play capabilities.
In recent years, some efforts have been devoted to providing unitary solutions to these problems. In this respect, decentralized, distributed, and hierarchical methods have many advantages over centralized ones, in view of their flexibility, robustness (e.g., to system changes and demand variations), and scalability. In this work we focus on hierarchical methods, as the elective choice for the optimal supervision and coordination of system ensembles, e.g., as introduced in [20].
An extensive review of hierarchical and distributed approaches is reported in [21]. Recently, different solutions have been proposed based on the receding horizon approach. For example, [22] proposes a multi-rate solution for constrained linear systems based on reference governors, [23, 24]; on the other hand, in [25] a hierarchical scheme is introduced for coordinating independent systems with joint constraints and [26] extends the approach used in [25] in case of dynamically coupled units. Finally, [27] proposes a scalable solution based on finite impulse response models enabling plug-and-play operations, while [28] presents an application on power systems.
_Notation_: Calligraphic letters, \(\mathcal{U},\mathcal{Y},\mathcal{W},\mathcal{Z}\), indicate sets. The Minkowski sum of two sets is denoted by \(\oplus\), while \(\bigoplus_{i=1}^{N_{\rm g}}\mathcal{W}_{i}=\mathcal{W}_{1}\oplus\cdots\oplus\mathcal{W}_{N_{\rm g}}\). Ensemble (resp. reference-model) variables are indicated with the notation \(\bar{\cdot}\) (resp. \(\hat{\cdot}\)). Nonlinear models and linear counterparts are denoted by \(\mathcal{S}\) and \(\mathcal{L}\), respectively. Superscript \({}^{\rm CL}\) (resp. \({}^{\rm OL}\)) connotes closed (resp. open) loop systems. Superscript \({}^{[\rm M]}\) (resp. \({}^{[\rm H]}\)) denotes variables with sampling time \(T_{\textsc{M}}\) (resp. \(T_{\textsc{H}}\)), whose discrete time index is \(k\) (resp. \(h\)), referred to the medium (resp. high) level. The floor operator is \(\lfloor\cdot\rfloor\). Finally, for a generic variable \(v(k)\), we denote \(\Delta v(k)=v(k)-v(k-1)\).
### _Problem statement and paper contribution_
In this work, we propose a hierarchical architecture for the management of an ensemble of steam generators. The aim is to manage a group of \(N_{\rm g}\) steam generators, working in a parallel configuration to sustain a cumulative steam demand, \(\bar{q}_{\rm s}^{\rm Dem}\). The objective is to guarantee the required steam flow rate with the minimum operating cost. This implies both the minimization of fuel gas and the optimization of the network configuration (i.e., the partial contribution of each boiler to the overall demand), also considering the activation strategy.
The steam generator network is assumed to be composed of _similar_ dynamical systems, i.e., having homogeneous quantities as inputs and outputs, but that might differ in physical dimensions, nominal production rate, consumption, and efficiency.
Each subsystem \(i\) is a water-tube boiler: pressurized water, denoted feed-water, with flow rate \(q_{t,i}\), circulates inside the tube coil, forced to flow by a displacement pump, and it is heated by a natural gas burner, whose flow rate is \(q_{{\rm g},i}\). The heat, transmitted to the flowing fluid, induces a phase transition of the feed-water into steam. The generated steam flow rate is \(q_{{\rm s},i}\). This design is characterized by an extremely short start-up time and safe steam generation with respect to the fire-tube boiler configuration, due to the limited volume of water. The single subsystems and the network of generators are subject to input and output constraints. Both local and global variables are assumed to be defined in convex and compact sets, \(\mathcal{U}_{i}\), \(\mathcal{Y}_{i}\), \(\bar{\mathcal{U}}\) and \(\bar{\mathcal{Y}}\), i.e.
\[q_{{\rm s},i}\in\mathcal{U}_{i}\qquad\qquad q_{{\rm g},i}\in\mathcal{Y}_{i} \tag{1a}\]
\[\bar{q}_{\rm s}=\sum_{i=1}^{N_{\rm g}}q_{{\rm s},i}\in\bar{\mathcal{U}}\qquad\quad\bar{q}_{\rm g}=\sum_{i=1}^{N_{\rm g}}q_{{\rm g},i}\in\bar{\mathcal{Y}} \tag{1b}\]
The proposed hierarchical control scheme consists of three layers.
The _high layer_ (HL) extends the preliminary solution, proposed by the authors in [29] for a constant load demand, by considering a time-varying demand and the discrete operating-mode dynamics of the generators. To this aim, the model of the high-level behavior of the system is defined in detail in Section II-D. This model is exploited by the top layer to optimize the strategy, i.e., the generator schedule and the working conditions, in order to minimize the operating costs. The activation/inactivation of units must consider the high-level state of each unit and the transition costs. This layer computes the optimal number of active units and the best shares of production to be allocated to each boiler based on the time-varying profile of the demand. With respect to [27] and [29], in this paper the optimization program is reformulated on local steam flow rates, instead of directly optimizing the sharing factors, which avoids introducing mixed-integer bilinear constraints.
At the _medium layer_ (ML), a robust MPC scheme is adopted, similarly to [20]. This layer, considering the ensemble model, allows the overall demand to be robustly tracked. The ensemble model is an aggregate low-order model of the network of active systems, defined in a scalable way. Differently from [20], in this work we assume that the sharing factors can change over time. This condition must be suitably handled by improving the formulation of the optimal control problem (OCP), in order to ensure feasibility of the corresponding optimization program at each time instant. We propose a procedure - based on an alternative nonlinear MPC program - to drive the ensemble to the new configuration when a sharp transition is not feasible.
At the _lowest layer_ (LL), a set of decentralized controllers is used. Proportional-integral (PI) regulators, as currently used in industrial practice, stabilize the internal pressure to its set-point and track the individual requests. In this work, we opt for state-of-the-art regulators at the low level, decoupled into pressure and flow-rate loops. This control layer deliberately exploits the embedded regulators provided by the generator producer, since the latter are neither open nor accessible for modification, due to safety and regulatory issues. This choice makes it possible to apply the proposed management architecture to brownfield and legacy systems.
## II The Boiler Models
In this section we present the dynamical model of the high-pressure steam generators used at the different layers. For notational simplicity, the index \(i\) will be dropped when clear from the context.
### _Nonlinear physical model_
The continuous-time nonlinear dynamical model of the steam generator is derived from the drum-boiler model presented in [30]. Here the equations are adapted to the considered configuration: differently from drum-boilers, no accumulation exists in the water tubes and the drum is absent.
In particular, the feed-water is forced to flow at high pressure through the tube coil with
Figure 1: Steam generator functional scheme.
flow-rate \(q_{t}\). The heat transfer transforms the feed-water either totally or partially into steam. Therefore, the mass conservation equation on the water-tube control volume reads \(q_{t}=q_{\rm s}+q_{\rm o}\), where \(q_{\rm s}\) is the steam flow-rate. The portion of the flow that persists in liquid phase at the outflow, \(q_{\rm o}\), is assumed to be at saturated temperature, see Figure 1.
The \(i\)-th steam generator is characterized by a nonlinear dynamic model \(\mathcal{S}_{i}^{\rm OL}\):
\[\dot{p}=\frac{1}{\phi}\left(\eta\lambda_{\rm g}q_{\rm g}+q_{t}(h_{t}-h_{\rm w})-q_{\rm s}(h_{\rm s}-h_{\rm w})\right) \tag{2}\]
\[\dot{V}_{\rm w}=\frac{1}{(\rho_{\rm w}-\rho_{\rm s})}\left(\frac{\partial\rho_{\rm s}}{\partial p}V_{\rm s}+\frac{\partial\rho_{\rm w}}{\partial p}V_{\rm w}\right)\dot{p} \tag{3}\]
where
\[\begin{split}\phi=\;&V_{\rm s}\Big(h_{\rm s}\frac{\partial\rho_{\rm s}}{\partial p}+\rho_{\rm s}\frac{\partial h_{\rm s}}{\partial p}\Big)+V_{\rm w}\Big(h_{\rm w}\frac{\partial\rho_{\rm w}}{\partial p}+\rho_{\rm w}\frac{\partial h_{\rm w}}{\partial p}\Big)+\\ &V_{\rm t}+M_{\rm m}c_{\rm m}\frac{\partial T_{\rm s}}{\partial p}-\Big(\frac{\partial\rho_{\rm s}}{\partial p}V_{\rm s}+\frac{\partial\rho_{\rm w}}{\partial p}V_{\rm w}\Big)\frac{(\rho_{\rm s}h_{\rm s}-\rho_{\rm w}h_{\rm w})}{(\rho_{\rm s}-\rho_{\rm w})}\end{split} \tag{4}\]
In equations (2)-(4), the subscripts \(\rm t\), \(\rm g\), \(\rm s\), and \(\rm w\) refer to feed-water, fuel gas, steam, and internal water, respectively. Steam and internal water are assumed to be at saturated conditions. Therefore, the density \(\rho\), the enthalpy \(h\), and the temperature \(T\) are functions of the internal pressure \(p\) only.
The system is characterized by some specific parameters: the burner efficiency \(\eta\), the gas low heat value \(\lambda_{\rm g}\), the total internal volume of the tubes \(V_{\rm t}\), the mass \(M_{\rm m}\), and the specific heat coefficient \(c_{\rm m}\).
The states of the nonlinear dynamical model (2)-(3) are the internal pressure \(p\) and the water volume \(V_{\rm w}\). The manipulable inputs are the feed-water flow rate \(q_{t}\) and the natural gas flow rate \(q_{\rm g}\), while the steam demand \(q_{\rm s}\) is considered, at the low level, as a disturbance term. Similarly, the enthalpy \(h_{t}\) of the feed-water is considered a known measured disturbance.
### _Low-level closed-loop model_
An embedded controller is devoted to the regulation of the pressure at the set-point level, and to guaranteeing a constant water volume \(V_{\rm w}\), for each subsystem \(\mathcal{S}_{i}^{\rm OL}\). This controller acts on the local input variables \(q_{t}\) and \(q_{\rm g}\). Commercially-available boilers are already provided with low-level controllers for pressure regulation, designed in the industrial standard configuration: a feedback PI regulator \(\mathbf{R}\) on the fuel flow-rate, to steer the pressure \(p\) to a set-point \(p_{\rm sp}\), and a disturbance compensator \(\mathbf{C}\) working, with an open-loop action, on the feed-water flow-rate to follow the steam demand, as depicted in Figure 2.
The closed-loop system of the \(i\)-th boiler can be described as a nonlinear dynamic model \(\mathcal{S}_{i}^{\rm CL}\), in short denoted as \(q_{{\rm g},i}=\mathcal{S}_{i}^{\rm CL}(q_{{\rm s},i})\).
One peculiarity of this closed-loop system is the possibility of considering the steam flow rate as input of the controlled system and the gas flow rate as an output, as shown in Figure 2. This closed-loop representation of the boiler enables the problem formalization in the framework of hierarchical control of ensemble systems, as in [20].
In Figure 3, the input/output static map at steady state is shown: historical static data are compared with data generated by simulating the response of the system \(\mathcal{S}_{i}^{\rm CL}\) to a multiple-step input profile. An affine approximation is also shown. Note that, although this linear model is valid during production, where the pressure is regulated at its set-point, the non-linearity is still relevant during start-up.
Figure 2: Closed-loop boiler function block diagram.
### _Affine model for medium-level control_
Consistently with the data reported in Figure 3, during production the boiler is maintained close to nominal conditions; thus, the dynamics of \(\mathcal{S}_{i}^{\rm CL}\) can be well represented by an affine dynamic model, used to account for the transient response. A discrete-time affine system \(\mathcal{L}_{i}^{\rm CL}\) with output \(y(k)=q_{{\rm g},i}(kT_{\textsc{M}})\) and input \(u(k)=q_{{\rm s},i}(t)\), constant \(\forall t\in[kT_{\textsc{M}},(k+1)T_{\textsc{M}})\), is identified with the simulation error minimization approach, using the data drawn by exciting the controlled nonlinear model \(\mathcal{S}_{i}^{\rm CL}\) with multiple-step inputs. Note that the sampling time is \(T_{\textsc{M}}\) and the time index \(k\) is the one used for control at the medium hierarchical level. The identified discrete-time transfer function (plus constant) is denoted \(G_{i}^{\rm CL}\) and is of the type
\[y(k)=\frac{\sum_{j=1}^{n_{\rm b}}b_{j}z^{-j}}{1+\sum_{j=1}^{n_{\rm f}}f_{j}z^{-j}}\,u(k)+\gamma \tag{5}\]
where \(\gamma\) is the identified bias when \(u(k)=0\). The corresponding state-space form is
\[\mathcal{L}_{i}^{\text{\tiny CL}}:\begin{cases}x(k+1)=Ax(k)+Bu(k)\\ y(k)=Cx(k)+\gamma\end{cases} \tag{6}\]
with state vector \(x(k)=[\delta y(k),\ldots,\delta y(k-n_{\rm f}+1),u(k-1),\ldots,u(k-n_{\rm b}+1)]^{T}\in\mathbb{R}^{n_{\rm f}+n_{\rm b}-1}\) and \(\delta y(k)=y(k)-\gamma\). The matrices are \(B=\begin{bmatrix}b_{1}&0_{1\times(n_{\rm f}-1)}&1&0_{1\times(n_{\rm b}-2)}\end{bmatrix}^{T}\), \(C=\begin{bmatrix}1&0&\ldots&0\end{bmatrix}\), and
\[A=\begin{bmatrix}-f_{1}\,\cdots\,-f_{n_{\rm f}-1}&-f_{n_{\rm f}}&b_{2}\,\cdots\,b_{n_{\rm b}-1}&b_{n_{\rm b}}\\ I_{n_{\rm f}-1}&0&0&0\\ 0&0&0&0\\ 0&0&I_{n_{\rm b}-2}&0\end{bmatrix}\]
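As an illustration, the state-space matrices above can be assembled from the identified coefficients with a few lines of code; the following sketch assumes \(n_{\rm b}\geq 2\) and uses illustrative names.

```python
import numpy as np

def arx_to_state_space(b, f):
    """Build (A, B, C) of Eq. (6) from the coefficients b = [b_1..b_nb], f = [f_1..f_nf] of Eq. (5)."""
    nb, nf = len(b), len(f)
    n = nf + nb - 1
    A = np.zeros((n, n))
    A[0, :nf] = -np.asarray(f)               # first row: -f_1 ... -f_nf ...
    A[0, nf:] = np.asarray(b[1:])            # ... followed by b_2 ... b_nb
    A[1:nf, 0:nf - 1] = np.eye(nf - 1)       # shift register of lagged outputs
    A[nf + 1:, nf:n - 1] = np.eye(nb - 2)    # shift register of lagged inputs
    B = np.zeros((n, 1))
    B[0, 0] = b[0]
    B[nf, 0] = 1.0                           # u(k) enters the first input lag
    C = np.zeros((1, n))
    C[0, 0] = 1.0
    return A, B, C
```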
### _High-level DHA model_

For scheduling purposes, each generator is also described by a discrete hybrid automaton (DHA), whose operating mode \(m(h)\in\{\mathrm{OFF},\mathrm{ST},\mathrm{ON}\}\) distinguishes the off, start-up, and production phases (see Figure 4). The transitions between modes depend on the time spent in the current operating mode, \(\chi(h)\), and possibly on the switching binary input \(\beta(h)\in\{0,1\}\) being set to \(1\). More specifically, a transition happens whenever a guard condition, \(\mathsf{g}\), is met:
\[\mathsf{g}:\left\{\begin{array}{ll}\{\chi(h)\geq\chi_{\textsc{OFF-ST}}\}\wedge\{\beta(h)=1\}&m(h)=\mathrm{OFF}\\ \{\chi(h)\geq\chi_{\textsc{ST-ON}}\}&m(h)=\mathrm{ST}\\ \{\chi(h)\geq\chi_{\textsc{ON-OFF}}\}\wedge\{\beta(h)=1\}&m(h)=\mathrm{ON}\end{array}\right.\]
The values \(\chi_{\textsc{OFF-ST}}\), \(\chi_{\textsc{ST-ON}}\), and \(\chi_{\textsc{ON-OFF}}\) are suitably-defined thresholds. The model output is given by:
\[q_{\rm g}(h)=g\cdot q_{\rm s}(h)+\gamma_{\textsc{ON}}\qquad\text{if }m=\mathrm{ON} \tag{7a}\]
\[q_{\rm g}(h)=\gamma_{\textsc{ST}}\qquad\qquad\text{if }m=\mathrm{ST} \tag{7b}\]
\[q_{\rm g}(h)=0\qquad\qquad\text{if }m=\mathrm{OFF} \tag{7c}\]
where \(g=C(I_{n}-A)^{-1}B\) is the static gain of the closed-loop system \(\mathcal{L}^{\rm CL}\), while \(\gamma_{\textsc{ON}}\) and \(\gamma_{\textsc{ST}}\) are the constant fuel gas consumption terms in production and in start-up mode, respectively. Note that, consistently with the model derived in the previous sections, the affine map (7a) is the one depicted in Figure 3.
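The mode logic can be prototyped as a small finite-state machine; the dwell-time thresholds below are placeholders, not the plant values.

```python
from dataclasses import dataclass

@dataclass
class BoilerMode:
    """OFF / ST / ON automaton with dwell-time guards and the switching command beta."""
    mode: str = "OFF"
    chi: int = 0                             # steps spent in the current mode

    def step(self, beta, chi_off_st=1, chi_st_on=3, chi_on_off=2):
        if self.mode == "OFF" and self.chi >= chi_off_st and beta == 1:
            self.mode, self.chi = "ST", 0    # start-up requested and allowed
        elif self.mode == "ST" and self.chi >= chi_st_on:
            self.mode, self.chi = "ON", 0    # start-up completed
        elif self.mode == "ON" and self.chi >= chi_on_off and beta == 1:
            self.mode, self.chi = "OFF", 0   # shutdown requested and allowed
        else:
            self.chi += 1
        return self.mode
```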
To make the model easily manageable in a suitable optimization program, the DHA model is converted into the MLD one [32]. The MLD model is an extended state-space dynamical system where the state vector, \(x^{[\textsc{H}]}=\left\{\chi,x^{[\textsc{H}]}_{\textsc{OFF}},x^{[\textsc{H}]}_{\textsc{ST}},x^{[\textsc{H}]}_{\textsc{ON}}\right\}\in\mathbb{Z}\times\{0,1\}^{3}\), includes integer and Boolean variables. The inputs are the Boolean command and the steam flow-rate, \(u^{[\textsc{H}]}=\{\beta^{[\textsc{H}]},q^{[\textsc{H}]}_{\rm s}\}\in\{0,1\}\times\mathbb{R}\), while the output is the consumed gas, \(y^{[\textsc{H}]}=\left\{q^{[\textsc{H}]}_{\rm g}\right\}\in\mathbb{R}\), which depends on the active mode, as in (7).
A set of Boolean and continuous auxiliary variables \(\{\delta^{[\textsc{H}]},z^{[\textsc{H}]}\}\in\{0,1\}^{n_{\delta}}\times\mathbb{R}^{n_{z}}\) is added to model the FSM evolution, the transition guards, and the reset maps. The MLD model takes the general form:
\[x^{[\textsc{H}]}(h+1)=A^{[\textsc{H}]}x^{[\textsc{H}]}(h)+B^{[\textsc{H}]}_{u}u^{[\textsc{H}]}(h)+B^{[\textsc{H}]}_{z}z^{[\textsc{H}]}(h)+B^{[\textsc{H}]}_{\delta}\delta^{[\textsc{H}]}(h)\]
\[y^{[\textsc{H}]}(h)=C^{[\textsc{H}]}x^{[\textsc{H}]}(h)+D^{[\textsc{H}]}_{u}u^{[\textsc{H}]}(h)+D^{[\textsc{H}]}_{z}z^{[\textsc{H}]}(h)+D^{[\textsc{H}]}_{\delta}\delta^{[\textsc{H}]}(h)\]
\[E^{[\textsc{H}]}_{x}x^{[\textsc{H}]}(h)+E^{[\textsc{H}]}_{u}u^{[\textsc{H}]}(h)+E^{[\textsc{H}]}_{z}z^{[\textsc{H}]}(h)+E^{[\textsc{H}]}_{\delta}\delta^{[\textsc{H}]}(h)\leq E^{[\textsc{H}]}_{\rm aff}\]
## III The hierarchical control scheme
In the previous section we derived the single subsystem models to be used at the different levels. Now, we explain how to manage and control them in a unitary and coordinated way.
### _Sketch of the proposed control architecture_
As shown in Figure 5, the medium and high levels of the hierarchical scheme are designed to concurrently define the input \(u_{i}\) of each subsystem (i.e., local steam flow-rate \(q_{\mathrm{s},i}\)) as
\[u_{i}=\alpha_{i}\bar{u}\qquad i=1,\ldots,N_{\rm g} \tag{8}\]
where \(\alpha_{i}\) is the sharing factor used to partition the overall ensemble input \(\bar{u}\).
Figure 4: Boiler operation mode transitions.
Figure 5: Steam generator ensemble and hierarchical scheme. Typically, \(T_{\textsc{H}}\in[10,30]\) min, \(T_{\textsc{M}}\in[30,60]\) s, \(\tau\in[1,10]\) s.
Sharing factors \(\alpha_{i}\) are computed by the optimization layer. Here, thanks to the DHA models defined in Section II-D, they are optimized in a receding horizon way, minimizing the operating cost of the ensemble to supply the steam demand forecast. The sharing factors are time-varying and defined according to the slow time-scale (i.e., with sampling time \(T_{\textsc{H}}\)).
The ensemble input \(\bar{u}\) is instead computed by a dynamic optimal reference tracking problem at the medium level. To do so, an aggregate model of the whole ensemble is derived by considering the subset of active generation units. The ensemble dynamical model is built by suitably combining the closed-loop models of the controlled generators. By considering a unique ensemble model, the medium level exhibits interesting scalability properties, as its dimensions do not grow with the number of subsystems. A robust reference-tracking MPC scheme is implemented to define the overall gas consumption of the ensemble, operating on a faster time-scale than the high level, with sampling time \(T_{\textsc{M}}<T_{\textsc{H}}\).
### _High-level Optimization_
The high hierarchical level aims to optimize the sharing factor profiles \(\alpha_{i}^{[\textsc{H}]}(h)\) and the modes of all the subsystems by minimizing the operating expenses, which include the subsystem activation costs and account for the actual start-up time, while enforcing constraints such as the ones related to mode transitions and the operational range of each subsystem in the ensemble.
The algorithm presented here extends the ones presented in [20, 27] and [29] by solving the unit commitment in a receding horizon fashion over a prediction window with time-varying demand. We assume its profile to be known for the entire prediction horizon and approximated by a piece-wise constant function, \(\bar{q}_{\rm s}^{\rm Dem}(h)\).
In [29], where both the sharing factors \(\alpha_{i}^{[\textsc{H}]}\) and the ensemble steady-state input \(\bar{u}_{\rm ss}(h)\) were considered as decision variables, we obtained a MIP with bilinear inequality constraints. In this work the problem is reformulated as a simpler MIP with linear constraints by considering as optimization variables the partitioned steam flow rates \(q_{{\rm s},i}^{[\textsc{H}]}(h)\). In this formulation, the optimal sharing factors are computed as \(\alpha_{i}^{[\textsc{H}]}(h)=q_{{\rm s},i}^{[\textsc{H}]}(h)/\bar{u}_{\rm ss}(h)\).
The optimization problem at high-level reads:
\[\min_{\boldsymbol{\beta}^{[\textsc{H}]},\,\mathbf{q}_{\rm s}^{[\textsc{H}]}}\ \sum_{h=0}^{N_{\textsc{H}}}\sum_{i=1}^{N_{\rm g}}l_{i}(h,\beta_{i}(h),q_{{\rm s},i}^{[\textsc{H}]}(h))\tag{9a}\]
\[\text{s.t.}\qquad\sum_{i=1}^{N_{\rm g}}q_{{\rm s},i}^{[\textsc{H}]}(h)\geq\bar{q}_{\rm s}^{\rm Dem}(h)\tag{9b}\]
\[\left\{\begin{array}{ll}\sum_{i=1}^{N_{\rm g}}q_{{\rm s},i}^{[\textsc{H}]}(h)=0&\text{iff}\ \sum_{i=1}^{N_{\rm g}}x_{\textsc{ON},i}^{[\textsc{H}]}(h)=0\\[4pt] \underline{\bar{u}}\leq\sum_{i=1}^{N_{\rm g}}q_{{\rm s},i}^{[\textsc{H}]}(h)\leq\overline{\bar{u}}&\text{otherwise}\end{array}\right.\tag{9c}\]
\[\left\{\begin{array}{ll}0\leq\sum_{i=1}^{N_{\rm g}}y_{i}^{[\textsc{H}]}(h)\leq\bar{y}_{\textsc{ST}}&\text{iff}\ \sum_{i=1}^{N_{\rm g}}x_{\textsc{ON},i}^{[\textsc{H}]}(h)=0\\[4pt] \underline{\bar{y}}\leq\sum_{i=1}^{N_{\rm g}}y_{i}^{[\textsc{H}]}(h)\leq\overline{\bar{y}}&\text{otherwise}\end{array}\right.\tag{9d}\]
and, \(\forall i=1,\ldots,N_{\rm g}\),
\[\text{MLD model of unit }i\tag{9e}\]
\[\begin{split}&\underline{u}_{i}\,x_{\textsc{ON},i}^{[\textsc{H}]}(h)\leq q_{{\rm s},i}^{[\textsc{H}]}(h)\leq\overline{u}_{i}\,x_{\textsc{ON},i}^{[\textsc{H}]}(h)\\ &\underline{y}_{i}\,x_{\textsc{ON},i}^{[\textsc{H}]}(h)+\gamma_{\textsc{ST},i}\,x_{\textsc{ST},i}^{[\textsc{H}]}(h)\leq y_{i}^{[\textsc{H}]}(h)\leq\overline{y}_{i}\,x_{\textsc{ON},i}^{[\textsc{H}]}(h)+\gamma_{\textsc{ST},i}\,x_{\textsc{ST},i}^{[\textsc{H}]}(h)\end{split}\tag{9f}\]
with all constraints enforced \(\forall h=0,\ldots,N_{\textsc{H}}\).
The decision variables are defined as sequences of vectors along the optimization horizon, i.e., for each boiler \(i=1,\ldots,N_{\rm g}\): the steam flow-rates \(\mathbf{q}_{{\rm s},i}^{[\textsc{H}]}=[q_{{\rm s},i}^{[\textsc{H}]}(h),\ldots,q_{{\rm s},i}^{[\textsc{H}]}(h+N_{\textsc{H}})]\) and the Boolean commands for the FSM transitions \(\boldsymbol{\beta}_{i}^{[\textsc{H}]}=[\beta_{i}^{[\textsc{H}]}(h),\ldots,\beta_{i}^{[\textsc{H}]}(h+N_{\textsc{H}})]\).
The cost function \(J:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is defined by summing the subsystems' stage costs \(l_{i}(h)\), which collect the operating cost related to fuel consumption - based on the natural gas price \(\lambda_{\rm g}\) - the fixed operating cost connected to the production mode, \(\lambda_{\textsc{ON},i}\), and the fixed start-up cost, \(\lambda_{\textsc{ST},i}\). The fixed costs are in general specific to each generator: they can include personnel, maintenance, and degradation costs, which might increase with frequent starts and stops.
\[l_{i}(h)=\lambda_{\textsc{ON},i}\,x^{[\textsc{H}]}_{\textsc{ON},i}(h)+\lambda_{\textsc{ST},i}\,x^{[\textsc{H}]}_{\textsc{ST},i}(h)+\lambda_{\rm g}\frac{T_{\textsc{H}}}{\rho_{\rm g}}\left[(g_{i}\,q_{{\rm s},i}^{[\textsc{H}]}(h)+\gamma_{\textsc{ON},i})\,x^{[\textsc{H}]}_{\textsc{ON},i}(h)+\gamma_{\textsc{ST},i}\,x^{[\textsc{H}]}_{\textsc{ST},i}(h)\right] \tag{10}\]
Note that constraints (9c)-(9d) - enforced to guarantee (1b) - are defined by logical conditions. A so-called "Big-M" reformulation can be adopted to transform these conditional constraints in a set of mixed-integer inequalities [32].
We denote by \(\underline{\cdot}\) (\(\overline{\cdot}\)) the minimum (maximum) values of inputs and outputs, while \(\bar{y}_{\textsc{ST}}=\sum_{i=1}^{N_{\rm g}}x^{[\textsc{H}]}_{\textsc{ST},i}(h)\gamma_{\textsc{ST},i}\). At each step \(h\), the optimizer computes the optimal trajectory of the sharing factors \(\alpha^{[\textsc{H}]}(j)\) for all \(j=h,\ldots,h+N_{\textsc{H}}\). Based on the receding horizon principle, the configuration \(\alpha^{[\textsc{H}]}(h)\), related to the first step, is broadcast to the network, while the rest of the trajectory is discarded (or, better, kept as a backup solution). At the subsequent step, \(h+1\), the status of the GUs is retrieved, together with an updated forecast of the future demand, and the prediction horizon is moved forward by one step. This strategy makes it possible to correct the demand forecast of remote steps as soon as they come closer, thus adjusting inaccurate estimations. A new profile \(\alpha^{[\textsc{H}]}(j)\), with \(j=h+1,\ldots,h+N_{\textsc{H}}+1\), is computed by (9) and the solution \(\alpha^{[\textsc{H}]}(h+1)\) is sent to the GUs.
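To give a feeling for the structure of (9), the sketch below solves a single-step load-allocation problem with ON/OFF decisions only; the MLD transition dynamics, the start-up mode, and all numerical data are omitted or replaced by illustrative placeholders (it relies on PuLP and its bundled CBC solver).

```python
import pulp

def allocate_steam(demand, u_min, u_max, gamma_on, gain, gas_price=0.4, fixed_cost=1.0):
    """One-step load allocation: minimize fuel + fixed costs while covering the steam demand."""
    n = len(u_max)
    prob = pulp.LpProblem("unit_commitment_step", pulp.LpMinimize)
    on = [pulp.LpVariable(f"on_{i}", cat="Binary") for i in range(n)]
    qs = [pulp.LpVariable(f"qs_{i}", lowBound=0) for i in range(n)]
    prob += pulp.lpSum(gas_price * (gain[i] * qs[i] + gamma_on[i] * on[i]) + fixed_cost * on[i]
                       for i in range(n))                      # simplified stage cost, cf. (10)
    prob += pulp.lpSum(qs) >= demand                           # demand coverage, cf. (9b)
    for i in range(n):                                         # local range active only if ON
        prob += qs[i] <= u_max[i] * on[i]
        prob += qs[i] >= u_min[i] * on[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [pulp.value(q) for q in qs], [int(pulp.value(o)) for o in on]
```

The sharing factors then follow as \(\alpha_{i}=q_{{\rm s},i}/\sum_{j}q_{{\rm s},j}\) whenever the total allocation is non-zero.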
**Remark 1**: _The hard constraint (9b) can be tightened to an equality, accelerating the solution convergence - if any feasible solution exists. Otherwise, if the program is infeasible, as the constraint (9b) cannot be satisfied for certain demand profiles, it can be relaxed thanks to a slack variable \(\varepsilon\geq 0\), with the modified objective function (9a), \(\hat{l}=l+\lambda_{\varepsilon}\varepsilon^{2}\), and the constraint \(\sum_{i=1}^{N_{\rm g}}q^{[\textsc{H}]}_{{\rm s},i}(h)\geq\bar{q}^{\rm Dem}_{\rm s}(h)-\varepsilon\)._
**Remark 2**: _We solve (9) in a centralized way, since the solution must be available with a frequency \(f_{u}=1/T_{\textsc{H}}\). However, its computational complexity scales with the number of generation units, which can be very large in some applications. To overcome this, one may implement (9) in a distributed fashion, as in [33], partitioning the set of generators into clusters._
**Remark 3**: _The accuracy of the demand forecast strongly impacts the solution quality: since the reference is an additional decision variable, feasibility is guaranteed. However, whenever the mismatch between the demand forecast and its actual value is greater than a given threshold, the execution of the HL optimization can be triggered in an event-based, "asynchronous" fashion to foster optimal tracking performance._
A good demand forecast is indeed one of the main challenges for the practical implementation of any scheme aiming to schedule the generation units. Small-scale generators for medium-pressure steam are usually operated in industrial contexts where steam is considered a commodity resource. Therefore, sometimes no demand forecast is available, nor is one considered for the generator management. Actually, accurate forecasts can be easily obtained from historical data and future production scheduling. Nowadays, companies that aim to implement energy efficiency strategies are increasing their awareness of energy utilization through the analysis of historical data, and are pushed to implement procedures that correlate energy demand with production, providing the tools for deriving approximate estimates of the future steam demand to be used as input to the proposed management architecture.
### _Medium-level control_
The ML controller regulates the ensemble based on the operating modes and the sharing factors defined by the higher layer, driving the ensemble input \(\bar{u}^{[\textsc{M}]}(k)\) to the steady-state value, \(\bar{u}_{\rm ss}^{[\textsc{H}]}(h)=\sum_{i=1}^{N_{\rm g}}q_{{\rm s},i}^{[\textsc{H}]}(h)\), computed by the HL optimizer. The medium-level MPC deals with an aggregate - small scale - model of the whole ensemble.
#### III-B1 Reference models and consistency requirements
Medium-level controller design requires, first of all, devising an aggregate model of the ensemble. According to [20], a _reference_ model must be derived for each subsystem, defined as
\[\hat{\mathcal{L}}_{i}:\left\{\begin{aligned}\hat{x}_{i}^{[\textsc{M}]}(k+1)&=\hat{A}\,\hat{x}_{i}^{[\textsc{M}]}(k)+\hat{B}_{i}\,u_{i}^{[\textsc{M}]}(k)+\hat{w}_{i}^{[\textsc{M}]}(k)\\ \hat{y}_{i}^{[\textsc{M}]}(k)&=\hat{C}\,\hat{x}_{i}^{[\textsc{M}]}(k)+\hat{\gamma}_{i}\end{aligned}\right. \tag{11}\]
where this alternative model can be built on a possibly reduced state, defined as \(\hat{x}_{i}^{[\textsc{M}]}=\beta_{i}x_{i}^{[\textsc{M}]}\), where \(\beta_{i}\in\mathbb{R}^{\hat{n}\times n_{i}}\) is a suitable map, with \(\hat{n}\leq n_{i}\). In addition, a term \(\hat{w}_{i}^{[\textsc{M}]}(k)\) is introduced to embed the error due to the mismatch between the reference model (11) and the identified system (6).
By design, the state matrix \(\hat{A}\) and the output matrix \(\hat{C}\) can be generically defined: they must simply be the same for all subsystems' reference models. Conversely, the input matrix \(\hat{B}_{i}\) must be accurately defined. It is advantageous to select \(\hat{A}\), \(\hat{B}_{i}\) and \(\hat{C}\) with the same canonical structure as \(A_{i}\), \(B_{i}\) and \(C_{i}\), as defined in Section II-C. Using this convenient choice, the state-reduction map \(\beta_{i}\in\mathbb{R}^{\hat{n}\times n_{i}}\) is merely a selection matrix, whose rows are basis vectors of the new canonical space. In this way the state of the reference models is \(\hat{x}^{[\textsc{M}]}(k)=[\delta y^{[\textsc{M}]}(k),\delta y^{[\textsc{M}]}(k-1),\ldots,\delta y^{[\textsc{M}]}(k-\hat{n}_{\rm f}+1),u^{[\textsc{M}]}(k-1),\ldots,u^{[\textsc{M}]}(k-\hat{n}_{\rm b}+1)]^{T}\).
The input matrix of the reference system must be defined in order to satisfy the so-called _gain consistency_ conditions (see [20]): the reference model (11) and the model (6) must guarantee to have the same static gain and a consistent output map. This is verified by imposing:
\[\hat{\gamma}_{i}= \gamma_{\textsc{ON}\,i} \tag{12a}\] \[\hat{b}_{i,i}= \frac{\sum_{j=1}^{n_{\rm b}}b_{i,j}}{1+\sum_{j=1}^{n_{\rm f}}f_{i,j}}(1+\sum_{j=1}^{\hat{n}_{\rm f}}\hat{f}_{j})-\sum_{j=2}^{\hat{n}_{\rm b}} \hat{b}_{j} \tag{12b}\]
where \((b_{i,j},f_{i,j})\) and \((\hat{b}_{i,j},\hat{f}_{i,j})\) are the parameters of the \(i\)-th models (6) and (11), respectively.
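To make the gain-consistency construction concrete, the following Python sketch computes \(\hat{b}_{i,1}\) from (12b) for a single unit; all coefficient values (`f_i`, `b_i`, `f_hat`, `b_hat_tail`) are illustrative placeholders and not the identified parameters of the case study.

```python
import numpy as np

# Hypothetical ARX-type coefficients of the identified model (6) for unit i.
f_i = np.array([-1.2, 0.45, -0.05])   # n_f = 3 (illustrative values)
b_i = np.array([0.30, 0.10])          # n_b = 2 (illustrative values)

# Coefficients shared by all reference models (same A-hat, C-hat), plus the
# "tail" input coefficients b-hat_{i,j} for j >= 2 (all illustrative).
f_hat = np.array([-1.1, 0.40, -0.04])
b_hat_tail = np.array([0.05])

g_i = b_i.sum() / (1.0 + f_i.sum())          # static gain of the identified model

# Gain-consistency condition (12b): pick the first input coefficient of the
# reference model so that both models share the same static gain.
b_hat_1 = g_i * (1.0 + f_hat.sum()) - b_hat_tail.sum()

g_hat = (b_hat_1 + b_hat_tail.sum()) / (1.0 + f_hat.sum())
assert np.isclose(g_i, g_hat)                # the two static gains now coincide
print(f"g_i = {g_i:.3f}, b_hat_1 = {b_hat_1:.3f}")
```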
#### III-C2 Disturbance \(\hat{w}_{i}^{\rm[M]}(k)\)
As discussed, the term \(\hat{w}_{i}^{\rm[M]}(k)\) embeds the error, induced by the selection of the same state matrices for all reference models, between the reference model (11) and the original one (6). To apply robust MPC for ensemble control, we need to ensure that \(\hat{w}_{i}^{\rm[M]}(k)\) is bounded. In [20], it is shown that the set where \(\hat{w}_{i}^{\rm[M]}(k)\) lies (i.e., \(\mathcal{W}_{i}\)) can be made small by properly restricting the set of \(\Delta u_{i}^{\rm[M]}(k)=u_{i}^{\rm[M]}(k)-u_{i}^{\rm[M]}(k-1)\), i.e., \(\Delta\bar{\mathcal{U}}_{i}\). However, the definition of \(\mathcal{W}_{i}\) used in [20] requires a suitable invariant set, used to define the low-level MPC controller, which is absent here.
In any case the fact that \(\mathcal{W}_{i}\) depends upon \(\Delta\bar{\mathcal{U}}_{i}\) remains valid also in this framework, i.e., when the low-level controller is unconstrained. This is supported by the fact that \(\hat{w}_{i}(k)=\beta_{i}x_{i}(k+1)-\hat{x}_{i}(k+1)=\beta_{i}[\delta y_{i}(k+ 1),\)\(\delta y_{i}(k),...,\delta y_{i}(k-n_{\rm f}+2),u_{i}(k),...,u_{i}(k-n_{\rm b }+2)]^{T}\) \(-\)\([\delta y_{i}^{o}(k+1),\delta y_{i}^{o}(k),...,\delta y_{i}^{o}(k-\hat{n}_{ \rm f}+2),u_{i}(k),...,\)\(u_{i}(k-\hat{n}_{\rm b}+2)]^{T}=[\delta y_{i}(k+1)-\delta y_{i}^{o}(k+1),\delta y_{i}(k)-\)\(\delta y_{i}^{o}(k),...,\delta y_{i}(k-\hat{n}_{\rm f}+2)-\delta y_{i}^{o}(k- \hat{n}_{\rm f}+2),0,...,0]^{T}\), where \(\delta y_{i}^{o}(k)\) is defined as the output of the "unperturbed" reference system
\[\hat{\mathcal{S}}_{i}^{o}:\left\{\begin{array}{rl}\hat{x}_{i}^{o}(k+1)=&\hat {A}\hat{x}_{i}^{o}(k)+\hat{B}_{i}u_{i}(k)\\ \delta\hat{y}_{i}^{o}(k)=&\hat{C}\hat{x}_{i}^{o}(k)\end{array}\right. \tag{13}\]
In view of this, each non-zero component of vector \(\hat{w}_{i}(k)\) is a lagged version of \(e_{y}(k+1)=\delta y(k+1)-\delta y^{o}(k+1)\). It is possible to show that, thanks to the gain consistency condition, there exists a transfer function \(\Delta\mathcal{G}_{i}(z^{-1})\) such that1
\[e_{i}(k+1)=\Delta\mathcal{G}_{i}(z^{-1})\,\Delta u_{i}(k) \tag{14}\]
Footnote 1: To retrieve (14) we can write, from (6) and (13), that \(e_{i}(k+1)=G_{i}(z^{-1})u_{i}(k)-\hat{G}_{i}(z^{-1})u_{i}(k)\); since, by the gain consistency conditions (12), \(G_{i}(z^{-1})\) and \(\hat{G}_{i}(z^{-1})\) have the same static gain, their difference contains a factor \((1-z^{-1})\), which applied to \(u_{i}(k)\) yields \(\Delta u_{i}(k)\).
Following [34], the set \(\mathcal{W}_{i}\) can be explicitly computed based on (14). However, in this work, to quantify the set \(\mathcal{W}_{i}\), exploiting its convexity, we approximate it with the convex hull of the points obtained by simulating (6) and (13) with a signal \(\Delta u_{i}(k)\) sampled from \(\Delta\bar{\mathcal{U}}_{i}\). This solution has made it possible to apply the robust MPC approach defined in this section with no constraint violation on the real variables.
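A minimal Python sketch of this simulation-based quantification is given below; for brevity it approximates \(\mathcal{W}_{i}\) by an infinity-norm bound rather than by the convex hull, and the model coefficients and the increment bound are illustrative assumptions (the reference coefficients are the gain-consistent ones from the previous sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(f, b, u):
    """Simulate dy(k) = -sum_j f_j*dy(k-j) + sum_j b_j*u(k-j) (assumed ARX form)."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        for j, fj in enumerate(f, start=1):
            if k - j >= 0:
                y[k] -= fj * y[k - j]
        for j, bj in enumerate(b, start=1):
            if k - j >= 0:
                y[k] += bj * u[k - j]
    return y

# Identified model (6) vs. gain-consistent reference model (11), illustrative values.
f_i,   b_i   = [-1.2, 0.45, -0.05], [0.30, 0.10]
f_hat, b_hat = [-1.1, 0.40, -0.04], [0.47, 0.05]

du_max, N = 0.4, 5000                       # increment bound and simulation length
du = rng.uniform(-du_max, du_max, N)        # Delta u sampled from its admissible set
u = np.cumsum(du)                           # corresponding input trajectory

e = simulate(f_i, b_i, u) - simulate(f_hat, b_hat, u)   # e_y(k), cf. (14)
print(f"estimated bound on |w_i|: {np.abs(e).max():.4f}")
```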
#### III-C3 Ensemble model
To define the ensemble dynamics, the reference models must be suitably combined. The state of the ensemble dynamical model \(\bar{\mathcal{L}}\) is composed of the states of the active generators, i.e., those with \(x_{\textsc{ON}\,i}^{\textsc{[H]}}=1\). When a boiler is switched off, its contribution to the ensemble steam production is immediately removed: in practice, during the transient, its steam is diverted from the ensemble output. Similarly, during start-up, the produced steam is not conveyed to the ensemble output, due to its low quality - with a high percentage of transported condensate. Accordingly, we define the ensemble state as \(\bar{x}^{\textsc{[M]}}=\sum_{i}^{N_{\textsc{g}}}x_{\textsc{ON}\,i}^{\textsc{[H]}}\hat{x}_{i}^{\textsc{[M]}}\), its input as \(\bar{u}^{\textsc{[M]}}\), and its output as \(\bar{y}^{\textsc{[M]}}=\sum_{i}^{N_{\textsc{g}}}x_{\textsc{ON}\,i}^{\textsc{[H]}}\hat{y}_{i}^{\textsc{[M]}}\).
Considering the reference models (11), we can write
\[\bar{\mathcal{L}}:\left\{\begin{array}{ll}\bar{x}^{\textsc{[M]}}(k+1)=& \bar{A}\bar{x}^{\textsc{[M]}}(k)+\bar{B}\bar{u}^{\textsc{[M]}}(k)+\bar{w}^{ \textsc{[M]}}(k)\\ \bar{y}^{\textsc{[M]}}(k)=&\bar{C}\bar{x}^{\textsc{[M]}}(k)+\bar{\gamma}\\ \end{array}\right. \tag{15}\]
where \(\bar{B}=\sum_{i}^{N_{\textsc{g}}}\alpha_{i}^{\textsc{[H]}}\hat{B}_{i}\), \(\bar{\gamma}=\sum_{i}^{N_{\textsc{g}}}x_{\textsc{ON}\,i}^{\textsc{[H]}}\hat{ \gamma}_{i}\), and \(\bar{w}^{\textsc{[M]}}=\sum_{i}^{N_{\textsc{g}}}x_{\textsc{ON}\,i}^{\textsc{[H] }}\hat{w}_{i}^{\textsc{[M]}}\). We also define the static gain of the ensemble as \(\bar{g}=\sum_{i}^{N_{\textsc{g}}}\alpha_{i}^{\textsc{[H]}}g_{i}\).
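The assembly of the ensemble quantities is a direct weighted sum; the sketch below illustrates it with invented numerical values for \(\hat{B}_{i}\), \(\hat{\gamma}_{i}\), \(g_{i}\), the HL on/off decisions and the sharing factors.

```python
import numpy as np

# Hypothetical per-unit reference-model input matrices, offsets and static gains.
B_hat = [np.array([[b], [0.0], [1.0]]) for b in (0.30, 0.28, 0.31, 0.29, 0.27)]
gamma = np.array([0.020, 0.021, 0.019, 0.022, 0.020])
g     = np.array([0.62, 0.60, 0.63, 0.61, 0.59])

x_on  = np.array([1, 1, 0, 1, 0])              # HL on/off decisions x_ON,i
alpha = np.array([0.5, 0.3, 0.0, 0.2, 0.0])    # HL sharing factors (sum to 1 over ON units)

# Ensemble quantities as in (15): B-bar, gamma-bar and the static gain g-bar.
B_bar     = sum(a * Bi for a, Bi in zip(alpha, B_hat))
gamma_bar = float(x_on @ gamma)
g_bar     = float(alpha @ g)
```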
**Remark 4**: _The gain consistency conditions (12) are necessary to guarantee that the ensemble gain correctly reflects the overall gains of the subsystems, given the specified load partition._
The set containing the reference deviation \(\bar{w}^{\textsc{[M]}}(k)\) is denoted as \(\bar{\mathcal{W}}\). It can be computed as discussed in [20]. More specifically, we can enforce - as discussed in Section III-C4 - \(\Delta u_{i}^{\textsc{[M]}}(k)\in\Delta\bar{\mathcal{U}}\), for all \(i=1,\ldots,N_{\textsc{g}}\) and for all values of \(\alpha_{i}^{\textsc{[M]}}\), where \(\Delta\bar{\mathcal{U}}=[-\Delta\bar{u},\Delta\bar{u}]\) for a given threshold \(\Delta\bar{u}\). As discussed, this is done to guarantee that \(\hat{w}_{i}^{\textsc{[M]}}(k)\in\mathcal{W}_{i}\), and also that \(\bar{w}^{\textsc{[M]}}(k)\in\bar{\mathcal{W}}=\bigoplus_{i=1}^{N_{\textsc{g}}}\mathcal{W}_{i}\) in all possible system configurations.
#### III-C4 Medium-level controller design
The ML MPC objective is to track the global fuel flow-rate target \(r=\bar{q}_{\textsc{f}}^{\textsc{Dom}}\), which depends on the HL solution. At any time instant \(k\), the HL share and mode signals, \((\alpha_{i}^{\textsc{[H]}},x_{\textsc{m}\,i}^{\textsc{[H]}})\), are re-sampled with sampling time \(T_{\textsc{M}}\), as \(\alpha_{i}^{\textsc{[M]}}(k)=\alpha_{i}^{\textsc{[H]}}(\lfloor k/\mu\rfloor)\), and are assumed to remain constant, i.e., \(\alpha_{i}^{\textsc{[M]}}(k+l)=\alpha_{i}^{\textsc{[M]}}(k)\), over the whole control horizon, i.e., \(\forall l=1,\ldots,N_{\textsc{M}}\). This implies that the ensemble model \(\bar{\mathcal{L}}\), (15), is invariant during the optimization horizon \(N_{\textsc{M}}\). To cope with the disturbance \(\bar{w}^{\textsc{[M]}}(k)\) in the ensemble model \(\bar{\mathcal{L}}\), the ML controller is designed according to a robust tube-based approach. The system is augmented and written in velocity form, as in [35]
\[\xi^{\textsc{[M]}}(k+1)=\mathcal{A}\xi^{\textsc{[M]}}(k)+\mathcal{B}\Delta \bar{u}^{\textsc{[M]}}(k)+\mathcal{H}\Delta\bar{w}^{\textsc{[M]}}(k) \tag{16}\]
with state vector \(\xi^{\textsc{[M]}}(k)=[\Delta\bar{x}^{\textsc{[M]}}(k),\varepsilon^{\textsc {[M]}}(k)]\), input \(\Delta\bar{u}^{\textsc{[M]}}(k)\), and disturbance \(\Delta\bar{w}^{\textsc{[M]}}(k)\). Matrices \(\mathcal{A},\mathcal{B},\mathcal{H}\)
can be trivially derived from (15).
The added state is \(\varepsilon^{\text{\tiny[M]}}(k)=\tilde{y}^{\text{\tiny[M]}}(k)-\hat{r}\), where the reference output \(\hat{r}\) is set as a decision variable of the OCP, as in [36], to ensure recursive feasibility and offset-free tracking capabilities in presence of continuous variations of the target values (which can be possibly infeasible): in a few words, \(\hat{r}\) is the closest feasible set-point to \(r\), at least in stationary conditions.
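For reference, one standard construction of the velocity-form matrices, consistent with the state \(\xi^{\text{\tiny[M]}}=[\Delta\bar{x}^{\text{\tiny[M]}},\varepsilon^{\text{\tiny[M]}}]\), is sketched below; the paper's actual derivation from (15) may differ in minor details.

```python
import numpy as np

def velocity_form(A_bar, B_bar, C_bar):
    """Build the augmented matrices of (16) for the state [Delta x-bar; epsilon]
    (one standard velocity-form construction, assumed to match the paper's choice)."""
    n, m = B_bar.shape
    p = C_bar.shape[0]
    A_aug = np.block([[A_bar,         np.zeros((n, p))],
                      [C_bar @ A_bar, np.eye(p)       ]])
    B_aug = np.vstack([B_bar, C_bar @ B_bar])
    H_aug = np.vstack([np.eye(n), C_bar])     # the disturbance enters like the state update
    return A_aug, B_aug, H_aug
```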
A nominal (undisturbed) model, used to formulate the OCP, can be associated with (16):
\[\tilde{\xi}^{\text{\tiny[M]}}(k+1)=\mathcal{A}\tilde{\xi}^{\text{\tiny[M]}}(k )+\mathcal{B}\Delta\tilde{u}^{\text{\tiny[M]}}(k) \tag{17}\]
whose variables are denoted by \(\tilde{\cdot}\).
To guarantee feasibility in the disturbed case, the constraints of the OCP with the nominal model must be suitably tightened. The tube-based approach requires the computation of a Robust Positively Invariant (RPI) set \(\mathcal{Z}\) - computed based on [37] - where \(\xi^{\text{\tiny[M]}}(k)-\tilde{\xi}^{\text{\tiny[M]}}(k)\) is guaranteed to lie if the following control law is applied to the real system,
\[\Delta\bar{u}^{\text{\tiny[M]}}(k)=\Delta\tilde{u}^{\text{\tiny[M]}}(k)+\mathcal{K}(\xi^{\text{\tiny[M]}}(k)-\tilde{\xi}^{\text{\tiny[M]}}(k)) \tag{18}\]
where \(\mathcal{K}\) is a gain matrix that makes the matrix \(\mathcal{A}+\mathcal{B}\mathcal{K}\) Schur stable. Namely, the real system is kept close to the nominal one, i.e.,
\[\xi^{\text{\tiny[M]}}(k+j)\in\tilde{\xi}^{\text{\tiny[M]}}(k+j)\oplus\mathcal{Z}\qquad\forall j\geq 1\]
So, the robust MPC problem is formulated on the nominal system (17), leading to a quadratic program (QP), whose optimization variables are the future nominal input trajectory, \(\delta\mathbf{\tilde{u}}(k)=[\delta\tilde{u}^{\text{\tiny[M]}}(k),\ldots,\delta\tilde{u}^{\text{\tiny[M]}}(k+N_{\text{\tiny M}}-1)]\), the initial condition of the nominal system, \(\tilde{\xi}^{\text{\tiny[M]}}(k)=(\delta\tilde{x}^{\text{\tiny[M]}}(k),\tilde{y}^{\text{\tiny[M]}}(k)-\hat{r})\), and the output reference point, \(\hat{r}\).
\[\min_{\begin{subarray}{c}\tilde{\xi}^{\text{\tiny[M]}}(k),\hat{r},\\ \delta\tilde{\mathbf{u}}(k)\end{subarray}}\|\hat{r}-r\|_{r}^{2}+\sum_{j\in\mathcal{J}}\left\{\|\tilde{\xi}^{\text{\tiny[M]}}(j)\|_{{}_{Q}}^{2}+\|\delta\tilde{u}^{\text{\tiny[M]}}(j)\|_{{}_{R}}^{2}\right\}\] (19a)
s.t.
\[\xi^{\text{\tiny[M]}}(k)-\tilde{\xi}^{\text{\tiny[M]}}(k)\in\mathcal{Z}\] (19b)
\[\tilde{\xi}^{\text{\tiny[M]}}(j+1)=\mathcal{A}\tilde{\xi}^{\text{\tiny[M]}}(j)+\mathcal{B}\delta\tilde{u}^{\text{\tiny[M]}}(j)\] (19c)
\[\tilde{u}^{\text{\tiny[M]}}(j)\in\tilde{\mathcal{U}}\] (19d)
\[\alpha_{i}^{\text{\tiny[M]}}(j)\,\tilde{u}^{\text{\tiny[M]}}(j)\in\tilde{\mathcal{U}}_{i}\] (19e)
\[x_{\textsc{ON}\,i}^{\text{\tiny[H]}}(h)\left[g_{i}\,\alpha_{i}^{\text{\tiny[M]}}(j)\,\tilde{u}^{\text{\tiny[M]}}(j)+\hat{\gamma}_{i}\right]\in\tilde{\mathcal{Y}}_{i}\] (19f)
\[\alpha_{i}^{\text{\tiny[M]}}(j)\,\tilde{u}^{\text{\tiny[M]}}(j)-\alpha_{i}^{\text{\tiny[M]}}(j-1)\,\tilde{u}^{\text{\tiny[M]}}(j-1)\in\Delta\tilde{\mathcal{U}}\] (19g)
\[\forall j\in\mathcal{J},\qquad\forall i=1,\ldots,N_{\text{g}}\]
\[\tilde{\xi}^{\text{\tiny[M]}}(k+N_{\text{\tiny M}})=0\] (19h)
\[\tilde{x}^{\text{\tiny[M]}}(k+N_{\text{\tiny M}})=\tilde{x}_{\text{ss}}\] (19i)
\[\tilde{u}^{\text{\tiny[M]}}(k+N_{\text{\tiny M}}-1)=\tilde{u}_{\text{ss}}\] (19j)
where \(\mathcal{J}=\{k,\ldots,k+N_{\text{\tiny M}}-1\}\). Moreover, \(\tilde{x}_{\text{ss}}\) and \(\tilde{u}_{\text{ss}}\) are given by
\[\begin{bmatrix}\tilde{x}_{\text{ss}}\\ \tilde{u}_{\text{ss}}\end{bmatrix}=\left[\begin{array}{cc}I_{n}-\hat{A}&-\bar{B}\\ \hat{C}&0_{m}\end{array}\right]^{-1}\left[\begin{array}{c}0_{n\times p}\\ I_{p}\end{array}\right](\hat{r}-\bar{\gamma})\]
The constraints (19i)-(19j) require the calculation of \(\tilde{x}^{\text{\tiny[M]}}(k-1)\), \(\tilde{u}^{\text{\tiny[M]}}(k-1)\), which can be evaluated based on
\[\begin{bmatrix}\tilde{x}^{\text{\tiny[M]}}(k-1)\\ \tilde{u}^{\text{\tiny[M]}}(k-1)\end{bmatrix}=\left[\begin{array}{cc}\hat{A}-I _{n}&\bar{B}\\ \hat{C}\hat{A}&\hat{C}\bar{B}\end{array}\right]^{-1}\left[\begin{array}{c} \Delta\tilde{x}^{\text{\tiny[M]}}(k)\\ \tilde{y}^{\text{\tiny[M]}}(k)\end{array}\right]\]
Differently from [35], the terminal constraint is a _steady-state_ condition for (17) at the last step of the prediction horizon. The computation of a terminal steady-state condition guarantees that the MPC problem is practically recursively feasible, with auxiliary control law \(\Delta\tilde{u}^{\text{\tiny[M]}}(k)=0\). This formulation avoids the computation of the Maximal Output Admissible Set (MOAS) required in [35]. It is worth noting that - similarly to the computation of the RPI set [37] - the calculation of the MOAS [38] is an iterative, time-consuming procedure, and any variation of the configuration would require the online re-computation of both the RPI set and the MOAS. At least the latter is avoided by forcing the system to reach a steady-state condition at the end of the prediction window; on the other hand, this might affect the promptness of the controller, reducing the optimal control action, since \(\Delta\tilde{u}^{*\text{\tiny[M]}}(k)\to 0\) as \(k\to N_{\text{\tiny M}}\). This can be mitigated by selecting a longer prediction window.
The initial condition of the nominal state is enforced by (19b). For all the time steps \(j\in\mathcal{J}\), the ML layer enforces the constraints (1) through (19d)-(19f). Moreover, as discussed in Section III-C1, in order to keep the disturbance term \(\bar{w}^{\text{\tiny[M]}}(k)\) bounded, we need to ensure that for each generator the input variation is limited, thanks to the (tightened) constraint (19g).
Constraints (19d)-(19g) are also imposed on the nominal system variables: this requires a proper tightening [35] of the original sets \(\bar{\mathcal{U}}\), \(\bar{\mathcal{U}}_{i}\), and \(\Delta\bar{\mathcal{U}}\), allowing us to define \(\tilde{\mathcal{U}}\), \(\tilde{\mathcal{U}}_{i}\), and \(\Delta\tilde{\mathcal{U}}\).
Note also that, while in [20] constraints on local outputs are not considered, in our application scenario they play a key role: they represent limitations on the gas available to each burner. To enforce \(y_{i}^{\text{\tiny[M]}}\in\mathcal{Y}_{i}\), we use its simplified "quasi steady-state" version (19f). The set \(\tilde{\mathcal{Y}}_{i}\) is computed by suitably tightening the set \(\mathcal{Y}_{i}\).
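The following cvxpy sketch transcribes the nominal part of problem (19) with the terminal steady-state condition; it keeps only the ensemble-level input constraints, omits the tube tightening, the per-unit constraints (19e)-(19f) and the exact terminal parametrization (19i)-(19j), and uses illustrative matrices, weights and bounds rather than the identified models of Section IV.

```python
import numpy as np
import cvxpy as cp

# Illustrative ensemble data (not the identified values of the case study).
n, p, N = 3, 1, 10                                   # states, outputs, horizon N_M
A = np.array([[1.2, -0.45, 0.05],
              [1.0,  0.0,  0.0 ],
              [0.0,  1.0,  0.0 ]])                   # companion-form A-bar (assumed)
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.30, 0.10, 0.0]])
Aa = np.block([[A, np.zeros((n, p))], [C @ A, np.eye(p)]])   # velocity form, cf. (16)
Ba = np.vstack([B, C @ B])

Q, R, S = np.eye(n + p), np.eye(p), 100.0 * np.eye(p)        # assumed weights
r = np.array([2.5])                                  # target coming from the HL layer
u_min, u_max, du_max = 0.3, 5.0, 0.4                 # (already tightened) bounds, assumed

dx0 = np.zeros(n)                                    # Delta x-bar at time k
y0, u_prev = np.array([2.0]), np.array([2.0])        # current output and last applied input

xi    = cp.Variable((n + p, N + 1))                  # nominal augmented state trajectory
du    = cp.Variable((p, N))                          # nominal input increments
r_hat = cp.Variable(p)                               # reachable reference, as in (19)

cost = cp.quad_form(r_hat - r, S)
cons = [xi[:n, 0] == dx0, xi[n:, 0] == y0 - r_hat]   # nominal initial condition
u = u_prev
for j in range(N):
    cost += cp.quad_form(xi[:, j], Q) + cp.quad_form(du[:, j], R)
    cons += [xi[:, j + 1] == Aa @ xi[:, j] + Ba @ du[:, j]]       # (19c)
    u = u + du[:, j]
    cons += [u >= u_min, u <= u_max, cp.abs(du[:, j]) <= du_max]  # (19d), (19g)
cons += [xi[:, N] == 0]                              # terminal steady-state condition (19h)

cp.Problem(cp.Minimize(cost), cons).solve()
print("reachable reference r_hat =", r_hat.value)
```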
#### III-C5 Transitions among configurations
When configuration transitions occur, i.e., when the high hierarchical level returns a new optimal value of the sharing factors, \(\alpha_{i}^{*[\text{\tiny M}]}(k)\neq\alpha_{i}^{*[\text{\tiny M}]}(k-1)\) at least for some subsystems, infeasibility issues may arise for two reasons: (i) the ensemble model changes with respect to the one used at the previous time step, since \(\bar{B}=\bar{B}\left(\alpha^{*[\text{\tiny M}]}(k)\right)\); (ii) it is not guaranteed that constraints (19d) and (19g) can be enforced recursively. The procedure adopted when configuration changes occur is the following:
* Apply \(\alpha_{i}^{\text{\tiny[M]}}(k)=\alpha_{i}^{*[\text{\tiny M}]}(k)\), \(\forall i=1,\ldots,N_{\text{g}}\), and solve the corresponding MPC optimization problem. If it is feasible, then the configuration change is accepted.
* Otherwise, the MPC problem is solved again, now including the sharing factors \(\alpha_{i}^{\text{\tiny[M]}}(k)\) among the optimization variables and adding the term \(\sum_{i=1}^{N_{\text{g}}}\|\alpha_{i}^{\text{\tiny[M]}}(k)-\alpha_{i}^{*[\text{\tiny M}]}(k)\|^{2}\) to the cost function, in order to steer \(\alpha_{i}^{\text{\tiny[M]}}(k)\) to the values \(\alpha_{i}^{*[\text{\tiny M}]}(k)\) selected as optimal by the high-level optimizer.
**Remark 5**: _The introduction of the sharing factors as additional decision variables transforms problem (19) from a QP into a nonlinear program. In fact, the dependence of the model on \(\alpha_{i}^{\text{\tiny[M]}}\) implies that a number of elements of problem (19) depend upon \(\alpha_{i}^{\text{\tiny[M]}}\) in a non-trivial way, e.g., the gain \(\mathcal{K}\), the RPI set \(\mathcal{Z}\) (to be used in constraint (19b)), and the set tightening._
We can here address this issue by reformulating (19) in a slightly different, but consistent, way, to be applied exclusively during the transitions. First of all, to avoid the use of \(\mathcal{Z}\), we replace (19b) with the equality \(\tilde{\xi}^{\text{\tiny M}}(k)=\xi^{\text{\tiny M}}(k)\). Also, due to Assumption 1, \(\hat{A}\) is Schur stable. Thus, we can adopt, during the transition, an auxiliary law with \(\mathcal{K}=0\). So, the input applied to the model ensemble is not corrected by (18). A final remark is in order: to support transitions, the tightening operations to be performed on sets \(\bar{\mathcal{U}}\), \(\Delta\bar{\mathcal{U}}\), \(\bar{\mathcal{U}}_{i}\), and \(\tilde{\mathcal{Y}}_{i}\) should be sufficiently general to be compatible with all ensemble models of interest to avoid possible feasibility losses.
## IV Simulations
The hierarchical control scheme is tested in simulation, considering a use case with \(N_{\text{g}}=5\) steam generators that operate at a pressure of \(57\) bar and cooperate to serve a common load. The boilers that form the ensemble are slightly different from each other, being characterized by dissimilar dimensions and efficiencies. Also, they are constrained to work within different operating ranges, i.e., minimum/maximum generated steam. Their parameters are reported in Table I.
The natural gas price \(\lambda_{\text{g}}\) is assumed fixed and equal for all the generators, while the fixed operating costs in ON and startup modes, \(\lambda_{\text{\tiny ON}\,i}\) and \(\lambda_{\text{\tiny ST}\,i}\) respectively, differ among generators. The gas density is \(\rho_{\text{\tiny g}}=0.71\)\([kg/m^{3}]\) and the tube specific heat is \(c_{\text{\tiny P}}=0.5\)\([kJ/(kgK)]\).
The system is characterized by the following global constraints \(\tilde{\mathcal{Y}}=[0.1227,4.220]\)\([kg/s]\) and \(\tilde{\mathcal{U}}=[0.089,6.0]\)\([kg/s]\), determined by constraints of the distribution network.
The sampling times of the multi-layer architecture are reported in Table II.
The low-level controllers have been implemented in discrete-time with a fast sampling time \(\tau=10\) s; their parameters are tuned to stabilize the system with a settling time of \(120\) s.
All systems are assumed to have the same compensator \(\mathbf{C}\), with \(K_{\text{\tiny P}}=0.30\) and \(K_{\text{\tiny I}}=0.10\) and regulator \(\mathbf{R}\), with \(K_{\text{\tiny P}}=0.87\) and \(K_{\text{\tiny I}}=3.5\cdot 10^{-4}\).
The discrete-time linear model (5) is identified on a data-set generated by simulating the closed-loop nonlinear model \(\mathcal{S}_{i}^{\text{\tiny CL}}\), with sampling time \(T_{\text{\tiny M}}=30\) s. For each boiler, the identified model \(\mathcal{L}_{i}^{\text{\tiny CL}}\) is characterized by \(n_{\text{\tiny f}}=3\), \(n_{\text{\tiny b}}=2\), and \(n_{\text{\tiny k}}=1\), so that all systems \(\mathcal{L}_{i}^{\text{\tiny CL}}\) have the same order \(n\). The high-level optimization is executed in receding horizon with a slow sampling time \(T_{\text{\tiny H}}=10\) min.
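A simplified stand-in for this identification step is a plain least-squares ARX regression, sketched below; the regressor construction assumes the input-output convention implied by the gain formula in (12) and is not the exact identification procedure used for the case study.

```python
import numpy as np

def fit_arx(y, u, nf=3, nb=2, nk=1):
    """Least-squares fit of dy(k) = -sum_j f_j*dy(k-j) + sum_j b_j*u(k-nk-j+1),
    with dy a deviation variable around the operating point (assumed convention)."""
    dy = y - y.mean()
    rows, targets = [], []
    start = max(nf, nb + nk - 1)
    for k in range(start, len(y)):
        past_y = [-dy[k - j] for j in range(1, nf + 1)]
        past_u = [u[k - nk - j + 1] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(dy[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta[:nf], theta[nf:]          # f coefficients, b coefficients
```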
The optimization (9) considers a prediction horizon of \(N_{\text{\tiny H}}=10\), which is long enough to capture the high-level dynamics of the sub-systems - including their start-up dynamics - and the forthcoming fluctuations of the users' global demand, \(\bar{q}_{\text{\tiny s}}^{\text{\tiny Dom}}(h)\) for \(h=0,\ldots,N_{\text{\tiny H}}\). The latter is given as a piece-wise constant forecast of the users' demand, which can be suitably updated at any iteration of the rolling window of the high-level optimization.
Regarding the high-level dynamic models of the steam generators, each unit is characterized by a hybrid automaton, as presented in Section II-D, with the dwell times reported in Table III. In this case study, all generators have the same transition times.
It is worth emphasizing that, as reported in Figure 5, the reference trajectory is naturally given in terms of steam demand \(\bar{q}_{\text{\tiny s}}^{\text{\tiny Dom}}\) and converted into an equivalent gas target using the static gain of the ensemble, \(\bar{q}_{\text{\tiny g}}^{\text{\tiny Dom}}=\bar{g}\bar{q}_{\text{\tiny s}}^{\text{\tiny Dom}}+\bar{\gamma}\). In particular, the reference target for the fuel flow-rate of the ensemble incorporates only the units in mode ON. While the high-level optimizer considers the consumption and the relative costs of the steam generators also in startup mode, we recall that the ensemble model considers only the producing boilers, i.e., those in mode ON. The scope of the MPC layer is indeed robust reference tracking for the ensemble and not an economic optimization, which is the target of the high-level optimization.
| Sampling time | \(\tau\) | \(T_{\text{\tiny M}}\) | \(T_{\text{\tiny H}}\) |
| --- | --- | --- | --- |
| Value | 10 s | 30 s | 10 min |

Table II: Multi-layer sampling times.

| **Boiler n** | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| \(V_{\text{\tiny F}}\) \([m^{3}]\) | 1.21 | 1.15 | 1.28 | 1.14 | 1.32 |
| \(M_{\text{\tiny T}}\) \([t]\) | 5.49 | 5.22 | 5.83 | 5.06 | 5.99 |
| \(\eta\) \([-]\) | 0.90 | 0.92 | 0.89 | 0.95 | 0.99 |
| \(q_{\text{\tiny s}}^{\text{\tiny Min}}\) \([kg/s]\) | 0.1 | 0.09 | 0.09 | 0.09 | 0.1 |
| \(q_{\text{\tiny s}}^{\text{\tiny Max}}\) \([kg/s]\) | 1.26 | 1.16 | 1.13 | 1.20 | 1.25 |
| \(q_{\text{\tiny g}}^{\text{\tiny Min}}\) \([kg/s]\) | 0.125 | 0.127 | 0.129 | 0.126 | 0.123 |
| \(q_{\text{\tiny g}}^{\text{\tiny Max}}\) \([kg/s]\) | 0.859 | 0.844 | 0.846 | 0.841 | 0.839 |
| \(\lambda_{\text{\tiny g}}\) \([€/m^{3}]\) | 0.22 | 0.22 | 0.22 | 0.22 | 0.22 |
| \(\lambda_{\text{\tiny ON}}\) \([€/T_{\text{\tiny H}}]\) | 40 | 30 | 22 | 55 | 45 |
| \(\lambda_{\text{\tiny ST}}\) \([€/T_{\text{\tiny H}}]\) | 100 | 130 | 120 | 70 | 80 |

Table I: Boiler parameters.

| **Boiler** | \(\chi_{\text{\tiny OFF}\to\text{\tiny ST}}\) | \(\chi_{\text{\tiny ST}\to\text{\tiny ON}}\) | \(\chi_{\text{\tiny ON}\to\text{\tiny OFF}}\) |
| --- | --- | --- | --- |
| \(i\) | 2 | 2 | 3 |

Table III: Hybrid automaton dwell times (in HL steps \(T_{\text{\tiny H}}\)).
As discussed in Section III-C1, a requirement is that all the reference models share the same dynamic and output matrices \(\hat{A}_{i}\) and \(\hat{C}_{i}\), respectively. Conceptually, they can be arbitrarily chosen by the designer, e.g., by imposing a desired dynamic matrix or an "averaged" one for all the subsystems of the ensemble. In this work, we select a specific unit as the reference dynamic model: therefore, we impose the matrices of the first steam generator for the reference model, i.e. \(\hat{A}_{i}=A_{1}\) and \(\hat{C}_{i}=C_{1}\). As a consequence, the state reduction map is simply \(\beta_{i}=I_{{}_{n}}\) for all the subsystems and gain consistency conditions reduce to (12).
In Figure 6, the comparison of the step response of each system \(\mathcal{L}_{i}^{{}_{\text{CL}}}\) with its reference model \(\hat{\mathcal{L}}_{i}\) is shown: the gain consistency conditions (12) guarantee that at steady state the actual and reference models reach the same value.
The maximum amplitude of the disturbance \(\bar{w}\) in the ensemble model is evaluated by imposing the maximum variation of the input equal to \(\Delta\bar{u}=0.4[kg/s]\), resulting in \(\|\bar{w}\|_{\infty}\leq 1\times 10^{-3}[kg/s]\).
We compare, in simulation, the performance of the proposed control scheme (HL OPT) with two alternative ones, obtained with different strategies, see Table IV: NO HL, where the sharing factors are not optimized during operation but predefined and fixed (e.g., by equally splitting the load on the available units, \(\alpha_{i}=1/\sum x_{\text{\tiny ON}\,i}\)), and HL \(\eta\)-OPT, where the units are activated in a round-robin fashion according to their efficiency ranking. For all these schemes, the robust MPC (19) is synthesized on the ensemble configuration defined at the HL, using the steady-state terminal condition, with a prediction horizon \(N_{\text{\tiny M}}=10\). The constraints, imposed according to the tube-based paradigm, are enforced in a tightened way on the unperturbed system variables.
The tracking performances are also compared with the ones of a centralized MPC scheme (C-MPC), which controls directly all the subsystem inputs, \(u_{i}\). Two scenarios are proposed: _Test 1_ considers a maintenance schedule for Boiler 3, with a piece-wise constant demand; _Test 2_ shows the behavior of the schemes with a noisy demand, to assess the operational cost and the tracking performance.
#### IV-C1 Test 1
We assume Boiler \(3\) to be unavailable in the time range \(t=[50,80)\) min and we compare the behavior of the proposed scheme with HL \(\eta\)-OPT and with NO HL. The idea here is to focus on the role of the HL control layer on the overall performances.
| **Strategy** | **HL** | **ML-MPC** | **LL** | **Test** |
| --- | --- | --- | --- | --- |
| **HL OPT** | \(\alpha_{i}\leftarrow\) (9) (\(l_{i}\) eq. (10)) | Ensemble | PI | 1&2 |
| **NO HL** | \(\alpha_{i}\leftarrow 1/\sum x_{\text{\tiny ON}\,i}\) | Ensemble | PI | 1&2 |

Table IV: Considered control strategies.
Figure 7 shows the tracking of the steam demand with the three considered strategies. Note that the overshoots are due to the plug/unplug operations. Figure 8 shows the reference tracking performance for the natural gas signal. Due to the unequal overall gain of the ensemble in the three alternative configurations, the natural gas trajectories are different. This is more evident in the period \(t=[30,70)\) min, even if the steam demand is the same, and is related to the differences in the subsystems' efficiencies. Recall that the gas consumption of each boiler is given by (7a) and the ensemble efficiency is given by the \(\alpha\)-weighted combination of such equations.
In Figure 9, the operating modes of each unit are shown: at \(t=50\) min, Boiler \(3\) is forced to OFF mode due to its prescribed unavailability (e.g., for maintenance reasons), shown by a gray area. With HL optimization, Boiler \(4\) is activated in place of Boiler \(3\). Note that, even if Boiler \(5\) has a higher efficiency than Boiler \(4\), the latter is chosen in the first place due to its lower start-up cost \(\lambda_{{}_{\mathrm{ST}}i}\); with HL \(\eta\)-OPT, instead, the different efficiency-based criterion for boiler activation leads to slightly larger overall costs.
As shown in Figure 10, when the global steam demand rises, new generators are added to the ensemble based not only on the subsystem efficiency rank, but also on the associated operating costs \(\lambda_{{}_{\mathrm{ON}}i}\) and \(\lambda_{{}_{\mathrm{ST}}i}\), which are different for each generator. In the NO HL scenario, the weights \(\alpha_{i}\) are adjusted only to account for the fact that just four generators are available. When the transition is sharp, the abrupt change of \(\alpha_{i}\) could drive one of the subsystems out of its local ranges. If so, the MPC optimization problem may become infeasible. In response to that, the control architecture computes a transient solution by considering the sharing factors as an additional set of optimization variables, as discussed in Section III-C5: the nonlinear
Figure 8: Ensemble gas consumption. The reference target depends on the ensemble configuration, since different sharing factors change the overall gain.
Figure 7: Steam demand for the ensemble \(\bar{q}_{{}_{\mathrm{s}}}^{\mathrm{Dom}}\) (black) tracked by Ensemble-MPC at ML, with HL OPT strategy (solid blue), HL \(\eta\)-OPT (dot-dashed orange), and NO HL (dotted red).
Figure 9: Operating mode of each subsystem. Boiler \(3\) temporary unavailability shown by the gray region.
program provides the closest feasible configuration to the target computed at the top level. In Figure 10, the sharing factors computed at the high level with the three strategies are shown by different line styles. When the high-level solution is reachable in one medium-level step, the optimal and actual points coincide and just the target is shown; otherwise, the ensemble is guided to the high-level optimal configuration by a smooth shift through temporary configurations (a diamond for HL OPT and a star for HL \(\eta\)-OPT), computed by solving the NLP. Figure 11 shows that also the local constraints are respected.
The simulation is executed in Matlab on an Intel(r) Core(tm) i7-8550U CPU @ 1.80 GHz with 16 GB of RAM, using the SCIP solver [39] for the HL optimization and the transitional configurations, and quadprog for the medium-level QP. The HL optimization takes an average time of \(3.28\) s (\(\pm 0.53\) s), while the ML QP takes \(0.13\) s (\(\pm 0.02\) s), of which the RPI computation requires \(0.08\) s. Instead, the NLP for the transitional configurations requires up to \(40\) s.
#### IV-C2 Test 2
A second test is performed to compare the performance of the proposed scheme and to demonstrate the robustness of the control architecture in the presence of possibly significant errors on the demand forecast. Here the focus is both on the HL, considering the operational cost, and on the ML, by measuring the tracking performance. The latter is assessed by considering a further alternative scheme consisting of a centralized MPC, which governs directly the input \(u_{i}\) of each subsystem. It is worth noting that this controller cannot manage the HL dynamics related to mode transitions, i.e., start-ups and plug-and-play operations; thus, for this case, it does not really make sense to quantify the related HL operational cost. However, given a fixed number of active generators, it represents the best tracking controller. The tracking performance metric is given by \(J_{\mathrm{M}}^{\mathrm{tr}}=\sum_{k}\|y(k)-r\|^{2}\), while the operational cost \(J^{\mathrm{op}}\) is computed as in (10).
The disturbed demand is given by \(r\left(k\right)=\bar{q}_{s}^{\mathrm{\tiny{Dom}}}\left(\left\lfloor k/\mu\right\rfloor\right)+v\left(k\right)\), with the noise term \(v\left(k\right)\sim\mathcal{N}\left(0,\ \sigma\right)\), where \(\sigma=1.25\%\,\bar{q}_{s}^{\mathrm{\tiny{Dom}}}\). In addition, at \(t=90\) (resp. \(100\)) min a downward (resp. upward) step disturbance is applied, with \(d=\pm 4\%\,\bar{q}_{s}^{\mathrm{\tiny{Dom}}}\), thus with the
Figure 11: In each subplot, the gas flow-rate \(q_{\mathrm{g,\,i}}\) of the units.
Figure 10: High-level sharing factors with HL OPT strategy (solid, transient \(\hat{\alpha}\) diamonds), HL \(\eta\)-OPT (dot-dashed, transient \(\hat{\alpha}\) stars), and NO HL (dotted).
noise term2 \(v\left(k\right)\sim\mathcal{N}\left(d,\ \sigma/10\right)\).
Footnote 2: Note that, merely for clarity of the resulting plots, we have reduced the high-frequency component of the noise by setting a lower standard deviation.
In Figure 12, the tracking of the natural gas target for the ensemble is reported for the different control strategies. Note that the different overall efficiency of the ensemble leads to distinct gas flow-rates, even if the steam demand is the same, see Figure 13. At the medium level this demand is disturbed by an additive noise term, and the mismatch with respect to the piece-wise constant reference is managed by the MPC: as the reference \(\hat{r}\) is a decision variable at the ML, this MPC formulation can deal also with infeasible references. Typically, an increased demand might become unreachable, while a lower actual demand can be easily managed: in \(t=\left[90,\ 100\right)\) min, with the downward step disturbance, the ML can track the actual demand by keeping the same sharing factors. Instead, in \(t=\left[100,\ 110\right)\) min, the global generation cannot reach the actual target. However, the controller robustly provides a feasible solution, which minimizes the distance from the target. The event-based optimization of the HL sharing factors is applied at \(t=102.5\) min, when a bias on the demand is detected: a new HL optimization is triggered on an updated demand forecast, which includes the bias. The sharing factors are adapted to achieve the increased demand. The best tracking performance is obtained by C-MPC, which does not operate on an overall ensemble model but controls each subsystem directly. However, it does not provide flexibility to system changes, nor scalability.
A good tracking performance is also given by the NO HL scheme, where the demand at the ML is tracked by an ensemble-MPC with all the units sharing the load equally. The controller at the ML is the same used for HL \(\eta\)-OPT and HL OPT, where the plug-&-play operations negatively impact the tracking. Note that, instead, the overall operational cost \(J^{\text{\tiny{op}}}\) is better when the HL optimization is performed. The operational and tracking costs of the four strategies are compared in Table V. Regarding the computational complexity, the proposed method keeps the dimension of the QP problem to be solved at the medium level constant, by relying on the ensemble model. On the contrary, the
| **Cost** | NO HL | HL OPT | HL \(\eta\) OPT | C-MPC |
| --- | --- | --- | --- | --- |
| \(J^{\text{\tiny op}}/J^{\text{\tiny op}}_{\text{NO HL}}\) \([-]\) | 1.00 | 0.78 | 0.80 | - |
| \(J^{\text{\tiny tr}}/J^{\text{\tiny tr}}_{\text{C-MPC}}\) \([-]\) | 2.30 | 3.09 | 3.48 | 1.00 |

Table V: Operational cost \(J^{\text{\tiny op}}\) (scaled on the NO HL cost) and tracking cost \(J^{\text{\tiny tr}}\) (scaled on the C-MPC cost).
Fig. 12: Natural gas consumption of the ensemble. Comparison of the four strategies.
Fig. 13: Ensemble steam flow-rate. Comparison of the four strategies.
dimension of the QP with the C-MPC grows linearly with the number of considered units, see Table VI. A proportional dimension increase affects also the high-level MIP: in this case the computational impact is greater, although it is still within the HL sampling time, \(T_{\textsc{h}}=10\) min. Note, however, that the HL optimization can run offline, and so its computational complexity does not impact the real-time feasibility of the hierarchical scheme.
## V Conclusions
In this paper, a hierarchical control scheme has been proposed for the coordination of an ensemble of steam generators that must cooperate to fulfill a common load. The definition of an ensemble reference model, as proposed here, makes it possible to solve the medium-level tracking MPC in a scalable and flexible way, as its dimension does not grow with the number of steam generators in the ensemble. Thanks to the model reformulation, the ensemble model can be simply obtained from the high-level solution and updated online. The model configuration is determined by the high-level mixed-integer optimization, which computes the optimal number of generators to be included in the ensemble and their shares of steam production by minimizing the operating cost and considering global and subsystem constraints.
The accuracy of the demand forecast impacts the solution quality: generally, a forecast mismatch is managed at the medium level, with a small degradation of the overall cost. However, if units are committed with a greedy policy and the active ones are already working at their maximum, any higher actual demand cannot be fully sustained, as an additional boiler would be needed, but its start-up dynamics might impede it. This can be managed by tightening the subsystem input/output constraints at the HL to prevent such a condition, even if feasibility at the medium level is guaranteed by the presence of the reference point among the decision variables. How to properly set this tightening will be studied in future work. Future work will also consider the improvement of the multi-layer scheme by comparing the overall performance with the implementation of an additional low-level shrinking-horizon MPC to further address the local model mismatch. We also envision solving the high-level optimization in a distributed framework.
|
2310.13787 | Enhancing Illicit Activity Detection using XAI: A Multimodal Graph-LLM
Framework | Financial cybercrime prevention is an increasing issue with many
organisations and governments. As deep learning models have progressed to
identify illicit activity on various financial and social networks, the
explainability behind the model decisions has been lacklustre with the
investigative analyst at the heart of any deep learning platform. In our paper,
we present a state-of-the-art, novel multimodal proactive approach to
addressing XAI in financial cybercrime detection.
We leverage a triad of deep learning models designed to distill essential
representations from transaction sequencing, subgraph connectivity, and
narrative generation to significantly streamline the analyst's investigative
process. Our narrative generation proposal leverages LLM to ingest transaction
details and output contextual narrative for an analyst to understand a
transaction and its metadata much further. | Jack Nicholls, Aditya Kuppa, Nhien-An Le-Khac | 2023-10-20T19:33:44Z | http://arxiv.org/abs/2310.13787v1 | # Enhancing Illicit Activity Detection using XAI: A Multimodal Graph-LLM Framework
###### Abstract.
Financial cybercrime prevention is an increasing issue with many organisations and governments. As deep learning models have progressed to identify illicit activity on various financial and social networks, the explainability behind the model decisions has been lacklustre with the investigative analyst at the heart of any deep learning platform. In our paper, we present a state-of-the-art, novel multimodal proactive approach to addressing XAI in financial cybercrime detection.
We leverage a triad of deep learning models designed to distill essential representations from transaction sequencing, subgraph connectivity, and narrative generation to significantly streamline the analyst's investigative process. Our narrative generation proposal leverages LLM to ingest transaction details and output contextual narrative for an analyst to understand a transaction and its metadata much further.
financial cybercrime, large language models, graph learning, graph neural networks, fraud detection, cryptocurrency
detail to aid in the downstream reporting requirements. Another avenue of generation is in the area of Suspicious Activity Reports (SARs), a regulatory requirement from financial institutions and various merchants when engaging in potentially illicit transactions or activity.
In Section 2 we present the relevant background information. In Section 3 we describe the related work and the gaps we have identified, to the best of our knowledge. In Section 4 we describe our methodology. Section 5 contains our discussion of techniques, including the direction and future work of our multimodal architecture and pipeline, such as human-in-the-loop review of transaction narratives. Section 6 presents our concluding remarks.
## 2. Background
The background section of our paper outlines financial cybercrime and the multiple avenues our framework can be applied to. We also cover background on graphs and their applications in tackling financial cybercrime and the associated graph deep learning algorithms that can be applied to the networks. We cover LLMs and how their embeddings are applied in our multimodal architecture.
### Financial Cybercrime
Financial cybercrime has been defined as "a combination of financial crime, hacking, and social engineering committed over cyberspace for the sole purpose of illegal economic gain" (Bordes, 2017). It encapsulates a large breadth of financial crimes that take place over cyberspace and that are evolving rapidly, with new avenues for extorting, scamming, and defrauding people as technologies like cryptocurrency enable the transfer of value without the requirement of a financial intermediary.
Machine learning methods have evolved not only to screen the transactions of a financial institution and prevent illicit transactions from executing on its networks, but also to monitor the behaviour of a user and anticipate anomalous activity, identifying unusual behaviour indicative of illicit activity and thereby protecting the customer further.
### Graph Learning
Graphs, or networks, are relational datasets which have connections between data points. A graph is represented as \(G=(V,E)\), where \(V\) is the set of nodes and \(E\) is the set of edges, or connections, which captures the relationships between the nodes.
Graph deep learning (GDL) models, such as Graph Attention Networks (GATs) or Graph Convolutional Networks (GCNs) use the graph datasets as inputs to calculate embeddings or representations that capture structure and relationships between data points. They can be used in node classification tasks including transaction monitoring (Bordes, 2017). Our use of GDL is to calculate embeddings that capture subgraph structures on networks with transactions of interest. The embeddings will be extracted and concatenated with GPT embeddings to identify and discover similar structures based on transaction sequence embeddings.
### Large Language Models
Large Language Models (LLMs) are highly scaled pre-trained language models (PLMs) with billions more parameters, increasing their capacity on downstream tasks (Krishnaman et al., 2017). LLMs are predominantly constructed using the Transformer architecture (Krishnaman et al., 2017), wherein layers of multi-head attention are stacked in a deep neural network.
LLMs like ChatGPT by OpenAI have been a breakthrough in artificial intelligence modelling and have seen wide application in industry and research. A survey by Yang et al. (Yang et al., 2018) presented the various practical downstream natural language processing (NLP) tasks to which people are applying LLMs. These include: i) traditional language understanding, covering basic tasks like text classification and named entity recognition (NER); ii) natural language generation, split into two categories, namely paragraph summarisation and "open-ended" generation such as code creation; iii) knowledge-intensive tasks, such as closed-book question answering, which a person would normally only pass with extensive memorisation of real-world knowledge; and iv) reasoning ability, which addresses the model's capacity for 'commonsense', where an LLM needs to retain factual knowledge while simultaneously performing several inference steps on those retained facts.
LLMs use text embeddings to relate words and structure sentences, and these embeddings are crucial to their ability to perform reasoning and generation tasks. To leverage LLMs for tasks outside of prepackaged LLM suites like ChatGPT, we require access to the underlying embeddings generated by the models. DaVinci (Dai et al., 2017) is an example of an LLM that exposes its embeddings, allowing integration with other deep learning embeddings.
#### 2.3.1. Critic Prompts
As we generate narratives to capture the context of a given transaction, we wish to validate these outputs. An emerging technique to validate or improve on a generative prompt is to pass it through another LLM (Bordes, 2017; Chen et al., 2017). We deploy a similar methodology in our architecture to verify and validate our generated narratives to ensure that the context provided to the analyst is sound.
### Combining Graph Deep Learning, Transformer and LLMs
The application of LLMs to graphs is still at an early stage. Our contribution includes a novel method leveraging LLMs and pre-trained Transformer models for a practical use case in financial cybercrime investigation.
We leverage several techniques to create our multimodal architecture. Our methodology can be broken up into three core areas: i) Transaction Sequence Embeddings, ii) Graph Embeddings for Connectivity, and iii) Narrative Generation for Contextualisation.
Embeddings provide powerful avenues for tabular deep learning application (Chen et al., 2017). Models like GPTs and GNNs condense large, complex data structures like a cryptocurrency network down to numerical representations that can be used in discovery and inference of specific activity. Our paper aims to concatenate embeddings which capture multiple representations of a transaction network, and through similarity measurements discover potential illicit activity or relevant investigative information for an analyst. This is a novel approach to processing embeddings by multiple models. Many techniques include the use of embeddings as direct features
in another deep learning network with the ambition of increasing evaluation metrics.
## 3. Related Work
Research has been published in the domain of applying LLMs to predicting outputs on graph datasets. Work by Chen et al. (Chen et al., 2015) explored the potential of LLMs in learning on Text-Attributed Graphs (TAGs). They explain the limitations of the shallow embeddings produced by Bag-of-Words (Bag and Words, 2015) and Word2Vec (DevDev et al., 2015) and their difficulty in processing polysemous words (Krizhevsky et al., 2014). With the advent of LLMs and the breakthrough of products like ChatGPT, they posed two hypotheses for LLMs to tackle graph learning and, more specifically, node classification. The first was whether they could integrate LLM text embeddings into their data pipeline to improve GNN classification, and the second was whether they could deploy an LLM as a predictor in a node classification task. They experimented on Cora, PubMed, OGBN-arXiv, and OGBN-products. Their experimentation revealed that enhancing node features with LLM embeddings improves the performance of the GNN classifiers.
There have been other breakthroughs in the joint pre-training of different data types by coupling text deep learning with images and video. CLIP (Krizhevsky et al., 2014) matches text encodings and image embeddings in the same representation space (Krizhevsky et al., 2014), allowing applications that parse video and images and extract text-aligned features through computer vision. A survey by Zhang et al. (Zhang et al., 2016) details the extensive applications of LLMs in various fields such as video, music, gaming, and graphs. LLM applications in the area of graphs are predominantly in graph generation, rather than in node classification as demonstrated by Chen et al. (Chen et al., 2015).
To the best of our knowledge, research is sparse on a multimodal, holistic approach to tackling financial cybercrime with the implementation of transformers, LLMs, and GNNs.
## 4. Methodology
The XAI system proposed aims to assist analysts and auditors in their investigative tasks. Users can easily start their investigation by inputting natural language queries into a search interface. In response, the system provides relevant answers, malicious addresses with context descriptions, and transaction graphs supporting the query.
The methodology we adopt is multi-step. First, embeddings are generated to capture transaction sequences, connectivity, and their relevance to external events. We leverage a pre-trained Transformer network which is trained specifically to capture the transaction sequences. To incorporate and contextualise information from external sources, a GPT-based zero-shot method is used to generate narratives from the events surrounding the transaction and its embeddings. To capture the connectivity of transactions and their neighbourhood, we train a graph transformer network, capturing the complexity of interactions, and use the embeddings of each transaction at answering time. When an analyst poses a query, we search these embeddings to identify accounts exhibiting behaviours consistent with the question's intent. Additionally, we probe the context embeddings related to the account in question to gather relevant data. Finally, by searching through graph embeddings, we can pinpoint accounts that share similar transactional patterns. Figure 1 displays the multimodal approach used to capture multiple layers of embeddings.
### Embeddings
**Transaction Embeddings**: Given a transaction \(T\), its embedding, \(E_{T}\), captures its attributes. We employ a BERT-based model to generate these embeddings, allowing for high-dimensional sequential transaction representations. The model used is a fine-tuned architecture of BERT specific to support transaction sequences (Dev et al., 2015).
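A minimal sketch of this step is shown below; it serialises a transaction sequence to text and embeds it with a generic, publicly available BERT encoder ("bert-base-uncased") as a stand-in for the fine-tuned transaction-sequence model referenced above, with mean pooling as one simple pooling choice.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def transaction_embedding(tx_sequence: list[str]) -> torch.Tensor:
    """Embed a serialised transaction sequence (stand-in for the fine-tuned model)."""
    text = " [SEP] ".join(tx_sequence)
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = enc(**batch).last_hidden_state     # (1, tokens, hidden)
    return out.mean(dim=1).squeeze(0)            # mean-pooled E_T
```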
**Graph Embeddings for Connectivity**: The Ethereum transaction network is mapped to a graph \(G\) where nodes represent transactions and edges symbolize their connectivity. We employ a GAT to produce embeddings \(E_{G^{\prime}}\) for a transaction graph involving an account that captures localized transactional patterns.
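A compact PyTorch Geometric sketch of such a GAT encoder is given below; the layer sizes and attention heads are illustrative choices, not the configuration used in our experiments.

```python
import torch
from torch_geometric.nn import GATConv

class TxGAT(torch.nn.Module):
    """Two-layer GAT producing one embedding E_G per node of the transaction graph."""
    def __init__(self, in_dim: int, hid: int = 64, out_dim: int = 128, heads: int = 4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid, heads=heads)
        self.conv2 = GATConv(hid * heads, out_dim, heads=1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)   # node embeddings capturing local connectivity
```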
**Narrative Generation for Transaction Contextualisation**:
For each transaction \(T\), an associated narrative \(N\) provides insights in a digestible manner. Such narratives can assist analysts, auditors, or even regular users in understanding the context, significance, and potential ramifications of a transaction without delving into the technical details. First, we gather transaction details (\(T\)), including sender and receiver addresses, amount, date, and other metadata to generate narratives. The meta-information (\(M\)) includes the addresses involved, such as known associations, previous transaction behavior, etc. External event data (\(E\)), such as significant price fluctuations, security breaches, or network upgrades, which may influence the transaction context, is integrated to understand the transaction context. For example, "On [Date], [Amount] was transferred from [Address A] to [Address B]. This transaction occurred during [External Event]. Notably, [Address A/B] has [Meta Information]." This narrative highlights the significance of the transaction, the account types involved, and the external event relevance. The details about \(T\), \(M\), and \(E\) are given to the LLM, which utilizes its capability to craft a detailed narrative \(N\) in a zero-shot manner.
Figure 1. The unprocessed transactional network is the example used in this diagram, which is then passed through a multimodal architecture extracting multiple layers of embeddings capturing different representations and relationships in the dataset. For each transaction in the dataset: BERT extracts transaction sequence embeddings, GAT captures the subgraph embeddings, and GPT-4 goes through a cycle of transaction narrative contextual generation with narrative self-critiquing, and then the final transaction narrative is converted using DaVinci to a visible embedding.
**Narrative prompt:** "Given the provided cryptocurrency transaction details, its associated metadata, and relevant external events, generate a concise 3-point narrative encapsulating the transaction's essence."
{transaction_details}
{meta_information}
{external_events}
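The sketch below shows how this prompt can be assembled programmatically; the field names (`date`, `amount_eth`, `sender`, `receiver`) and the `call_llm` helper are hypothetical placeholders for whatever transaction schema and chat-completion client are in use.

```python
NARRATIVE_PROMPT = (
    "Given the provided cryptocurrency transaction details, its associated "
    "metadata, and relevant external events, generate a concise 3-point "
    "narrative encapsulating the transaction's essence.\n\n"
    "Transaction details: {transaction_details}\n"
    "Meta information: {meta_information}\n"
    "External events: {external_events}\n"
)

def build_narrative_prompt(tx: dict, meta: dict, events: list[str]) -> str:
    """Fill the zero-shot narrative prompt with one transaction's context."""
    return NARRATIVE_PROMPT.format(
        transaction_details=(
            f"on {tx['date']}, {tx['amount_eth']} ETH was sent "
            f"from {tx['sender']} to {tx['receiver']}"
        ),
        meta_information="; ".join(f"{k}: {v}" for k, v in meta.items()),
        external_events="; ".join(events) or "none reported",
    )

# narrative = call_llm(build_narrative_prompt(tx, meta, events))  # any chat-completion API
```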
Upon generating a narrative \(N\) for a transaction \(T\), the critic mechanism evaluates \(N\) against set criteria, producing feedback \(F\) that helps in refining the narratives or flagging anomalies. The critic prompt checks for: (a) Coherence: does the narrative flow logically? (b) Relevance: does it highlight the most salient features of \(T\)? (c) Accuracy: does the narrative correctly represent \(T\)? (d) Completeness: are all key details of \(T\) captured? Once \(N\) and \(T\) are submitted to the critic, the critic prompt improves the narrative based on these criteria. This mechanism ensures that the narratives captured are of high quality and relevance.
**Critic prompt:** "Review the provided narrative generated for the cryptocurrency transaction. Assess its coherence, relevance, accuracy, and completeness. Provide refined output as an improvement if necessary."
{generated_narrative}
{transaction_details}
Next, we concatenate the transaction embeddings \(E_{T}\) and narrative embeddings \(E_{N}\) into a combined representation \(E_{C}=f(E_{T},E_{N})\).
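A minimal implementation of this fusion step is shown below; the L2 normalisation before concatenation is one simple design choice for \(f\), not a requirement of the method.

```python
import numpy as np

def fuse_embeddings(e_t: np.ndarray, e_n: np.ndarray) -> np.ndarray:
    """Combine a transaction embedding E_T with its narrative embedding E_N."""
    e_t = e_t / (np.linalg.norm(e_t) + 1e-12)
    e_n = e_n / (np.linalg.norm(e_n) + 1e-12)
    return np.concatenate([e_t, e_n])   # joint representation E_C
```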
### Retrieval of Graph, Transaction Sequence, and Subgraphs Search
Given a query \(Q\) from an analyst, the system performs a multi-layered retrieval process:
**Transaction and Narrative Embedding Search (Sequence)**: Utilising the embedding \(E_{T}\) generated for each transaction, the system performs a cosine similarity or nearest-neighbour search to fetch transactions closely resembling the query. This stage ensures we capture transactions with inherent properties similar to the given query. With narrative embeddings in play, analysts can search not only on raw transaction attributes but also on the nature, context, or intent of transactions represented by the narratives. For instance, an analyst could query for "transactions that indicate suspiciously large transfers in a short time," and the system could utilise \(E_{N}\) to identify narratives (and thus transactions) that match this intent. Also, as narratives are reviewed, critiqued, and potentially corrected by analysts or by the critic prompt system, this feedback can be used to fine-tune the embedding models, ensuring that the representations become even more accurate and insightful over time. This process gives higher flexibility and depth, allowing analysts to dive into the Ethereum data from multiple perspectives and with varied objectives.
\[\text{Similarity}(Q,T)=\frac{Q\cdot E_{T}}{\|Q\|\|E_{T}\|} \tag{1}\]
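In practice this search reduces to a ranked cosine-similarity lookup over the stored embedding matrix, as in the sketch below (a brute-force version; an approximate nearest-neighbour index can be substituted at scale).

```python
import numpy as np

def top_k_matches(query_emb: np.ndarray, emb_matrix: np.ndarray, k: int = 10):
    """Rank stored transaction embeddings by cosine similarity to the query, cf. eq. (1)."""
    q = query_emb / (np.linalg.norm(query_emb) + 1e-12)
    M = emb_matrix / (np.linalg.norm(emb_matrix, axis=1, keepdims=True) + 1e-12)
    scores = M @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]          # indices of the closest transactions and their scores
```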
**Graph-based Retrieval (Connectivity)**: Using graph embeddings, the system scans for nodes (accounts) that have interacted in manners similar to the queried transaction. This fetches the direct participants of similar transactions and their immediate neighbors in the graph, providing a richer context.
For a given node \(v\) with embedding \(E_{v}\), the similarity score with the query is computed, and nodes are ranked accordingly.
\[\text{Similarity}(Q,v)=\frac{Q\cdot E_{v}}{\|Q\|\|E_{v}\|} \tag{2}\]
**Subgraph Extraction**: After extracting nodes of interest, the system extracts relevant subgraphs that contain the nodes and their direct neighborhoods. This gives a full view of the transaction environment, capturing patterns, cycles, and anomalies.
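A simple way to realise this step with networkx is to take the union of ego-graphs around the retrieved nodes, as sketched below; the one-hop radius is an assumption and can be widened if a larger transaction environment is needed.

```python
import networkx as nx

def transaction_neighbourhood(G: nx.Graph, seed_nodes, radius: int = 1):
    """Extract the union of ego-graphs around the retrieved nodes, giving the
    analyst the local transaction environment (direct neighbours by default)."""
    keep = set()
    for v in seed_nodes:
        keep |= set(nx.ego_graph(G, v, radius=radius).nodes)
    return G.subgraph(keep).copy()
```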
Combining transaction embeddings with graph-based techniques provides a comprehensive and contextual view of the queried transactions, aiding analysts in deeper exploration, anomaly detection, and insight extraction. Figure 2 displays the process of querying a transaction and retrieving a subgraph with supplemental contextual narrative.
## 5. Discussion
The key function of this multimodal approach is to aid the investigative analyst in querying, labelling, and understanding transactions in a network based on the features available to them. The analyst investigation process begins at the first level, where a junior analyst (Level 1) will receive a flag or instigate an investigation into a suspicious or unusual transaction. Without sufficient experience, their judgment will be passed on to a more knowledgeable and/or experienced analyst. This process requires a summarisation of each subsequent analyst's opinions, reasoning, and evidence for why they have flagged a transaction or cannot make a sufficient judgment on it. By leveraging our proposed method, the flagged alert is contextualized with transaction details and a contextual narrative, and transactions of a similar nature are provided to enhance the investigation. Additionally, tasks such as populating suspicious transaction reports (STRs) and meeting other regulatory requirements can be expedited with the assistance of our method.
The proposed method has multiple advantages. For example, the multimodal approach can help identify mixing patterns, CoinJoins, and other obfuscation methods deployed by cybercriminals in the cryptocurrency space. Mixing patterns can include transactions from very distant points on the network, purposely performed to create a more challenging environment for analysts to investigate (Friedman et al., 2017). Figure 3 demonstrates the opportunity: having identified a mixing operation through the sequence-embedding stage, the additional subgraph retrieval method makes it possible to identify other mixing patterns scattered across the entire blockchain, rather than having to store the entire blockchain and all transactions in memory, as is the case for heuristics applied to the network.
## 6. Conclusion
We introduced a multimodal strategy that harnesses the power of multiple deep learning models to offer a comprehensive and interpretable investigative toolkit for analysts. As the sophistication of financial cybercriminals increases, the experience and knowledge required of analysts increase dramatically. Our method not only alleviates these demands but also enhances the skill set and methodologies of analysts combatting financial crime. As a next step, we plan to conduct an extensive user study of the proposed method to assess its impact on enhancing the overall efficiency and productivity of analysts.
## Acknowledgment
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
|
2308.09595 | Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents | Robustly cooperating with unseen agents and human partners presents
significant challenges due to the diverse cooperative conventions these
partners may adopt. Existing Ad Hoc Teamwork (AHT) methods address this
challenge by training an agent with a population of diverse teammate policies
obtained through maximizing specific diversity metrics. However, prior
heuristic-based diversity metrics do not always maximize the agent's robustness
in all cooperative problems. In this work, we first propose that maximizing an
AHT agent's robustness requires it to emulate policies in the minimum coverage
set (MCS), the set of best-response policies to any partner policies in the
environment. We then introduce the L-BRDiv algorithm that generates a set of
teammate policies that, when used for AHT training, encourage agents to emulate
policies from the MCS. L-BRDiv works by solving a constrained optimization
problem to jointly train teammate policies for AHT training and approximating
AHT agent policies that are members of the MCS. We empirically demonstrate that
L-BRDiv produces more robust AHT agents than state-of-the-art methods in a
broader range of two-player cooperative problems without the need for extensive
hyperparameter tuning for its objectives. Our study shows that L-BRDiv
outperforms the baseline methods by prioritizing discovering distinct members
of the MCS instead of repeatedly finding redundant policies. | Arrasy Rahman, Jiaxun Cui, Peter Stone | 2023-08-18T14:45:22Z | http://arxiv.org/abs/2308.09595v2 | # Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents
###### Abstract
Robustly cooperating with unseen agents and human partners presents significant challenges due to the diverse cooperative conventions these partners may adopt. Existing Ad Hoc Teamwork (AHT) methods address this challenge by training an agent with a population of diverse teammate policies obtained through maximizing specific diversity metrics. However, these heuristic diversity metrics do not always maximize the agent's robustness in all cooperative problems. In this work, we first propose that maximizing an AHT agent's robustness requires it to emulate policies in the minimum coverage set (MCS), the set of best-response policies to any partner policies in the environment. We then introduce the L-BRDiv algorithm that generates a set of teammate policies that, when used for AHT training, encourage agents to emulate policies from the MCS. L-BRDiv works by solving a constrained optimization problem to jointly train teammate policies for AHT training and approximating AHT agent policies that are members of the MCS. We empirically demonstrate that L-BRDiv produces more robust AHT agents than state-of-the-art methods in a broader range of two-player cooperative problems without the need for extensive hyperparameter tuning for its objectives. Our study shows that L-BRDiv outperforms the baseline methods by prioritizing discovering distinct members of the MCS instead of repeatedly finding redundant policies.
## 1 Introduction
The _Ad Hoc Teamwork_ (AHT) problem [12] is concerned with learning ways to quickly cooperate with previously unseen agents or humans (henceforth referred to as "_unseen_" or "_novel_" teammates, or when unambiguous, simply "teammates"). In problems with multiple ways to coordinate, agents co-trained with a limited set of teammates may settle on cooperation conventions that only work when they collaborate with each other. Specialization towards these conventions diminishes an agent's ability to collaborate with unseen partners that adopt other conventions [13].
Recent works address this problem by optimizing diversity metrics to generate sets of teammate policies for AHT training [14, 15, 16]. Through interaction with the generated broadly representative teammate policies, an agent learns a policy to interact with previously unseen partners. State-of-the-art methods optimize adversarial diversity to generate _incompatible_ teammate policies [1, 17, 18, 19]. They seek sets of teammate policies, each maximizing their returns when playing with a designated AHT agent policy while minimizing returns with other policies.
Such existing diversity metrics are heuristic in nature and are not well-justified. It is unclear whether and how optimizing them can lead to improved robustness in general cooperative problems. We further demonstrate that optimizing these diversity metrics can fail to discover teammate policies under certain conventions even in simple cooperative games, specifically if following a convention yields high returns against the best-response policy to another generated teammate policy. Optimizing adversarial diversity can also generate teammates adopting _self-sabotaging_ policies [23], which cause teammates to undermine collaboration with an unseen teammate if it behaves differently from the agent co-trained with that generated teammate.
In this work, we make three contributions that improve existing teammate generation methods for training robust AHT agents. First, we outline formal concepts describing an ideal set of teammate policies for training robust AHT agents, which can emulate the best-response policy to any teammate during interaction [1]. The importance of playing the best-response policies to design a robust agent provides the motivation to estimate the **minimum coverage set (MCS)**, which is the set of best-response policies to any teammate policy in an environment, before interacting with unknown teammates. Second, we use the concept of MCS to propose the **L-BRDiv** algorithm that jointly estimates the MCS of an environment and utilizes it to generate teammates for AHT training by solving a constrained optimization problem. L-BRDiv's generated set of teammate policies encourages AHT agents to emulate policies in the MCS through AHT training. Third, we provide experiments that empirically demonstrate that L-BRDiv produces more robust AHT agents than state-of-the-art teammate generation methods while requiring fewer hyperparameters to be tuned.
## 2 Related Work
**Ad Hoc Teamwork** Existing AHT methods equip an agent with two components to achieve near-optimal performance when interacting with any unknown teammate policy [14]. The first is a _teammate modeling component_ that infers an unknown teammate's policy via observations gathered
from limited interactions with the unknown teammate. The second is an _action selection component_ that estimates the best-response policy to the inferred teammate policy, which selects actions that maximize the AHT agent's returns when collaborating with an unknown teammate. PLASTIC-Policy Barrett et al. (2016) is an early example AHT method that defines an AHT agent policy based on the aforementioned components. Recent works Rahman et al. (2021); Zintgraf et al. (2021); Papoudakis et al. (2021); Gu et al. (2021) implement these two components as neural network models which are trained to optimize the AHT agent's returns when dealing with a set of teammate policies seen during training.
#### Diversity in Multi-agent Learning
Introducing diversity in training partners' policies is one way to generate robust response policies in multi-agent systems. A popular line of methods leverages population-based training and frequent checkpointing Strouse et al. (2021); Vinyals et al. (2019); Cui et al. (2023); Bakhtin et al. (2022). These methods rely on random seeds to find diverse policies, resulting in no guarantee that the generated policies are sufficiently diverse. Other studies optimize various types of diversity metrics directly into reinforcement learning objectives or as constraints. Xing et al. (2021) introduce a target-entropy regularization to Q-learning to generate information-theoretically different teammates. MAVEN Mahajan et al. (2019) maximizes the mutual information between the trajectories and latent variables to learn diverse policies for exploration. Lupu et al. (2021) propose generating policies with different trajectory distributions. Trajectory diversity, however, is not necessarily meaningful for diversifying teammate policies Charakorn et al. (2023); Rahman et al. (2023), so we do not consider these methods as baselines in our work. Other work in single-agent settings introduces Quality Diversity Mouret and Clune (2015); Pugh et al. (2016) or Behavior Diversity Wu et al. (2023), which rely on domain-specific heuristics, while our method is domain-independent.
#### Adversarial Diversity
Our research is related to work on _Adversarial Diversity_Cui et al. (2023); Charakorn et al. (2023); Ramananopong et al. (2023). These approaches generate diverse agents by maximizing _self-play_ scores while minimizing _cross-play_ scores in a policy pool. _Self-play_ refers to an interaction with a designated teammate, and _cross-play_ means playing with teammates other than the designated teammate. These approaches impose strong penalties for high cross-play returns. As a result, they may not discover teammate policies that produce high cross-play returns with other policies' best-response policies. Instead of discovering meaningfully diverse conventions, they also encourage agents to _self-sabotage_ by deliberately undermining their collaboration with any policies other than the policy encountered during self-play training, as identified from their observed behaviour. Although our method resembles prior work in the optimization objective, we formulate the problem as a constrained optimization problem that allows us to generate a better set of teammate policies for AHT training. We compare our method against BRDiv Rahman et al. (2023) and LIPO Charakorn et al. (2023). However, we do not include ADVERSITY Cui et al. (2023) as a baseline since it shares the same objective as LIPO while adding methods to eliminate self-sabotaging behaviour in Hanabi Bard et al. (2020), which we do not focus on since self-sabotaging behaviour can be desirable in other environments where a teammate's slightest deviation from a utilized convention yields low rewards.
## 3 Problem Formulation
The interaction between agents in an AHT environment can be modeled as a decentralized partially observable Markov decision process (Dec-POMDP). A Dec-POMDP is defined by an 8-tuple, \(\langle N,S,\{\mathcal{A}^{i}\}_{i=1}^{|N|},P,R,\{\Omega^{i}\}_{i=1}^{|N|},O,\gamma\rangle\), with state space \(S\), discount rate \(\gamma\), and each agent \(i\in N\) having an action space \(\mathcal{A}^{i}\) and observation space \(\Omega^{i}\). Each interaction episode between the AHT agent and its teammates starts at an initial state \(s_{0}\) sampled from an initial state distribution \(p_{0}(s)\). Denoting \(\Delta(X)\) as the set of all probability distributions over set \(X\), at each timestep \(t\) agent \(i\) cannot perceive \(s_{t}\) and instead receives an observation \(o^{i}_{t}\in\Omega^{i}\) sampled from the observation function, \(O:S\mapsto\Delta(\Omega^{1}\times\cdots\times\Omega^{|N|})\). Each agent \(i\in N\) then decides its action at \(t\), \(a^{i}_{t}\), based on its policy, \(\pi^{i}(H^{i}_{t})\), that is conditioned on the observation-action history of agent \(i\), \(H^{i}_{t}=\{o^{i}_{\leq t},a^{i}_{<t}\}\). The action selected by each agent is then jointly executed as a joint action, \(\mathbf{a}_{t}\). After executing \(\mathbf{a}_{t}\), the environment state changes following the transition function, \(P:S\times\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{|N|}\mapsto\Delta S\), and each agent receives a common scalar reward, \(r_{t}\), according to the reward function, \(R:S\times\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{|N|}\mapsto\mathbb{R}\).
Existing AHT methods learn policies for a robust AHT agent by interacting with teammate policies from the training teammate policy set, \(\Pi^{\mathrm{train}}=\{\pi^{-1},\pi^{-2},\ldots,\pi^{-K}\}\). The AHT agent then optimizes its policy to maximize its returns in interactions with policies from \(\Pi^{\mathrm{train}}\). The objective of these existing AHT methods can be formalized as:
\[\pi^{*,i}(\Pi^{\mathrm{train}})=\operatorname*{argmax}_{\pi^{i}}\ \mathbb{E}_{\substack{\pi^{-i}\sim\mathbb{U}(\Pi^{\mathrm{train}}),\\ a^{i}_{t}\sim\pi^{i},\ a^{-i}_{t}\sim\pi^{-i},\ P,\ O}}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},\mathbf{a}_{t})\right], \tag{1}\]
with \(\mathbb{U}(X)\) denoting a uniform distribution over set \(X\). The learned AHT agent policy, \(\pi^{*,i}(\Pi^{\mathrm{train}})\), is then evaluated for its robustness. Given an evaluated \(\pi^{*,i}(\Pi^{\mathrm{train}})\), this robustness measure, \(M_{\Pi^{\mathrm{eval}}}\left(\pi^{*,i}(\Pi^{\mathrm{train}})\right)\), evaluates the expected returns when the AHT agent deals with teammates uniformly sampled from a previously unseen set of teammate policies, \(\Pi^{\mathrm{eval}}\). We formally define \(M_{\Pi^{\mathrm{eval}}}\left(\pi^{*,i}(\Pi^{\mathrm{train}})\right)\) as the following expression:
\[\mathbb{E}_{\pi^{-i}\sim\mathbb{U}(\Pi^{\mathrm{eval}}),\ a^{i}_{t}\sim\pi^{*,i}(\Pi^{\mathrm{train}}),\ a^{-i}_{t}\sim\pi^{-i},\ P,\ O}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},\mathbf{a}_{t})\right]. \tag{2}\]
The dependence of \(\pi^{*,i}(\Pi^{\mathrm{train}})\) on \(\Pi^{\mathrm{train}}\) then implies that Expression 2 is also determined by \(\Pi^{\mathrm{train}}\).
The goal of an AHT teammate generation process is to find \(\Pi^{\mathrm{train}}\) producing an AHT agent policy that maximizes Expression 2 amid unknown \(\Pi^{\mathrm{eval}}\). Given the objective of AHT training from Equation 1 and the definition of the robustness
measure from Expression 2, the objective of an AHT teammate generation process is to find the optimal set of training teammate policies, \(\Pi^{*,\mathrm{train}}\), formalized as:
\[\operatorname*{argmax}_{\Pi^{\mathrm{train}}}\ \mathbb{E}_{\Pi^{\mathrm{eval}}\sim\mathbb{U}(\Pi)}\left[M_{\Pi^{\mathrm{eval}}}\left(\pi^{*,i}(\Pi^{\mathrm{train}})\right)\right], \tag{3}\]
While uniformly sampling \(\Pi^{\text{train}}\) from \(\Pi\) may appear to be a reasonable solution to produce \(\Pi^{*,\mathrm{train}}\), training an AHT agent using \(\Pi^{\text{train}}\) may produce low returns if we only sample a limited number of policies from \(\Pi\). When \(\Pi\) contains many possible teammate policies, the exact policies included in \(\Pi^{\text{train}}\) becomes important to ensure that the AHT agent is robust when collaborating with any teammate policy in \(\Pi\).
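Read operationally, Expression 2 is a Monte-Carlo estimate over uniformly sampled evaluation teammates. The sketch below assumes a generic `run_episode` routine that returns a discounted episode return; it is illustrative and not part of the paper.

```python
import random

def estimate_robustness(aht_policy, eval_teammates, run_episode, n_episodes=100):
    """Average discounted return of the AHT agent against teammates
    drawn uniformly from an evaluation set (Expression 2)."""
    total = 0.0
    for _ in range(n_episodes):
        teammate = random.choice(eval_teammates)     # pi^{-i} ~ U(Pi^eval)
        total += run_episode(aht_policy, teammate)   # one episode's return
    return total / n_episodes
```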
## 4 Creating Robust AHT Agents By Identifying Minimum Coverage Sets
Assuming knowledge of \(\Pi^{\text{eval}}\), the robustness of an AHT agent as defined by Expression 2 can be optimized by using \(\Pi^{\text{eval}}\) as teammate policies for AHT training. Given a teammate modeling component that accurately infers an unknown teammate's policy from \(\Pi^{\text{eval}}\) and an action selection component that can emulate any policy in the set of best-response policies to policies in \(\Pi^{\text{eval}}\), \(\text{BR}(\Pi^{\text{eval}})\), an AHT agent's robustness is maximized by following the best-response policy to the inferred teammate policy. Unfortunately, \(\Pi^{\text{eval}}\) being unknown makes this ideal training process impossible.
Improving an AHT agent's robustness without knowing \(\Pi^{\text{eval}}\) is still possible by identifying the _coverage set_ of an environment. Denoting an environment characterized by a DecPOMDP as E, any set containing at least one best-response policy to each teammate policy in \(\Pi\) is a coverage set of an environment, CS(E). CS(E) is formally characterized as:
\[\begin{split}&\forall\pi^{-i}\in\Pi,\exists\pi^{*}\in\text{CS(E)}:\\ &\mathbb{E}_{s_{0}\sim p_{0}}\left[\textbf{R}_{*,-i}(H_{t})\right] =\operatorname*{max}_{\pi^{i}\in\Pi}\mathbb{E}_{s_{0}\sim p_{0}}\left[ \textbf{R}_{i,-i}(H_{t})\right],\end{split} \tag{4}\]
where \(\textbf{R}_{i,-i}(H)\) denotes the following expression:
\[\mathbf{R}_{i,-i}(H)=\mathbb{E}_{\substack{a_{T}^{i}\sim\pi^{i}(\cdot|H_{T}),\\ a_{T}^{-i}\sim\pi^{-i}(\cdot|H_{T}),\\ P,\ O}}\left[\sum_{T=t}^{\infty}R(s_{T},\mathbf{a}_{T})\,\middle|\,H_{t}=H\right]. \tag{5}\]
Given this definition, a CS(E) remains a coverage set when policies are added. Thus, \(\Pi\) itself is trivially a coverage set.
Irrespective of \(\Pi^{\text{eval}}\), CS(E) will contain at least a single best-response policy to any \(\pi^{-i}\in\Pi^{\text{eval}}\) since \(\Pi^{\text{eval}}\subseteq\Pi\). An AHT agent capable of emulating any policy from CS(E) consequently can follow any policy from \(\text{BR}(\Pi^{\text{eval}})\) for any \(\Pi^{\text{eval}}\). Therefore, training an AHT agent to emulate any policy from CS(E) gives us a solution to design robust AHT agents even when \(\Pi^{\text{eval}}\) is unknown.
Considering CS(E) may contain policies that are not a best-response policy to any member of \(\Pi\), we ideally only train AHT agents to emulate a subset of CS(E) that consists of policies that are the best-response to some \(\pi^{-i}\in\Pi\). Based on this idea, we define the _minimum coverage set_ of an environment, \(\text{MCS}(\text{E})\subseteq\Pi\), that is a coverage set ceasing to be a coverage set if any of its elements are removed. This characteristic of MCS(E) is formalized as:
\[\forall\pi^{i}\in\text{MCS(E)}:\text{MCS(E)}-\{\pi^{i}\}\text{ is not a coverage set}. \tag{6}\]
In the example provided in Figure 1(a), \(\text{MCS}(\text{E})=\{\pi^{1},\pi^{2},\pi^{3}\}\) is an MCS since the elimination of any policy, \(\pi\), from it causes a subset of \(\Pi\) to no longer have its best-response policy in \(\text{MCS}(\text{E})-\{\pi\}\).
Our work aims to design AHT agents capable of emulating any policies from MCS(E) by constructing \(\Pi^{\text{train}}\) in a specific way. If \(\Pi^{\text{train}}\) is constructed for each \(\pi^{i}\in\text{MCS(E)}\) to have a \(\pi^{-i}\in\Pi^{\text{train}}\) such that \(\pi^{i}\in\text{BR}(\{\pi^{-i}\})\), using \(\Pi^{\text{train}}\) while optimizing Equation 1 enables us to achieve this goal. The role of MCS(E) in our teammate generation process is visualized in Figures 1(b) and 1(c).
## 5 L-BRDiv: Generating Teammate Policies By Approximating Minimum Coverage Sets
This section introduces our proposed teammate generation method based on estimating MCS(E). Section 5.1 details a constrained objective we use to estimate MCS(E). Finally, Section 5.2 provides a method that solves the constrained objective to jointly estimate MCS(E) while generating \(\Pi^{\text{train}}\).
Figure 1: **Leveraging MCS(E) for generating robust AHT agents.** Figure 1(a) visualizes how teammate policies (points in the large triangle) can be grouped based on their best-response policies. The rectangle then shows an example MCS(E). From each subset of \(\Pi\) sharing the same best-response policy (colored small triangles), Figure 1(b) visualizes how one policy is sampled to create \(\Pi^{\text{train}}\) for AHT training. As visualized in Figure 1(c), using our generated \(\Pi^{\text{train}}\) for AHT training should encourage agents to emulate the best-response policy (dashed squares) to any \(\pi^{-i}\in\Pi\) when dealing with teammates from \(\Pi^{\text{eval}}\) (squares whose color represents their best-response policy).
### Jointly Approximating MCS(E) and Generating \(\Pi^{\text{train}}\)
Discovering MCS(E) by enumerating the AHT agent's best-response policy to each teammate policy is intractable given the infinite policies in \(\Pi\). Instead, we can estimate MCS(E) by eliminating policies from a finite CS(E) to generate MCS(E). Given a finite CS(E), an AHT agent policy is not a member of MCS(E) if it is not the best response to any teammate policy.
We check if \(\pi^{i}\in\text{CS(E)}\) is the best-response policy of at least one policy from \(\Pi\) by solving the _feasibility problem_, which is the following constrained optimization problem:
\[\underset{\pi^{-i}\in\Pi}{\text{max}}\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{ R}_{i,-i}(H_{t})\right], \tag{7}\]
with the following constraints:
\[\begin{split}&\forall\pi^{j}\in(\text{CS(E)}-\{\pi^{i}\}):\\ &\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{R}_{j,-i}(H_{t}) \right]\leq\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})\right]. \end{split} \tag{8}\]
Any CS(E) member that violates the above constraint for all \(\pi^{-i}\in\Pi\) is not a member of MCS(E). While this approach relies on knowing a finite CS(E), note that knowledge of a finite CS(E) is sometimes available. For instance, the set of all deterministic policies is a finite CS(E) for environments with a finite action space and state space.
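For a finite policy set, the feasibility check above reduces to inspecting a payoff matrix of expected returns. The sketch below is a simplified illustration under that assumption; the matrix layout and tolerance are not from the paper.

```python
import numpy as np

def estimate_mcs(returns, tol=1e-9):
    """returns[i, j]: expected return when AHT policy i plays teammate policy j.
    Policy i is kept in the MCS estimate iff it is a best response
    (within tol) to at least one teammate policy j."""
    best_per_teammate = returns.max(axis=0)            # column-wise maxima
    is_best_response = returns >= best_per_teammate - tol
    keep = is_best_response.any(axis=1)                # best response to some j
    return np.flatnonzero(keep)                        # indices of retained policies
```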
Applying the above procedure to find MCS(E) can still be impossible for two reasons. First, a finite CS(E) can be unknown. Second, the size of CS(E) may be prohibitively large, which prevents solving the feasibility problem for all \(\pi^{i}\in\text{CS(E)}\). Amid these challenging problems, we resort to estimating MCS(E) by only discovering its subset with \(K\) policies, \(\text{MCS}^{\text{est}}\text{(E)}=\{\pi^{i}\}_{i=1}^{K}\).
We now describe an alternative constrained optimization objective that jointly finds MCS\({}^{\text{est}}\text{(E)}\) while generating a set of teammate policies for AHT training, \(\Pi^{\text{train}}=\{\pi^{-i}\}_{i=1}^{K}\), according to the method illustrated in Figure 1. Two characteristics are desired when finding MCS\({}^{\text{est}}\text{(E)}\). First, we require each AHT agent policy from MCS\({}^{\text{est}}\text{(E)}\) to only be the best-response policy to one teammate policy from \(\Pi^{\text{train}}\), \(\pi^{i}\). The second characteristic prioritizes the discovery of MCS(E) members that enables the AHT agent to produce high returns with a designated teammate policy, \(\pi^{-i}\in\Pi\). These two requirements are formulated as the following constrained optimization problem:
\[\max_{\substack{\{\pi^{i}\}_{i=1}^{K}\subseteq\Pi,\\ \{\pi^{-i}\}_{i=1}^{K}\subseteq\Pi}}\ \sum_{i\in\{1,2,\ldots,K\}}\mathbb{E}_{s\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})\right], \tag{9}\]
with the following constraints that must be fulfilled for all \(i,j\in\{1,2,\ldots,K\}\) and \(i\neq j\):
\[\mathbb{E}_{s\sim p_{0}}\left[\mathbf{R}_{j,-i}(H_{t})\right]+\tau\leq\mathbb{ E}_{s\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})\right], \tag{10}\]
\[\mathbb{E}_{s\sim p_{0}}\left[\mathbf{R}_{i,-j}(H_{t})\right]+\tau\leq\mathbb{ E}_{s\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})\right]. \tag{11}\]
Note that a near-zero positive threshold (\(\tau>0\)) is introduced in the constraints to prevent discovering duplicates of the same \(\pi^{i}\) and \(\pi^{-i}\), which turns Constraints 10 & 11 into equality when \(\tau=0\).
### Lagrangian BRDiv (L-BRDiv)
We present the **L**agrangian **B**est **R**esponse **D**iversity (L-BRDiv) algorithm to generate \(\Pi^{\text{train}}\) that encourages an AHT agent to emulate MCS\({}^{\text{est}}\text{(E)}\). L-BRDiv generates \(\Pi^{\text{train}}\) by solving the Lagrange dual of the optimization problem specified by Expressions 9-11, which is an unconstrained objective with the same optimal solution. The Lagrange dual for our
Figure 2: **Lagrangian Best Response Diversity (L-BRDiv). The L-BRDiv algorithm trains a collection of policy networks (purple and orange boxes) and Lagrange multipliers (green cells inside the black rectangle). The purple boxes represent policies from \(\{\pi^{i}\}_{i=1}^{K}\subseteq\Pi\), while the policies visualized as orange boxes are from \(\{\pi^{-i}\}_{i=1}^{K}\subseteq\Pi\). Estimated returns between any possible pair of policies, \((\pi^{j},\pi^{-k})\in(\{\pi^{i}|\pi^{i}\in\Pi\}_{i=1}^{K}\times\{\pi^{-i}|\pi^{-i}\in\Pi\}_{i=1}^{K})\), and their associated Lagrange multipliers are used to compute the optimized term in the Lagrangian dual form (right red box) via a weighted summation operation (black dotted lines connect weights and multiplied terms). The policy networks are then trained via MAPPO (Yu et al., 2022) to maximize this optimized term, while the Lagrange multipliers are trained to minimize the term via stochastic gradient descent.**
optimization problem is defined as:
\[\min_{\mathbb{A}\subseteq\mathbb{R}_{\geq 0}^{K(K-1)}}\ \max_{\substack{\{\pi^{i}\}_{i=1}^{K}\subseteq\Pi,\\ \{\pi^{-i}\}_{i=1}^{K}\subseteq\Pi}}\Bigg{(}\sum_{i\in\{1,\ldots,K\}}\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})\right]+\] \[\sum_{\substack{i,j\in\{1,\ldots,K\}\\ i\neq j}}\alpha_{1}^{i,j}\left(\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})-\tau-\mathbf{R}_{j,-i}(H_{t})\right]\right)+\] \[\sum_{\substack{i,j\in\{1,\ldots,K\}\\ i\neq j}}\alpha_{2}^{i,j}\left(\mathbb{E}_{s_{0}\sim p_{0}}\left[\mathbf{R}_{i,-i}(H_{t})-\tau-\mathbf{R}_{i,-j}(H_{t})\right]\right)\Bigg{)}, \tag{12}\]
with \(\mathbb{A}=\{(\alpha_{1}^{i,j},\alpha_{2}^{i,j})|\alpha_{1}^{i,j}\geq 0, \alpha_{2}^{i,j}\geq 0\}_{i,j\in\{1,2,\ldots,K\},i\neq j}\) denoting the set of optimizable Lagrange multipliers.
L-BRDiv learns to assign different values to Lagrange multipliers in \(\mathbb{A}\) of (12). Optimizing Lagrange multipliers gives L-BRDiv two advantages over previous methods, which treat these hyperparameters as constants. First, we demonstrate in Section 6 that L-BRDiv creates better \(\Pi^{\text{train}}\) by identifying more members of MCS(E). Second, it does not require hyperparameter tuning on appropriate weights associated with cross-play return, which in previous methods require careful tuning to discover members of MCS(E) [10] and prevent the generation of incompetent policies not achieving high returns against any AHT agent policy [1].
We provide details of the teammate generation process undergone in L-BRDiv in Algorithm 1. L-BRDiv implements the policies optimized in the Lagrange dual as neural networks trained with MAPPO [23] to maximize the weighted advantage function (14), whose weights correspond to the total weight associated with each expected return term in (12). At the same time, L-BRDiv trains a critic network to bootstrap the evaluation of (12) instead of a Monte Carlo approach, which can be expensive since it requires all generated policy pairs to initially follow the observation-action history, \(H_{t}\). Meanwhile, the Lagrange multipliers are trained in Lines 12-13 to minimize (12) while ensuring it is non-negative.
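A condensed PyTorch sketch of the term optimized in (12) is given below. It assumes a K x K matrix `R` of estimated returns with `R[i, j]` standing for \(\mathbf{R}_{i,-j}\), produced elsewhere by the critic; the MAPPO policy updates are omitted, and keeping the multipliers non-negative via exponentiation is an implementation assumption rather than a detail from the paper.

```python
import torch

def lagrangian_term(R, log_alpha1, log_alpha2, tau):
    """R: K x K tensor of estimated returns, R[i, j] ~ E[R_{i,-j}].
    log_alpha1/log_alpha2: K x K learnable parameters; exp() keeps the
    Lagrange multipliers non-negative. Diagonal entries are ignored."""
    K = R.shape[0]
    diag = torch.diagonal(R)                  # self-play returns R_{i,-i}
    alpha1, alpha2 = log_alpha1.exp(), log_alpha2.exp()
    off = ~torch.eye(K, dtype=torch.bool, device=R.device)
    c1 = diag.unsqueeze(0) - tau - R          # c1[j, i] = R_{i,-i} - tau - R_{j,-i}
    c2 = diag.unsqueeze(1) - tau - R          # c2[i, j] = R_{i,-i} - tau - R_{i,-j}
    term = diag.sum() + (alpha1 * c1)[off].sum() + (alpha2 * c2)[off].sum()
    return term  # maximised w.r.t. the policies, minimised w.r.t. the multipliers
```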
## 6 Experiments
In this section, we describe the environments and baseline algorithms in Sections 6.1 and 6.2. Section 6.3 then details the experiment setups for evaluating the robustness of AHT agents in L-BRDiv and baseline methods via their generated training teammate policies. Finally, we present the AHT experiment results and an analysis of MCS\({}^{\text{est}}\)(E) policies identified by L-BRDiv in Sections 6.4 and 6.5.
### Environments
We run our experiments in four two-player cooperative environments. The first environment is a repeated matrix game where agents have three actions, whose reward function is provided in Figure 3(a). Since eliminating self-sabotaging behaviour [13] is not the focus of our work, we remove teammate-related information and actions from an agent's observation such that self-sabotaging behaviour is not a member of possibly discovered teammate behaviours, \(\Pi\). We also do experiments in the Cooperative Reaching environment [10] where two agents can move across the four cardinal directions in a two-dimensional grid world. Both agents are given a reward of 1 once they simultaneously arrive at the same corner grid. The third environment is Weighted Cooperative Reaching, which is similar to Cooperative Reaching except for a modified reward function (Figure 3(c)) that provides lower rewards if both agents arrive at different corner cells. The last environment is Level-based Foraging (LBF) [11], where both agents must move along the four cardinal directions to a cell next to the same object and retrieve it by simultaneously selecting actions for collecting objects. Successful object collection gives both agents a reward of 0.33.
### Baseline Methods
Our experiments compare L-BRDiv against BRDiv [10] and LIPO [1]. Comparing L-BRDiv and BRDiv helps investigate
the detrimental effect of using fixed uniform weights instead of L-BRDiv's optimized Lagrange multipliers (\(\mathbb{A}\)). Meanwhile, including LIPO as a baseline enables us to investigate the advantage of L-BRDiv and BRDiv's use of weights with a larger magnitude for self-play maximization (i.e. \(w^{i,i}(\mathbb{A})\) in Eq. 14) compared to the weights for cross-play minimization (i.e. \(w^{i,j}(\mathbb{A})\) in Eq. 14). As justified in Section 2, these two methods are more appropriate baselines for L-BRDiv than any other teammate generation algorithms that we are aware of.
### Experiment Setup
We start our experiments for each environment by generating \(K\) training teammate policies using the compared methods. We ensure fairness in our experiments by using RL\({}^{2}\) algorithm [1] to find an optimal AHT agent policy defined in Equation 1 based on \(\Pi^{\text{train}}\) generated by each teammate generation algorithm. Since our partially observable environments provide no useful information to infer teammate policies except for rewards obtained at the end of each interaction episode, we choose RL\({}^{2}\) since it can use reward information to create agent representations maintained and updated across multiple episodes. For each of the compared algorithms, the teammate generation and AHT training process are repeated under four seeds to allow for a statistically sound comparison between each method's performance. As a measure of robustness, we then evaluate the average returns of the AHT agent trained from each experiment seed when collaborating with policies sampled from \(\Pi^{\text{eval}}\). We construct \(\Pi^{\text{eval}}\) for each environment by creating heuristic-based agents, whose behaviour we describe in Appendix A. Finally, we compute the mean and 95% confidence interval of the recorded returns across four seeds and report it in Figure 4.
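The seed-level statistics can be reproduced with a few lines; the normal-approximation interval below is an assumption, as the text does not state how the 95% interval was computed.

```python
import numpy as np

def mean_and_ci(returns_per_seed, z=1.96):
    """Mean and 95% confidence interval of evaluation returns across seeds."""
    x = np.asarray(returns_per_seed, dtype=float)
    half_width = z * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean(), (x.mean() - half_width, x.mean() + half_width)
```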
### Ad Hoc Teamwork Experiment Results
Figure 4 shows the results of the AHT experiments. We find that L-BRDiv significantly outperforms other compared methods in the repeated matrix game, Weighted Cooperative Reaching, and LBF. While BRDiv slightly outperforms L-BRDiv in Cooperative Reaching, overlapping confidence intervals among the last few checkpoints suggest that the difference is only marginally significant.
L-BRDiv outperforms the compared baselines in all environments except Cooperative Reaching since these environments all have reward functions that cause some members of the MCS, \(\pi^{i}\in\text{ MCS(E)}\), to yield high expected returns in cross-play interactions against a generated teammate policy, \(\pi^{-j}\in\Pi^{\text{train}}\), that is not its intended partner, \(\pi^{-i}\in\Pi^{\text{train}}\). Meanwhile, all \(\pi^{i}\in\)MCS(E) for Cooperative Reaching have equally low (i.e. zero) returns against the intended partner of other MCS(E) members. The large cross-play returns disincentivize BRDiv and LIPO's optimized objective from discovering \(\pi^{i}\) and \(\pi^{-i}\) during teammate generation. The inability to discover \(\pi^{i}\in\text{ MCS(E)}\) and \(\pi^{-i}\) will then lead towards diminished robustness since the trained AHT agent will yield lower returns against teammates whose best-response policy is \(\pi^{i}\). In contrast, Cooperative Reaching's reward structure makes MCS(E) (i.e. the set of four policies moving towards each distinct corner cell) consist of policies yielding equally low cross-play returns of zero among each other.
Although both BRDiv and LIPO are equipped with a hyperparameter, \(\alpha>0\), that can change weights associated with self-play returns maximization and cross-play returns minimization in their learning objective, it is possible to find simple scenarios where no feasible \(\alpha\) facilitates the discovery of a desirable \(\Pi^{\text{train}}\) to maximize an AHT agent's robustness. Such a desirable \(\Pi^{\text{train}}\) is characterized by all AHT agent policies in MCS(E) having at least one teammate policy in \(\Pi^{\text{train}}\) to which it is the best-response policy. Appendix B shows that the Repeated Matrix Game and Weighted Cooperative Reaching environment are examples of such scenarios. Even in environments like LBF where there may exist an \(\alpha\) enabling both BRDiv and LIPO to discover a desirable \(\Pi^{\text{train}}\) by optimizing their learning objectives, finding an appropriate \(\alpha\) is costly if we factor in the computational resources required to run a single teammate generation process. Unlike BRDiv and LIPO, L-BRDiv's inclusion of Lagrange multipliers as learned parameters enables it to discover desirable \(\Pi^{\text{train}}\) in a wider range of environments while reducing the number of hyperparameters that must be tuned.
Note that L-BRDiv and the baseline methods all successfully discover MCS(E) in Cooperative Reaching. However, each teammate policy generated by L-BRDiv and LIPO which has one of the MCS(E) members as its best-response policy ends up being less optimal than their BRDiv-generated counterparts. These suboptimal policies require more steps to complete an episode by occasionally moving away from
Figure 3: **Environments Used in AHT Experiments. We provide experiments in a repeated matrix game whose reward function is displayed in Figure 3(a). Figure 3(b) displays an example state of the Cooperative Reaching environment where the green stars represent corner cells that provide agents rewards once they simultaneously reach it. If we start from the top-left corner cell in Figure 3(b) and assign IDs (A-D) to corner cells in a clockwise manner, Figure 3(c) shows the reward function of the Weighted Cooperative Reaching environment where agents' rewards depend on which pair of destination cells the two agents arrive at. Finally, Figure 3(d) shows a sample state of Level-based Foraging (LBF) where the apples represent the collected objects.**
their destination corner cell. Learning from these suboptimal agents made the AHT agent less decisive when selecting which corner cell to move towards, ultimately producing agents with slightly lower returns.
### Behaviour Analysis
The AHT agent policies that L-BRDiv discovers as members of MCS(E) in all environments are provided in Figures 5(a)-5(c). Unlike the compared baseline methods that only discover two members of MCS(E), results from the Repeated Matrix Game show L-BRDiv is capable of consistently finding all three deterministic policies that are members of MCS(E). While all compared methods successfully discover AHT policies in the MCS(E) of Cooperative Reaching, L-BRDiv is the only method capable of finding all four members of MCS(E) corresponding to movement towards each corner grid in Weighted Cooperative Reaching. As we show in Appendix B, BRDiv and LIPO's failure to discover all members of MCS(E) in the Repeated Matrix Game and Weighted Cooperative Reaching is because discovering MCS(E) does not optimize their learning objective for any constant and uniform \(\alpha\). In the LBF environment, none of the methods perfectly discover MCS(E) consisting of all six possible permutations of collecting objects in the environment. However, L-BRDiv is closer to estimating MCS(E) than the baseline algorithms by discovering four MCS(E) members in one seed and five MCS(E) members in the remaining seeds. Compared to the baseline methods, L-BRDiv's ability to discover more MCS(E) members eventually enables it to create more robust AHT agents that can emulate the best-response policy to a wider range of teammate policies.
## 7 Conclusion & Future Work
In this work, we propose that an appropriate set of teammate policies for AHT training must enable agents to emulate all policies in MCS(E), the smallest set of policies containing the best-response policy to any teammate policy in \(\Pi\). To generate such teammate policies for robust AHT training, we introduce and evaluate L-BRDiv. By solving a constrained optimization problem using the Lagrange multiplier technique, L-BRDiv then learns to jointly approximate the MCS of an environment and generate a set of teammate policies for AHT training. Our experiments indicate that L-BRDiv yields more robust AHT agents compared to state-of-the-art teammate generation methods by identifying more members of the MCS while also removing the need for tuning important hyperparameters used in prior methods.
In the future, we consider extending L-BRDiv to more complex environments where more than two agents must collaborate. Another promising research direction is to explore other objectives than Expression 9 to use as prioritization criteria when estimating MCS(E) with a limited set of policies, such as objectives to discourage the discovery of self-sabotaging policies [14]. Finally, the application of our method in fully competitive and general-sum games is another promising direction to create robust agents since the concept of minimum coverage sets is not only limited to fully cooperative problems.
Figure 4: **Generalization Performance Against Previously Unseen Teammate Types. Figures 4(a), 4(c), and 4(d) show that L-BRDiv produced significantly higher episodic returns when dealing with unknown teammate policies in all environments except for Cooperative Reaching. Figure 4(b) also shows that L-BRDiv obtained episodic returns close to BRDiv's when evaluated in the Cooperative Reaching environment.**
Figure 5: **MCS(E) yielded by L-BRDiv. L-BRDiv is capable of estimating all members of MCS(E) in all environments except LBF. Meanwhile in LBF, it discovers at least four conventions, which is still more than what LIPO and BRDiv discovered. The discovery of more MCS(E) members results in L-BRDiv producing more robust AHT agents.**
2306.09039 | Improving Image Tracing with Convolutional Autoencoders by High-Pass
Filter Preprocessing | The process of transforming a raster image into a vector representation is
known as image tracing. This study looks into several processing methods that
include high-pass filtering, autoencoding, and vectorization to extract an
abstract representation of an image. According to the findings, rebuilding an
image with autoencoders, high-pass filtering it, and then vectorizing it can
represent the image more abstractly while increasing the effectiveness of the
vectorization process. | Zineddine Bettouche, Andreas Fischer | 2023-06-15T10:59:29Z | http://arxiv.org/abs/2306.09039v1 | # Improving Image Tracing with Convolutional Autoencoders by High-Pass Filter Preprocessing
###### Abstract
The process of transforming a raster image into a vector representation is known as image tracing. This study looks into several processing methods that include high-pass filtering, autoencoding, and vectorization to extract an abstract representation of an image. According to the findings, rebuilding an image with autoencoders, high-pass filtering it, and then vectorizing it can represent the image more abstractly while increasing the effectiveness of the vectorization process.
image quality; vector graphics; principal component analysis; neural networks; autoencoders; high-pass filters; vectorization; complexity theory; and information technology.
## I Introduction
Object recognition is considered a complex task in the processing field; its complexity far exceeds simple arithmetic operations. With the massive amount of data generated each year, manual calculation by hand is out of the question. Therefore, data processing and evaluation are automated for all operations.
In recent years, many studies have emerged to contribute to the advancement of knowledge in the field of object recognition. Two of the pillars of this field are image processing and artificial intelligence (AI). AI is a fascinating subject that has attracted a lot of attention in the last decade, especially with its use in computer vision. Now, not only can filter-based models, e.g., Haar Cascade, be trained to classify images, but neural networks can also be wired to learn how to detect various shapes and objects. The models generally learn from the pixel values and model their structures in mathematical equations, which begs the question of whether it would be more efficient for the models to learn from vector images, as these are closer to the nature of the trained models than spatial data in the form of pixel arrays. Thus, this article is an attempt to improve the tracing of images by using autoencoders and high-pass filters to obtain an abstract representation of images in vector form. The high-pass filters are chosen since they emphasize the important features of an image. This work is considered a step forward to achieving a better training rate in object recognition with ANN.
This paper is an extended version of the previously published paper [1] that discussed the summarized content of the findings that this work produced. A more in-depth discussion about the techniques used in the work, such as Image Tracing, Potrace, and autoencoders, has been added in the background section. The previous papers that touched on the topic of image tracing have been further discussed in detail to illustrate the place our work takes in relation to what has already been accomplished in the field. Concerning the methodology followed, a detailed description of the autoencoding network built is provided, and the choice of the layers is justified. When it comes to the experimentation part, other experiments are added, such as the attempt at reducing the noise without blurring the images. The experiments already introduced in the previous paper are extended to further discuss their findings, and detailed images that visualize those findings are added.
In other words, concerning the added value of this paper over its previous conference version, every section has been expanded in further detail: the methodology section is richer (to ease future work building on our findings), the networks built and the technologies used are described more thoroughly (such as the trained autoencoders, which are specified layer by layer, and Potrace as a vectorization tool), and the experimentation section is extended, as the experiments' discussions are lengthened, illustrated with visualizations of their results, and complemented by additional experiments (such as a blur-free noise reduction attempt).
At first, there was the question: if the autoencoding of an image can improve its vectorized format by reconstructing its important features, how can high-pass filters come into play in the process? In other words, "Can a high-pass filter be used in combination with an autoencoding model to achieve an abstract representation of the image through the process of vectorization?" Thus, various ideas branched from this node, leading to the different pipelines that can be built to experiment with high-pass filter integration. For instance, the filters can be put before the autoencoding stage of a model that is already trained with filtered images to better reconstruct the significant data, leading to better vectorization. More
systematically, the autoencoding stage can act as a smoothing process, removing the noise from the images while reducing their complexity, while the filters come afterward to further enhance the quality of the important features, leading to a more abstract representation.
The remainder of this paper is structured as follows: In Section II, an introduction is given to image tracing, autoencoders, and high-pass filters. Section III discusses related work. Section IV introduces the methodology of this paper, including the evaluation methods used and the reasons why they were chosen. Section V presents the experiments and their results. This is the part that attempts to eliminate inefficient processing algorithms so that only a few pipelines that score closely are put forward for further evaluation. Section VI includes the evaluation of the different processing pipelines built and closes with a summarizing interpretation. Finally, Section VII concludes the paper and discusses future work.
## II Background
### _Image Tracing_
Image tracing is the process of converting a bitmap into a vector graphic. As Selinger writes in his tracing algorithm [2], vector graphics are described as algebraic formulas of the contours, typically in the form of Bezier curves. The advantage of displaying an image as a vector outline is that it can be scaled to any size without loss of quality. They are independent of the resolution and are used, for example, for fonts, since these must be available in many different sizes. However, most input and output devices, such as scanners, displays, and printers, generate bitmaps or raster data. For this reason, a conversion between the two formats is necessary. Converting a vector graphic into a bitmap is called "rendering" or "rasterizing." Tracing algorithms are inherently imperfect because many possible vector outlines can represent the same bitmap. Of the many possible vector representations that can result in a particular bitmap, some are more plausible or aesthetically pleasing than others. For example, to render bitmaps with a high resolution, each black pixel is represented as a precise square that creates staircase patterns. However, such staircase patterns are neither pleasant to look at nor particularly plausible interpretations of the original image. Bezier curves are used to represent the outlines.
As seen in Figure 1, a cubic Bezier curve consists of four control points, which determine the curvature of the curve. As a rule, the vector graphics are saved as SVG files (Scalable Vector Graphics). This file format is a special form of an XML file. XML stands for Extensible Markup Language. It is used to present hierarchically structured data in a human-readable format. As can be seen from Figure 2, the structure of this file is based on the Extensible Markup Language scheme. The file header defines which versions of XML and SVG are used. The height and width of the graphic in points are also specified. In this case, the g element represents the drawing area on which to draw. The elements to be drawn consist of tags stored as XML elements. They are particularly important in connection with the path elements. Quadratic and cubic Bezier curves, as well as elliptical arcs and lines, can be put together as best fits. The entries here determine which form the path takes.
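For illustration, a cubic Bezier segment with control points P0..P3 can be sampled directly from its Bernstein form; this helper is a sketch for intuition and is not part of Potrace or the SVG toolchain.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n_samples=50):
    """Sample points along a cubic Bezier curve defined by four control points."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```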
### _Potrace as a Vectorization Tool_
Potrace is a tracing algorithm that was developed by Peter Selinger [2]. It is considered simple and efficient as it produces excellent results. Potrace stands for "polygon tracer," where the output of the algorithm is not a polygon but a contour made of Bezier curves. This algorithm works particularly well for high-resolution images. Grayscale input images are reduced to black-and-white bitmaps by thresholding before being traced. The conversion from a bitmap to a vector graphic is done in several steps. First, the bitmap is broken down into several paths that form the boundaries between black and white areas. The points adjoining four pixels are given integer coordinates. These points are saved as vertices when the four adjacent pixels are not the same color. The connection between two vertices is called the edge. A path is thus a sequence of vertices, whereby the edges must all be different. The path composition in Potrace works by moving along the edges between the pixels. Every time a corner is found, a decision is made as to which direction the path will continue based on the colors of the surrounding pixels. If a closed path is defined, it is removed from the image by inverting all pixel colors inside the path. This will define a new bitmap on which the algorithm will be applied recursively until there are no more black pixels. Then, for each path, an approximately optimal polygon is determined. The criterion for optimality with Potrace is the number of segments. A polygon with fewer segments is therefore considered more optimal than one with more segments. In the last phase, the polygons obtained are converted into a smooth vector outline. Here, the vertices are first corrected so that they correspond as closely as possible to the original bitmap. Furthermore, in the
Figure 1: Bezier curve
Figure 2: Header of an SVG file by potrace
main step, the corners and curves are calculated based on the length of the adjacent segments and the angles between them. Optionally, the curves can be optimized after this process so that they match the original bitmap even more closely. Figure 3 shows the output vector image when applying Potrace to an input raster image.
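In practice, the tracing step can be driven from a script by calling the Potrace binary; the `-s` (SVG backend) and `-o` (output path) options reflect common Potrace usage, but exact flags may vary between versions, so treat this as a sketch.

```python
import subprocess

def trace_bitmap(input_bitmap, output_svg):
    """Convert a black-and-white bitmap (e.g., PBM/BMP) into an SVG outline
    by invoking the Potrace command-line tool."""
    subprocess.run(["potrace", "-s", input_bitmap, "-o", output_svg], check=True)
```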
### _Autoencoder_
A typical use of a neural network is for supervised learning. It involves training data, which contains an output label. The neural network tries to learn the mapping from the given input to the given output label. Nevertheless, if the input vector itself replaces the output label, then the network will try to find the mapping from the input to itself. This would be the identity function, which is a trivial mapping. However, if the network is not allowed to simply copy the input, then it will be forced to capture only the salient features. This constraint opens up a different field of applications for neural networks beyond supervised learning. The primary applications are dimensionality reduction and specific data compression. The network is first trained on the given input. The network attempts to reconstruct the given input from the features it has picked up and outputs an approximation of the input. The training step involves computing the reconstruction error and backpropagating it. The typical architecture of an autoencoder resembles a bottleneck. Figure 4 depicts the schematic structure of an autoencoder.
The encoder part of the network is used for encoding and sometimes even for data compression purposes, although it is not very effective as compared to other general compression techniques like JPEG. Encoding is achieved by the encoder part of the network, which has a decreasing number of hidden units in each layer. Thus, this part is forced to pick up only the most significant and representative features of the data. The second half of the network performs the decoding function. This part has an increasing number of hidden units in each layer and thus tries to reconstruct the original input from the encoded data. Therefore, autoencoders are an unsupervised learning technique. Training an autoencoder for data compression: For a data compression procedure, the most important aspect of compression is the reliability of the reconstruction of the compressed data. This requirement dictates the structure of the autoencoder as a bottleneck.
1. **Encoding the input data:** The autoencoder first tries to encode the data using the initialized weights and biases.
2. **Decoding the input data:** The autoencoder tries to reconstruct the original input from the encoded data to test the reliability of the encoding.
3. **Backpropagating the error:** After the reconstruction, the loss function is computed to determine the reliability of the encoding. The error generated is backpropagated. The above-described training process is reiterated several times until an acceptable level of reconstruction is reached.
After the training process, only the encoder part of the autoencoder is retained to encode a similar type of data used in the training process. The different ways to constrain the network are:
* **Keep small Hidden Layers:** If the size of each hidden layer is kept as small as possible, then the network will be forced to pick up only the representative features of the data thus encoding the data.
* **Regularization:** In this method, a loss term is added to the cost function which encourages the network to train in ways other than copying the input.
* **Denoising:** Another way of constraining the network is to add noise to the input and teach the network how to remove the noise from the data.
* **Tuning the Activation Functions:** This method involves changing the activation functions of various nodes so that a majority of the nodes are dormant thus effectively reducing the size of the hidden layers.
### _High-pass Filters_
A high-pass filter can be used to make an image appear sharper. These filters (e.g., Sobel [3] and Canny [4]) emphasize fine details in the image. High-pass filtering exploits local changes in intensity: if one pixel is brighter than its immediate neighbors, it gets boosted. Figure 5 shows the result of applying a high-pass filter (Sobel) to a random image.
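A Sobel-based high-pass step of this kind can be written with OpenCV as follows; the kernel size and the way the two gradient directions are combined into a magnitude image are assumptions, not details taken from the paper.

```python
import cv2

def sobel_highpass(gray, ksize=3):
    """Emphasise intensity changes by combining horizontal and vertical
    Sobel derivatives of a grayscale image."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)
    magnitude = cv2.magnitude(gx, gy)
    return cv2.convertScaleAbs(magnitude)   # back to 8-bit for display or tracing
```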
Figure 4: Example structure of an autoencoding network
Figure 3: Potrace vectorization
## III Related Work
Image segmentation can be considered an extension of image classification where localization succeeds the classification process. It is a superset of image classification with the model pinpointing where a corresponding object is present by outlining the object's boundary. Image segmentation techniques can be divided into two classes:
* Classical computer vision approaches: such as thresholding, edge, region- or cluster-based segmentation techniques.
* AI-based approaches using mainly autoencoders. For instance, DeepLab made use of convolutions to replace simple pooling operations and prevent significant information loss while downsampling.
In our paper, we focus on the use of high-pass filters with autoencoders, followed by a vectorization process. Hence, the relevant work on these topics is introduced in this section.
To create better vectorized graphics, Lu et al. [5] leverage additional depth information stored in RGB-D images. Although they anticipate consumer hardware will soon be able to produce photos with depth information, this has yet to happen. The method described here, however, operates with standard RGB photos without the need for additional gear.
Bera [6] offers a different method for image vectorization. It emphasizes the advancement made possible by edge detection techniques. This study, in contrast, looks into the benefits of dimensionality reduction.
A method for vector images based on splines rather than Bezier curves is presented by Chen et al. [7]. To create a combination of raster and vector graphics, they concentrate on data structures that facilitate real-time editing.
Solomon and Bessmeltsev [8] investigated the usage of frame fields in an MIT study. Finding a smooth frame field on the image plane with at least one direction aligned with neighboring drawing outlines is the basic goal of their method. The two directions of the field will line up with the two intersecting contours at X- or T-shaped junctions. The frame field is then traced, and the traced curves are then grouped into strokes to extract the drawing's topology. Finally, they produced a vectorization that was in line with the frame field using the extracted topology.
Lacroix [9] examined several R2V conversion issues, and a method utilizing a preprocessing stage that creates a mask from which edges are eliminated and lines are retained has been suggested. Then clustering is carried out using only the pixels from the mask. In this situation, a novel algorithm called the median shift has been suggested. The labeling procedure that follows should also take into account the type of pixel. The final stage entails a regularization process. In various examples, the significance of the pre-processing ignoring edge pixels while keeping lines has been demonstrated. Additionally, tests demonstrated the superiority of the median shift over both the mean shift and the Vector-Magic clustering method. This paper also showed that better line vectorization can be obtained by enabling the extraction of dark lines, which can support the use of high-pass filters as a preprocessing stage to put further emphasis on those dark lines.
On the straightforward job of denoising additive white Gaussian noise, Xie et al. [10] developed a unique strategy that performs on par with conventional linear sparse coding algorithms. In the process of fixing damaged photos, autoencoders are used to lower image noise.
Gong et al. [11] presented an approach for the automatic extraction and vectorization of road networks. The key barriers to extracting roads from remote sensing photos are, first, varied sizes and strong connectivity; second, complicated backgrounds and occlusions; and third, high resolution combined with a limited share of roads in the image. Road network extraction and vectorization preservation make up the two primary parts of their road vectorization technique. This study also demonstrates the benefits of dense dilation convolution, indicating the potential of autoencoding models to maintain vectorization.
Fischer and Amesberger [12] showed that preprocessing the raster image with an autoencoder neural network can reduce complexity by over 70% while keeping a reasonable image quality. They proved that autoencoders perform significantly better compared to PCA in this task. We base our work on this previous work, having a closer look at the effect of high-pass filters on autoencoding in an image vectorization pipeline.
## IV Methodology
In this section, the general approach is described. First, the selected dataset is introduced. The structure of the employed autoencoder is explained next. Details about the software implementation are given, and the processing pipeline is highlighted. Finally, evaluation methods are discussed.
### _CAT Dataset - as Data_
A dataset with over 10,000 cat images is used as the basis for training the autoencoder and for evaluating the results. The CAT dataset was published in 2008 by Zhang et al. [13]. The content of the images is secondary for this work: The main reason this dataset is used is the fact that features such as ears, eyes, and noses are relatively easy to see in these images. The autoencoding model can thus be trained on these features and reliably reproduce them.
Fig. 5: Applying Sobel derivatives on a random image
### _Autoencoder - Functional Structure_
The starting point is input with the size 256 x 256 x 1 (a 256 x 256 grayscale image). The first layer of the autoencoder is a convolution layer that contains 16 different trainable filter kernels. Each kernel can result in a different representation of the input image. A Max-Pooling layer is connected to the convolutional layer to condense the data and reduce the necessary computing power by shrinking the spatial size passed to subsequent layers. This 2x2 layer halves the size of the image. This convolutional-max-pooling layer cascade is repeated twice for the next two layers, with the convolutional layer having 8 different filters and the same 2x2 max-pooling layer resulting in 64x64 and 32x32 sizes. In the last convolution layer of the encoder, which receives a 32x32 matrix as input, only four convolution kernels are used. The point of highest data density is reached here; therefore, the Max-Pooling layer is omitted. This layer of the autoencoder contains the most compact coding or representation of the data set. Figure 6 shows the encoder part of the autoencoder.
The decoder follows the layer with the highest data density. This part of the autoencoder is responsible for reconstructing the learned encoding. It uses transposed convolution layers and batch normalization layers. The transposed convolution layer works in a similar way to a convolution layer; the difference is that the operation is transposed, so the data is no longer compressed but decompressed. Here, the principles of the convolution layer are reversed. The filter kernel is used to determine how the input value is broken down into the larger grid. By using this layer, the image matrix is again enlarged. The transposed convolution layer is followed by a batch normalization layer. These layers, also known as batch norms, serve to accelerate and stabilize the learning process of neural networks. They reduce the amount by which the values of the neurons can shift. On the one hand, the network can train faster because the batch norm ensures that the activation value is neither too high nor too low. On the other hand, using this layer can also reduce overfitting, as it acts as a mild regularizer.
The decoder connects directly to the encoder to take over the most compact representation of the data set passed by the encoding layers. First, the decoder receives a tensor with a size of 32x32x4 as input. The first function that is applied to this tensor is a transposed convolution layer. This results in an enlargement of the image matrix to 64x64. Four 3x3 filter kernels are used here. This is followed by a batch-norm layer to normalize the results and accelerate the learning process. The same process is repeated with a different number of filter kernels to maintain the symmetrical structure of the autoencoder. After reaching the original matrix size of 256x256, another transposed convolution layer is added. This ensures that the output of the first layer and the input of the last layer have the same size. The final layer reduces the tensor dimension to one to produce a grayscale image as output. Figure 7 shows the decoder part of the autoencoder.
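The following is a minimal Keras sketch of the layer cascade described above. Kernel sizes, strides, activations, and the exact filter counts of the intermediate decoder steps are assumptions where the text does not specify them; it is meant as an approximate reconstruction rather than the exact trained model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder():
    inp = layers.Input(shape=(256, 256, 1))          # grayscale input

    # Encoder: three conv/max-pooling steps down to the 32x32 bottleneck.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                    # 128x128
    x = layers.Conv2D(8, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                    # 64x64
    x = layers.Conv2D(8, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                    # 32x32
    encoded = layers.Conv2D(4, 3, padding="same", activation="relu")(x)  # 32x32x4 bottleneck

    # Decoder: transposed convolutions with batch normalization back to 256x256.
    x = layers.Conv2DTranspose(4, 3, strides=2, padding="same", activation="relu")(encoded)  # 64x64
    x = layers.BatchNormalization()(x)
    x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)        # 128x128
    x = layers.BatchNormalization()(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)       # 256x256
    x = layers.BatchNormalization()(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)   # back to one channel

    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

autoencoder = build_autoencoder()
autoencoder.summary()
```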
### _Software Implementation_
The test/evaluation framework was implemented in Python. The autoencoder was implemented with TensorFlow [14] and Keras [15]. The convolutional neural network was built with convolution and pooling layers in three steps to a 32x32 bottleneck. The decoder mirrors this structure with three steps of transposed convolutional layers and batch normalization layers. The autoencoder input is set to a 256x256 image (grayscale). The high-pass filters used in this paper are the standard implementations in OpenCV [16].
### _General Approach of Processing_
Regardless of the path an image takes in any pipeline that will be built, the first processing stage is always going to be converting the image into grayscale. The focus of this work is on single-channel images; however, it can be extended in the future for multi-channel (RGB) processing. Therefore, when a pipeline is demonstrated visually, the initial version of the image displayed is going to be grayscale; this implies that the raw RGB images were first grayscaled, which is a common first step for all the pipelines built in this work.
After an image is grayscaled, it will go through a certain cascade of processing stages. In this paper, the stages concerned are high-pass filtering, autoencoding, and vectorization. The experiments in this work are going to tune the different parameters that these stages can take. More importantly, the outputs of all pipelines possible are going to be in a vector format because we are attempting to enhance the vectorization process while aiming for an abstract representation of the image. Therefore, a rasterization stage is going to always be placed at the end of every pipeline. Converting images back into their raster format is mandatory to perform a comparison between the grayscale image that was initially fed to a pipeline and its resulting vector format. Hence, we rasterize the vector output to be able to evaluate the efficiency of the pipeline. A general processing approach for the different pipelines is shown in Figure 8.
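A rough sketch of one such pipeline (filtering applied after autoencoding) is given below. It is an illustration only: the `autoencoder` object is the trained Keras model from the previous section, and the Potrace command-line tool and the cairosvg package are assumed to be available for the vectorization and re-rasterization stages.

```python
import subprocess
import cv2
import numpy as np
import cairosvg

def vectorize_pipeline(path_in, use_filter=True):
    """Grayscale -> autoencoder -> (optional high-pass filter) -> Potrace SVG -> raster PNG."""
    gray = cv2.imread(path_in, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (256, 256))

    # Reconstruction by the trained autoencoder (values in [0, 1]).
    rec = autoencoder.predict(gray[None, ..., None] / 255.0)[0, ..., 0]
    rec = (rec * 255).astype(np.uint8)

    if use_filter:
        # Emphasize features after reconstruction, then invert (dark lines on white).
        rec = 255 - cv2.Canny(rec, 100, 200)

    # Potrace needs a black-and-white bitmap; threshold and save as BMP.
    _, bw = cv2.threshold(rec, 128, 255, cv2.THRESH_BINARY)
    cv2.imwrite("tmp.bmp", bw)
    subprocess.run(["potrace", "-s", "tmp.bmp", "-o", "out.svg"], check=True)

    # Rasterize the SVG again so it can be compared with the input image.
    cairosvg.svg2png(url="out.svg", write_to="out.png",
                     output_width=256, output_height=256)
    return "out.svg", "out.png"
```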
### _Evaluation Methods_
The case at hand deals with both vector and raster images. Therefore, for a comparison to take place, a comparison method for each format needs to be selected.
* **Vector:** Various methods can be used to measure the level of complexity in a vector image. One is the file size, which can be used to calculate the length of all path entries in the file. Furthermore, investigating the reduction of complexity can be done by analyzing the longest path tags. The number of path tags can be taken as a characteristic value of the complexity. In this paper, it is assumed that the number of SVG path entries is directly related to its complexity.
* **Raster:** There are mainly two common ways of comparing raster images. The first one is comparing images based on the mean squared error (MSE) [17]. The MSE value denotes the average difference of the pixels all over the image. A higher MSE value designates a greater difference between the original image and the processed image. Nonetheless, it is indispensable to be extremely careful with the edges. A major problem with the MSE is that large differences between the pixel values do not necessarily mean large differences in the content of the images. The Structural Similarity Index (SSIM) [18] is used to account for changes in the structure of the image rather than just the perceived change in pixel values across the entire image. The implementation of the SSIM used is contained in the Python library Scikit-image (also known as "Scikit") [19]. The SSIM method is significantly more complex and computationally intensive than the MSE method, but essentially, the SSIM tries to model the perceived change in the structural information of the image, while the MSE estimates the perceived errors.
In the experiments conducted for this paper, the results of MSE and SSIM drive the same conclusion. Therefore, to avoid redundancy, only the SSIM graphs are displayed in this paper.
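For reference, both metrics are available in Scikit-image; a small helper along these lines (a sketch, assuming same-sized grayscale inputs) was used to compare an image with its processed or rasterized counterpart:

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def compare(original, processed):
    """Return (MSE, SSIM) between two grayscale images of identical shape."""
    original = original.astype(np.float64) / 255.0
    processed = processed.astype(np.float64) / 255.0
    mse = mean_squared_error(original, processed)
    ssim = structural_similarity(original, processed, data_range=1.0)
    return mse, ssim
```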
## V Experimentation
Firstly, a sample of five images was filtered with the initial high-pass filters. The results are shown in Figure 9.
The first impression is that the Gaussian filter results in some significant noise. Both the Sobel and the Canny filters were acceptable, with the Sobel seemingly having better results for the human eye. Because it made more sense to have the detected lines drawn black on a white image than the opposite case, the three filters were inverted.
### _Blur-Free Noise-Reduction Filtering_
In an attempt to reduce the noise the Gaussian filter was causing, two trials were done. They both worked by cascading a filter on top of each high-pass filter. This smoothing filter should result in noise reduction while avoiding blurring the image. Hence, two filters were chosen: difference and grain-extract filters. Figure 10 shows the result of applying the two chosen filters on the high-pass filters.
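The two smoothing modes correspond to the commonly documented GIMP-style layer blends; a minimal NumPy sketch (our own illustration of those standard formulas) is:

```python
import numpy as np

def difference(base, blend):
    """GIMP-style 'difference' mode: absolute difference of the two layers."""
    return np.abs(base.astype(np.int16) - blend.astype(np.int16)).astype(np.uint8)

def grain_extract(base, blend):
    """GIMP-style 'grain extract' mode: base - blend + 128, clipped to [0, 255]."""
    out = base.astype(np.int16) - blend.astype(np.int16) + 128
    return np.clip(out, 0, 255).astype(np.uint8)

# Example use: cascade a smoothing layer on top of a high-pass result to damp noise,
# e.g. grain_extract(highpass_img, blurred_highpass_img).
```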
Although the image is still too noisy to be fed into a neural network, the noise-reduction filters may provide a roughly improved version of the Gaussian filter. The difference and grain-extract filters, however, resulted in a decline in image quality and a sizable data loss as compared to the Sobel and Canny filters. The experiment therefore suggests that these two recommended filters are unsuitable for use in a subsequent preprocessing stage and that the Gaussian filter should be categorically excluded from any further use in the project due to its inherent noise.
Figure 6: Encoder part of autoencoder
Figure 7: Decoder part of autoencoder
Figure 8: General processing approach
Figure 9: Applying different filters to five random images
### _Filter-Inversion Effect on Autoencoding_
The second experiment done in this section is obtaining the difference between training an autoencoder with images whose lines are drawn in black on a white background and training it with the same images but inverted.
Therefore, four models of autoencoders were trained with 5000 epochs each in addition to the default model, which makes them five models each trained with the following types of images respectively: grayscale images, Sobel-direct images, Sobel-inverse images, Canny-direct images, and Canny-inverse images (direct: dark background and white features. inverse: inverse of direct). Five images were selected randomly and put through the five trained models as shown in Figure 11.
The first conclusion drawn was that, when training an autoencoder, the semi-supervised neural network responds better when the training images have darker lines in their important features. However, a rough estimation by the human eye is not enough; an exact mathematical measurement is needed. Therefore, a measurement of similarity was done between every image and its decoded version. This was a better way of using the SSIM than comparing them with the default images, as the goal was to determine how close the autoencoding was. For this part, 50 images were used to dampen the image-specific features and make the measurement more generalized. The measured values were plotted in Figure 12.
For the sobel-direct, the mean and standard deviation values were 0.202 and 0.044, respectively. Their inverse scores were 0.699 and 0.124, respectively. For the Canny-direct, the mean and standard deviation values were 0.234 and 0.090, respectively. The inverse scored 0.741 and 0.150, respectively.
These values support our first observation, which is that the autoencoder learns faster when the image's most important features are darker than the rest of the data. The experiments so far have resolved into using the Sobel and Canny filters, and more specifically, their inverted results. At the start, it was thought that the experiments would resolve into choosing only one filter as a preprocessing stage for the autoencoding, but as calculated previously, the quality of images between the Sobel and Canny images is so close that it does not imply the disregard of one of the two filters.
Nevertheless, there is a significant drop in quality when applying a high-pass filter to the original image and then passing it through an autoencoding stage. This raised a flag that perhaps the pipeline's order was not optimal. For instance, the autoencoder is perceived to work as a reconstruction algorithm. Simultaneously, it can be considered to smooth the image, or in other words, to represent it with more coherence between the pixel values. As a result, the high-pass filters may be more efficient if applied after image reconstruction rather than before autoencoding, which appears to cancel out some of the emphasis generated by the filters. Hence, an experiment on the matter should be performed.
Figure 10: Applying the difference and grain-extract to a random image after being filtered
Figure 11: Comparison between the autoencoding of the Sobel and Canny filtered images with both of their versions
### _Autoencoders as a Preprocessing Stage to High-Pass Filters_
In this experiment, random images were taken, reconstructed with an autoencoder, filtered, and then vectorized. This experiment aims to display the effect of high-pass filters on reconstructed image vectorization. The five random resulting images are shown in Figure 13.
The first impression the experiment gives off is that the filters brought more definition to the lines in the images, which made the shapes appear clearer. This can lead to better vectorization, as it depends on the definitions of the shapes represented in the tags.
However, there are two versions of each of the two filters, which suggest an evaluation of the vectorization of each of the four result groups. Therefore, an SSIM calculation was done between every filtered image and its vector format in a pool of 50 images, randomly selected. The results are displayed in Figure 14.
The box plots show the better fitness of white images with black lines when compared to the darker images in vectorization. Visually speaking, the Sobel filter results were more recognizable to the naked eye. However, it left more complexity in the image, which made it harder for the vectorization to be more exact. Therefore, it is concluded that the darker shapes are going to be used in both filters, while there is not yet a clear basis for relying on only one of the two filters. Hence, a parallel stage of execution is introduced, which takes the autoencoder images and filters them with one filter before passing them to the global vectorization stage.
## VI Evaluation
Evaluation is concerned with how abstract the resulting images are. As there are two pre-processing blocks (filtering and autoencoding), four different pipelines can be built: autoencoding, filtering, autoencoding-filtering, and filtering-autoencoding. After one of these selections is fed the images, a vectorization process is always cascaded at the end.
First, all of the resulting images are going to be evaluated based on their path count (size) and similarity to the input images. Then, a summary of the evaluation is going to be introduced for each of the pipelines individually.
Before engaging in the evaluation, it is good to elaborate on the column naming of the upcoming plots:
* default: the default image.
* Sobel, canny: the filtered version of the image by the respective filter.
* dec: the decoded version.
* vect: the vectorized version.
* A combination of two or more indicates the case of cascaded stages. A default-dec-sobel label represents the following: the default image is reconstructed with the autoencoder and then filtered with the sobel filter.
Fig. 12: SSIM of different autoencoding approaches
Fig. 13: Filtered autoencoder images with Sobel and canny (both versions each)
Fig. 14: SSIM comparison of the vectorization of each of the four groups of images
### _Evaluating the size of the produced images_
To evaluate the size of the image, we count the number of path objects generated in the SVG file. From Figure 15 (note that the graph is in logarithmic scale) we see that the autoencoder (*-dec-*) significantly reduced the size of images, as it keeps only the most important features. The reconstructed filtered images (canny-dec, sobel-dec) had a similar path count. Although it was much smaller than the ones that did not go through that step, it was still above the default images that were reconstructed and vectorized without any filtering. Finally, when filters were applied to the default images that were put through an autoencoding stage (default-dec-sobel, default-dec-canny), these images scored in size calculations very similarly to the filtered images when only reconstructed (canny-dec, sobel-dec).
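The path count can be obtained with a short, namespace-agnostic helper such as the following sketch:

```python
import xml.etree.ElementTree as ET

def count_svg_paths(svg_file):
    """Count <path> elements in an SVG file as a proxy for vector complexity."""
    tree = ET.parse(svg_file)
    return sum(1 for el in tree.iter() if el.tag.split("}")[-1] == "path")
```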
### _Evaluating the quality of the produced images_
A more accurate way of examining the efficiency of the vectorization process of each pipeline is to compare the images and their vector versions (Figure 16). The pipeline of autoencoding-filtering-vectorization (the two rightmost groups) seems to experience the highest SSIM, which indicates its fitness in vectorization. It made more sense for the autoencoder to reconstruct the images and then for the filters to come afterward, emphasizing the important features of each image.
### _Implemented Pipelines: an evaluation summary_
This is a summary of the evaluation of the results for each of the pipelines individually.
* **Autoencoding-Vectorization:** This pipeline was based on the work of Fischer and Amesberger [12]. However, the implementation was different, and the evaluation was about the abstractness of the results. The quality of the vectorization is acceptable only in terms of general similarity. However, an abstract representation of the image is not achieved (Figure 17).
* **Filtering-Vectorization:** In this pipeline (Figure 18), the vectorization algorithm finds difficulty in vectorizing the filtered images. This is due to the noises caused by the applied filters. Although the experiments showed that the quality of the vectorization increased when the images were taken as a light background with dark features, the noise involved created an obstacle for Potrace to convert thoroughly the images into a vector format, which resulted in losing data.
* **Filtering-Autoencoding-Vectorization:** This pipeline was built as an attempt to enhance the _Autoencoding-Vectorization_ pipeline. Although the autoencoding stage was efficient in reducing the size of the images, it did not result in an abstract view of the image features. Therefore, a filtering stage was placed before the autoencoding process. Unfortunately, this pipeline does not achieve the result intended. The autoencoding stage was supposed to reconstruct the filtered images at a lower complexity, but the autoencoding model instead attempts to smooth the images, canceling the effect of the high-pass filters. This has resulted in a significant drop in the quality of the vector images, which is seen in Figure 19.
Figure 15: Path count of the resulting groups of vector images
Figure 16: Vectorization accuracy of different pipelines
Figure 17: Autoencoding-vectorization pipeline
* **Autoencoding-Filtering-Vectorization:** Due to the results in the _Filtering-Autoencoding-Vectorization_ pipeline, it was clear that the filtering stage would act more appropriately if it succeeded the autoencoding process, rather than preceding it. This was concluded when the autoencoding model was seen to reduce the complexity of the images while introducing a smoothing effect. The filters were placed after the reconstruction stage to preserve the important features of the reduced-complexity image. This cascade shows an acceptable vectorization quality while resulting in the intended abstract representation of the images as shown in Figure 20.
As for providing more visualizations of the results that can be obtained with this pipeline, Figure 21 shows some random images that were fed to the Autoencoding-filtering-vectorization pipeline along with their respective output images. As can be seen, the features of the cats are extracted very clearly in all examples.
## VII Conclusion
This paper discusses the autoencoding step and the use of high-pass filters in vectorization pipelines. As demonstrated, high-pass filters can improve the training of an autoencoder, which in turn improves the efficiency of vectorization by maintaining key aspects of an image.
The images that underwent the cascade of autoencoding-filtering scored the greatest in similarity and the lowest in error after the vectorization algorithm's effectiveness in each pipeline was assessed. This indicates that the most crucial elements of the reconstructed images were maintained and that the filtering step that came after the reconstruction enhanced those features even further, resulting in a better vectorization and a more abstract representation of the image.
Although the results from this cascade of autoencoding-filtering were respectable and met the initial objectives, more work needs to be done on the training dataset and model structures.
Regarding future work, experiments showed that dark features on a light background in images can improve both the training of autoencoder models and the process of vectorization. This will be an issue for further investigation. As this paper deals with single-channel (i.e., gray-scale) images, another aspect of the investigation will be the vectorization of multi-channel images.
|
2306.11676 | Emergence of tip singularities in dissolution patterns | Chemical erosion, one of the two major erosion processes along with mechanical erosion, occurs when a soluble rock like salt, gypsum or limestone is dissolved in contact with a water flow. The coupling between the geometry of the rocks, the mass-transfer and the flow leads to the formation of remarkable patterns, like scallop patterns in caves. We emphasize the common presence of very sharp shapes and spikes, despite the diversity of hydrodynamic conditions and the nature of the soluble materials. We explain the generic emergence of such spikes in dissolution processes by a geometrical approach. Singularities at the interface emerge as a consequence of the erosion directed in the normal direction, when the surface displays curvature variations, like those associated to a dissolution pattern. First, we demonstrate the presence of singular structures in natural interfaces shaped by dissolution. Then, we propose simple surface evolution models of increasing complexity demonstrating the emergence of spikes and allowing us to explain at long term by coarsening the formation of cellular structures. Finally, we perform a dissolution pattern experiment driven by solutal convection and we report the emergence of a cellular pattern following well the model predictions. Although the precise prediction of dissolution shapes necessitates to perform a complete hydrodynamic study, we show that the characteristic spikes which are reported ultimately for dissolution shapes are explained generically by geometrical arguments due to the surface evolution. These findings can be applied to other ablation patterns, reported for example in melting ice. | Martin Chaigne, Sabrina Carpy, Marion Massé, Julien Derr, Sylvain Courrech du Pont, Michael Berhanu | 2023-06-20T16:54:11Z | http://arxiv.org/abs/2306.11676v2 | # Emergence of tip singularities in dissolution patterns.
###### Abstract
Chemical erosion, one of the two major erosion processes along with mechanical erosion, occurs when a soluble rock like salt, gypsum or limestone is dissolved in contact with a water flow. The coupling between the geometry of the rocks, the mass-transfer and the flow leads to the formation of remarkable patterns, like scallop patterns in caves. We emphasize the common presence of very sharp shapes and spikes, despite the diversity of hydrodynamic conditions and the nature of the soluble materials. We explain the generic emergence of such spikes in dissolution processes by a geometrical approach. Singularities at the interface emerge as a consequence of the erosion directed in the normal direction, when the surface displays curvature variations, like those associated to a dissolution pattern. First, we demonstrate the presence of singular structures in natural interfaces shaped by dissolution. Then, we propose simple surface evolution models of increasing complexity demonstrating the emergence of spikes and allowing us to explain at long term by coarsening the formation of cellular structures. Finally, we perform a dissolution pattern experiment driven by solutal convection and we report the emergence of a cellular pattern following well the model predictions. Although the precise prediction of dissolution shapes necessitates to perform a complete hydrodynamic study, we show that the characteristic spikes which are reported ultimately for dissolution shapes are explained generically by geometrical arguments due to the surface evolution. These findings can be applied to other ablation patterns, reported for example in melting ice.
Cusps, tips or pinches that can occur at the free surface of liquids have always attracted attention, notably because they often present self-similar shapes [1; 2; 3; 4]. Mathematically, the location where the interface curvature diverges is generically called a singularity [5]. But singular shapes can also occur on evolving solid surfaces carved by hydrodynamic processes, which is of prime importance in geomorphology. At the surface of the Earth, landscapes are indeed shaped by erosion [6]: water and wind carve rocks and mountains, dig valleys and caves, and sometimes produce spectacular structures. In mechanical erosion, sediments are ripped from the rock and carried by the flow. In chemical erosion, minerals dissolves into water before being carried as solutes [7; 8]. In nature, this is the main erosion mechanism for limestone, gypsum or salt. When water flows over these rocks, it can lead to the formation of remarkable patterns. For example, sloping rain-exposed soluble ground can be covered by Rillenkaren (parallel grooves), in which wide concave channels are separated by narrow crests [9; 10; 11]. Even more impressive, stone forest of sharp pinnacles are observed on limestone in tropical Karst regions [12; 13]. Recent experimental, theoretical and numerical studies [14; 15; 16; 17] report the similar formation of sharp pinnacles by dissolution of fast dissolving material like home-made candies, under the action of a gravity-driven convection flow, when the boundary layer charged in solute remains attached to the dissolving interface.
Below the Earth surface, on the limestone walls of caves carved by underground rivers, scallop patterns are another important example of sharp structures generated by dissolution. The scallops appear as concave depressions rounded by sharp crests [12; 18] with typical length scales ranging from centimeters to meters. They originate from a coupling between the topography and a flow which generates wall undulations that could be linear transverse (called ripples or flutes [19; 20]) or more complex (scallops) [19; 21; 18]. Recent modeling works [20; 22] explains the emergence of the undulations as a linear instability mechanism at the laminar-turbulent transition, enhancing dissolution in the troughs. This two-dimensional mechanism predicts the wavelength as a function of the current velocity, but does not explain the sharp crests of the scallops and their evolution in the nonlinear regime of flow-dissolving surface interaction. Analogous scallop patterns have also been generated in dissolution experiments without imposing flow, when the solute boundary layer detaches from the dissolving interface, generating a convection flow [23; 24; 25]. The roughness and stripes generated at short time by dissolution evolve into an assembly of concave troughs delimited by sharp crests [25].
In all of these examples, the ultimate shapes result from different, specific and complex out-of-equilibrium hydrodynamic problems, but generically exhibit sharp structures and spikes.
In this article, we show that a simple geometrical model of interface evolution predicts the emergence of tip singularities in finite time, which explains the generality
of singular shapes on structures eroded by dissolution. First, we detect and characterize the presence of crests on a field example, a scalloped wall in the Saint-Marcel cave. Then, following the pioneering idea of A. Lange in the fifties [18; 26], we expose a model of interface evolution with a normal ablation leading to the emergence of singularities in finite time. Simulations of one and two dimensional interfaces show that surfaces whose curvature is initially not uniform evolve into a surface exhibiting discontinuities of the surface gradient. We also show that this observation is robust even when the erosion rate is non uniform, and generates by coarsening a cellular pattern. Finally, for the example of dissolution experiments driven by solutal convection, we analyze the emergence and the evolution of the scallop patterns in relation with the outcomes of our model.
## I Characterisation of singularities in the field.
The Saint-Marcel cave, located in the Ardeche department in southern France, is an impressive limestone cave open to the public, famous for its dissolution and precipitation patterns. We performed a 3D reconstruction of one of its nearly vertical walls by photogrammetry (see Methods IV.1). Fig. 1A shows the corresponding orthophotograph, on which spectacular'scallops' of similar sizes and orientations can be observed. For a fairly homogeneous region of about \(5\,\mathrm{m}^{2}\), we plot the wall elevation field \(\eta(x,y)\) in Fig. 1B. It consists of an array of concavities, a few dozen centimeters wide and a few centimeters deep, surrounded by narrower crests. Crests delimiting the scallops appear very well as areas of intense negative curvature forming thin contour lines on the corresponding mean curvature field \(\kappa(x,y)\) (see Methods IV.2.1 and Fig. 1C). By plotting a longitudinal cross-section profile in Fig. 1D, we visualize indeed that the curvature minima are precisely located at the top crests of the signal.
Schematically, these crests can be seen as singularities of the surface, _i.e._ local discontinuities of the first derivatives \(\partial\eta/\partial x\) or \(\partial\eta/\partial y\), or equivalently divergences of the second derivatives and of the curvature.
In reality, in a physical case like this, singularities are regularized: divergences of the curvature are replaced by localized peaks. But although the radius of curvature at the crests is not zero, it is much smaller than the typical dimensions of scallops. On a large scale, it remains therefore relevant to describe the cave surface as a set of singular structures. As these are randomly distributed in space, we use two statistical methods to detect and quantify them, by computing the curvature distribution of the surface and its Fourier spectrum.
As observed in Fig. 1E, the mean curvature distribution of \(\eta(x,y)\) differs significantly from the centered Gaussian distribution of same standard deviation (pink solid line) in two respects: it has a tail at negative curvature and its maximum is shifted toward positive values. The tail shows the presence of very localized areas of strongly negative curvature: the crests. The maximum is shifted because most of the points are within a wide concave area of small but positive curvature. The probability density function is thus asymmetric, which can be evidenced by the computation of the skewness \(\tilde{\mu}_{3}=\mathbb{E}\left[\left((\kappa-\mu)/\sigma\right)^{3}\right]\), with \(\mu\) and \(\sigma\) the mean and the standard deviation of \(\kappa\), respectively. We indeed find a negative skewness \(\tilde{\mu}_{3}=-0.33\).
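For illustration (not part of the original analysis), this kind of statistic can be reproduced from a gridded elevation map with a few lines of Python, using the curvature expression given in the Methods section and SciPy's skewness estimator; the grid step is a placeholder:

```python
import numpy as np
from scipy.stats import skew

def mean_curvature(eta, dx=1.0):
    """Mean curvature of a height field eta(x, y) sampled on a uniform grid."""
    ey, ex = np.gradient(eta, dx)          # first derivatives (rows = y, cols = x)
    eyy, eyx = np.gradient(ey, dx)
    exy, exx = np.gradient(ex, dx)
    num = (1 + ey**2) * exx + (1 + ex**2) * eyy - 2 * ex * ey * exy
    return num / (1 + ex**2 + ey**2) ** 1.5

# Skewness of the curvature distribution: a strongly negative value signals
# the crest tail reported for the scalloped wall.
# kappa = mean_curvature(eta, dx=grid_step)
# print(skew(kappa.ravel()))
```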
As observed in Fig. 1F, the power spectral density of the surface \(S_{\eta}\) (see Methods IV.2.2) follows a clear power law in \(k^{-4}\) at intermediate to low wavelength \(\lambda=2\pi/k\). The existence of this power law can be linked to the presence of crests using an approach proposed initially to address the turbulent spectrum of steep waves [27]. Mathematically, singularities are scale-invariant objects which are known to have a very wide spectral signature, and more specifically a power law spectrum. The corresponding exponent can be intuited as follows. Let's assume that the interface is composed of randomly distributed spikes, whose first derivatives are discontinuous, surrounded by smoother zones. The Laplacian of the surface can then be approximated as the sum of randomly distributed Dirac delta functions, located at the position \(\mathbf{r_{i}}\) in a 2D space, and of a regular function \(f(\mathbf{r})\):
\[\frac{\partial^{2}\eta}{\partial x^{2}}+\frac{\partial^{2}\eta}{\partial y^{2 }}=\sum_{i}\,\Gamma_{i}\,\delta(\mathbf{r}-\mathbf{r_{i}})+f(\mathbf{r})\,. \tag{1}\]
Assuming that the irregular part dominates the spectrum, by applying a 2D Fourier transform of wavevector \(\mathbf{k}\) we find:
\[k^{2}\,\tilde{\eta}\sim\sum_{i}\,\Gamma_{i}\,\mathrm{e}^{-\imath\left(k_{x}\, x+k_{y}\,y\right)}\qquad\text{with}\quad k^{2}=k_{x}^{2}+k_{y}^{2}\,. \tag{2}\]
The 2D Fourier transform of the surface therefore verifies \(\tilde{\eta}\sim k^{-2}\). According to the definition of the power spectrum \(S_{\eta}(k)\) (see Methods), which is integrated over the directions, we obtain finally \(S_{\eta}(k)=2\,\pi\,k\,|\tilde{\eta}|^{2}\sim k^{-3}\). This power-law is thus associated to the presence of point-like singularities. Now if the spikes are better depicted by line singularities, the space spectrum becomes proportional to \(k^{-4}\). By extension, if the crests have a non-integer fractal dimension \(D\), we have \(S_{\eta}\propto k^{-3-D}\)[28]. Here, the observed power law is therefore compatible with the presence of one-dimensional crests.
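As a quick numerical check of this scaling argument (our own illustration, not taken from the field data), one can generate a random piecewise-linear profile, whose only singularities are isolated slope discontinuities, and verify the \(k^{-4}\) decay of its one-dimensional spectrum, the 1D analogue of the scaling discussed here:

```python
import numpy as np

# Random piecewise-linear (tent-like) periodic profile: isolated slope discontinuities only.
rng = np.random.default_rng(0)
n = 2**16
x = np.linspace(0, 1, n, endpoint=False)
knots = np.sort(rng.uniform(0, 1, 50))
values = rng.normal(size=knots.size)
eta = np.interp(x, knots, values, period=1.0)

# Power spectrum; slope discontinuities give |eta_k|^2 ~ k^-4 at large k.
eta_k = np.fft.rfft(eta - eta.mean())
k = np.fft.rfftfreq(n, d=x[1] - x[0])
psd = np.abs(eta_k) ** 2

mask = (k > 200) & (k < 5000)
slope = np.polyfit(np.log(k[mask]), np.log(psd[mask]), 1)[0]
print(f"spectral slope ~ {slope:.2f} (expected about -4)")
```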
We have thus identified two indicators of the presence of crests on a natural wall. Although regularized on a small scale, these crests can be assimilated to singular structures, characterized by an asymmetric mean curvature distribution and a characteristic power-law in \(k^{-4}\) in their Fourier spectrum. Because of the time scales required to form scallops on limestone (typically several thousand years), it is impossible to observe directly
the evolution of the pattern and the appearance of singularities. However, in the following sections, we will present a numerical model and an experimental study to shed light on the generic emergence of scallops in erosion by dissolution.
## II Normal ablation model
### Theoretical basis of erosion by dissolution
Dissolution is a mass transfer phenomenon between a solid and a liquid phase, in which solid is transported in the liquid as a solute concentration field \(c(x,y,z)\). Dissolution dynamics and solid morphogenesis thus depend, via the boundary condition at the interface, on solute transport by the flow. While the latter can be described by the classic advection-diffusion and Navier-Stokes equations, the macroscopic description of dissolution at the solid/liquid interface is slightly more complex. In the simplest modeling of chemical kinetics, the reaction rate can be shown to be proportional to the distance to the thermodynamic equilibrium [29] (_i.e._\(c=c_{\rm sat}\)) leading to the following expression for the interface velocity \(\mathbf{v_{d}}\):
\[-\rho_{\rm s}\,\mathbf{v_{d}}=\alpha(c_{\rm sat}-c_{\rm i})\,\mathbf{n} \tag{3}\]
where \(c_{\rm i}\) is the solute concentration at the fluid-solid interface, \(\rho_{\rm s}\) the density of the solid, \(\alpha\) a coefficient depending on the chemical properties of the involved solid/liquid system and \(\mathbf{n}\) the unit vector normal to the interface.
Then, conservation of the solute flux at the interface also gives [25; 30]:
\[\rho_{\rm s}\,\mathbf{v_{d}}\,\left(1-\frac{c_{\rm i}}{\rho_{\rm i}}\right)=D \left(\nabla c|_{\rm i}\cdot\mathbf{n}\right)\mathbf{n}\,, \tag{4}\]
Figure 1: Field measurements: scallops in the Saint Marcel cave (France). A: Orthophotograph of a vertical wall of the cave, approximately 2 meters high and 10 meters long, covered by scallops. B: 3D reconstruction of a portion of the wall (area surrounded by a pink dotted line on the orthophotograph) using photogrammetry. C: Mean curvature \(\kappa\) of the topography for the same portion of the wall. Lines with highly negative curvature indicate location of crests. D: Longitudinal cross-section at the location of the pink dotted line on the orthophotograph (black solid line) and corresponding mean curvature (pink dashed line). E: Normalized mean curvature distribution. Solid pink line indicates Gaussian distribution with zero mean and same standard deviation. Inset: shape of the distribution around zero revealing a shift toward positive values. F: Power Spectral Density of the topography. Black dashed line corresponds to a power-law in \(k^{-4}\).
with \(\rho_{\rm i}\) the density of the fluid at the interface and \(D\) the diffusion coefficient of the solute in the fluid.
In most cases for usual soluble materials (limestone, gypsum, salt, sugar...), \(\mathbf{v_{d}}\) is several orders of magnitude smaller than the typical velocity of the fluid, so that the interface can be considered as quasi-static. The hydrodynamics adapts to the new boundary conditions in a time far faster than those needed to have significant shape changes. In addition, if the chemical dissolution kinetics is fast compared to diffusion, \(c_{\rm i}\) increases quickly to approach \(c_{\rm sat}\)[31] and the concentration profile decreases to reach the value of the surrounding water bath on a small scale \(\delta\), the thickness of the concentration boundary layer. The erosion rate -and potential patterning due to its heterogeneity- are controlled by the concentration gradient and thus by \(\delta\), which results from the balance between diffusion and fluid advection and can be modulated by a coupling between the topography and the flow. To obtain the exact shape evolution of a dissolving solid in a specific case, the hydrodynamic problem should therefore be solved, either analytically or numerically, to access \(\delta\). It often proves to be particularly difficult.
But the main point of this paper is to argue that the appearance of crests and spikes is largely independent of the exact behaviour of \(\delta\). We show for instance that they can emerge even when \(\delta\) is uniform, as long as the erosion velocity is normal to the solid interface.
Let us first consider the simple case where the initial interface is one-dimensional, and more precisely a sinusoid. We compute the erosion in propagating each point of the initial interface by a fixed normal distance \(d=0.05\) while ensuring that the new interface does not self-intersect (see Appendix Section 1. Algorithm of interface propagation). We repeat the process 10 times and plot the resulting interface at each step (see Fig. 2). In the vocabulary of differential geometry, the successive interfaces drawn in this figure are said to be parallels of the initial interface. Parallel curves or also called offset curves have been the object of several studies, mainly in the context of computer-aided design [32; 33]. A parallel curve does not usually have an analytical expression, even if this is the case for the initial curve. Yet if one considers a regular initial curve, parameterized by a parameter \(s\), of curvature \(\kappa_{0}(s)\), the curvature \(\kappa_{d_{\rm ur}}(s)\) of one of its parallel curves is simply expressed as:
\[\kappa_{d_{\rm ur}}(s)=\frac{\kappa_{0}(s)}{1+\kappa_{0}(s)d_{\rm er}}, \tag{5}\]
with \(d_{\rm er}\) the distance between both curves [34]. Here, \(d_{\rm er}=n_{\rm it}d\) with \(n_{\rm it}\) the number of iterations. If \(\kappa_{0}(s)>0\), then \(\kappa_{d_{\rm ur}}(s)\) decreases when \(d_{\rm er}\) increases and tends toward 0: the convex portions of the sinusoid, located around the minima, become increasingly flattened. But if \(\kappa_{0}(s)<0\), \(\kappa_{d_{\rm ur}}(s)\) decreases (its absolute value increases) and eventually diverges when \(d_{\rm er}=-1/\kappa_{0}(s)\), which means that \(d_{\rm er}\) is equal to the radius of curvature. We plot the evolution of the initial sinusoid, which is the locus of its centers of curvature, on Fig. 2. We indeed observe that a tip singularity appears when the parallel curve crosses the evolute, and the location of the tip coincides with the intersection point. Alternatively, a singularity of \(\eta(d_{\rm er},x)\) can be seen as a shock of \(\partial_{x}\eta\)[35]. Indeed, in the limit of small deformations, \(\partial_{x}\eta\) follows the inviscid Burgers equation, which is well known to give rise to shocks, _i.e._ discontinuities of \(\partial_{x}\eta\); and \(\eta\) follows the deterministic Kardar-Parisi-Zhang equation with no surface tension. Shock formation has also been theoretically investigated in the context of crystal growth [36; 37] but not for ablation.
With the exception of flat and spherical shapes, any interface displaying variation of the curvature subjected to a normal ablation process will form in finite time sharp crests corresponding to singular points where the surface gradient is discontinuous. We emphasize thus that generally a random surface experiencing normal ablation will exhibit the same singularities, as shown in the next section.
### Uniform dissolution rate for a 1D interface
Let us now consider what happens when a random interface undergoes uniform normal ablation in the one-dimensional case. We model the interface by a set of points \((x,\eta(x))\) forming a line. We generate the initial interface by applying Gaussian filters on a random list to select dimensionless wavelengths between 1 and 1.5, and we set its standard deviation to 1. Then, we obtain the subsequent interface by propagating each point in the direction of its normal by a given length \(d=0.05\). We repeat the process 250 times and plot the interface
Figure 2: Numerical model: normal ablation of a sinusoid with uniform dissolution rate. From top to bottom, successive interfaces resulting from the normal ablation, for selected steps. Pink dotted line: evolute curve of the initial sinusoid shape, _i.e._ locus of its centers of curvature.
once every 10 iterations in Fig. 3A. The color indicates the total ablation length \(d_{\rm er}=n_{\rm it}d\), with \(n_{\rm it}\) the number of iterations.
Because normal vectors deviate from each other at the local minima, we see that the troughs widen and their curvature decreases. At the same time, as the normal vectors converge at the local maxima, humps narrow with each iteration until they form tips (see inset of Fig. 3A). Adjacent tips eventually merge which leads to a global coarsening of the pattern, as evidenced by the grey dots indicating the position of tips in Fig. 3A. Finally, the amplitude of the height variations of the interface decreases. We highlight this by plotting \(\sigma_{\eta}\) the standard deviation of \(\eta\) as a function of \(d_{\rm er}\) in Fig. 3B.
Then, focusing more precisely on the beginning of the process, we plot the curvature distribution of the first ten interfaces in Fig. 3C, and notice that it becomes increasingly asymmetric. The apparition of a tail on the left part of the distribution shows the emergence of points with a highly negative curvature: the tips. The skewness of the distribution, plotted versus the ablation length in inset, decreases sharply for \(d_{\rm er}\simeq 0.25\). The inflection point seems to coincide with the apparition of the first singularities. Concurrently, we plot the power spectral density \(S_{\eta}\) as a function of the inverse of the wavelength in Fig. 3D. The contribution to the signal of small wavelengths increases gradually, and the right part of the spectrum eventually converges towards a power-law in \(k^{-4}\), characteristic of singular structures (see Section 1). The exponent is expected to be \(-4\) because, unlike crests on a surface, tips are point-like singularities, but there is no integration over the directions in 1D.
This model explains the evolution of any corrugations towards sharp tips, but it does not explain the very emergence of a pattern. Indeed, it predicts a decrease of the
Figure 3: Numerical model: normal ablation of a random interface with a uniform dissolution rate. A: Successive interfaces resulting from the uniform ablation of a random initial interface, consisting of \(10^{6}\) points in the interval \([-500,500]\). The location of tips is highlighted with grey dots. Inset: focus on the first interfaces with emergence of a tip. B: The standard deviation of the interface \(\sigma_{\eta}\) decreases with erosion distance \(d_{\rm er}\). C: The curvature distribution becomes asymmetric with a tail at highly negative \(\kappa\) evidencing the apparition of tips. We focus on the beginning of the process, and the color code is the same as in the inset of panel A. Inset: skewness of the distribution \(\tilde{\mu_{3}}\) as a function of \(d_{\rm er}\). Note that in absence of regularizing mechanisms of singularities, \(\tilde{\mu_{3}}\) reaches here strongly negative values. D: The power spectral density \(S_{\eta}\) of the interface rapidly collapses at small wavelength on a characteristic power-law in \(k^{-4}\).
corrugations amplitude. Pattern formation necessitates differential dissolution. In the following subsection, we will therefore introduce a non-uniform dissolution rate on a two-dimensional interface to allow three dimensional patterning as observed in natural systems.
### Heterogeneous dissolution rate for a 2D interface
We start from a flat surface \(\eta(x,y)=0\) consisting of a square grid of \(1800\times 1800\) points. We obtain the subsequent interface by propagating each point in the direction of its normal by a now spatially-varying length \(d(x,y)\), and we then repeat the process. We introduce space correlation so that, at each iteration,
\[d(x,y)=d\left(1+\xi(x,y)\right), \tag{6}\]
with \(d=0.05\) (very small compared to the other length scales) and \(\xi(x,y)\) a random function, varying between \(-0.4\) and \(0.4\), obtained by applying Gaussian filters on a random matrix to select wavelengths between \(2\) and \(3\). The spatial variations in \(d(x,y)\) first imprint on the interface and create a smoothly varying pattern (see Fig. 4A). Then, because the ablation is normal, troughs become wider and humps become narrower (Fig. 4B) until they form lines of crests. At later times, we recover the main features of scallop patterns: sharp crests encircling broad concave zones, forming a cellular pattern at large scale (Fig. 4C). On Fig. 4D-F, we plot the mean curvature at each point of the surfaces shown above. A network of crests appears progressively, characterized by a very negative mean curvature.
While the amplitude of the pattern continuously increases with time (Fig. 4G), the two indicators of singularities identified in the previous sections hold. The curvature distribution becomes increasingly asymmetric with a tail at highly negative mean curvature and a maximum shifted towards positive values. Its skewness decreases first slowly then abruptly, before reaching a plateau (Fig. 4H). At small wavelength, the power spectral density scales like \(k^{-4}\) evidencing one-dimensional singular structures (Fig. 4I).
We were thus able, with a simple model containing no hydrodynamics, to reproduce scallop patterns. All that is required is to impose a normal ablation and a heterogeneous ablation rate correlated in space. Then, even though the ablation rate varies smoothly, a cellular pattern with sharp crests robustly appears. We therefore emphasize that the tip formation mechanism mentioned in the previous subsections is robust and holds when erosion is no longer uniform. The average size of the cells, which initially reflects the typical wavelength of the ablation rate variations, increases as cells merge; it results in a coarsening of the pattern. It can be noted that the same results are obtained, qualitatively, if \(\xi(x,y)\) varies at each iteration instead of being constant. But the shorter the correlation time, the slower the amplitude growth. The whole hydrodynamics is in fact hidden behind two parameters: the typical wavelength of the erosion rate variations and the typical "correlation time". In a physical case, they would be typically imposed by the hydrodynamic instability involved and by the specific feedback between the flow and the topography.
In the following section, we demonstrate the relevance of this model to explain the appearance of scallops with an experimental example. A hydrodynamic instability developing at a water-soluble material interface produces a spatially varying dissolution rate. In accordance with the model, scallops appear consequently on the material surface.
## III Comparison to scallop patterns in solutal convection experiments.
In order to experimentally study the emergence of scallops in a simple and controlled way, we suspend horizontally blocks of soluble material in a large aquarium filled with water. The blocks of Himalayan pink salt of dimensions \(20\times 10\times 2.5\,\mathrm{cm}^{3}\) are polished with abrasive papers of decreasing roughness so that their bottom surface is initially flat and smooth. Once placed in water, blocks start to dissolve and a salt-rich concentration layer grows around the solid. Below the block and once its thickness reaches a critical threshold \(\delta_{\mathrm{c}}\), this dense layer is subjected to a solutal Rayleigh-Benard instability. It then destabilizes into plumes sinking in the bottom of the tank. The characteristic distance between plumes scales like \(\delta_{\mathrm{c}}\), which is given by a constant Rayleigh number criterion [23; 25; 30; 31]:
\[\frac{\Delta\rho\ g\ \delta_{\mathrm{c}}^{3}}{\eta D}=\mathrm{Ra}_{\mathrm{c}}, \tag{7}\]
with \(\Delta\rho\) the difference of density between the saturated fluid and the fluid in the bath, \(\eta\) the dynamic viscosity of the fluid, \(D\) the diffusion coefficient of the solute in the fluid, and \(\mathrm{Ra}_{\mathrm{c}}\) a critical value of the solutal Rayleigh number.
Now, because the thickness of the concentration boundary layer is locally larger at the vertical of a plume than between two plumes (where fresh water comes up), the dissolution rate is heterogeneous according to Eq. [4], which could induce a patterning of the solid surface. We therefore regularly take the blocks out of the tank and scan their lower surface with a laser profilometer, in order to access their topography with an accuracy of a tenth of a millimeter.
Let us now present the results obtained in a typical experiment. The aquarium was filled with an almost-saturated brine (\(\rho_{0}=1.195\,\mathrm{kg}\,\mathrm{L}^{-1}\)), in order to increase the size of the critical boundary layer thickness (see Eq. [7]) and thus the size of the patterns[25].
The bottom surface of the salt block first becomes increasingly rough. After an average erosion depth of \(2\,\mathrm{mm}\), the surface indeed displays height variations of typically \(0.1\,\mathrm{mm}\). It can be seen on Fig. 5A, where the topography of a \(20\times 30\,\mathrm{mm}^{2}\) area of the surface is shown. Then, cavities appear and broaden (see Fig. 5B). After an average erosion depth of \(14\,\mathrm{mm}\), the surface is covered by scallops, of a few millimeters width, forming a cellular pattern (see Fig. 5C). As can be seen on the maps showing the mean curvature (see Fig. 5D-F), the cavities are surrounded by increasingly narrow crests that progressively merge into a connected network.
The amplitude of the pattern first increases, almost linearly. Yet as shown Fig. 5G, it eventually passes through a maximum before decreasing slightly to reach a plateau. In order to explain the initial phase, we assume that the crests channel the plumes so that their position is locked. We are thus in the case of the model proposed previously, where the erosion rate is spatially variable (maximum at the level of the troughs and minimum at the level of the crests from where the salt-laden plumes flow) but constant in time. The fact that the pattern then stops growing could mean that it stops channeling plumes when its amplitude is too large. Another origin of the pattern amplitude saturation may be the competition between differential dissolution, which initiates the pattern, and normal ablation, which reduces the amplitude once crests are formed. The physical mechanisms at stake will be the object of a future study.
What is of particular interest to us here is that we once again find the two indicators of sharp structures. The mean curvature distribution of the surface becomes more and more asymmetric as dissolution proceeds (see
Figure 4: Numerical model: normal ablation of an initially flat surface with non-uniform dissolution rate. A-C: Topography of the surface at three different iterations (only a detail is shown and the average value of \(\eta\) is subtracted for clarity). D-F: Corresponding mean curvature field. G: Standard deviation of the topography versus erosion length. H: Distribution of the mean curvature. It becomes increasingly asymmetric as \(d_{\text{cr}}\) increases. Inset: skewness of the distribution. I: The power spectral density of the topography collapses at small wavelength on a characteristic power-law in \(k^{-4}\).
Fig. 5H). Its skewness, initially zero, decreases significantly before reaching a plateau. Finally, the power spectral density converges, at small wavelength, to the characteristic power-law in \(k^{-4}\) (Fig. 5I).
## IV Discussion and Conclusion
In this article, we demonstrate that the common generation of sharp shapes appearing in nature by dissolution erosion corresponds to the emergence of tip singularities in the evolution of the interface. We first show that a statistical analysis of natural dissolution patterns reveals the presence of tip singularities. We then explain their appearance by a simple model of normal ablation. First considering the basic case of a one-dimensional interface and uniform erosion rate, we show that initial humps thin until forming tips, while troughs widen. The first singularities appear when the moving interface crosses the evolute. Therefore, any surface presenting curvature variations will evolve to display singularities in finite time. This approach follows the idea of Arthur Lange, who was the first to propose a one-dimensional process of uniform ablation to explain the sharp shapes observed in caves [26]. This mechanism was then evoked in the formation of dissolution flutes and scallops [18], but had never been studied quantitatively.
Figure 5: Experiment: dissolution of the bottom surface of a salt block in a water tank. A-C: Topography of the surface at three different moments (only a detail from the center of the block is shown and the average value of \(\eta\) is subtracted for clarity). D-F: Corresponding mean curvature field. G: Standard deviation of the topography versus erosion length. H: Distribution of the mean curvature. It becomes increasingly asymmetric as \(d_{\rm cr}\) increases. Inset: skewness of the distribution. I: The power spectral density of the topography collapses at small wavelength on a characteristic power-law in \(k^{-4}\).
Yet a uniform ablation model cannot explain the emergence of a pattern. We therefore propose a new model, this time two-dimensional, in which the erosion rate varies spatially with a given typical wavelength. As expected, we first observe that these variations in the erosion rate are imprinted on the surface, thus creating a pattern. Then, hills become narrower and troughs become wider, until a cellular pattern of cavities surrounded by sharp crests is obtained. This pattern is strikingly reminiscent of the scallops observed in limestone caves and described in the first section (see Fig. 6A for another example). Finally, we set up a simple experimental device to obtain an erosion rate varying spatially with a well-defined wavelength. To do this, we immerse a block of salt in an aquarium and follow the evolution of its bottom surface. Underneath it, a solutal convection instability leads to the emission of plumes, which induce a variable erosion rate. We then observe the appearance of scallops, quite similar to the natural ones, in accordance with the predictions of the previous model.
In all three cases, the statistical analysis of the interface reveals the presence of singular structures (even if the singularities are in fact regularized at small scale). We emphasize that these singularities appear even if the erosion rate varies smoothly: they are only due to the fact that the erosion proceeds, at every point, along the normal to the interface. In two dimensions, singularities get robustly organized into interconnected crest lines surrounding concave areas (_i.e._ scallops). Therefore, every soluble surface subjected to a non-uniform erosion rate varying on a typical scale is expected to evolve into scallops. This typical scale could be selected, for instance, by any hydrodynamic instability. This probably explains why scallops appear so often, in a wide variety of materials and hydrodynamic conditions.
To conclude, we note that we have focused our study on the case of dissolution, which allows us to perform field measurements, experiments and numerical modeling, but our findings apply to a larger class of ablation phenomena. In particular, scallop patterns are also reported for melting interfaces, and specifically for the ice/water interface in the presence of a strong longitudinal current [38, 39] or of gravity-driven convection flows [40, 41, 42, 24] (see for example Fig. 6B). Analogous to dissolution flutes [20, 22, 18], the sublimation of icy substrates under turbulent flows generates ripples, which can be observed on Earth and other planets [43, 20]. In different hydrodynamic conditions, cellular patterns created by sublimation are also reported, such as the sun cups on high-altitude glaciers [44] and the ablation hollows on snow [45] (see Fig. 6C). Mechanical abrasion may also produce similar shapes, as in ablation at high temperature [46]. We note finally the striking resemblance of scallop patterns with the finger-like imprints at the surface of meteorites fallen on Earth, which are called regmaglypts [47], like the one depicted in Fig. 6D. These patterns result from the ablation of the meteorite surface during its entry into the atmosphere.
## Methods
### Field protocol
The 3D reconstruction was done by photogrammetry. 118 pictures were acquired on the wall with a Nikon D750 camera. The light was provided by two spotlights placed on each side of the scene. The spotlights were intentionally not placed directly in front of the wall but with a slight angle. As the rock is very white, the studied landforms were almost invisible with direct light. Two different rulers were placed on the wall for scaling the model. The DTM (Digital Terrain Model) processing was done with Agisoft Metashape software.
### Surface characterization
#### v.2.1 Curvature field computation
If \(\mathbf{n}\) is the local normal vector to the surface, the mean curvature is defined as \(\kappa=-\nabla\cdot\mathbf{n}\). With our parametrization, the mean curvature is computed as:
\[\kappa=\frac{(1+\eta_{y}^{2})\,\eta_{xx}+(1+\eta_{x}^{2})\,\eta_{yy}-2\,\eta_ {x}\,\eta_{y}\,\eta_{xy}}{(1+\eta_{x}^{2}+\eta_{y}^{2})^{3/2}}\,, \tag{8}\]
with the notation \(\eta_{x_{i}}\) and \(\eta_{x_{i}x_{j}}\) for the first and second derivatives with respect to the coordinates \(x_{i}\) and \(x_{j}\), respectively. Nevertheless, for experimental signals, the computation of first and second derivatives is known to amplify the noise at small scale. We thus apply a low-pass Gaussian filter to the field \(\eta\) beforehand.
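For illustration, the computation of Eq. (8) can be sketched as follows; this is a minimal numerical example in which the grid spacings, the filter width and the synthetic topography are arbitrary choices, not values used in our measurements:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew

def mean_curvature(eta, dx, dy, sigma_filter=2.0):
    """Mean curvature of the surface z = eta(x, y), following Eq. (8).

    A low-pass Gaussian filter is applied first, since differentiating an
    experimental signal amplifies the noise at small scale.
    """
    eta = gaussian_filter(eta, sigma=sigma_filter)
    eta_y, eta_x = np.gradient(eta, dy, dx)   # first derivatives (axis 0 = y, axis 1 = x)
    _, eta_xx = np.gradient(eta_x, dy, dx)    # second derivatives
    eta_yy, eta_xy = np.gradient(eta_y, dy, dx)
    num = (1 + eta_y**2) * eta_xx + (1 + eta_x**2) * eta_yy - 2 * eta_x * eta_y * eta_xy
    return num / (1 + eta_x**2 + eta_y**2) ** 1.5

# Synthetic smooth topography on a 1 mm grid, used only to exercise the function
rng = np.random.default_rng(0)
eta = gaussian_filter(rng.normal(size=(512, 512)), sigma=8)
kappa = mean_curvature(eta, dx=1e-3, dy=1e-3)
print("skewness of the curvature distribution:", skew(kappa.ravel()))
```

The skewness of the resulting curvature distribution is the first indicator of sharp structures used throughout this work.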
#### v.2.2 Spatial power spectrum computation
Figure 6: Natural examples of spike and scallop patterns. A: Sharp scallops on marble limestone from the Korallgrottan cave in Jamtland, Sweden. Width about 0.5 m. Credits Johannes Lundberg 2014. B: Ice scallops on the immersed surface of an iceberg. Credits Alban Michon. C: Suncups formed by sublimation of snow in Vercors, France. Credits Emilien Dilly. D: Scallop patterns on the Murnpeowie Meteorite, South Australian Museum, Adelaide, Australia. Width about 1 m. Credits James St. John.
After subtracting a second-order fit, to remove the large-scale structure and avoid biases due to an imperfect leveling, we compute the spatial power spectrum of the topography, \(S_{\eta}\) (or power spectral density, PSD), which we then integrate over the polar angle:
\[S_{\eta}(k_{x},k_{y})=\frac{1}{L_{x}L_{y}}\,\left|\int_{0}^{L_{y}}\int_{0}^{L_{x }}\eta(x,y)\,\mathrm{e}^{-i(k_{x}x+k_{y}y)}\,\mathrm{d}x\,\mathrm{d}y\right|^{2} \tag{9}\]
\[S_{\eta}(k)=\int_{0}^{2\pi}\,S_{\eta}(k_{x},k_{y})\,k\,\mathrm{d}\theta \tag{10}\]
with \(k_{x}=k\cos\theta,k_{y}=k\sin\theta\). In order to better compare with the physical dimensions, it can be useful to plot the spectrum as a function of \(k/2\pi=1/\lambda\).
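As a complement, Eqs. (9)-(10) can be evaluated on a gridded topography with a discrete Fourier transform followed by an angular integration. The sketch below assumes that the second-order detrending described above has already been applied to `eta`; the number of wavenumber bins is an arbitrary choice:

```python
import numpy as np

def radial_psd(eta, dx, dy, nbins=128):
    """Angularly integrated power spectral density of eta(x, y), Eqs. (9)-(10)."""
    ny, nx = eta.shape
    Lx, Ly = nx * dx, ny * dy
    # Eq. (9): 2D spectrum; the FFT times dx*dy approximates the continuous transform
    S2 = np.abs(np.fft.fft2(eta) * dx * dy) ** 2 / (Lx * Ly)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    K = np.hypot(KX, KY)
    # Eq. (10): integrate over the polar angle by summing the spectral cells
    # falling in each annulus [k, k + dk) and dividing by the bin width
    bins = np.linspace(0.0, K.max(), nbins + 1)
    dk = bins[1] - bins[0]
    cell = (kx[1] - kx[0]) * (ky[1] - ky[0])
    idx = np.digitize(K.ravel(), bins) - 1
    Sk = np.array([S2.ravel()[idx == i].sum() for i in range(nbins)]) * cell / dk
    return 0.5 * (bins[:-1] + bins[1:]), Sk
```

Plotting the returned spectrum against \(k/2\pi=1/\lambda\), as suggested above, makes the comparison with the physical wavelengths and with the \(k^{-4}\) regime more direct.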
###### Acknowledgements.
We thank Etienne Courtier (MSC), Marc Durand (MSC) and Olivier Devauchelle (IPG Paris) for scientific discussions. We thank Olivier Bourgeois (LPG) and Delphine Dupuy (curator of the cave) for the field data acquisition in the Saint-Marcel cave. This research was funded by the ANR grants Erodisis ANR-16-CE30-0005 and PhysErosion ANR-22-CE30-0017, as well as the Idex Emergence Grant Riverdiss from the Universite Paris Cite. The field measurements were funded by the Tellus program "Caracterisation et modelisation des formes de dissolution periodiques sur les parois calcaires soumises a des ecoulements d'eau" project from CNRS (INSU).
|
2305.07544 | The extremely X-ray luminous radio-loud quasar CFHQS J142952+544717 at
$z=6.18$ under Chandra high-angular resolution lens | We present the first X-ray observation at sub-arcsecond resolution of the
high-redshift ($z=6.18$) radio-loud quasar CFHQS J142952+544717 (J1429). The
~100 net-count 0.3-7 keV spectrum obtained from $\sim 30$ ksec Chandra exposure
is best fit by a single power-law model with a photon index $\Gamma=2.0\pm0.2$
and no indication of an intrinsic absorber, implying a 3.6-72 keV rest-frame
luminosity $L_{\rm X}=(2.3^{+0.6}_{-0.5})\times10^{46}$ erg s$^{-1}$. We
identify a second X-ray source at 30 arcsec distance from the J1429 position, with
a soft ($\Gamma\simeq 2.8$) and absorbed (equivalent hydrogen column density
$N_{\rm H} <13.4\times 10^{20}$ cm$^{-2}$) spectrum, which likely contaminated
J1429 spectra obtained in lower angular resolution observations. Based on the
analysis of the Chandra image, the bulk of the X-ray luminosity is produced
within the central $\sim 3$ kpc region, either by the disk/corona system, or by
a moderately aligned jet. In this context, we discuss the source properties in
comparison with samples of low- and high-redshift quasars. We find indication
of a possible excess of counts over the expectations for a point-like source in
a 0.5 arcsec-1.5 arcsec ($\sim 3-8$ kpc) annular region. The corresponding
X-ray luminosity at J1429 redshift is $4\times 10^{45}$ erg s$^{-1}$. If
confirmed, this emission could be related to either a large-scale X-ray jet, or
a separate X-ray source. | G. Migliori, A. Siemiginowska, M. Sobolewska, C. C. Cheung, Ł. Stawarz, D. Schwartz, B. Snios, A. Saxena, V. Kashyap | 2023-05-12T15:15:52Z | http://arxiv.org/abs/2305.07544v2 | The extremely X-ray luminous radio-loud quasar CFHQS J142952+544717 at \(z=6.18\) under Chandra high-angular resolution lens
###### Abstract
We present the first X-ray observation at sub-arcsecond resolution of the high-redshift (\(z=6.18\)) radio-loud quasar CFHQS J142952+544717 (J1429). The \(\sim 100\) net-count 0.3-7 keV spectrum obtained from \(\sim 30\) ksec _Chandra_ exposure is best fit by a single power-law model with a photon index \(\Gamma=2.0\pm 0.2\) and no indication of an intrinsic absorber, implying a 3.6-72 keV rest-frame luminosity \(L_{\rm X}=(2.3^{+0.6}_{-0.5})\times 10^{46}\) erg s\({}^{-1}\). We identify a second X-ray source at 30'' distance from J1429 position, with a soft (\(\Gamma\simeq 2.8\)) and absorbed (equivalent hydrogen column density \(N_{\rm H}<13.4\times 10^{20}\) cm\({}^{-2}\)) spectrum, which likely contaminated J1429 spectra obtained in lower angular resolution observations. Based on the analysis of the _Chandra_ image, the bulk of the X-ray luminosity is produced within the central \(\sim 3\) kpc region, either by the disk/corona system, or by a moderately aligned jet. In this context, we discuss the source properties in comparison with samples of low- and high-redshift quasars. We find indication of a possible excess of counts over the expectations for a point-like source in a 0.5''-1.5'' (\(\sim 3-8\) kpc) annular region. The corresponding X-ray luminosity at J1429 redshift is \(4\times 10^{45}\) erg s\({}^{-1}\). If confirmed, this emission could be related to either a large-scale X-ray jet, or a separate X-ray source.
keywords: galaxies: active, galaxies: high-redshift, galaxies: nuclei, X-rays: general, individual: CFHQS J142952+544717
## 1 Introduction
The formation and growth of early black holes and their impact on the evolution of structures across the Universe is at the forefront of current astrophysical research. One of the main open problems relates to the formation and evolution of radio sources and the significance of radio phenomena (i.e., jets and lobes) produced by growing black holes in the feedback and co-evolution of galaxies and clusters of galaxies. We still do not understand why only a small fraction of quasars exhibits powerful radio-emitting structures extending to large, in some cases even Mpc, scales. And yet, the existence of jetted quasars at high redshift challenges models of structure formation, as their radio power requires a very massive black hole, M\({}_{\rm BH}>10^{9}-10^{10}\)M\({}_{\odot}\)(e.g., Croton et al., 2006; Volonteri and Natarajan, 2009; Valiante et al., 2016, and references therein).
The energy released by the jet into the interstellar medium (ISM) may impact the evolution of the host galaxy (Fragile et al., 2004; Gaibler et al., 2012; Mukherjee et al., 2018; Meenakshi et al., 2022). Observational evidence of such an effect is still limited and its interpretation controversial, with both positive and negative feedback considered to play a role (e.g. Bicknell et al., 2000; Croft et al., 2006; Nesvadba et al., 2010; Salome et al., 2015; Lanz et al., 2016; Nesvadba et al., 2020; Girdhar et al., 2022). In a recent work, Poitevineau et al. (2023) found indications that the supermassive black holes (SMBHs) powering radio-loud1 active galactic nuclei (AGN) in the \(0.3<z<4\) range are overall more massive than expected from the scaling relation between the masses of SMBHs and their host spheroids in the local Universe. The proposed explanation involves a relevant role of "radio-mode" AGN feedback, leading to a rapid growth of SMBHs at early epochs while influencing the star-formation history of the AGN host galaxy (see also Jolley and Kuncic, 2008; Diana et al., 2022).
Footnote 1: Here we assume the classical separation between radio-loud (RL) and radio-quiet (RQ) quasar based on the rest-frame radio to optical flux density ratio \(R\), with the radio being measured at 5 GHz and the optical at 4400 Å (Kellermann et al., 1989). The divide is set at \(R=10\).
In order to investigate radio-mode feedback in the AGN evolution, sizeable samples of radio-loud AGN at high redshift are needed. Indeed, the number of known high-redshift radio sources (\(z>5\)) has increased significantly during the past several years (e.g. Banados et al., 2015; Banados et al., 2018; Fan et al., 2022, for a general review), with a rapid sequence of record-breaking discoveries (Willott et al., 2010; Saxena et al., 2018; Belladitta et al., 2020; Banados et al., 2021; Connor et al., 2021; Endsley et al., 2022; Ighina et al., 2023, to name
a few). At the time of writing this article, there are 10 radio quasars known at \(z>6\). In addition, Gloudemans et al. (2022) report the discovery of 24 radio-bright (21 radio-loud) quasars at \(4.9\lesssim z\lesssim 6.6\) by combining DESI Legacy Imaging Surveys and LOFAR Two-metre Sky Survey (LoTSS), while two more targets selected from the Rapid ASKAP Continuum Survey (RACS) and the Dark Energy Survey (DES) have been spectroscopically confirmed at \(z\sim 6.1\)(Ighina et al., 2022).
X-ray observations provide important constraints on the physical processes associated with the accretion onto a SMBH. X-ray selected samples are key to investigate the accretion history of AGN free from absorption biases that affect other wavelengths (Wolf et al., 2021; Barlow-Hall et al., 2023, and references therein). X-ray observations can also help to constrain radiative processes at work in quasar jets and help to infer their physical properties. For example, the contribution of the inverse-Compton scattering off the Cosmic Microwave Background photons by relativistic electrons (IC/CMB; see Tavecchio et al., 2000; Celotti et al., 2001) to the X-ray luminosity of jets in the local Universe, is highly debated (e.g., Stawarz et al., 2004; Hardcastle et al., 2006; Meyer & Georganopoulos, 2014; Breiding et al., 2023). Importantly, the IC/CMB component should be more easily observable in high-\(z\) jets because of the \((1+z)^{4}\) dependence of the CMB photon density (Schwartz, 2002). Indeed, the IC/CMB model appears to account well for the emission of several high-\(z\) jets detected by the _Chandra_ X-ray observatory (Siemiginowska et al., 2003; Cheung et al., 2006, 2012; Simionescu et al., 2016; Wu et al., 2017; Napier et al., 2020; Snios et al., 2022; Ighina et al., 2022) even though, thus far, sample studies have not provided robust indication of the expected X-ray emission enhancement with redshift (Wu et al., 2013; McKeough et al., 2016; Ighina et al., 2019; Ighina et al., 2021), a major limitation to this test being the paucity of radio-loud quasars known at high redshifts (\(z>5\)) and the even smaller number of X-ray detected radio quasars.
Of the \(z>6\) radio quasars known to date (see Momjian et al., 2018; Liu et al., 2021; Ighina et al., 2023, for recent compilations), currently only three have reported X-ray detections (Khorunzhev et al., 2021). Of these, CFHQS J142952+544717 (hereafter, J1429) is a remarkable source in light of its high X-ray luminosity (exceeding \(10^{46}\) erg s\({}^{-1}\) in the 2-10 keV rest-frame energy band, Medvedev et al., 2020, 2021) and its radio properties suggesting a young radio phase (Frey et al., 2011).
Indeed, the source was part of the sample of high-\(z\) candidate young radio sources selected by our group for _Chandra_ observations with the goal of investigating the high-energy properties and the evolution of newly born radio jets. The first part of the sample was presented in Snios et al. (2020), while a publication on the remaining targets is in preparation. Here, we present the results of the first X-ray study of J1429 at arcsecond resolution. In the \(\sim 2.2-50\) keV rest-frame band covered by _Chandra_, different radiative processes could be at work. Broadly summarizing, these can be related either to the AGN or to the radio structures (jets and lobes) of J1429. Comptonization of the ultraviolet (UV) disk photons by the electrons in a hot (\(10^{8-9}\) K) corona (Haardt & Maraschi, 1993) surrounding the SMBH produces a power-law with a characteristic roll-over at energies of a few hundred keV. Non-thermal X-ray emission in the extended radio structures is produced via IC/CMB, but also via IC scattering of the jet synchrotron photons and of the nuclear photons, which include direct UV disk photons or disk photons reprocessed in the broad line regions (BLR) or in the torus (see also the discussion in Medvedev et al., 2021).
The paper is organized as follows: after summarizing the main information on J1429 (Sec 1.1), we present the _Chandra_ observation and X-ray analysis results in Sec. 2. The source properties are discussed in Sec. 3 and we draw our conclusions in Sec. 4.
### Cfhqs j142952+544717
J1429 was first observed spectroscopically as part of the Canada-France High-\(z\) Quasar Survey (CFHQS, Willott et al., 2005). The quasar redshift is taken to be \(z=6.1837\pm 0.0015\)(hereafter, \(z=6.18\)), as determined from the CO (1-0) line emission (Wang et al., 2011). Its absolute magnitude (\(M_{1450}=-25.85\)) places the source at the bright end of the quasar luminosity function at \(z\sim 6\)(Willott et al., 2010). It is classified as a radio-loud quasar based on a reported radio loudness parameter \(R=109\pm 9\)(see Banados et al., 2015). J1429 was repeatedly observed in the optical band over more than a decade without displaying large amplitude variability. Radio observations cover the 120 MHz to 32 GHz band providing a good characterization of the radio spectrum, which is flat below 5 GHz and steepens (up to \(\alpha\sim 1.0\)) at higher frequencies. The steep spectrum, lack of strong variability, and VLBI observations showing a compact (\(<100\) pc) but marginally resolved morphology (Frey et al., 2011), make J1429 a candidate young radio source. Frey et al. (2011) imaged J1429 with the European VLBI Network (EVN) at 1.6 and 5 GHz and noted a faint extension to the SE in the 1.6 GHz map only (see Table 1 therein). The extension is at a position angle PA= 138 deg (indicated in Figure 1, bottom) and is offset from the putative radio core by 6.4 mas (36 pc, projected). The intrinsic compactness of the radio structure is also supported by the measured brightness temperature (\(T_{\rm B}\simeq 10^{9}\) K), which disfavours Doppler-boosted radio emission (Frey et al., 2011).
Observations at 32 GHz (Wang et al., 2011) and 250 GHz (Omont et al., 2013) investigated the host galaxy properties unveiling vigorous star formation and possibly the presence of a companion galaxy at \(\sim 7\) kpc distance. These results found support in recent NOEMA observations of [CII] line emission and of the underlying continuum (Khusanova et al., 2022), which constrained the star-formation rate as SFR \(=520-870\) M\({}_{\odot}\) yr\({}^{-1}\). The [CII] line properties can be explained with two merging galaxies or, alternatively, with an AGN-driven outflow.
In X-rays, J1429 was first detected by the extended ROentgen Survey with an Imaging Telescope Array (eROSITA) in December 2019 (Medvedev et al., 2020). The short (160 s) exposure was sufficient to measure a 0.3-2 keV flux of \(\sim 8\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) and a rest-frame 2-10 keV luminosity of a few \(10^{46}\) erg s\({}^{-1}\), which makes J1429 one of the most X-ray luminous high-\(z\) quasars known to date. A follow-up 20 ksec XMM-_Newton_ director's discretionary time observation on 2020 July 24 improved the count statistics of the X-ray spectrum, which is best-fit by a power-law with a steep photon index, \(\Gamma=2.5\pm 0.2\), and a moderate level of absorption, \(N_{\rm H}=(3\pm 2)\times 10^{22}\) cm\({}^{-2}\) at the source redshift (Medvedev et al., 2021). The source did not display any significant flux variability between the eROSITA and XMM-_Newton_ observations, separated by \(\sim 7.5\) months.
We adopt a flat \(\Lambda\)CDM cosmological model with \(h=0.70\) and \(\Omega_{\Lambda}=0.7\). The source redshift, \(z=6.18\), corresponds to a luminosity distance of 59.8 Gpc and a scale of 5.6 kpc arcsec\({}^{-1}\). The observed 0.3-7 keV energy band translates into a 2.2-50.3 keV rest-frame band.
## 2 X-ray observations
J1429 was included in the sample of approved high redshift _Chandra_ targets (Cycle AO21, PI: Siemiginowska) selected from Coppejans
et al. (2016, 2017) catalog of radio-loud AGN with redshifts above \(z>4.5\). The selection was based on the shape of radio spectra and available morphology in the VLBI observations. The selected sources have in particular steep or peaked radio spectra, likely dominated by compact radio lobes rather than relativistically beamed jets (Readhead et al. 1996; O'Dea 1998). Radio morphologies confirm the presence of double or single extended structures on relatively small scales (a few kpc), suggesting that these sources have not grown to large-scale radio galaxies either because they are young, or because of the impact of the environment preventing their growth (see O'Dea & Saikia 2021, for a review). The analysis of the full sample will be presented in a forthcoming paper.
### Chandra data reduction & analysis
The 30.56 ksec _Chandra_ observation was performed on 2021-08-03 using the Advanced CCD Image Spectrometer (ACIS-S) with the readout of the full CCD in the VFAINT mode. The target was located on the S3 chip at Y\(=-0.1\) arcmin offset from the nominal aim-point. We used the CIAO v.4.14 (Fruscione et al. 2006) software for the X-ray data analysis and reprocessed the data using the chandra_repro script to apply the most recent calibrations (CALDB v.4.9.8) and the sub-pixel adjustment algorithm (pix_adj=EDSER) for the best angular resolution of the image.
A visual inspection of the 0.3-7 keV image in ds9 clearly shows the presence of a point-like source at the optical coordinates of J1429 (see Figure 1). The X-ray centroid is located at the coordinates (J2000) RA\(=\)14:29:52.12, DEC\(=\)+54:47:16.99. The _Chandra_ image confirms the presence of a second X-ray source (src1) at 44''.8 south-west of J1429, already identified in Medvedev et al. (2021). In addition, it unveils a second field source (src2), at 30''.5 north-east of J1429, spatially coincident with the infrared source WISEA J142955.47+544727.8 (\(w1=17.96\pm 0.18\) mag, \(w2>17.90\) mag, \(w3>12.91\) mag, \(w4>9.40\) mag).
We used the ds9 DAX aperture photometry to extract the net counts in the full (0.3-7 keV), soft (0.3-2 keV) and hard (2.0-7.0 keV) energy bands of J1429 and of the two field sources from circular regions with \(\sim 2\arcsec\) radius (95% PSF ECF at 1.5 keV) centered on the respective X-ray centroids. The same regions were used to extract the spectra and response files of the three sources with specextract. The background spectrum was extracted from a region surrounding the source and free of known point sources. All model fitting was performed in _Sherpa_ (Freeman et al. 2001) assuming the C-statistic based on the Poisson likelihood, and using the Nelder-Mead optimization algorithm. Uncertainties are reported at the 1\(\sigma\) confidence level.
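For reference, the fitting setup just described can be reproduced with a few Sherpa calls. The snippet below is a minimal, illustrative sketch: it assumes a standard CIAO/Sherpa installation with XSPEC models available, and the input file name is a placeholder rather than an actual data product name.

```python
from sherpa.astro.ui import *  # CIAO's Sherpa fitting environment

load_pha("j1429_src.pi")       # spectrum and responses from specextract (placeholder name)
notice(0.3, 7.0)               # fit the 0.3-7 keV band

set_stat("cstat")              # Poisson-likelihood C-statistic
set_method("neldermead")       # Nelder-Mead optimizer

# Absorbed power law: Galactic column (frozen) times a power law
set_source("xsphabs.gal * powlaw1d.pl")
gal = get_model_component("gal")
gal.nh = 0.0115                # 1.15e20 cm^-2 in XSPEC units of 1e22 cm^-2
freeze(gal.nh)

fit()                          # best-fit photon index and normalization
conf()                         # 1-sigma confidence intervals
```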
### Results of Spectral Analysis
We assumed an absorbed power-law model and performed a fit to the X-ray spectrum of each source, initially leaving the absorption column free to vary. For J1429 and src1, we did not find any evidence for an absorption parameter value in excess of the measured Galactic column density (\(N_{\rm H,~{}Gal}=1.15\times 10^{20}\) cm\({}^{-2}\); HI4PI Collaboration et al. 2016), while the src2 spectrum appears best modeled assuming a moderately absorbed, steep (\(\Gamma\simeq 2.8\)) power-law, although the limited statistics provide only an upper limit on the intrinsic column density, \(N_{\rm H}<1.3\times 10^{22}\) cm\({}^{-2}\). More complex spectral models, including a cut-off power-law, or the addition of a thermal component, do not significantly improve the fit of J1429. The resulting best-fit model parameters are listed in Table 1 and the J1429 X-ray spectrum and model are shown in Figure 2. For J1429, the best-fit photon index is \(\Gamma=2.0\pm 0.2\) and the 0.5-10 keV unabsorbed flux is \(5.4^{+1.4}_{-1.2}\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\).
Figure 1: Upper panel: _Chandra_ 0.3–7.0 keV image of J1429 field. The pixel size is set to half the original ACIS pixel (0.246′′/pix). The image was smoothed using a Gaussian function with \(\sigma=1.5\). The large circle corresponds to the 30′′ radius extraction region for the PN spectrum in Medvedev et al. (2021). Lower panel: zoom view on J1429 displayed at the high resolution binning of 0.123′′/pix. The green diagonal line indicates the direction of the elongation of the radio structure as seen in the 1.6 GHz EVN map presented in Frey et al. (2011). The color scales are logarithmic.
The results summarized above differ from those of Medvedev et al. (2021) based on the XMM-_Newton_ observation. Their best-fit model includes a steep power-law (\(\Gamma=2.5\pm 0.2\)) with an intrinsic absorber (\(N_{\rm H,~{}int}=(3\pm 2)\times 10^{22}\) cm\({}^{-2}\)), although with marginal significance, and gives an unabsorbed 0.2-10 keV flux of \(1.3\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) (the 0.2-10 keV unabsorbed _Chandra_ flux of J1429 being \(6.6\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)). However, the angular resolution of XMM-_Newton_ allowed the authors to identify only one of the two sources in the field of J1429, src1, while src2 remains blended with J1429. Given the XMM-_Newton_-PN Point Spread Function2, the two field sources could have contaminated the XMM-_Newton_ spectrum of J1429. The spectral analysis indicates that src1 likely has a minimal impact, given its greater distance and fainter flux. Here, we re-analyze the XMM-_Newton_ data in order to evaluate the contribution of the src2 emission to the J1429 spectrum. We followed the standard data reduction and extracted the spectrum from a circular region of \(r=30\arcsec\) centered on the position reported in Medvedev et al. (2021).
Fitting the XMM-_Newton_ spectrum with an absorbed power-law returned values \(\Gamma=2.5\pm 0.2\) and \(N_{\rm H,\;int}=(2^{+2}_{-1})\times 10^{22}\) cm\({}^{-2}\), which are fully consistent with the analysis by Medvedev et al. (2021). We then tested a composite model, consisting of the sum of two absorbed power laws, and fixed the parameters of one of the two to the best-fit values obtained for J1429. The underlying assumption is that J1429 has not varied and that the excess flux is due to contamination by src2. In this way, we obtained a moderately absorbed (\(N_{\rm H}=1.2^{+1.2}_{-0.7}\times 10^{21}\) cm\({}^{-2}\)) and very steep (\(\Gamma=3.7^{+1.0}_{-0.7}\)) power-law with a 0.2-10 keV absorbed flux of \(2.1\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\), which is consistent, within uncertainties, with the spectral parameters of src2 in the _Chandra_ spectrum.
We conclude that the contamination from src2 is likely responsible for the softer photon index and higher flux measured in the XMM-_Newton_ data. We note that the resulting observed broadband X-ray luminosity \(L_{0.1-100\,{\rm keV}}=4.2\times 10^{46}\) erg s\({}^{-1}\), based on our _Chandra_ analysis, is still high, thus J1429 remains among the most luminous \(z>6\) quasars.
### Image Analysis
_Chandra_ observations result in the highest angular resolution X-ray images to date and allow us to study the X-ray morphology on sub-arcsecond scales. We performed an analysis of the J1429 _Chandra_ image to understand if the X-ray data are consistent with point-source emission, and also to look for the presence of a diffuse extended component. The _Chandra_ image of J1429 contains a relatively small number of counts (\(96.8\pm 9.9\)); however, the background contamination is low (estimated \(<1\) count in the source circular region with \(r=2\arcsec\), based on the background surface brightness of 0.054 cts arcsec\({}^{-2}\)). Figure 1 shows the X-ray counts in the quasar region with a 0.123'' pixel size. No evidence of extended emission is present beyond a 2'' (\(\sim 11\) kpc) radius from the centroid of the X-ray source, and we measured a 0.5-7.0 keV upper limit (90% confidence limit) of \(2.2\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\).
Next, we investigated whether the central emission is consistent with a point-like source. We used CHART3(Carter et al., 2003) to simulate the _Chandra_ PSF centered on the location of the quasar (see the centroid given above). We assumed the input spectrum to be the best fit spectral model with \(\Gamma=2.0\) and Galactic absorption, and selected the dither option to match the observation's aspect solution (asol) file. We simulated 500 realizations of the PSF with matched observed exposure time using CHART and projected each of the simulated rays onto the ACIS-S detector using MARX (v.5.2) with the pixel adjustment algorithm setting as pix-adj=EDSER, the same asol file for the dither, and the addition of AspectBlur = 0.25''(see POG and CIAO threads)4.
Footnote 3: [https://cxc.harvard.edu/ciao/PSFs/chart2/](https://cxc.harvard.edu/ciao/PSFs/chart2/)
Footnote 4: Note that this adds additional broadening of the PSF resulting in a broader spread of the photons on the detector; this is a conservative setting for this parameter.
For each simulation we calculated the point source counts in an annulus with 0.5'' and 1.5'' radii, corresponding to PSF encircled energy fractions at 1 keV of \(\sim 75\)% and \(\sim 95\)%, respectively (calculated from the _Chandra_ image in ds9 using DAX). In terms of physical scales we probe the region from 2.8 kpc to 8.4 kpc. The distribution of the point source counts is shown in Figure 3 together with the observed number of counts. We performed a two-sample Kolmogorov-Smirnov (KS) test (using scipy.stats.ks_2samp), obtaining a p-value \(\ll 0.001\), which indicates that the two distributions are different at a high confidence level. Our analysis therefore points to an excess of X-ray counts with respect to the expectations for point-like emission. We investigated the distribution of the counts by dividing the annulus into four quadrants. The north and south-east quadrants have the highest numbers of counts; however, these are comparable to each other (11 to 14). As a further test, we extracted the surface brightness profile of the emission in the direction parallel and
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Object & 0.3–7 keV & 0.3–2 keV & 2–7 keV & \(N_{\rm H}\) & \(\Gamma\) & cstat & Flux \\ & counts & counts & counts & cm\({}^{-2}\) & & & (\(\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline J1429 & 96.8\(\pm\)9.9 & 59.3\(\pm\)7.7 & 37.5\(\pm\)6.1 & \(N_{\rm H,\;Gal}\) (f) & 2.0\(\pm\)0.2 & 159/148 & \(5.4^{+1.4}_{-1.2}\) \\ src1 & 20.4\(\pm\)4.6 & 12.8\(\pm\)3.6 & 7.6\(\pm\)2.8 & \(N_{\rm H,\;Gal}\) (f) & 2.1\(\pm\)0.5 & 93/92 & \(1.2^{+1.7}_{-0.5}\) \\ src2 & 25.3\(\pm\)5.1 & 15.4\(\pm\)0.0 & 9.7\(\pm\)3.2 & \(<1.3\times 10^{22}\) & \(2.8^{+1.4}_{-1.3}\) & 98/116 & \(4.0^{+2.7}_{-2.0}\) \\ \hline \end{tabular} Notes: (1) object name; (2)–(4) net counts in the 0.3–7, 0.3–2 and 2–7 keV bands; (5) absorption column density, with (f) denoting a parameter fixed to the Galactic value; (6) power-law photon index; (7) fit statistic over degrees of freedom; (8) unabsorbed flux.
\end{table}
Table 1: _Chandra_ best-fit spectral models.
Figure 2: _Chandra_ 0.3-7.0 keV spectrum and best-fit absorbed power law model. The spectrum has been rebinned only for visualization purposes.
perpendicular to the observed radio elongation, and compared them with those of the simulated PSF. The observed and simulated orthogonal profiles match each other. The longitudinal profile of J1429 appears instead to display an asymmetry in the wings, the south-east one being broader; however, the difference between the observed and simulated profiles could not be confirmed with the KS test.
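The count comparison described above can be sketched as follows. This is an illustrative snippet: the file of simulated annulus counts is a placeholder for the values extracted from the ChaRT/MARX realizations, and the observation is represented by the Gaussian measurement distribution shown in Figure 3.

```python
import numpy as np
from scipy.stats import ks_2samp

# Counts in the 0.5"-1.5" annulus from the 500 ChaRT/MARX PSF realizations
sim_counts = np.loadtxt("psf_annulus_counts.txt")   # placeholder file name

# Observed counts in the same annulus, represented by a Gaussian
# measurement distribution (mean 49, sigma 7, as in Figure 3)
rng = np.random.default_rng(1)
obs_counts = rng.normal(loc=49.0, scale=7.0, size=5000)

stat, p_value = ks_2samp(sim_counts, obs_counts)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
```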
To summarize, the imaging analysis points to a count excess with respect to the point-source prediction, but we cannot draw conclusions on whether the counts are clustered in a specific location. Moreover, while compelling, the count excess should be taken with caution given the relatively low count statistics and the systematic uncertainties in the PSF, which cannot be properly included in the simulations (see Ma et al., 2023, for a recent discussion of the _Chandra_ PSF uncertainties). Indicatively, the excess (\(20\pm 13\) counts) corresponds to an unabsorbed flux of \((9\pm 6)\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\) in the 0.5-7 keV observed energy band if we assume a power-law model with an intermediate (between radio-loud and radio-quiet AGN) photon index value \(\Gamma=1.7\).
## 3 Discussion
The _Chandra_ observation confirms the high X-ray luminosity of J1429 (the extrapolated rest-frame 0.1-100 keV luminosity is \(\sim 4\times 10^{46}\) erg s\({}^{-1}\)) and returns a photon index value within the typical range of high-\(z\) quasars (\(\Gamma\sim 1.5-2.2\), e.g. Vito et al., 2019; Zhu et al., 2019). In order to investigate the origin of the observed X-ray emission, in Figure 4 we compared the radio, optical and X-ray properties of our target (in Table 2) with those of samples of quasars from the literature. For this purpose, we used the estimated radio-loudness parameters and the X-ray-to-optical (2500 Å) luminosity ratios, expressed as spectral slopes between 2500 Å and 2 or 10 keV, namely \(\alpha_{\rm ox}=0.3838\log[L_{\rm 2keV}/L_{\rm 2500}]\) (Tananbaum et al., 1979) and \(\tilde{\alpha}_{\rm ox}=0.3026\log[L_{\rm 10 keV}/L_{\rm 2500}]\) (Ighina et al., 2019), where \(L_{\rm 2500}\), \(L_{\rm 2\,keV}\) and \(L_{\rm 10\,keV}\) are the corresponding rest-frame luminosity densities in units of erg s\({}^{-1}\) Hz\({}^{-1}\). The rest-frame luminosities of J1429 at 2 and 10 keV, both \(\sim 7.7\times 10^{45}\) erg s\({}^{-1}\), give \(\alpha_{\rm ox}=-1.15\) and \(\tilde{\alpha}_{\rm ox}=-1.12\), both in good agreement with the previous estimates by Medvedev et al. (2020), while the radio loudness is \(R=109\pm 9\) (Banados et al., 2015).
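As a worked example, both slopes follow directly from the rest-frame luminosity densities listed in Table 2:

```python
import numpy as np

# Rest-frame luminosity densities of J1429 (Table 2), in erg s^-1 Hz^-1
L_2500 = 1.6e31    # 2500 Angstrom
L_2keV = 1.6e28    # 2 keV
L_10keV = 3.2e27   # 10 keV

alpha_ox = 0.3838 * np.log10(L_2keV / L_2500)
alpha_ox_10 = 0.3026 * np.log10(L_10keV / L_2500)
print(round(alpha_ox, 2), round(alpha_ox_10, 2))   # -1.15, -1.12
```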
For the comparison, we used the samples of radio-loud quasars from Zhu et al. (2020) and \(z>4\) radio-loud quasars from Zhu et al. (2019), high-redshift blazars presented in Ighina et al. (2019), and the sample of young radio sources at \(z>4.5\) from Snios et al. (2020). We also collected from the literature information on \(z>5\) radio-loud quasars with reported X-ray detections, not present in the previously mentioned samples (see Khorunzhev et al., 2021, and references therein). In addition, in the \(\alpha_{\rm ox}\) vs. \(z\) panel, as well as the \(\alpha_{\rm ox}\) vs. \(L_{\rm 2500}\) panel of Figure 4, we included the samples of radio-quiet quasars from Shemmer et al. (2006), Just et al. (2007), Lusso and Risaliti (2016), Martocchia et al. (2017), Nanni et al. (2017) and Vito et al. (2019), which, although collectively not complete, ensure a good redshift coverage.
Looking at the plots in Figure 4, we can make some basic observations. The \(\alpha_{\rm ox}\) of J1429 is relatively high, in particular in comparison with the high-\(z\) RQ and RL quasars. Although we confirm that J1429 does not follow the anti-correlation between \(\alpha_{\rm ox}\) and \(L_{\rm 2500}\) known for lower-redshift AGN (Lusso and Risaliti, 2016), the deviation from the relation is less extreme than what was reported in Wolf et al. (2021) based on the XMM-_Newton_ data (see their Figure 6). The \(\alpha_{\rm ox}-L_{\rm 2500}\) and \(L_{\rm 2\,keV}-L_{\rm 2500}\) panels confirm that the reason for this is an excess in X-rays rather than a deficit in the UV luminosity. As noted by Medvedev et al. (2020), the X-ray luminosity of J1429 is comparable with the most radio-loud (\(\log R\gtrsim 2.5\)) high-\(z\) sources in the sample of Zhu et al. (2019), which are expected to have a significant X-ray contribution from Doppler-boosted jet emission. However, this explanation is to some extent contradicted by the relatively low \(R\) value of our target, which is in the lowest tail of the samples of Zhu et al. (2019) and Ighina et al. (2019), though \(R\) may not be an ideal tracer of jet activity in high-\(z\) quasars accreting at the highest rates (see Sbarrato et al., 2021). Moreover, the average photon index of the \(z>4\) blazar sources is markedly flatter (\(\Gamma=1.4\); Ighina et al., 2019) than our revised value. In fact, referring to the \(\tilde{\alpha}_{ox}\) vs. \(\Gamma\) classification plot proposed by Ighina et al. (2019), the steep photon index locates J1429 among the non-blazar sources. Note also that, while the brightest X-ray (say \(L_{\rm 2\,keV}>10^{28}\) erg s\({}^{-1}\)Hz\({}^{-1}\)) RL quasars are radio bright (at 5 GHz), the opposite is not true, i.e. high \(L_{\rm 5\,GHz}\) values do not necessarily imply high \(L_{\rm 2\,keV}\) (see the middle right panel in Figure 4), as \(L_{\rm 5\,GHz}\) could be the sum of beamed (jet) and unbeamed (lobes) radio emission.
Zhu et al. (2020) investigated the origin of the X-ray excess of RL vs. RQ quasars using a large sample of optically-selected RL quasars and concluded that, only in the case of flat-spectrum radio quasars (FSRQs), this excess is due to a direct contribution of a boosted jet emission. For the majority of steep-spectrum radio quasars (SSRQs) instead, the authors argue that the radio, optical and X-ray parameters point to a disk corona origin of the X-ray emission, although there must be a physical link between the coronal and the jet activity, which is manifested through the increase of the X-ray emission as a function of the radio-loudness. On one hand, our target seems to fit well into this picture, in view of its photon index value in line with the corona emission. On the other hand, its X-ray luminosity exceeds that of SSRQs. One could argue that the production of the X-ray emission in the disk corona of J1429 is, for some reasons, more efficient than in the cases of SSRQs from the Zhu et al. sample. A caveat for this comparison is indeed that the sample of Zhu et al. (2020) does not include SSRQs at the same redshift as J1429.
Vito et al. (2019) investigated the X-ray properties of a sample of \(z>6\) RQ quasars and did not find evidence for an evolution of the
Figure 3: Simulated counts from a point source and the data. The blue histogram shows the distribution of counts in the 0.5–1.5′′ annulus, overplotted with the KDE curve (orange), for 500 simulations of a point source with 100 counts. The vertical line marks 49 counts detected in the same annulus by Chandra. The green line shows the Gaussian distribution with a mean of 49 and \(\sigma=7\) representing the measurement error. The counts in the PSF artefact region were excluded.
Figure 4: Comparison plots for J1429. Top: optical-to-X-ray power-law slope \(\alpha_{\rm ox}\) versus redshift (left) and versus the 2500 Å luminosity density (right). Middle: 2 keV luminosity density versus 2500 Å luminosity (left) and versus 5 GHz luminosity density (right). Bottom: \(\alpha_{\rm ox}\) versus radio loudness (left) and 2 keV luminosity density versus radio loudness (right). All quantities are computed in the quasars’ rest frames. The RQ quasars (grey dots) are from Shemmer et al. (2006), Just et al. (2007), Lusso & Risaliti (2016), Martocchia et al. (2017), Nanni et al. (2017) and Vito et al. (2019). The samples of radio-loud quasars are taken from Zhu et al. (2019) (yellow dots) and Zhu et al. (2020) (blue dots), the high-redshift blazars (pink dots) from Ighina et al. (2019), the X-ray sample of young radio sources at \(z>4.5\) (green dots) from Snios et al. (2020); \(z>5\) radio-loud quasars with reported X-ray detections are marked with violet dots (see Khorunzhev et al. 2021). In all the panels but the middle ones, for each sample we highlighted in cyan the radio-loud quasars with the 2 keV luminosity densities comparable with, or higher than, that of J1429 (\(L_{\rm 2\,keV}\gtrsim 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\)). In the top panels, we have also included, for comparison, the samples of radio-quiet quasars from Shemmer et al. (2006), Just et al. (2007), Lusso & Risaliti (2016), Martocchia et al. (2017), Nanni et al. (2017) and Vito et al. (2019).
disk/hot corona structures with respect to the lower-redshift counterparts. If so, the corona-related excess X-ray emission would be unique to SSRQs. To test this possibility, one would need to significantly increase the number of \(z>6\) SSRQs. Incidentally, we note that Shen et al. (2019) classifies J1429 as a weak emission-line QSO (WLQSO, where the definition is based on the rest-frame equivalent width of C IV \(<\)15.4 A, Fan et al., 1999; Diamond-Stanic et al., 2009). The proposed explanations for this class of objects involve young accreting systems or different accretion and absorption conditions in the innermost region of the QSO (e.g. Shemmer et al., 2010; Laor and Davis, 2011; Luo et al., 2015).
Alternatively, it is possible that multiple radiative components contribute to the total X-ray emission of J1429. Medvedev et al. (2021) explored the IC/CMB scenario for the X-ray emission. While part of their reasoning was based on the steep X-ray photon index measured by XMM-_Newton_, which we now revised, a contribution of the jet via IC/CMB to the total X-ray emission remains a possibility. We exploited the available _Chandra_ data to search for an extended emission on angular scales \(\gtrsim 0.5^{\prime\prime}\), corresponding to a physical scale of \(\sim 2.8\) kpc. The resolved kpc-scale X-ray quasar jets are typically characterized by the jet-to-core luminosity ratios \(R_{\rm jc}\sim 2\%\)(Marshall et al., 2018), and even for the most luminous jets at high redshift \(R_{\rm jc}\lesssim 10\%\)(Siemiginowska et al., 2003; Cheung et al., 2006; Schwartz et al., 2020; Ighina et al., 2022). For our 30 ksec _Chandra_ observation, this would give a maximum of 10 net counts in the most optimistic scenario (\(R_{\rm jc}\sim 10\%\), assuming a standard photon index \(\Gamma=1.7\)), while for \(R_{\rm jc}\sim 2\%\) the jet X-ray emission would be below the detection limit.
Based on the available X-ray dataset, we place an upper limit of \(9\times 10^{44}\) erg s\({}^{-1}\) on any X-ray component on scales \(>1.5^{\prime\prime}\) (\(>8\) kpc), thus excluding the presence of luminous, jet-related emission far outside the host galaxy. The putative count excess that we measure in the \(0.5^{\prime\prime}\)-\(1.5^{\prime\prime}\) annulus (\(\sim 3-8\) kpc projected scale) implies an X-ray luminosity \(\sim 4\times 10^{45}\) erg s\({}^{-1}\). If due to a kiloparsec jet, this would account for a remarkable \(\sim 20\%\) of the total observed X-ray flux, well beyond the observed \(R_{\rm jc}\) ranges. Even in this case, however, the \(>3\) kpc jet would be far from being the dominant X-ray contribution.
The imaging analysis leads us to conclude that the bulk of the X-ray emission must be produced within the central \(\sim 3\) kpc (projected) region. Nonetheless, Doppler boosting would still be needed to explain the high X-ray luminosity in terms of the IC/CMB jet emission. Shen et al. (2019) argue that the weakness of the C IV line could be due to contamination of the UV continuum by the jet emission, a possibility taken into consideration also for the far IR (FIR) emission (Khusanova et al., 2022). Therefore, a blazar-like nature of the source cannot be fully ruled out, although it stands at odds with the established radio properties. In this scenario, the high-energy emission could be produced via inverse-Compton scattering of the nuclear photons (UV and IR photons) in the inner segment of a relativistic jet. Follow-up multi-wavelength observations could probe this hypothesis by searching for variability of the emission, or the lack thereof.
We also briefly consider in this context the young radio source scenario. High-energy emission is predicted to be produced in the compact lobes of young radio galaxies (Stawarz et al., 2008) and in the jets of young radio quasars (Migliori et al., 2014). Support for a non-thermal, high-energy component in young radio sources has come from the detection of a handful of these sources in the \(\gamma\)-ray band with the _Fermi_ telescope (Migliori et al., 2016; Abdollahi et al., 2020; Principe et al., 2021; Principe et al., 2021). However, modeling of the broad-band high-energy output of the compact lobes of young radio galaxies points to much lower X-ray luminosities than that of J1429, namely \(\sim 10^{41}-10^{42}\) erg s\({}^{-1}\) (see, e.g., the SED modeling of the \(\gamma\)-ray detected young radio galaxy PKS 1718\(-\)649, Sobolewska et al., 2022, and references therein). The high-energy emission of young radio quasars can be as high as \(\sim 10^{45}-10^{46}\) erg s\({}^{-1}\), but it is typically characterized as variable, suggesting a blazar-type origin (Siemiginowska et al., 2008; Principe et al., 2021).
To conclude, the bulk of the X-ray luminosity of J1429 seems to originate either in the quasar accretion flow or in the (aligned?) jet, with the latter hypothesis being disfavored by the source's radio properties. As discussed in Khusanova et al. (2022), X-ray dominated regions (XDR) produced by the X-ray photons from the accreting AGN can contribute to the observed [CII] emission. If we use the relation between the [CII] line and the 2-10 keV luminosity for XDRs, \(L_{\rm[CII],\,XDR}=2\times 10^{-3}L_{\rm 2-10\,keV}\) (Stacey et al., 2010), we obtain \(L_{\rm[CII],\,XDR}\sim 2\times 10^{43}\) erg s\({}^{-1}\), a value comparable with the observed \(L_{\rm[CII]}\) (Khusanova et al., 2022). Admittedly, this is a rough estimate, as it is unlikely that the whole \(L_{\rm[CII]}\) is produced by XDR (see Wolfire et al., 2022, for a review). However, it shows that, in this system, XDRs could in principle be an important contribution to gas heating in addition to starlight.
## 4 Conclusions
We presented the results of the \(\sim 30\) ksec _Chandra_ observation of the high-\(z\) radio quasar J1429. The high angular resolution of _Chandra_ allowed us to identify the X-ray sources in the field of J1429, and to derive the X-ray spectrum of the target free of the contaminating source, which was not spatially resolved in the previous eROSITA and XMM-_Newton_ observations. In addition, we were able to place constraints on the X-ray emission of a putative kiloparsec jet, concluding that the bulk of the X-ray emission must be produced within a \(\sim 3\) kpc-radius central region, either in the disk-corona system, or in the jet. In the former case, the accretion luminosity of J1429 appears higher than that of similar systems observed at high redshifts, such as steep spectrum radio quasars, and could significantly impact the ISM. In the latter case, the non-thermal emission should be boosted, implying a (moderately) aligned jet as in blazar sources.
The analysis of the _Chandra_ image pointed to a count excess over
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(L_{\rm 5\,GHz}\) & \(L_{\rm 4400\AA}\) & \(L_{\rm 2500\AA}\) & \(L_{\rm 2\,keV}\) & \(L_{\rm 10\,keV}\) \\ erg s\({}^{-1}\) Hz\({}^{-1}\) & erg s\({}^{-1}\) Hz\({}^{-1}\) & erg s\({}^{-1}\) Hz\({}^{-1}\) & erg s\({}^{-1}\) Hz\({}^{-1}\) & erg s\({}^{-1}\) Hz\({}^{-1}\) \\ \hline \((3.1\pm 0.1)\times 10^{33}\) & \((2.8\pm 0.2)\times 10^{31}\) & \((1.6\pm 0.2)\times 10^{31}\) & \((1.6^{+0.8}_{-0.6})\times 10^{28}\) & \((3.2^{+0.9}_{-0.7})\times 10^{27}\) \\ \hline \end{tabular} Notes: the values of \(L_{\rm 5\,GHz}\) and \(L_{\rm 4400\AA}\) are taken from Banados et al. (2015); \(L_{\rm 4400\AA}\) was calculated from the WISE W1 magnitude. \(L_{\rm 2500\AA}\) is taken from Medvedev et al. (2020) and was estimated using the median composite SED of radio-loud quasars from Shang et al. (2011) normalized to the observed y-band flux density from the Pan-STARRS1 survey (Chambers et al., 2016). The X-ray flux densities are from the analysis presented here.
\end{table}
Table 2: J1429 rest-frame luminosity densities.
the PSF predictions in the 0.5\({}^{\prime\prime}\)-1.5\({}^{\prime\prime}\) central region, corresponding to a high X-ray luminosity (\(\sim 4\times 10^{45}\) erg s\({}^{-1}\)). While a deeper _Chandra_ observation is needed to confirm this result, we mention the possibility that, instead of being related to J1429 (i.e., a kpc-scale jet), this excess could reveal a separate X-ray source. This is an intriguing hypothesis given the observational evidence that J1429 resides in a merging system (Khusanova et al., 2022).
The case of J1429 effectively shows how wide-field, large-effective area X-ray telescopes are key for the discovery of high-\(z\) quasars, while high-angular resolution observations are needed to ensure the correct characterization of the X-ray emission.
## Acknowledgements
The authors would like to thank the referee for useful suggestions and comments. Support for this work was provided by the National Aeronautics and Space Administration through _Chandra_ Award Numbers GO8-19093X and GO-21101X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. G.M. acknowledges financial support from INAF mini-grant "The high-energy view of jets and transient" (Bando Ricerca Fondamentale INAF 2022). A.S., M.S., D.A.S. and V.L.K. were supported by NASA contract NAS8-03060 (Chandra X-ray Center). C.C.C. was supported at NRL by NASA DPR S-15633-Y. L.S. was supported by the Polish NSC grant 2016/22/E/ST9/00061. This research has made use of data obtained from the Chandra Data Archive, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa.
## Data Availability
All data underlying this article are already publicly available from NASA's HEASARC archive ([https://heasarc.gsfc.nasa.gov/](https://heasarc.gsfc.nasa.gov/)), _Chandra_'s Data Archive ([https://cxc.harvard.edu/cda/](https://cxc.harvard.edu/cda/)).
|
2304.10311 | Movie Box Office Prediction With Self-Supervised and Visually Grounded
Pretraining | Investments in movie production are associated with a high level of risk as
movie revenues have long-tailed and bimodal distributions. Accurate prediction
of box-office revenue may mitigate the uncertainty and encourage investment.
However, learning effective representations for actors, directors, and
user-generated content-related keywords remains a challenging open problem. In
this work, we investigate the effects of self-supervised pretraining and
propose visual grounding of content keywords in objects from movie posters as a
pertaining objective. Experiments on a large dataset of 35,794 movies
demonstrate significant benefits of self-supervised training and visual
grounding. In particular, visual grounding pretraining substantially improves
learning on movies with content keywords and achieves 14.5% relative
performance gains compared to a finetuned BERT model with identical
architecture. | Qin Chao, Eunsoo Kim, Boyang Li | 2023-04-20T13:42:27Z | http://arxiv.org/abs/2304.10311v1 | # Movie Box Office Prediction With Self-Supervised and Visually Grounded Pretraining
###### Abstract
Investments in movie production are associated with a high level of risk as movie revenues have long-tailed and bimodal distributions [1]. Accurate prediction of box-office revenue may mitigate the uncertainty and encourage investment. However, learning effective representations for actors, directors, and user-generated content-related keywords remains a challenging open problem. In this work, we investigate the effects of self-supervised pretraining and propose visual grounding of content keywords in objects from movie posters as a pretraining objective. Experiments on a large dataset of 35,794 movies demonstrate significant benefits of self-supervised training and visual grounding. In particular, visual grounding pretraining substantially improves learning on movies with content keywords and achieves 14.5% relative performance gains compared to a finetuned BERT model with identical architecture.
Multimodal Learning, Self-supervised Learning, Visual Grounding, Box Office Prediction, Movie Revenue Prediction
## I Introduction
Movies are undoubtedly a preeminent form of art in 21st-century human civilization. However, the business side of movie production is often less than glamorous. Statistics [1] show that box-office revenues have long-tailed and bimodal distributions, where a small number of movies take most of the profit and the majority barely break even. According to boxofficemojo.com1, in 2019, the top-10 highest-grossing movies collected 13.2 billion US dollars, or 37.4% of the global revenue of the top-200 movies. Three years into the pandemic, as of November 2022, the ratio balloons to 50.1%. The exorbitant risk of the industry drives producers to focus on superhero movies and sequels, whose outcomes are relatively predictable. Small studios that cannot afford to make high-budget movies share an ever smaller pie.
Footnote 1: [https://www.boxofficemojo.com/year/world/2019/](https://www.boxofficemojo.com/year/world/2019/)
Algorithmic box office prediction holds the promise to help producers properly budget expenses, reduce risk, and encourage investment in creative and diverse content. The problem has attracted much research interest [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. In this paper, we investigate the effects of self-supervised pretraining and visual grounding.
The star power of actors and directors is one of the most important factors determining box office revenue, but the data for each actor and director is limited. Even prolific directors typically make less than 30 movies throughout their careers. Similarly, few modern actors play leading roles in more than 30 movies. By modern machine learning standards, these numbers are considered few-shot settings. To tackle data sparsity, we adopt self-supervised pretraining that encourages the network to learn the data distribution before training on box office data.
Another important, yet difficult to model, aspect of the box office is the movie content. The movie storyline is a complex artifact with multiple layers of semantics [16, 17, 18, 19], which are challenging for even state-of-the-art AI to understand. To tackle this issue, we utilize user-generated content keywords from The Movie Database (TMDB)2 to incorporate the movie content into the box office prediction problem. Table I shows example keywords. Compared to traditional genre categories, these keywords provide finer-grained categorization of content, including topic, plot, emotion, and even source-related information3.
Footnote 2: www.themoviedb.org
Footnote 3: More details are in the keyword contribution guide located at https://www.themoviedb.org/bible/movie
To gain a precise understanding of these keywords, we further propose to ground the keywords in the visual modality -- the movie posters. In the context of movies, the meaning of keywords can differ subtly from their daily usage. For example, the keyword _action_ may be associated with explosion, car chasing, or martial art, deviating from its dictionary definition. The keyword _robot_ typically refers to robots in science fiction or animation movies, rather than those on assembly lines. Recent research [20, 21] shows that grounding language in visual signals yields improved representation. In this paper, we find that this effect also exists and that the improved representation contributes to a better box office prediction. To our knowledge, this is the first paper that visually grounds textual information for box office prediction.
Overall, our research highlights the effectiveness of self-supervised pretraining and visual grounding in box office
\begin{table}
\begin{tabular}{c} \hline \hline action, criminology, fbi, psycho, aircraft, robot \\ love, hate, high school, father-daughter relationship, \\ paris france, kingdom, based on novel or book \\ \hline \hline \end{tabular}
\end{table} TABLE I: Examples of user-generated keywords from TMDB.
prediction. Our models reduce prediction error by 7.8%\(\sim\)14.5%, relative to the directly finetuned baseline BERT model with the same number of hyperparameters, whereas pretraining with visual grounding leads to up to 2.1% relative performance improvements.
With this paper, we make the following contributions. First, we propose self-supervised pretraining for movie box office forecasting that can utilize a combination of textual and numerical information. Second, we demonstrate that visual grounding user-generated keywords in movie posters significantly improves pretraining, suggesting a good correlation between movie content and the posters. Finally, we construct a large well-organized dataset for movie box office prediction and share it with the research community4.
Footnote 4: [https://github.com/jdsannchao/MOVIE-BOX-OFFICE-PREdiction](https://github.com/jdsannchao/MOVIE-BOX-OFFICE-PREdiction)
## II Related Work
**Predicting Movie Success.** Prior work has attempted to predict a number of indicators of commercial and artistic success, including the box office [2, 3, 4], return on investment [5, 6, 7], IMDb ratings [8, 9], critic reviews [11], and awards or award nominations [10]. Recently, with the advancement of ML, deep networks have begun to gain research attention [12, 13, 14, 15].
In terms of features, aside from commonly adopted numerical features, [2, 5, 6, 7, 11] utilize textual features such as sentiment and topics. In particular, topics from Latent Dirichlet Allocation [5, 11] may be seen as a type of content feature. [9, 15] utilize fastText [22] and ELMo [23] word embeddings, respectively. To our knowledge, the only prior work using visual features for box office prediction is [14], which incorporates movie poster features from a convolutional neural network during training. In contrast, our work leverages objects inside the poster to visually ground content keywords during pretraining, but does not use the poster during the finetuning stage.
**Self-supervised Multimodal Pretraining.** The success of pre-trained textual models such as BERT [24] has inspired a series of pretrained multimodal models [25, 26, 27, 28], often adopting the masked language modeling (MLM) objective. Similar to a denoising autoencoder, the MLM objective trains the model to predict masked portions of the input. This seemingly simple training technique has demonstrated effectiveness across a wide range of downstream applications. Another line of work, such as CLIP [29] and BLIP [30], adopt a pretraining objective that distinguishes between correct image-text pairings and incorrect pairings.
A classic problem of cognitive science, the symbol grounding problem [31] is concerned with how words can gain their meaning as pointers to other concepts and objects. Computationally grounding textual tokens in visual images has demonstrated success in some applications [20, 21, 32, 33, 34, 35, 36]. In this work, we use movie posters as a source of visual grounding for the textual tokens -- keywords. A movie poster is a widely used visual medium to advertise a movie long before its release. Thus, we ground the tokens using objects from a single poster, and each token can be related to multiple objects and vice versa. Compared to the aforementioned prior work, which retrieve or generate relevant images for the textual descriptions, in our task correspondences between the keywords and the poster are not known _a priori_ and must be discovered in a multi-instance manner.
## III Methodology
In this section, we first introduce the features used by the proposed network, followed by the pretraining strategies.
### _Features_
We include both discrete features such as actors or directors and real-valued features such as movie budget. The embeddings of discrete tokens are learned from data. For real-valued features, we adopt prototype-based numeral embeddings [37]. Formally, the embedding function is formulated as \(\mathrm{NE}(x):\mathbb{R}\rightarrow\mathbb{R}^{D}\) that maps a real number \(x\) to a \(D\)-dimensional vector with the component
\[\mathrm{NE}_{i}(x)=\exp\left(-\frac{\left\|x-q_{i}\right\|_{2}}{\sigma^{2}} \right), \tag{1}\]
where \(\{q_{i}\}_{i=0}^{D-1}\) are \(D\) evenly spaced numbers over a specified interval, e.g., \([-10,10]\). Before applying the numeral embedding function, we normalize the values using logarithm or min-max normalization, depending on whether or not the feature has a long-tail distribution.
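As an illustration, a minimal NumPy sketch of Eq. (1) is given below; it is our own rendering, not the authors' code, and the number of prototypes \(D\), the interval and \(\sigma\) are assumed hyperparameters.

```python
import numpy as np

def numeral_embedding(x: float, D: int = 64, lo: float = -10.0, hi: float = 10.0,
                      sigma: float = 1.0) -> np.ndarray:
    """Prototype-based numeral embedding of Eq. (1).

    The prototypes q_0..q_{D-1} are D evenly spaced numbers on [lo, hi];
    component i measures how close x is to prototype q_i.
    """
    q = np.linspace(lo, hi, D)                      # prototypes {q_i}
    return np.exp(-np.abs(x - q) / sigma ** 2)      # NE_i(x) = exp(-|x - q_i| / sigma^2)

# Example: a budget of $20M after log10 normalization.
vec = numeral_embedding(np.log10(2e7))
```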
We broadly categorize the features used in forecasting box office revenue into four categories: investment & marketing, star power, content, and competition & seasonality.
**Investment & Marketing.** The production budget is often an indicator of the movie's quality. Here we take the logarithm with base 10. Furthermore, we include the distributor company as a token as distributors with greater market power may release movies on more screens, which increases revenue.
**Star Power.** We include up to two directors, two writers, and three leading actors in our model. Each person is a unique token whose embeddings are trained from scratch. We also calculate the profitability of each person, which is defined as the average of the revenues of all previous movies that this person has participated in as one of the leading roles. Moreover, we incorporate the gender and age of each actor at the time of the movie release.
**Movie Content.** We first include genres and MPAA ratings. In addition, we also include an indicator for whether a movie is part of a franchise.
Inspired by the success of user-generated keywords as content descriptors [38], we collect user-generated keywords from TMDB, yielding a total of 7,700 unique keywords for 35,794 movies. Among the keywords, we observe many rare keywords and near-synonyms, which may hinder learning. For rare keywords, the lack of data may prevent accurate embedding estimation. Synonyms and near-synonyms cause problems for contrastive learning, which would force the
model to learn dissimilar embeddings for two words with similar meanings.
To overcome these issues, we cluster the keywords using both lexical similarity and co-occurrence statistics. To capture lexical information, we use 300-dimensional embeddings computed by fastText [22]. Next, we construct a movie-keyword term-frequency inverse-document-frequency (TF-IDF) matrix, which captures the co-occurrence statistics of keywords. From the TF-IDF matrix, we use the technique of [39] to construct embeddings for keywords. We extract the first 50 dimensions of the singular vectors to represent a keyword. The final representation is the 350-dimensional concatenation of the two vectors. We then perform average-link agglomerative clustering and use the resultant keyword clusters as features of movies. We show detailed cluster results in Appendix B.
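The clustering step can be sketched as follows (our illustration; the exact SVD construction of [39] and the distance metric may differ, and `fasttext_vecs`/`tfidf_matrix` are assumed to be precomputed).

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import AgglomerativeClustering

def cluster_keywords(fasttext_vecs, tfidf_matrix, n_clusters=1414):
    """Cluster keywords using lexical (fastText) and co-occurrence (TF-IDF) views.

    fasttext_vecs: (n_keywords, 300) array of fastText embeddings.
    tfidf_matrix:  (n_movies, n_keywords) movie-keyword TF-IDF matrix.
    """
    # 50-dimensional co-occurrence representation from the keyword side of the SVD
    cooc = TruncatedSVD(n_components=50).fit_transform(tfidf_matrix.T)
    feats = np.hstack([fasttext_vecs, cooc])          # 350-dim concatenation
    # Average-link agglomerative clustering on the concatenated features
    return AgglomerativeClustering(n_clusters=n_clusters,
                                   linkage="average").fit_predict(feats)
```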
**Competition & Seasonality.** To capture the effects of changing consumer tastes and holiday seasons, we include the year and month of the movie release as discrete tokens. Further, we model the competition intensity during the release window. We first identify competitors as those released two weeks before and after the current movie and have the same genre. After that, we sum up the overlap of content keywords, computed using the Jaccard similarity between the current movie and every competitor.
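A possible implementation of the competition feature is sketched below; the attribute names (`release_date`, `genres`, `keywords`) are placeholders for whatever the dataset exposes.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def competition_intensity(movie, all_movies, window_days=14):
    """Sum keyword overlap with same-genre movies released within two weeks."""
    score = 0.0
    for other in all_movies:
        if other is movie:
            continue
        close = abs((other.release_date - movie.release_date).days) <= window_days
        same_genre = bool(movie.genres & other.genres)   # sharing at least one genre
        if close and same_genre:
            score += jaccard(movie.keywords, other.keywords)
    return score
```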
### _Self-supervised Pretraining_
Figure 1 shows the overall pipeline. In the first stage, we pretrain a Transformer network on the MLM and visual grounding objectives. Next, we freeze the token embeddings and finetune the network on box-office prediction. We now introduce the pretraining tasks.
**Masked Field Prediction.** We adopt a pretraining objective similar to the masked language modeling task, which has been shown to be an effective pretraining method for natural language understanding [24] and multimodal understanding [25]. We randomly mask one token from each group of input features: genres, keywords, director/writer names, and actor names. The network is trained to predict the missing token. The prediction is formulated as cross-entropy losses, which we denote as \(\mathcal{L}_{CE}\). By training the network to predict missing fields, we encourage the network to learn the correlations between the inputs, which could mitigate data scarcity issues.
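A simplified sketch of the masking step is shown below; the grouping of positions and the model interface are assumptions made for illustration.

```python
import random
import torch
import torch.nn.functional as F

def masked_field_loss(model, tokens, groups, mask_id):
    """Mask one token per feature group and sum the recovery cross-entropies.

    tokens: 1-D LongTensor of input token ids.
    groups: dict mapping a feature group (e.g. "genres") to its token positions.
    """
    corrupted = tokens.clone()
    targets = {}
    for positions in groups.values():
        pos = random.choice(positions)
        targets[pos] = tokens[pos].item()
        corrupted[pos] = mask_id
    logits = model(corrupted.unsqueeze(0)).squeeze(0)    # (seq_len, vocab), assumed shape
    return sum(F.cross_entropy(logits[p].unsqueeze(0), torch.tensor([t]))
               for p, t in targets.items())
```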
**Structured Visual Grounding.** The content of the movie is undoubtedly crucial for box office, but understanding the user-generated content keywords is challenging. In particular, the content keywords may change in the context of motion pictures as the meaning of keywords can differ subtly from their daily usage as mentioned before.
We propose to ground the keywords in the visual modality provided by the movie posters. We conduct contrastive learning that encourages high similarity between a poster and the corresponding content keywords and suppresses the similarity between incorrectly paired posters and keyword sets. We first perform object detection on the poster with an off-the-shelf network, VinVL [40], but our method is not tied to this particular choice. We denote the extracted object features from the \(i^{\text{th}}\) movie as \(\mathcal{Z}_{i}=\{\mathbf{z}_{m}\}_{m=1}^{M}\). Note that we use the subscript \(i\) to denote the movie index. We also take the contextualized embeddings of the keywords from the output of the Transformer network, denoted as \(\mathcal{X}_{i}=\{\mathbf{x}_{k}\}_{k=1}^{K}\).
We define the similarity between the poster and the key
Fig. 1: The overall pipeline of self-supervised pretraining and finetuning on the box-office prediction task. The token embeddings are frozen during finetuning.
words as
\[\text{sim}(\mathcal{X}_{i},\mathcal{Z}_{i})=\sum_{(\mathbf{x},\mathbf{z})\in\mathcal{X}_{ i}\times\mathcal{Z}_{i}}\exp(\frac{\mathbf{x}^{\top}\mathbf{z}}{\|\mathbf{x}\|_{2}\|\mathbf{z}\|_{2}}), \tag{2}\]
where \(\times\) denotes the Cartesian product and \(\|\cdot\|_{2}\) denotes the L2 norm. To motivate the definition, we show one example poster and the associated keywords in Figure 2. We use colors of the keyword boxes to indicate cluster membership (e.g., "quadriplegia" and "handicapped" both belong to the red cluster). We observe that a cluster can correspond to multiple objects and one object may ground multiple clusters. For instance, the cluster "quadriplegia" is grounded by the wheelchair, the tire and the sitting man; the sitting man relates to the red and the purple clusters. Due to these many-to-many relations, we follow [41] to define the similarity between the two sets as the sum of similarities of all possible pairs.
With randomly sampled negative pairs \((i^{\prime},j^{\prime})\), we define the visual grounding loss, \(\mathcal{L}_{\text{VG}}\), as
\[\mathcal{L}_{\text{VG}}=-\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{\text{sim}( \mathcal{X}_{i},\mathcal{Z}_{i})}{\text{sim}(\mathcal{X}_{i},\mathcal{Z}_{i}) +\sum_{(i^{\prime},j^{\prime})}\text{sim}(\mathcal{X}_{i^{\prime}},\mathcal{Z }_{j^{\prime}})}\right) \tag{3}\]
where \(N\) is the total number of movies in the training set.
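A PyTorch sketch of Eqs. (2)-(3) is given below. It assumes the keyword embeddings and object features have already been projected to a common dimension; the negative-sampling scheme is simplified and the number of negatives is an assumed hyperparameter.

```python
import torch

def set_similarity(X: torch.Tensor, Z: torch.Tensor) -> torch.Tensor:
    """Eq. (2): sum of exp(cosine similarity) over all keyword-object pairs.

    X: (K, d) contextualized keyword embeddings; Z: (M, d) poster object features.
    """
    Xn = torch.nn.functional.normalize(X, dim=-1)
    Zn = torch.nn.functional.normalize(Z, dim=-1)
    return torch.exp(Xn @ Zn.T).sum()

def vg_loss(keyword_sets, object_sets, num_negatives=8):
    """Eq. (3): contrast each correct (poster, keywords) pair with random mispairings."""
    N = len(keyword_sets)
    losses = []
    for i in range(N):
        pos = set_similarity(keyword_sets[i], object_sets[i])
        neg = sum(set_similarity(keyword_sets[torch.randint(N, (1,)).item()],
                                 object_sets[torch.randint(N, (1,)).item()])
                  for _ in range(num_negatives))
        losses.append(-torch.log(pos / (pos + neg)))
    return torch.stack(losses).mean()
```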
### _Finetuning on Box Office Prediction_
In the finetuning stage, we train the network to predict box office revenues. We generate the prediction by feeding the average output from all input positions to a fully connected layer. Revenues follow a long-tailed distribution, which we approximate using a log-normal distribution. Hence, we take the base-10 logarithm of the revenue as the target value. To further reduce the effects of outliers, we train the network using the smooth L1 loss, also called the Huber loss,
\[\mathcal{L}_{\text{Huber}}=\begin{cases}0.5\left(y-\hat{y}\right)^{2},\text{ if }|y-\hat{y}|<1\\ |y-\hat{y}|-0.5,\text{ otherwise}\end{cases}, \tag{4}\]
where \(y\) is the ground truth and \(\hat{y}\) is the prediction.
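Concretely, the training objective can be written in a few lines (a sketch; it is equivalent to PyTorch's built-in smooth L1 loss with its default threshold of 1).

```python
import torch

def box_office_loss(pred_log_revenue: torch.Tensor, revenue: torch.Tensor) -> torch.Tensor:
    """Eq. (4) applied to base-10 log revenues."""
    y = torch.log10(revenue)
    diff = (y - pred_log_revenue).abs()
    return torch.where(diff < 1, 0.5 * diff ** 2, diff - 0.5).mean()
```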
## IV Experimental Results
### _Data and Experimental Setup_
We collect metadata of 35,794 movies from TMDB, covering the period from 1920 to 2020. Total box office data for each movie during its original release period is crawled from IMDbPro. We use stratified sampling to divide the data into train, validation, and test sets in the ratios of 70/10/20, using "franchise movie" as the label for stratification. Using the method in Section III-A, we cluster 7,700 keywords into 1,414 clusters. The number of clusters is tuned on the validation set. We use a 4-layer Transformer, with model dimension \(d_{\text{model}}=512\), fully connected layer dimension \(d_{ff}=512\), and 4 attention heads. The architecture is the same as \(\operatorname{BERT}_{\text{small}}\). More hyperparameters are reported in Appendix A.
### _Baselines_
We introduce three types of baseline models. The first is a Random Forest (RF). We feed only numerical features to the RF as one-hot encodings of the discrete features would have too many dimensions. Next, we introduce pretrained BERT models of small and medium sizes and finetune them on box office prediction. To mimic the classic BERT input, we concatenate all the input tokens into one sentence, while rounding numerical features to one decimal place, and then apply the BERT tokenizer. Lastly, we compare against a randomly initialized \(\operatorname{BERT}_{\text{small}}\) directly trained on box office prediction and a \(\operatorname{BERT}_{\text{small}}\) with pretrained \(\operatorname{BERT}\) embeddings for actors, crew members, genres and keywords. When a name contains multiple words, we use the average of the pretrained \(\operatorname{BERT}\) embeddings. For a keyword cluster, we use the embedding of the keyword in the cluster appearing the most frequently.
### _Results and Discussion_
In Table II, we report the test-set Huber loss for all models, as well as their performance relative to \(\operatorname{BERT}_{\text{small}}\). Pretrained \(\operatorname{BERT}\) models easily outperform the RF baseline,
\begin{table}
\begin{tabular}{l l l} \hline \hline Model & Clustering & Keywords \\ \hline \multicolumn{3}{l}{**Numerical features only**} \\ Random Forest & \multicolumn{2}{c}{\(0.3677_{(-3.5\%)}\)} \\ \hline \multicolumn{3}{l}{**Textual and numerical features**} \\ \(\operatorname{BERT}_{\text{small}}\) finetuned & \multicolumn{2}{c}{\(0.3553_{\text{baseline}}\)} \\ \(\operatorname{BERT}_{\text{medium}}\) finetuned & \multicolumn{2}{c}{\(0.3446_{(2.5\%)}\)} \\ \hline \multicolumn{3}{l}{**Our models**} \\ \hline Random init. & \(0.3290_{(7.4\%)}\) & \(0.3265_{(8.1\%)}\) \\ + MLM pretraining & \(0.3109_{(12.5\%)}\) & \(0.3133_{(11.8\%)}\) \\ + VG pretraining & \(0.3070_{(13.6\%)}\) & \(0.3109_{(12.5\%)}\) \\ \hline \(\operatorname{BERT}\) embeddings init. & \(0.3137_{(11.7\%)}\) & \(0.3249_{(8.6\%)}\) \\ + MLM pretraining & \(0.3102_{(12.7\%)}\) & \(0.3226_{(9.2\%)}\) \\ + VG pretraining & \(0.3037_{(14.5\%)}\) & \(0.3182_{(10.4\%)}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Test Huber loss (relative improvement over \(\operatorname{BERT}_{\text{small}}\) in parentheses) on the held-out box office test set, with keyword clusters ("Clustering") and with raw keywords ("Keywords"). Our best model improves over \(\operatorname{BERT}_{\text{small}}\) by 14.5%.
Fig. 2: Multiple object and keyword alignments for the movie _The Upside_ (2019)
but are inferior to our MLM and VG pretraining. Although the larger \(\mathrm{BERT}_{\mathrm{medium}}\) outperforms \(\mathrm{BERT}_{\mathrm{small}}\), it underperforms our MLM-pretrained networks by more than 10% in relative terms. The domain gap between movie data and the general text used in BERT pretraining, together with our feature engineering, likely contributes to the performance gap.
Notably, VG pretraining obtains sizeable improvements on top of MLM for both types of embedding initialization. The fact that VG pretraining leads to improvement even with BERT-pretrained token embeddings corroborates our hypothesis that keywords may have specialized meanings in the movie context and visual grounding may help capture the specialized semantics. Finally, the best test loss of 0.3037, or 14.5% improvement relative to \(\mathrm{BERT}_{\mathrm{small}}\), is achieved by MLM+VG pretraining.
**Content Keywords and Scaling.** As not all movies come with user-supplied keywords, we further investigate the effects of pretraining on movies with and without content keywords. We split the training set into movies with keywords (16K out of 25K) and movies without (9K out of 25K). As comparison baselines, we also create random subsets of the entire training set of sizes 9K, 12K, 16K, 20K, and 25K. Fig. 3 reports losses on the same test set when the MLM+VG network is trained on these different training sets. We note that, with an equal amount of training data, MLM and VG both exhibit stronger generalization when trained on data with keywords than on randomly mixed data. This agrees with our intuition, as MLM exploits correlations between keywords and VG further reinforces keywords with visual information. In Fig. 6 in the Appendix, we examine whether VG improves upon MLM for movies with keywords. We observe that the improvement of MLM+VG over MLM widens as the training data increases, suggesting that VG scales well and that its effectiveness grows with data.
**Effects of Keywords Clustering.** We examine the effects of keyword clusters. Table II compares results with keyword clustering ("Clustering") with those on raw keywords ("Keywords"). In most cases, keyword clusters provide performance gains, especially when pretrained \(\mathrm{BERT}\) embeddings are used. A possible reason is that near-synonyms have similar \(\mathrm{BERT}\) embeddings that are difficult for the model to differentiate, and clusters alleviate this problem.
### _Poster Retrieval Examples_
We qualitatively examine the effects of visual grounding. Figure 4 shows posters that are most similar to keywords within the contexts of movies. The top two rows are retrieved for the keyword "love" in the context of a romantic movie _One Day (2009)_. The majority of posters fall under the romance genre and visualize a couple embracing one another. The bottom ten posters are retrieved for the keyword "superhero" in the movie _The Avengers (2012)_. The results are mostly action movies with a hero at the center of the poster surrounded by others. Appendix E contains more examples.
## V Conclusion
Box office revenue is influenced by a plethora of entangled factors that are often hard to observe, let alone computationally model. An important challenge in box office prediction is hence to learn representations that capture movie semantics and correlate well with the target variable. For this purpose,
Fig. 4: **Top**: Retrieved posters from the keyword “love” in the context of a romantic movie, _One Day (2009)_; **Bottom**: Retrieved posters using the keyword “superhero” in the context of _The Avengers (2012)_.
Fig. 3: Test losses on box office prediction with different training set sizes. Vertical bars indicate standard deviations. Exact numbers are reported in Appendix E.
we propose to pretrain a transformer network with masked language modeling and visual grounding objectives, which demonstrates a substantial performance boost. We hope these results can inspire subsequent research on multimodal box-office prediction.
## VI Acknowledgments
This research is supported, in part, by Alibaba Group through the Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (No. ANG-GC-2020-011), the National Research Foundation Fellowship (No. NRF-NRFF13-2021-0006), Singapore, and NTU Start-Up Grant.
|
2304.06573 | Continuity of Monge-Ampère Potentials with Prescribed Singularities | We study the continuity of solutions to complex Monge-Ampere equations with
prescribed singularities. This generalizes the previous results of DiNezza-Lu
and the author. As an application, we can run the Monge-Ampere flow starting at
a current with prescribed singularities. | Quang-Tuan Dang | 2023-04-13T14:31:31Z | http://arxiv.org/abs/2304.06573v2 | # Continuity of Monge-Ampere potentials with prescribed singularities
###### Abstract.
We study the continuity of solutions to complex Monge-Ampere equations with prescribed singularity type. This generalizes the previous results of DiNezza-Lu and the author [6, 10]. As an application we can run the pluripotential Monge-Ampere flows in [10] starting at a current with prescribed singularities.
Key words and phrases:Monge-Ampere operator, prescribed singularities 2020 Mathematics Subject Classification: 32U15, 32Q15, 32W20
## 1. Introduction
Complex Monge-Ampere equations play a crucial role in studying canonical metrics in Kahler geometry, following Yau's solution [23] to the Calabi conjecture. As evidenced by recent developments in connection with the Minimal Model Program, it is thus desirable to construct canonical metrics on varieties with mild singularities; see [11, 12] and references therein. They led one to the study of degenerate complex Monge-Ampere equations.
In a series of recent papers [6, 13], Darvas-DiNezza-Lu intensively studied complex Monge-Ampere equations with prescribed singularities. They proved the existence and uniqueness of solutions in the context of big cohomology classes. However, the regularity is unknown.
To state our main result, let \((X,\omega)\) be a compact Kahler manifold of complex dimension \(n\) and \(\theta\) be a smooth closed (1,1) form representing a big cohomology class. We let \(\operatorname{PSH}(X,\theta)\) denote the set of \(\theta\)-psh functions. Recall that the cohomology class \(\{\theta\}\) is _big_ if there exists \(\varphi\in\operatorname{PSH}(X,\theta-\varepsilon\omega)\) with analytic singularities for some \(\varepsilon>0\). Its _ample locus_ is denoted by \(\operatorname{Amp}(\theta)\).
Fixing \(\psi\in\operatorname{PSH}(X,\theta)\) and \(0\leq f\in L^{p}(X)\), \(p>1\), we look for a solution \(\varphi\in\operatorname{PSH}(X,\theta)\) satisfying
\[\theta_{\varphi}^{n}=f\omega^{n},\quad|\varphi-\psi|\leq C, \tag{1.1}\]
where \(\theta_{\varphi}^{n}\) denotes the non-pluripolar product of \(\varphi\), introduced in [11]. The last condition means that \(\varphi\) and \(\psi\) have the same singularity type.
As mentioned in [6], for the solvability of (1.1), one needs to impose the necessary condition that \(\chi\) has _model type singularities_, i.e. \(\int_{X}\theta_{\chi}^{n}>0\) and \(\chi-P_{\theta}[\chi]\) is bounded on \(X\) (see Sect. 2.2 for the precise definition). It is crucial to emphasize that model type singularities are natural and appear in many different contexts of complex differential geometry. For instance, all analytic and minimal singularities are of model type.
We state the main result which generalizes the one of DiNezza-Lu [6] and of the author [10].
**Theorem 1.1** (see Theorem 3.2).: _Assume that \(\chi\in\operatorname{PSH}(X,\theta)\) has analytic singularities. Let \(\mu=ge^{-u}dV_{X}\) be a positive measure such that \(\mu(X)=\int_{X}\theta_{\chi}^{n}>0\), where \(g\in L^{p}(X)\), \(p>1\), and \(u\) is a quasi-psh function. Then the unique solution \(\varphi\in\mathcal{E}(X,\theta,P_{\theta}[\chi])\) of the equation_
\[\theta_{\varphi}^{n}=\mu,\quad\sup_{X}\varphi=0,\]
_is continuous in \(Amp(\theta)\setminus E_{1/q}(u)\), where \(q\) is the conjugate exponent of \(p\), i.e., \(\frac{1}{p}+\frac{1}{q}=1\), and \(E_{1/q}(u)=\{x\in X:\nu(u,x)\geq 1/q\}\) with \(\nu(u,x)\) being the Lelong number of \(u\) at \(x\)._
The existence (and uniqueness) of solutions was shown in [1, 16]. The characterization of solutions belonging to weighted subspaces was discussed in [14, 15]. A celebrated result of Siu shows that \(E_{1/q}(u)\) is a closed analytic subset, which means that \(\varphi\) is continuous in a Zariski open set.
The case when \(\chi\) has minimal singularities has been shown in [16]. If we additionally assume that \(u\) is bounded on \(X\), then the solution \(\varphi\) is Holder continuous in the ample locus \(\mathrm{Amp}(\theta)\), thanks to [15]. In case \(\mu\) is a smooth volume form, it is expected that the solution \(\varphi\) is smooth where \(\chi\) is. But even when \(\chi\) has minimal singularities, this is a widely open question. According to [1], the answer is affirmative under the extra assumption that \(\{\theta\}\) is nef.
The strategy of the proof is the same as that of [16, 16]. One cannot expect the Monge-Ampere potential \(\varphi\) to be globally bounded in the context of prescribed singularities. In the latter references, the proof relied on a generalization of Kolodziej's method [17], which makes use of generalized Monge-Ampere capacities, developed in [16] and also in [15, 16]. We extend the result and propose in this paper an alternative proof using the quasi-psh envelope technique, recently developed by Guedj-Lu [14, 15].
The result obtained in the context of elliptic equations allows us to obtain an analogous one in the parabolic setting, running pluripotential Monge-Ampere flows from degenerate initial data. This generalizes the one in [16] and is briefly discussed in Section 4.
## 2. Preliminaries
In this section we recall some terminology and notation.
### Non-pluripolar complex Monge-Ampere measures
We denote \((X,\omega)\) a compact Kahler manifold of dimension \(n\) and fix \(\theta\) a smooth closed (1,1) form.
An upper semi-continuous function \(\varphi:X\to\mathbb{R}\cup\{-\infty\}\) is called _quasi-plurisubharmonic_ (quasi-psh for short) if it is locally the sum of a smooth and a plurisubharmonic (psh for short) function. We say that \(\varphi\) is \(\theta\)_-plurisubharmonic_ (\(\theta\)_-psh_ for short) if it is quasi-psh, and \(\theta+dd^{c}\varphi\geq 0\) in the sense of currents, where \(d^{c}\) is normalized so that \(dd^{c}=\frac{i}{\pi}\partial\bar{\partial}\). We let \(\mathrm{PSH}(X,\theta)\) denote the set of all \(\theta\)-psh functions which are not identically \(-\infty\). The cohomology class \(\{\theta\}\) is _big_ if the set \(\mathrm{PSH}(X,\theta-\varepsilon\omega)\) is not empty for some \(\varepsilon>0\).
From now on, we assume that \(\{\theta\}\) is big, unless specified otherwise. Following Demailly [16], we can find a closed positive \((1,1)\)-current \(T\in\{\theta\}\) such that
\[T=\theta+dd^{c}\chi\geq 2\delta_{0}\omega\]
for some \(\delta_{0}>0\) with \(\chi\) a quasi-psh function with _analytic singularities_, i.e. locally \(\chi=c\log\left[\sum_{j=1}^{N}|f_{j}|^{2}\right]+v\), where \(v\) is a bounded function and the \(f_{j}\)'s are holomorphic functions. We see that on
\[\Omega:=X\setminus\{\chi=-\infty\},\]
\(\chi\) is smooth. It moreover follows from [1] that we can choose \(\chi\) such that \(\Omega\) is the Zariski open set of all points \(x\in X\) for which there exists a Kahler current \(T_{x}\in\{\theta\}\) with analytic singularities such that \(T_{x}\) is smooth in a neighborhood of \(x\). This locus is called the _ample locus_ of \(\theta\), also denoted by \(\operatorname{Amp}(\theta)\).
Given \(\varphi,\psi\in\operatorname{PSH}(X,\theta)\), we say that \(\varphi\) is _less singular_ than \(\psi\), and denote by \(\psi\preceq\varphi\), if there exists a constant \(C\) such that \(\psi\leq\varphi+C\) on \(X\). We say that \(\varphi,\psi\) have the _same singularity type_, and denote by \(\varphi\simeq\psi\) if \(\varphi\preceq\psi\) and \(\psi\preceq\varphi\). This defines an equivalence relation on \(\operatorname{PSH}(X,\theta)\), whose equivalence classes are the singularity types \([\varphi]\). There is a natural least singular potential in \(\operatorname{PSH}(X,\theta)\) given by
\[V_{\theta}:=\sup\{\varphi\in\operatorname{PSH}(X,\theta):\varphi\leq 0\}.\]
A function \(\varphi\) is said to have minimal singularities if it has the same singularity type as \(V_{\theta}\). In particular, \(V_{\theta}=0\) if \(\theta\) is semi-positive. Note also that \(V_{\theta}\) is locally bounded in the ample locus.
Let \(\theta^{1},\cdots,\theta^{n}\) be closed smooth real (1,1) forms representing big cohomology classes and let \(\varphi_{j}\in\operatorname{PSH}(X,\theta^{j})\). Following the construction of Bedford-Taylor [1], it has been shown in [1] that for each \(k\in\mathbb{N}\),
\[\mathbf{1}_{\cap_{j}\{\varphi_{j}>V_{\theta^{j}}-k\}}\theta^{1}_{\max(\varphi _{1},V_{\theta^{1}}-k)}\wedge\cdots\wedge\theta^{n}_{\max(\varphi_{n},V_{ \theta^{n}}-k)}\]
is well-defined as a Borel positive measure with finite total mass. The sequence of these measures is non-decreasing in \(k\) and it converges weakly to the so called _Monge-Ampere product_, denoted by
\[\theta^{1}_{\varphi_{1}}\wedge\cdots\wedge\theta^{n}_{\varphi_{n}}.\]
This does not charge pluripolar sets by definition. When \(\theta^{1}=\cdots=\theta^{n}=\theta\) and \(\varphi_{1}=\cdots=\varphi_{n}=\varphi\) we obtain the non-pluripolar Monge-Ampere measure of \(\varphi\), denoted by \((\theta+dd^{c}\varphi)^{n}\) or simply by \(\theta^{n}_{\varphi}\).
Let \(\phi_{j}\in\operatorname{PSH}(X,\theta^{j})\) be such that \(\phi_{j}\) is less singular than \(\varphi_{j}\). By [1, Thm. 2.4] we have that
\[\int_{X}\theta^{1}_{\varphi_{1}}\wedge\cdots\wedge\theta^{n}_{\varphi_{n}}\leq\int_{X}\theta^{1}_{\phi_{1}}\wedge\cdots\wedge\theta^{n}_{\phi_{n}}.\]
We say that \(\theta^{1}_{\varphi_{1}}\wedge\cdots\wedge\theta^{n}_{\varphi_{n}}\) has _full mass_ with respect to \(\theta^{1}_{\phi_{1}}\wedge\cdots\wedge\theta^{n}_{\phi_{n}}\) if the equality holds. We let \(\mathcal{E}(X,\theta^{1}_{\phi_{1}},\ldots,\theta^{n}_{\phi_{n}})\) denote the set of such \(n\)-tuples \((\varphi_{1},\ldots,\varphi_{n})\). In the particular case when the potentials involved are from the same cohomology class \(\{\theta\}\), with \(\phi\) less singular than \(\varphi\) and \(\int_{X}\theta^{n}_{\varphi}=\int_{X}\theta^{n}_{\phi}\), we simply write \(\varphi\in\mathcal{E}(X,\theta,\phi)\). Also, we simply write \(\mathcal{E}(X,\theta)\) when \(\phi=V_{\theta}\).
We recall here the plurifine locality of the non-pluripolar Monge-Ampere measure (see [1, Sect. 1.2]) for later use.
**Lemma 2.1**.: _Assume that \(\varphi\), \(\psi\) are \(\theta\)-psh function such that \(\varphi=\psi\) on an open set \(U\) in the plurifine topology. Then_
\[\mathbf{1}_{U}\theta^{n}_{\varphi}=\mathbf{1}_{U}\theta^{n}_{\psi}.\]
For practice, we stress that sets of the form \(\{u<v\}\), where \(u\), \(v\) are quasi-psh functions, are open in the plurifine topology.
We recall the following classical inequality (see e.g. [1, Lemma 4.5]).
**Lemma 2.2**.: _Let \(\varphi,\psi\in\operatorname{PSH}(X,\theta)\) be such that \(\varphi\leq\psi\). Then_
\[\mathbf{1}_{\{\varphi=\psi\}}\theta^{n}_{\varphi}\leq\mathbf{1}_{\{\varphi= \psi\}}\theta^{n}_{\psi}.\]
Proof.: For the reader's convenience, we briefly give a proof here. If \(\varphi\) and \(\psi\) are locally bounded, the result follows, due to Demailly (see e.g. [1, Theorem 3.23]).
For the general case, set \(\varphi^{t}:=\max(\varphi,V_{\theta}-t)\), \(\psi^{t}:=\max(\psi,V_{\theta}-t)\). Then \(\varphi^{t}\) and \(\psi^{t}\) are locally bounded on \(\Omega\), and it follows that
\[\mathbf{1}_{\{\varphi>V_{\theta}-t\}\cap\{\psi>V_{\theta}-t\}\cap\{\varphi^{t} =\psi^{t}\}}\theta^{n}_{\varphi^{t}}\leq\mathbf{1}_{\{\varphi>V_{\theta}-t\} \cap\{\psi>V_{\theta}-t\}\cap\{\varphi^{t}=\psi^{t}\}}\theta^{n}_{\psi^{t}}\]
using plurifine locality. Letting \(t\to+\infty\), the inequality follows.
### Quasi-plurisubharmonic envelopes and model potentials
Given a measurable function \(h:X\to\mathds{R}\), we define the \(\theta\)-_psh envelope_ of \(h\) by
\[P_{\theta}(h):=\left(\sup\{u\in\operatorname{PSH}(X,\theta):u\leq h\text{ on }X\}\right)^{*}\]
where the star means that we take the upper semi-continuous regularization. Given a \(\theta\)-psh function \(\phi\), J. Ross and D. Witt Nystrom [13] introduced the "rooftop envelope"
\[P_{\theta}[\phi](h)=\left(\lim_{C\to+\infty}P_{\theta}(\min(\phi+C,h))\right) ^{*}.\]
If \(h=0\) we simply write \(P_{\theta}[\phi]\). A potential \(\phi\in\operatorname{PSH}(X,\theta)\) is called a _model potential_ if \(\int_{X}\theta^{n}_{\phi}>0\) and \(\phi=P_{\theta}[\phi]\).
**Proposition 2.3**.: _Assume that \(h=a\varphi-b\psi\), where \(\varphi\), \(\psi\) are quasi-psh functions, and \(a,b\) are positive constants. If \(P_{\theta}(h)\not\equiv-\infty\) then \((\theta+dd^{c}P_{\theta}(h))^{n}\) puts no mass outside the contact set \(\{P_{\theta}(h)=h\}\)._
We note that \(h=a\varphi-b\psi\) is well-defined in the complement of a pluripolar set and by assumption \(P_{\theta}(h)\in\operatorname{PSH}(X,\theta)\). Moreover, \(P_{\theta}(h)\leq a\varphi-b\psi\) means that \(P_{\theta}(h)+b\psi\leq a\varphi\) on \(X\). A more general result holds when \(h\) is merely quasi-continuous, cf. [10, Theorem 2.7].
Proof.: This is already well known in the literature [10, 11]; we just sketch the proof here. We assume that \(\varphi\), \(\psi\) are \(\omega\)-psh. Thanks to [10], we can find \(\varphi_{j}\in\operatorname{PSH}(X,\omega)\cap\mathcal{C}^{\infty}(X)\) such that \(\varphi_{j}\searrow\varphi\). We set \(u_{j}=P_{\theta}(a\varphi_{j}-b\psi)\) and \(u:=P_{\theta}(h)\), so \(u_{j}\searrow u\) (see [10, Prop. 2.2]). Since \(a\varphi_{j}-b\psi\) is lower semi-continuous, the set \(\{u_{j}<a\varphi_{j}-b\psi\}\) is open. It thus follows from a classical balayage argument that for each \(j\), \(\theta^{n}_{u_{j}}\) vanishes on \(\{u_{j}<a\varphi_{j}-b\psi\}\). Also, \(u_{j}\leq a\varphi_{j}-b\psi\), hence we have that
\[\int_{X}\min(a\varphi_{j}-b\psi-u_{j},1)\theta^{n}_{u_{j}}=0\]
by [10, Prop. 2.5]. The functions \(\min(a\varphi_{j}-b\psi-u_{j},1)\) are uniformly bounded, are quasi-continuous and converge in capacity to \(\min(a\varphi-b\psi-u,1)\) (which is quasi-continuous and bounded on \(X\)). By letting \(j\to+\infty\), the conclusion follows from [10, Thm 2.3].
**Proposition 2.4**.: _Let \(\varphi,\psi\in\operatorname{PSH}(X,\theta)\) be such that \(\psi\) is more singular than \(P_{\theta}[\varphi]\). Then for any \(b>0\), \(P_{\omega}(b\varphi-b\psi)\) is a \(\omega\)-psh function with full Monge-Ampere mass._
Proof.: We adapt the argument in [10, Prop. 2.10] which goes back to [10]. We assume that \(\theta\leq A\omega\) for some \(A>0\). For each \(j\in\mathds{N}\) we set \(\varphi_{j}:=\max(\varphi,\psi-j)\) and \(u_{j}:=P_{\omega}(b\varphi_{j}-b\psi)\). Then \((u_{j})\) is a decreasing sequence of \(\omega\)-psh functions, and \(u_{j}\geq-jb\). We will show that \(\lim_{j}u_{j}\) is not identically \(-\infty\). We let, for each \(j\), \(D_{j}:=\{u_{j}=b\varphi_{j}-b\psi\}\) denote the contact set. Observe that the sets \(D_{j}\) are non-empty for \(j\) large enough. Since \(u_{j}+b\psi\leq b\varphi_{j}\), it follows from the maximum principle and Proposition 2.3 that
\[\omega^{n}_{u_{j}}\leq\mathbf{1}_{D_{j}}[\omega+dd^{c}u_{j}+b(A\omega+dd^{c} \psi)]^{n}\leq\mathbf{1}_{D_{j}}((Ab+1)\omega+dd^{c}b\varphi_{j})^{n}\]
Set \(\tilde{\omega}:=\left(\frac{1}{b}+A\right)\omega\). Fix \(t>0\). Since \(\varphi_{j}=\varphi\) on \(\{\varphi>\psi-t/b\}\) for \(j>t/b\), by plurifine locality we have, for \(j>t/b\),
\[\int_{\{\varphi\leq\psi-t/b\}}\tilde{\omega}_{\varphi_{j}}^{n}=\int_{X}\tilde{\omega}_{\varphi_{j}}^{n}-\int_{\{\varphi>\psi-t/b\}}\tilde{\omega}_{\varphi}^{n}.\]
We see that
\[\{u_{j}\leq-t\}\cap D_{j}=\{\varphi_{j}\leq\psi-t/b\}\subset\{\varphi\leq\psi- t/b\}.\]
From these things above, we have that
\[\begin{split}\omega_{u_{j}}^{n}(u_{j}\leq-t)&\leq \mathbf{1}_{D_{j}}b^{n}\tilde{\omega}_{\varphi_{j}}^{n}(u_{j}\leq-t)\\ &\leq b^{n}\tilde{\omega}_{\varphi_{j}}^{n}(\varphi\leq\psi-t/b) \\ &\leq b^{n}\left(\int_{X}\tilde{\omega}_{\varphi_{j}}^{n}-\int_{\{ \varphi>\psi-t/b\}}\tilde{\omega}_{\varphi}^{n}\right)\end{split} \tag{2.1}\]
Suppose by contradiction that \(\sup_{X}u_{j}\to-\infty\) as \(j\to+\infty\). It thus follows that \(\{u_{j}\leq-t\}=X\) for \(j\) large enough, \(t\) being fixed. Hence, for \(j>0\) large enough, (2.1) becomes
\[\int_{X}\omega^{n}\leq b^{n}\left(\int_{X}\tilde{\omega}_{\varphi_{j}}^{n}- \int_{\{\varphi>\psi-t/b\}}\tilde{\omega}_{\varphi}^{n}\right).\]
Letting \(j\to+\infty\), we obtain
\[\int_{X}\omega^{n}\leq b^{n}\left(\int_{X}\tilde{\omega}_{\varphi}^{n}-\int_{\{\varphi>\psi-t/b\}}\tilde{\omega}_{\varphi}^{n}\right), \tag{2.2}\]
where we have used that
\[\tilde{\omega}_{\varphi_{j}}^{n}=\sum_{k=0}^{n}\binom{n}{k}(\tilde{\omega}- \theta)^{k}\wedge\theta_{\varphi_{j}}^{n-k}\to\tilde{\omega}_{\varphi}^{n}\]
in the weak sense of measures on \(X\), thanks to [1, Thm. 2.3, Rmk. 2.5]. Indeed, since \(\psi\) is more singular than \(P_{\theta}[\varphi]\), we have \(\varphi\leq\varphi_{j}\leq P_{\theta}[\varphi]\). By monotonicity, for \(k=0,1,\ldots,n\),
\[\int_{X}(\tilde{\omega}-\theta)^{k}\wedge\theta_{\varphi}^{n-k}=\int_{X}( \tilde{\omega}-\theta)^{k}\wedge\theta_{\varphi_{j}}^{n-k}=\int_{X}(\tilde{ \omega}-\theta)^{k}\wedge\theta_{P_{\theta}[\varphi]}^{n-k}.\]
Finally, letting \(t\to+\infty\) in (2.2) we obtain a contradiction. Consequently, \(u_{j}\) decreases to an \(\omega\)-psh function, and we infer that \(P_{\omega}(b\varphi-b\psi)\) is an \(\omega\)-psh function for any \(b>0\).
We proceed exactly as in [1, Prop. 2.10] to obtain that \(P_{\omega}(b\varphi-b\psi)\in\mathcal{E}(X,\omega)\).
## 3. Proof of the Main Theorem
According to [1] we can find \(\chi\) a quasi-psh function with analytic singularities such that
\[\theta+dd^{c}\chi\geq 2\delta_{0}\omega\]
for some \(\delta_{0}>0\) and moreover \(\operatorname{Amp}(\theta)=X\setminus\{\chi=-\infty\}\). Let \(\phi\) have model type singularities, i.e., \(\phi=P_{\theta}[\phi]\). We assume in this section that \(\phi\) is less singular than \(\chi\). We emphasize that the assumptions on \(\phi\) are truly natural; for instance, \(\phi=V_{\theta}\) or \(\phi=P_{\theta}[\chi]\), as described in [1, Remark 1.6].
Given a non-negative Radon measure \(\mu\) whose total mass is \(\int_{X}\theta_{\phi}^{n}>0\), we consider the Monge-Ampere equation
\[\theta_{\varphi}^{n}=\mu. \tag{3.1}\]
The systematic study of such equations with prescribed singularities in big cohomology classes was initiated in [1, 2]. It has been shown there that (3.1) admits a unique normalized solution \(\varphi\in\mathcal{E}(X,\theta,\phi)\) if and only if \(\mu\) puts no mass on _pluripolar_ sets. The characterization of solutions belonging to weighted subspaces was proved in [10, 11].
Our goal is to prove the following:
**Theorem 3.1**.: _Assume \(\phi\) is as above. Let \(\varphi\in\mathcal{E}(X,\theta,\phi)\) be normalized by \(\sup_{X}\varphi=0\). Assume that_
\[\theta_{\varphi}^{n}\leq e^{-u}gdV,\]
_for some quasi-psh function \(u\) on \(X\), and \(0\leq g\in L^{p}(dV)\), with \(p>1\). Assume that \(u\) is locally bounded on an open set \(U\subset\operatorname{Amp}(\theta)\). Then \(\varphi\) is continuous on \(U\)._
For a quasi-psh function \(u\) and \(c>0\) we set
\[E_{c}(u):=\{x\in X,\nu(u,x)\geq c\},\]
where \(\nu(u,x)\) denotes the Lelong number of \(u\) at \(x\). A well-known result of Siu asserts that the Lelong super-level sets \(E_{c}(u)\) are closed analytic subsets of \(X\).
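For intuition, consider the standard local model (not taken from the paper): if, in local coordinates centered at a point \(x_{0}\), \(u=c\log|z|\) with \(c>0\) and \(u\) is smooth elsewhere, then

\[\nu(u,x)=\begin{cases}c,&x=x_{0},\\ 0,&x\neq x_{0},\end{cases}\qquad\text{so that}\qquad E_{c^{\prime}}(u)=\begin{cases}\{x_{0}\},&0<c^{\prime}\leq c,\\ \varnothing,&c^{\prime}>c.\end{cases}\]

In particular, the exceptional set \(E_{1/q}(u)\) appearing below only removes points where \(u\) has a sufficiently large logarithmic pole.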
As a consequence of Theorem 3.1 we obtain the following theorem which implies our theorem in the introductory part.
**Theorem 3.2**.: _Assume \(\phi\) is as above. Assume that \(\nu=gdV\) is a Radon measure, with \(0\leq g\in L^{p}(dV)\) for some \(p>1\). Let \(\mu\) be a non-pluripolar measure such that \(\mu(X)=\int_{X}\theta_{\phi}^{n}\). Assume that \(\mu=fd\nu\), with \(f\leq e^{-u}\) for some quasi-psh function \(u\) on \(X\). Let \(\varphi\in\mathcal{E}(X,\theta,\phi)\) be the unique normalized solution to (3.1). Then \(\varphi\) is continuous on \(\operatorname{Amp}(\theta)\setminus E_{1/q}(u)\), where \(q\) denotes the conjugate exponent of \(p\)._
Proof.: The proof relies on Demailly's equisingular approximation theorem [11]. We argue the same as in [11, Theorem 3.1].
Proof of Theorem 3.1.: The proof relies on the quasi-psh envelope technique developed in [11]. This can also be applied to give an alternative proof of [11, Theorem 3.1]. We divide the proof into several steps.
**Step 1**.: _We prove that \(\varphi\) is locally bounded on \(U\)._
We pick \(a>0\) so that \(au\) is \(\delta_{0}\omega\)-psh. Set \(\psi:=\chi+au\). We thus have
\[\theta+dd^{c}\psi\geq\delta_{0}\omega\]
and \(\psi\) is locally bounded on \(U\). We also obtain \(\psi\leq\phi+au\). We claim that
\[\varphi\geq\psi-A\]
for a positive constant \(A\) only depending on \(\delta_{0}\), \(p\), \(dV\), and \(\int_{X}e^{-P_{\omega}(a^{-1}\varphi-a^{-1}\phi)}gdV\).
Set \(\tilde{\varphi}:=P_{\delta_{0}\omega}(\varphi-\psi)\). By Proposition 2.4, \(\tilde{\varphi}\) is an \(\omega\)-psh function. One can show that \(\sup_{X}\tilde{\varphi}\) is uniformly bounded from above by applying the argument in [11]. Without loss of generality we can normalize \(\sup_{X}\psi=0\). The set \(G:=\{\psi>-1\}\) is a non-empty plurifine open set. We observe that \(\tilde{\varphi}(x)\leq(\varphi-\psi)(x)\leq 1\) for \(x\in G\), hence
\[\tilde{\varphi}(x)-1\leq V_{G,\omega}:=\sup\{u\in\operatorname{PSH}(X,\omega) :u|_{G}\leq 0\}.\]
By [11, Thm. 9.17.1] we have that \(\sup_{X}V_{G,\omega}<+\infty\) since \(G\) is non-pluripolar, hence \(\sup_{X}\tilde{\varphi}\leq 1+\sup_{X}V_{G,\omega}<+\infty\).
Since \(\varphi-\psi\) is bounded from below and quasi-continuous, it follows from Proposition 2.3 that the Monge-Ampere measure \((\delta_{0}\omega+dd^{c}\tilde{\varphi})^{n}\) is concentrated on the contact set \(D:=\{\tilde{\varphi}+\psi=\varphi\}\). We thus get
\[(\delta_{0}\omega+dd^{c}\tilde{\varphi})^{n}\leq\mathbf{1}_{D}(\theta+dd^{c}( \tilde{\varphi}+\psi))^{n}\]
We also observe that \(\tilde{\varphi}+\psi\) is a \(\theta\)-psh function such that \(\tilde{\varphi}+\psi\leq\varphi\) on \(X\). It follows from Lemma 2.2 that
\[\mathbf{1}_{D}(\theta+dd^{c}(\tilde{\varphi}+\psi))^{n}\leq\mathbf{1}_{D}( \theta+dd^{c}\varphi)^{n}.\]
Therefore, we have
\[(\delta_{0}\omega+dd^{c}\tilde{\varphi})^{n} \leq\mathbf{1}_{D}(\theta+dd^{c}\varphi)^{n}\] \[\leq\mathbf{1}_{D}e^{-u}gdV\] \[=\mathbf{1}_{D}e^{a^{-1}\tilde{\varphi}}e^{-a^{-1}(\varphi-\chi)}gdV\] \[\leq\mathbf{1}_{D}e^{a^{-1}\sup_{X}\tilde{\varphi}}e^{-P_{\omega}(a^{-1}\varphi-a^{-1}\phi)}gdV\]
It follows from the Holder inequality that \(e^{-P_{\omega}(a^{-1}\varphi-a^{-1}\phi)}g\) belongs to \(L^{r}(X,dV)\) for some \(r\in(1,p)\). By Kolodziej's estimate [10] (see also [11] for an alternative one), we infer that \(\tilde{\varphi}\) is uniformly bounded from below: \(\tilde{\varphi}\geq-A\). This proves our claim.
**Step 2**.: _There exists a sequence of functions \(\varphi_{j}\in\operatorname{PSH}(X,\theta)\cap\mathcal{C}^{0}(Amp(\theta))\) such that \(\phi\geq\varphi_{j}\) decreases to \(\varphi\)._
For convenience, we normalize \(\varphi\) so that \(\sup_{X}\varphi=-1\). Let \(0\geq h_{j}\) be a sequence of smooth functions decreasing to \(\varphi\). Then the sequence of \(\theta\)-psh functions \(\varphi_{j}:=P_{\theta}[\phi](h_{j})\) decreases to \(\varphi\) as \(j\to+\infty\). Indeed, since the operator \(P_{\theta}\) is monotone, the sequence \(\varphi_{j}\) decreases to a \(\theta\)-psh function \(v\). Note that \([\phi]\geq[\varphi]\), hence \(\varphi\) is a candidate defining \(\varphi_{j}\); we thus have \(\varphi_{j}\geq\varphi\) for all \(j\), so \(v\geq\varphi\). Moreover, \(v(x)\leq\varphi_{j}(x)\leq h_{j}(x)\) for all \(x\in X\) and all \(j\), hence \(v(x)\leq\varphi(x)\), as claimed.
Moreover, it follows from [10, Theorem 1.1] that \(\varphi_{j}\) is continuous outside the singular locus of \(\chi\), hence on \(\operatorname{Amp}(\theta)\).
**Step 3**.: _Continuity of solutions._ We finally adapt the arguments in [11, Theorem 3.7] to prove the continuity of the solution \(\varphi\) on \(U\).
Fix \(\lambda\in(0,1)\). Since Proposition 2.4 ensures that for any \(b>0\), \(P_{\omega}(b\varphi-b\phi)\) has zero Lelong numbers everywhere on \(X\), it follows from the Holder inequality that \(ge^{-P_{\omega}(b\varphi-b\phi)}\in L^{r}(dV)\) for some \(r>1\). It was shown (see e.g. [10, 1]) that there exists a bounded \(\delta_{0}\omega\)-psh function \(u_{j,\lambda}\) solving
\[(\delta_{0}\omega+dd^{c}u_{j,\lambda})^{n}=e^{u_{j,\lambda}}(g_{j,\lambda}e^{- P_{\omega}(b\varphi-b\phi)}+h)dV,\]
where \(h=\omega^{n}/dV\), \(g_{j,\lambda}=\lambda^{-n}\mathbf{1}_{\{\varphi<\varphi_{j}-\lambda\}}g\). Moreover, \(g_{j,\lambda}e^{-P_{\omega}(\varphi-\phi)}\to 0\) in \(L^{r}\). It thus follows from the stability property (see e.g. [11]) that \(u_{j,\lambda}\) uniformly converges to \(0\) as \(j\to+\infty\). We consider
\[v_{j,\lambda}:=(1-\lambda)\varphi_{j}+\lambda(\psi+u_{j,\lambda})-C\lambda\]
for \(C>0\) to be chosen hereafter. We observe that \(v_{j,\lambda}\leq\phi+\lambda au\) since \(\chi\leq\phi\) and \(\varphi_{j}\leq\phi\) by Step 2. We compute
\[(\theta+dd^{c}v_{j,\lambda})^{n}\geq e^{u_{j,\lambda}}\mathbf{1}_{\{\varphi<\varphi_{j}-\lambda\}}ge^{-P_{\omega}(b\varphi-b\phi)}dV. \tag{3.2}\]
By the previous steps, we have \(\varphi_{j}\geq\varphi\geq\psi-A\) for a positive constant \(A\). Hence on the set \(\{\varphi<v_{j,\lambda}\}\),
\[\varphi<\varphi_{j}-\lambda(\varphi_{j}-\psi)+\lambda\sup_{X}|u_{j,\lambda}|-C \lambda\leq\varphi_{j}-\lambda\]
where we have chosen \(C>1+\sup_{X}|u_{j,\lambda}|+A\). Moreover on this set we have
\[e^{u_{j,\lambda}}ge^{-P_{\omega}(b\varphi-b\phi)} \geq e^{-\sup_{X}|u_{j,\lambda}|}ge^{-b\varphi+b\phi}\] \[\geq e^{-\sup_{X}|u_{j,\lambda}|}ge^{-bv_{j,\lambda}+b\phi}\] \[\geq ge^{Ca^{-1}-u} \tag{3.3}\]
since \(v_{j,\lambda}\leq\phi+\lambda au\) with \(b=(\lambda a)^{-1}\). Therefore, it follows from (3.2) and (3.3) that
\[e^{Ca^{-1}}(\theta+dd^{c}\varphi)^{n}\leq(\theta+dd^{c}v_{j,\lambda})^{n}\leq (\theta+dd^{c}\max(\varphi,v_{j,\lambda}))^{n}\]
on the set \(\{\varphi<v_{j,\lambda}\}\). Since \(\varphi\in\mathcal{E}(X,\theta,\phi)\) we can apply the comparison principle (see [1, Lemma 2.3]) to get
\[e^{Ca^{-1}}\int_{\{\varphi<v_{j,\lambda}\}}\theta_{\varphi}^{n}\leq\int_{\{ \varphi<v_{j,\lambda}\}}\theta_{\max(\varphi,v_{j,\lambda})}^{n}\leq\int_{\{ \varphi<v_{j,\lambda}\}}\theta_{\varphi}^{n}.\]
It thus follows that \(\varphi\geq\max(\varphi,v_{j,\lambda})\geq v_{j,\lambda}\) a.e. with respect to the measure \(\theta_{\varphi}^{n}\) hence everywhere by the domination principle (see [1, Proposition 3.11]). Letting \(j\to+\infty\) we obtain
\[\liminf_{j\to+\infty}\inf_{K}(\varphi-\varphi_{j})\geq-(\sup_{K}|\psi|+C)\lambda\]
for any compact set \(K\Subset U\). Letting \(\lambda\to 0\) we have that \(\varphi_{j}\to\varphi\) uniformly on \(K\). This completes the proof.
## 4. Pluripotential Monge-Ampere flows through prescribed singularities
The results obtained in the previous section allow us to obtain analogous ones in the parabolic counterpart. We investigate in this section the following complex Monge-Ampere flow
(CMAF) \[dt\wedge(\omega_{t}+dd^{c}\varphi_{t})^{n}=e^{\partial_{t}\varphi_{t}+F(t,\cdot,\varphi_{t})}fdV\wedge dt\]
on \(X_{T}\), where
* \(X_{T}:=(0,T)\times X\) with \(T<+\infty\);
* \(0\leq f\in L^{p}(X,dV)\) for some \(p>1\), and \(f>0\) almost everywhere;
* \((\omega_{t})_{t\in[0,T)}\) is a smooth family of closed \((1,1)\)-forms on \(X\) such that \[g(t)\theta\leq\omega_{t},\quad\forall\,t\in[0,T),\] with \(g(t)\) an increasing smooth positive function on \([0,T]\).
* \(F:[0,T]\times X\times\mathbb{R}\to\mathbb{R}\) is continuous on \([0,T]\times X\times\mathbb{R}\), increasing in \(r\) and is uniformly Lipschitz, convex in \((t,r)\in[0,T]\times\mathbb{R}\),
* \(\varphi:[0,T)\times X\to\mathbb{R}\) is the unknown function, with \(\varphi_{t}=\varphi(t,\cdot)\).
We consider a \(\theta\)-psh function \(\phi\) as previously, i.e., \(\phi\) is of model type and is less singular than \(\chi\). We recall from [1, 1] that there exists a \(\theta\)-psh function \(\rho_{\phi}\) normalized by \(\sup_{X}\rho_{\phi}=0\) such that
\[(\theta+dd^{c}\rho_{\phi})^{n}=2^{n}e^{c_{1}}fdV,\quad[\rho_{\phi}]=[\phi]\]
where \(c_{1}\) is the normalizing constant such that \(\int_{X}\theta_{\phi}^{n}=\int_{X}2^{n}e^{c_{1}}fdV\).
Let now \(\varphi_{0}\) be an \(\omega_{0}\)-psh function which is less singular than \(g(0)(\rho_{\phi}+\chi)/2\). Then there exists a constant \(C_{0}>0\) such that
\[g(0)\frac{\rho_{\phi}+\chi}{2}-C_{0}\leq\varphi_{0}.\]
**Definition 4.1**.: The set \(\mathcal{P}(X_{T},\omega)\) of _parabolic potentials_ consists of functions \(\varphi:X_{T}\to[-\infty,+\infty)\) such that
* \(\varphi\) is upper semi-continuous on \(X_{T}\) and \(\varphi\in L^{1}_{\mathrm{loc}}(X_{T})\);
* for each \(t\in(0,T)\) fixed, the slice \(\varphi_{t}:x\mapsto\varphi(t,x)\) is \(\omega_{t}\)-psh on \(X\);
* for any compact subinterval \(J\subset(0,T)\), there exists a positive constant \(\kappa=\kappa_{J}(\varphi)\) such that (4.1) \[\partial_{t}\varphi\leq\kappa-\kappa(\rho_{\phi}+\chi),\] in the sense of distributions on \(J\times\Omega\).
**Definition 4.2**.: We say that a parabolic potential \(\varphi\in\mathcal{P}(X_{T},\omega)\) is a _pluripotential subsolution_ to (CMAF) on \(X_{T}\) if
* for each \(t\in(0,T)\) fixed, the \(\omega_{t}\)-psh function \(\varphi(t,\cdot)\) is locally bounded in \(\Omega\)
* the inequality \[(\omega_{t}+dd^{c}\varphi_{t})^{n}\wedge dt\geq e^{\partial_{t}\varphi_{t}+F(t,\cdot,\varphi_{t})}fdV\wedge dt\] holds in the sense of measures in \((0,T)\times\Omega\).
**Definition 4.3**.: A Cauchy datum for (CMAF) is a \(\omega_{0}\)-psh function \(\varphi_{0}:X\to\mathbb{R}\) as above. We say \(\varphi\in\mathcal{P}(X_{T},\omega)\) is a subsolution to the Cauchy problem:
\[(\omega_{t}+dd^{c}u_{t})^{n}=e^{\partial_{t}u_{t}+F(t,\cdot,u_{t})}fdV,\quad u|_{\{0\}\times X}=\varphi_{0}\]
if \(\varphi\) is a pluripotential subsolution to (CMAF) such that \(\limsup_{t\to 0}\varphi(t,x)\leq\varphi_{0}(x)\) for all \(x\in X\).
We let \(\mathcal{S}_{\varphi_{0},f,F}(X_{T})\) denote the set of pluripotential subsolutions to the Cauchy problem above.
**Lemma 4.4**.: _The set \(\mathcal{S}_{\varphi_{0},f,F}(X_{T})\) is non-empty, uniformly bounded from above on \(X_{T}\), and stable under finite maxima._
Proof.: We proceed with the same arguments as in [4, Lemma 2.2] to obtain a subsolution
\[\underline{u}(t,x):=g(t)\frac{\rho_{\phi}(x)+\chi(x)}{2}-C_{1}(t+1), \tag{4.2}\]
and all subsolutions are uniformly bounded by \(M_{0}\).
**Definition 4.5**.: We let
\[U=U_{\varphi_{0},f,F,X_{T}}:=\sup\{\varphi\in\mathcal{S}_{\varphi_{0},f,F}(X_ {T}):\underline{u}\leq\varphi\leq M_{0}\}\]
denote the upper envelope of all subsolutions.
In the same vein we obtain the regularity in time \(t\) of the Perron upper envelope.
**Theorem 4.6**.: _There exist uniform constants \(L_{U}>0\), \(C_{U}>0\) such that_
\[t|\partial_{t}U(t,x)|\leq L_{U}-L_{U}(\rho_{\phi}(x)+\chi(x)); \tag{4.3}\]
\[t^{2}\partial_{t}^{2}U(t,x)\leq C_{U}-C_{U}(\rho_{\phi}(x)+\chi(x)), \tag{4.4}\]
_in the sense of distributions in \(X_{T}\)._
**Theorem 4.7**.: _The upper envelope \(U:=U_{\varphi_{0},f,F,X_{T}}\) is a pluripotential solution to the Cauchy problem for the parabolic complex Monge-Ampere equation (CMAF) in \(X_{T}\). Moreover, \(U\) is locally uniformly semi-concave in \((0,T)\times\Omega\)._
Proof.: The proof relies on a balayage process. We proceed exactly as in [4, Theorem 3.1].
**Theorem 4.8**.: _For each \(t\in(0,T)\), \(U_{t}\) is continuous in \(\Omega\)._
Proof.: We have seen that
\[U_{t}\geq g(t)\frac{\rho_{\phi}+\chi}{2}-C_{1}(t+1)\geq g(t)\chi-C(t)\]
for \(C(t)\) a positive constant only depending on \(t\). Next, we proceed as in Steps 2 and 3 of the proof of Theorem 3.1. This completes the proof.
We can obtain the uniqueness result.
**Theorem 4.9**.: _Let \(\Phi\) be a pluripotential solution to the Cauchy problem for (CMAF) with initial data \(\varphi_{0}\). Assume that_
* \(\Phi\) _is locally uniformly semi-concave in_ \((0,T)\)_;_
* _for each_ \(t\)_,_ \(\Phi_{t}\) _and_ \(U_{t}\) _have the same singularities;_
* \(\Phi_{t}\) _is continuous in_ \(\Omega\)_._
_Then \(\Phi=U\)._
Proof.: We refer the reader to [10, Proposition 3.4, Theorem 3.7].
|
2305.11326 | Automatic Generation of Conversational Interfaces for Tabular Data
Analysis | Tabular data is the most common format to publish and exchange structured
data online. A clear example is the growing number of open data portals
published by public administrations. However, exploitation of these data
sources is currently limited to technical people able to programmatically
manipulate and digest such data. As an alternative, we propose the use of
chatbots to offer a conversational interface to facilitate the exploration of
tabular data sources, including support for data analytics questions that are
responded via charts rendered by the chatbot. Moreover, our chatbots are
automatically generated from the data source itself thanks to the instantiation
of a configurable collection of conversation patterns matched to the chatbot
intents and entities. | Marcos Gomez-Vazquez, Jordi Cabot, Robert Clarisó | 2023-05-18T22:23:40Z | http://arxiv.org/abs/2305.11326v3 | Towards the Automatic Generation of Conversational Interfaces to Facilitate the Exploration of Tabular Data
###### Abstract
Tabular data is the most common format to publish and exchange structured data online. A clear example is the growing number of open data portals published by all types of public administrations. However, exploitation of these data sources is currently limited to technical people able to programmatically manipulate and digest such data. As an alternative, we propose the use of chatbots to offer a conversational interface to facilitate the exploration of tabular data sources. With our approach, any regular citizen can benefit and leverage them. Moreover, our chatbots are not manually created: instead, they are automatically generated from the data source itself thanks to the instantiation of a configurable collection of conversation patterns.
## 1 Introduction
In real-world applications, the most common data type is tabular data, comprising _samples_ (rows) with the same set of _features_ (columns). With the rise of digital technologies and the exponential growth of data, the number of tabular data sources is constantly increasing and is expected to continue to do so in the future. In particular, tabular data is also the underlying mechanism used by all kinds of public administrations to release datasets of public data, known as _open data_. Indeed, a quick search in any of the public administration open data and transparency portals reveals the large number of data sources published1 and the popularity of CSV and other similar tabular data formats to publish those.
Footnote 1: Just the EU portal [https://data.europa.eu/](https://data.europa.eu/) registers over 1.5M
Despite its importance, there is a lack of methods and tools that facilitate the exploration of tabular data by non-technical users. This is especially relevant in the context of open data portals, which are targeting a general public audience. This situation hampers a lot the benefits regular citizens can get from current large investments of public money in open data initiatives.
Conversational User Interfaces (CUI), embedded in chatbots and voicebots, have been proven useful in many contexts to automat
experience, such as automated customer services, education and e-commerce. We believe they could also play a major role in the exploitation of tabular data sources, improving their accessibility as well [3]. Until now, such chatbots for tabular data were either manually created (an option that it is not scalable) or completely relying on pure English-to-SQL translational approaches (with limited coverage and with a risk of generating wrong answers).
This paper proposes a new approach where bots are automatically derived based on an analysis of the tabular data description and contents. For instance, the types of the columns (string, date, integer,...) are inferred from the column values and, together with the column names, used to generate questions users could potentially ask about the dataset. Our process requires no mandatory manual input and therefore can scale to cover a large number of datasets while at the same time offering an optional web interface to enrich the data description in order to improve the bot generation if so desired. For example, you can add synonyms for column names (so that the bot recognizes better questions using those synonyms) or merge or filter out columns. Moreover, we have put in place a default fallback mechanism that attempts to answer unforeseen questions via a translational model. In this case, the user is warned about possible mistakes in the bot answer. Thanks to these characteristics, we believe our approach could be adopted by any company or organization interested in facilitating the distribution and consumption of their tabular data.
The rest of this paper is structured as follows. After the preliminary concepts, we introduce the architecture for our tabular data chatbots (Section 3) and the automatic generation process to create them (Section 4). Finally, we discuss the validation, related work and conclusions and further work.
## 2 Preliminary concepts
In our approach, we aim to generate CUIs to interrogate tabular data sources. This section briefly introduces these two core concepts.
Tabular data is data that is structured into rows, each of which contains information about some observation. Each row contains the same number of cells (they could be empty), which provide values of properties of the observation described by the row. In tabular data, cells within the same column provide values for the same property of the observations described by each row [15].
CUIs are becoming more and more popular every day, typically as the front-end of a bot able to respond to user requests in a variety of domains. A bot wraps a CUI and complements it with a behavior specification that defines how the bot should react to a given user message. Bots are classified depending on the channel employed to have a conversation with the user, e.g. in _chat_bots the user interaction is through textual messages, in _voice_bots through speech [12].
Regardless of the communication channel, bots always follow the working schema depicted in Figure 0(a). The conversation capabilities are designed as a set of _intents_, where each intent represents a possible user goal when interacting with
the bot4. The bot then tries to match any user utterance (_i.e._, user's input text coming from the CUI) to one of its intents. As part of the match, one or more parameters (called also _entities_ in bot terminology) in the utterance can also be recognized, in a process known as _named entity recognition_ (NER). When there is a match, the bot back-end executes the required behaviour, optionally calling external services for complex responses (_e.g._, querying the data source); and finally, it produces a response that it is returned to the user.
Footnote 4: It is also possible to define more open domain bots, where no predefined set of intents are provided and, instead, the bot relies on generative large language models to answer. We discuss them in the related work (Section 7)
As an example, Figure 1(b) shows a bot created with our tool. It has been automatically generated from a CSV dataset of the Barcelona open data portal publishing data of senior officials of the municipal government. The data includes names, salaries and political party affiliations, so you can ask, for instance, for the top paid officials as shown in the bot. The bot recognizes the intent ("asking for salary information") and the parameters ("highest" and "3") and, based on this, looks up and prints the information.
## 3 Chatbot Architecture
The architecture we propose for our generated chatbots is depicted in Figure 2. At the core of the bot, we have an intent-based matching process aimed at
Figure 1: Preliminary concepts about chatbots.
identifying the user questions and their parameters. If the bot is able to match the intent and recognize all the mandatory parameters for that specific intent, it generates the SQL query to get the data. If the intent is matched but not the parameters, the bot may trigger a clarification subconversation with the user to gather them. If this is successful then it proceeds with the query generation as before. The query to execute is precisely generated based on the intent and parameter values: as the bot knows exactly what the user is looking for, it is able to unambiguously generate the SQL query that will precisely answer the user's question based on the structure of the data source.
The final step is to provide the answer to the user. Depending on the matched intent and the result, it will be directly printed in the output channel or the bot will ask the user how she wants to see it (_e.g._, if the result has many rows) and explain the results if appropriate. The bot will also alert the user if it detects some kind of error during the execution.
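To make this query-generation step concrete, the sketch below (our illustration, not the tool's actual code; the intent names, parameter keys and table name are hypothetical) shows how a matched intent and its recognized parameters could be mapped to an unambiguous SQL statement.

```python
# Hypothetical sketch of mapping a matched intent plus its recognized
# parameters to a SQL query; intent names, parameter keys and the table
# name are illustrative assumptions, not the generator's real templates.

def build_query(intent: str, params: dict, table: str = "dataset") -> str:
    """Return the SQL statement that answers the matched intent."""
    if intent == "salary_greater_than_value":
        # The numeric literal was recognized during named entity recognition.
        return f"SELECT * FROM {table} WHERE salary > {params['value']}"
    if intent == "top_n_by_field":
        return (f"SELECT * FROM {table} "
                f"ORDER BY {params['field']} DESC LIMIT {params['n']}")
    raise ValueError(f"No query template for intent '{intent}'")

# "Who are the 3 top paid officials?" matched as top_n_by_field(field=salary, n=3)
print(build_query("top_n_by_field", {"field": "salary", "n": 3}))
```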
Despite the bot's best efforts, sometimes it may fail to understand the user's question. This could happen because the user posed the question in a way that is too different from the training sentences used to train the bot or because the question was not foreseen during the bot generation5. When this happens, the bot cannot give an exact answer on its own. At this point, it could just tell the user that it was not able to understand the question but we try to be more useful and add to the bot a powerful fallback mechanism to rely on. The fallback relies on a pretrained language model [11] to automatically translate the user's query to its corresponding SQL statement. This approach does not always provide a perfect translation, and therefore, it may generate a wrong answer but it is worth trying as our experiments suggest that users prefer an approximate result (even
Figure 2: Diagram of the architecture of the generated chatbots.
if potentially wrong) than a plain "sorry" message. Note that the bot always warns the user when answering via this fallback strategy.
Bots are multilingual and so far understand Catalan, Spanish and English though adding support for other languages is straightforward.
## 4 Automated chatbot generation
Figure 3 shows the workflow we follow to generate the bots, following the previous architecture, from an initial tabular data source, depicted as a CSV file in the Figure. The process can be fully automatic. Nevertheless, the data owner can optionally participate by enriching the data definition we automatically infer from the tabular data source to generate more powerful bots.
The next subsections describe in more detail each of these steps.
### Data description inference
To automatically create a bot we only need one ingredient: a tabular dataset. Datasets are composed of columns (dataset fields) and rows (dataset entries). From the structure of the dataset we will gather the list of columns/fields (with their names). From the analysis of the dataset content, we will infer the data type of each field (numeric, textual, date-time,...) and its diversity (number of different values present in that specific column). Based on a predefined (but configurable) diversity threshold, we automatically classify as _categorical_ those fields under the threshold. Categorical fields are implemented as an additional bot entity so that users can directly refer to them in any question.
Think of fields and rows as input and output parameters of the user's questions the bot must be able to answer, _e.g._, users can ask for the value in field
Figure 3: The chatbot generation process.
\(X\) of rows satisfying a certain condition in field \(Y\). Values of categorical fields can be directly used as a filter without having to reference the field where they appear. Internally, the bot will transform the reference to the category value into a subquery checking for rows where the categorical field has that specific value. This kind of reasoning is part of the bot generation phase, but there are several options to modify and enrich this default process, as explained next.
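As a rough illustration of this inference step (a simplified sketch with pandas, not the tool's implementation; the diversity threshold and the date-time heuristic are assumptions), the data description could be derived from a CSV file as follows.

```python
# Simplified sketch of the data description inference: infer each column's
# type and diversity, then flag low-diversity columns as categorical.
import pandas as pd

DIVERSITY_THRESHOLD = 20  # assumed default; the real threshold is configurable

def infer_description(csv_path: str) -> dict:
    df = pd.read_csv(csv_path)
    description = {}
    for column in df.columns:
        series = df[column]
        if pd.api.types.is_numeric_dtype(series):
            dtype = "numeric"
        else:
            # Heuristic: treat the column as date-time if most values parse.
            parsed = pd.to_datetime(series, errors="coerce")
            dtype = "date-time" if parsed.notna().mean() > 0.8 else "textual"
        diversity = series.nunique(dropna=True)
        description[column] = {
            "type": dtype,
            "diversity": diversity,
            "categorical": diversity <= DIVERSITY_THRESHOLD,
        }
    return description
```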
### Data description enhancement
There are several ways to improve the default bot to maximize the chances it understands a user question. Here we review the most important ones:
#### 4.2.1 Synonyms and translations:
You can add translations of the column names to enable users to ask questions in languages different from the one the dataset is in6. The same applies to synonyms: the bot may be able to match them (as the underlying neural network may identify the synonym due to its closeness, in terms of word embeddings, to the field name), but the results will always be better if you explicitly add them.
Footnote 6: An internal automatic translation could work but this is not currently implemented.
#### 4.2.2 Row names:
You can also add aliases for the concept represented by a row in a dataset. This way users can refer to the rows of the dataset in different ways (_e.g._, users could ask about 'rows' but also 'persons', 'people', 'officers',... in our example dataset).
#### 4.2.3 Field composition:
You can create new (virtual) fields by merging several other ones. For instance, you could define the concept of "address" by saying it is the composition of the existing fields 'street', 'city' and 'postcode'. This way, the generation process will add the "address" concept to the bot, which will then be able to answer questions about the address of somebody while also being able to answer questions on the specific fields (_e.g._, the most common postcode).
#### 4.2.4 Field Groups:
Some fields may have similar names and/or meanings. To avoid confusion, you can indicate that they are related. If so, when the user asks about a field that is part of a group, the bot will start a subconversation to identify which one the user is actually interested in. You can also mark one as the default option. Note that this is different from field composition. In a composition, all fields are different parts of a larger concept. Here, they are different perspectives on the same concept (_e.g._, a metric expressed in different unit systems or with different quality or precision).
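Putting these enrichment options together, a possible (purely illustrative) enrichment description for the running example could look as follows; the concrete format used by the tool's web interface may differ.

```python
# Hypothetical enrichment description; field names follow the running example
# of Barcelona municipal officials and the structure is only illustrative.
enrichment = {
    # Aliases for the concept represented by a row.
    "row_names": ["rows", "persons", "people", "officers"],
    # Per-field synonyms and translations.
    "fields": {
        "Remuneration": {"synonyms": ["salary", "pay", "wage"]},
    },
    # Virtual fields composed of several existing ones.
    "compositions": {
        "address": ["street", "city", "postcode"],
    },
    # Related fields the bot should disambiguate in a subconversation.
    "groups": {
        "income": {"members": ["gross salary", "net salary"],
                   "default": "gross salary"},
    },
}
```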
### Bot generation
The bot generation phase takes the (potentially enriched) data description and instantiates a set of predefined conversation patterns, gathered, improved and extended via a number of experiments with users, to generate the actual set of questions the bot will be trained on.
On top of this core component, the generator will add the default fallback mechanism and other auxiliary conversations and components (see Section 3) to create a fully functional bot as described in more detail in Section 5.
Next subsections focus on the description of the conversation patterns and how they are systematically applied on the dataset fields to generate all the intents (and entities) to cover the maximum number of potential user questions.
#### 4.3.1 Conversation patterns
For each field data type, we have identified different types of questions that could be potentially asked about the value/s of fields of that type, both for individual and grouped sets of rows. We have also identified several dataset-level questions.
Due to lack of space we cannot show all the question variations we cover, but we summarize and exemplify them in Table 1. As such, each example should be understood as representative of a full set of related questions comprising different combinations of operators, filters and wordings. Each conversation is illustrated with the leading question but keep in mind that some of them will trigger a subconversation with the bot, _e.g._, to clarify the target field in a field group as mentioned above or to better identify the parameters in the utterance. Examples are taken from the Barcelona official dataset introduced in Section 2.
Note that the bot will also understand concatenations of the different conversation patterns. Note also how some of the examples make use of the enrichment options. For example, the literal "Ada Colau" (name of the current Mayor of Barcelona) is targeting a composite field grouping two existing fields in the dataset: "First Name" and "Last Name" so users can ask about officials called "Ada", called "Colau" but also "Ada Colau" altogether.
#### 4.3.2 Intent generation process
The patterns above (including all the operator variations) are systematically applied over each field to generate all the bot intents. Intents for dataset and meta level patterns are also added. Each intent is specified by providing a set of training sentences (examples of verbalizations a user could use to ask for that exact question) and, when needed based on the pattern, slots for indicating the parameters for the query.
For instance, for each numeric field \(F_{i}\) we will generate an intent matching users' questions asking for all the elements in the dataset that have a value _greater than VALUE_ in \(F_{i}\) (and we would repeat the process for each of the other numeric operators). _VALUE_ is defined as an intent parameter to be provided by the user as part of the utterance. A set of training sentences (initially defined by us and extended based on the actual usage of the bot) is predefined for each type of conversation. This intent (simplified, as we only show three of the training sentences due to space constraints) is shown in the following listing.
intent: salary_greater_than_value
training sentences:
  "Give me the rows with salary > VALUE"
  "Who has a salary greater than VALUE?"
  "Filter remuneration bigger than VALUE"
parameters:
  name: value, fragment: VALUE, entity: number
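The following sketch (ours, with simplified sentence templates rather than the tool's full pattern catalogue) illustrates how one such intent could be instantiated per numeric field and operator.

```python
# Simplified sketch of intent generation for numeric fields: one intent per
# (field, operator) pair, each with templated training sentences.
NUMERIC_OPERATORS = {"greater_than": ">", "less_than": "<"}

TEMPLATES = [
    "Give me the rows with {field} {symbol} VALUE",
    "Who has a {field} {op_words} VALUE?",
    "Filter {field} {symbol} VALUE",
]

def generate_numeric_intents(field: str) -> list:
    intents = []
    for op_name, symbol in NUMERIC_OPERATORS.items():
        intents.append({
            "intent": f"{field}_{op_name}_value",
            "training_sentences": [
                t.format(field=field, symbol=symbol,
                         op_words=op_name.replace("_", " "))
                for t in TEMPLATES
            ],
            "parameters": [{"name": "value", "fragment": "VALUE",
                            "entity": "number"}],
        })
    return intents

print(generate_numeric_intents("salary")[0]["intent"])  # salary_greater_than_value
```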
As you can guess, this process suffers from a combinatorial explosion problem that ends up generating numerous intents. This threatens the scalability of the process when dealing with datasets with many columns. In those cases, we resort to a slightly different approach where the bot is trained on more generic intents. In this approach, the field itself becomes an additional parameter. The same applies to the operators. During the generation process, all field names and all operators are stored as part of two new bot entities and recognized as parameters in the user utterance during the matching phase. Let's see an example that subsumes the previous intent:
intent: field_operator_value
training sentences:
  "Give me the rows with FIELD OPERATOR VALUE"
  "Who has a FIELD OPERATOR VALUE?"
  "Filter FIELD OPERATOR VALUE"
| Conversation type | Examples |
| --- | --- |
| **Dataset level**: questions about the global dataset itself | _How many rows are there?_ <br> _How many attributes does the dataset have?_ <br> _How many different values has the field political party?_ |
| **Field level**: queries about any field, applying a variety of operators and filters depending on the field types | _How many different political parties are there?_ <br> _Which political party has more members?_ <br> _Give me the 3 parties with the highest salaries_ <br> _Give me the officials with salary >120000_ <br> _What officials are called ’Colau’?_ <br> _Give me the people with salary between 80000 and 100000_ <br> _Show me the politicians with age <30 and salary >50000_ |
| **Cell values level**: queries that do not explicitly mention any field, but its values | _What is the salary of Ada Colau?_ <br> _How many women are there?_ <br> _Are there more women or men?_ <br> _Who are the People’s Party women?_ |
| **Aggregations**: grouping operations on the selected rows | _What is the average salary of People’s Party?_ <br> _What is the total salary of People’s Party?_ <br> _Give me the minimum remuneration of officials younger than 30_ |
| **Meta questions**: on the dataset or the bot | _Where is the dataset taken from?_ <br> _How old is the data?_ <br> _What kind of questions can I ask?_ |
Table 1: Examples of questions that our generated chatbots will understand
parameters:
  name: field, fragment: FIELD, entity: fieldEntity
  name: operator, fragment: OPERATOR, entity: operatorEntity
  name: value, fragment: VALUE, entity: literalEntity
Given a user utterance such as "what are the officers with a salary higher than 10000", the bot would now match this _field operator value_ intent while at the same time instantiating the parameter _FIELD_ with _salary_, the parameter _OPERATOR_ with _greater than_ and the parameter _VALUE_ with 10000.
With this approach, the total number of intents is constant regardless of the dataset. As a trade-off, the quality of the matching process in the bot may slightly decrease as now most of the matching weight relies on the parameters themselves and not so much on the training sentences that are now more abstract and do not explicitly contain the specific terms employed by the user7. Moreover, the bot will make sure that the data types of the field, the operator and the literal are consistent (_e.g._, it will not allow expressions like "name > 1000").
Footnote 7: Providing good support for this type of intents was precisely the main motivation to build our own intent recognition engine, see Section 5.
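The type-consistency check mentioned above could be sketched as follows (an illustrative snippet; the field-type map would come from the inferred data description and the operator set is an assumption).

```python
# Minimal sketch of the consistency check: the matched FIELD, OPERATOR and
# VALUE must have compatible data types before a query is generated.
FIELD_TYPES = {"salary": "numeric", "name": "textual"}   # from the data description
NUMERIC_OPERATORS = {">", "<", ">=", "<=", "between"}

def is_consistent(field: str, operator: str, value: str) -> bool:
    if operator in NUMERIC_OPERATORS:
        return (FIELD_TYPES.get(field) == "numeric"
                and value.replace(".", "", 1).isdigit())
    return True

print(is_consistent("salary", ">", "1000"))  # True
print(is_consistent("name", ">", "1000"))    # False: "name > 1000" is rejected
```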
## 5 Tool Support
We have implemented a Spring web application that enables the importation and analysis of new data sources, the (optional) data definition enrichment and the actual bot generation as described in Section 4. Figure 4 shows part of the customization tab where you can edit the field properties, add synonyms,...
The generated bots rely on Xatkit [8], an open source chatbot library created to reduce boilerplate code, complex API understanding, and technical details to
Figure 4: Screenshot of the configuration tab to enrich the data definition.
facilitate the definition and deployment of bots. The bot is wrapped and exported as a single jar file. You can also modify any aspect of the chatbot source code before its deployment.
Although Xatkit allows using any NLP intent recognition provider (such as Dialogflow or IBM Watson), we implemented our own engine to better support the types of questions, with heavy use of entities, that are typical of tabular data usage. It is based on a multi-class classification neural network built with TensorFlow and performs intent recognition together with named entity recognition.
To access the data source when computing answers, the bot relies on Apache Drill to abstract from the concrete format of the data source (CSV, a relational table,...). The bot's fallback mechanism is implemented as a two-step process. First a translation from the input language (_e.g._, Catalan or Spanish) to English, if needed. And then the English-to-SQL translation. The Spanish-to-English model we use is Helsinki-NLP/opus-mt-es-en while for Catalan-to-English we use softcatala/opennmt-cat-eng. TabularSemanticParsing [11] is then used for the English-to-SQL translation.
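A hedged sketch of this two-step fallback is shown below: the translation step uses the Helsinki-NLP model through the Hugging Face pipeline API, while the English-to-SQL step is represented by a placeholder function, since the exact TabularSemanticParsing call is not reproduced here.

```python
# Two-step fallback sketch: translate the utterance to English, then hand the
# English question to a text-to-SQL model. `english_to_sql` is a placeholder
# for the TabularSemanticParsing invocation.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

def fallback_answer(utterance: str, english_to_sql) -> str:
    english = translator(utterance)[0]["translation_text"]
    sql = english_to_sql(english)
    # The bot always warns the user that this answer may be wrong.
    return f"(approximate answer, please double-check) SQL: {sql}"
```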
All our tooling infrastructure is freely available on GitHub8
Footnote 8: [https://github.com/opendata-for-all/bodi-generator](https://github.com/opendata-for-all/bodi-generator)
## 6 Validation
To validate our approach, we have conducted three preliminary experiments with datasets from the city of Barcelona, the Catalan Government and the LIS Cross-National Data Center in Luxembourg.
In each case, we first asked the institutions for a representative data source they wanted us to use. We then generated the respective chatbots and deployed them in a public website for them to test. We also invited other potential users (_i.e._, "citizens") to try them out as well with the questions they thought the bot should be able to answer in order to be useful.
Based on their feedback and the data automatically collected by the bots9, we iterated over the generator and the bots until reaching a point where the bots managed to answer all desired user questions. Some of these iterations were bot-specific, _i.e._, mostly adding synonyms and data-cleaning preprocessing operations, but others triggered generic improvements to the bot infrastructure and generation process. For instance, the experiments showed us the need to be able to group related fields (see Section 4.2), as users were asking ambiguous questions potentially referring to more than one related field.
Footnote 9: Bots include a monitoring component that logs their hits and misses and also the (optional) users’ evaluation of the answers’ quality.
## 7 Related Work
In this section, we compare our approach with other works focusing on the exploration and exploitation of tabular data by non-technical people.
A first group of works focuses on the generation of charts and interactive dashboards [17] to help users filter and view the data they want. Socrata10 is a popular tool for building such interfaces in the world of open data. However, while really useful for seeing trends and global data perspectives, this approach is not suitable for answering concrete ad hoc questions on specific aspects of the data.
Footnote 10: [https://dev.socrata.com/](https://dev.socrata.com/)
Other approaches opt for a direct English-to-SQL translation when querying tabular data, such as [1, 11, 14]. On the one hand, they attempt to answer any question, not just those that match a predefined intent. On the other hand, they are in fact constrained in the questions they understand as they have been trained with pairs of <English,SQL> training sets of limited complexity. But a major issue is that they can generate wrong translations and therefore come up with wrong factual answers. This latter issue is also the main concern with generative chatbots based on large language models (LLMs) that could also be used to chat with the citizens. They tend to "hallucinate" and invent facts, which is something too risky for a public-facing chatbot, especially for a government administration. Note that we acknowledge the benefits of the above English-to-SQL approaches as a complementary strategy, as we do in our own framework (see Section 3) but they are, so far, not a good option as the core solution [16].
More similar to our efforts, a couple of Proofs of Concept (PoC) of intent-based chatbots used for public tabular / open data have been published [13, 5]. Both bots were manually created, in contrast with our approach, where bots are automatically generated. [4, 10] propose a chatbot to help users find data sources in an Open Data repository but do not provide querying capabilities to consult those data sources. An exception is [6], where the bot generation is semi-automated, but it requires a mandatory and extensive annotation process, while we focus on a scalable approach able to generate chatbots with no human intervention if so desired. Generation of chatbots from other types of data sources, like web pages [7] or knowledge graphs [2], has also been explored, and some of their ideas could be exploited for tabular data as well.
To sum up, we believe our approach proposes a novel combination of strategies and opens the door to a more massive use of chatbots for tabular data thanks to our automatic generation strategy.
## 8 Conclusions and further work
We have proposed a new framework to automatically generate chatbots that help non-technical users explore tabular data sources. This is especially useful given the current trend towards more transparency and openness in public administration, with more and more open data sources released each day. Our chatbots encode a large number of potential questions users may want to ask about the data. Such questions are automatically generated based on an initial analysis of the structure and content of the data source. A default fallback, based on the use of LLMs, is used to try to answer those questions that were not foreseen.
As further work, we plan to improve the training of the chatbots thanks to the use of ontologies. The idea is to map the dataset metadata (including column names) to semantic concepts in an ontology to automatically obtain synonyms and related concepts to enrich the training. We also plan to extend the set of conversation patterns, for instance including questions on the validity, origin and possible biases of the data whenever the dataset includes this information [9]. Finally, we would like to integrate our approach with full relational databases or even open data management systems, such as CKAN, to automate the generation of sets of interrelated chatbots for all the data in a given organization together.
|
2307.10641 | Transfer Learning for Inverse Design of Tunable Graphene-Based
Metasurfaces | This paper outlines a new approach to designing tunable electromagnetic (EM)
graphene-based metasurfaces using convolutional neural networks (CNNs). EM
metasurfaces have previously been used to manipulate EM waves by adjusting the
local phase of subwavelength elements within the wavelength scale, resulting in
a variety of intriguing devices. However, the majority of these devices have
only been capable of performing a single function, making it difficult to
achieve multiple functionalities in a single design. Graphene, as an active
material, offers unique properties, such as tunability, making it an excellent
candidate for achieving tunable metasurfaces. The proposed procedure involves
using two CNNs to design the passive structure of the graphene metasurfaces and
predict the chemical potentials required for tunable responses. The CNNs are
trained using transfer learning, which significantly reduced the time required
to collect the training dataset. The proposed inverse design methodology
demonstrates excellent performance in designing reconfigurable EM metasurfaces,
which can be tuned to produce multiple functions, making it highly valuable for
various applications. The results indicate that the proposed approach is
efficient and accurate and provides a promising method for designing
reconfigurable intelligent surfaces for future wireless communication systems. | Mehdi Kiani, Mahsa Zolfaghari, Jalal Kiani | 2023-07-20T07:10:37Z | http://arxiv.org/abs/2307.10641v1 | # Transfer Learning for Inverse Design of Tunable Graphene-Based Metasurfaces
###### Abstract
This paper outlines a new approach to designing tunable electromagnetic (EM) graphene-based metasurfaces using convolutional neural networks (CNNs). EM metasurfaces have previously been used to manipulate EM waves by adjusting the local phase of subwavelength elements within the wavelength scale, resulting in a variety of intriguing devices. However, the majority of these devices have only been capable of performing a single function, making it difficult to achieve multiple functionalities in a single design. Graphene, as an active material, offers unique properties, such as tunability, making it an excellent candidate for achieving tunable metasurfaces. The proposed procedure involves using two CNNs to design the passive structure of the graphene metasurfaces and predict the chemical potentials required for tunable responses. The CNNs are trained using transfer learning, which significantly reduced the time required to collect the training dataset. The proposed inverse design methodology demonstrates excellent performance in designing reconfigurable EM metasurfaces, which can be tuned to produce multiple functions, making it highly valuable for various applications. The results indicate that the proposed approach is efficient and accurate and provides a promising method for designing reconfigurable intelligent surfaces for future wireless communication systems.
## 1 Introduction
Electromagnetic (EM) metasurfaces have gained significant attention due to their ability to control EM wave propagation in a subwavelength thickness [1, 2]. While this technology has led to various advancements, including mantle cloaking [3, 4], polarization twisting [5, 6], wave-front manipulation [7, 8], and perfect absorption [9, 10], there are certain limitations that need to be addressed. One such limitation pertains to the inflexibility of EM characteristics, particularly in operating systems
[11, 12]. Consequently, there is a growing need for reconfigurable metasurfaces that can dynamically manipulate EM waves in response to external signals.
Reconfigurability has been achieved at microwave frequencies by incorporating semiconductor lumped components into metasurfaces [13, 14, 15]. At Terahertz (THz) frequencies, tuning substances such as vanadium dioxide [16], liquid crystal [17], and graphene [18] have shown promise as platforms for reconfigurable metasurfaces. Graphene, a single two-dimensional (2D) plane of carbon atoms arranged in a hexagonal lattice, exhibits substantial and configurable absorption in the THz regime [19, 20]. The ability to control graphene's absorption behavior by changing its chemical potential through electrostatic or chemical doping makes it a promising candidate for reconfigurable THz devices [21].
One critical issue in designing reconfigurable metasurfaces, including graphene-based ones, is the inverse design of tunable meta-atoms that can exhibit different desired EM responses by altering external control signals. Traditional inverse design methods involve time- and computation-intensive searches over known structures, such as cross-shaped patches, rectangular patches, and split-ring resonators, which often fall short of achieving the required performance, especially when broadband, polarization-sensitive, and wide-angle responses are desired. An evolutionary optimization algorithm was proposed as an alternative to finding optimal THz absorbers with tunable performance [22]. However, this stochastic algorithm heavily relies on the quality of initial designs, limiting its consistency and productivity as the problem complexity increases.
Deep Learning (DL) approaches, on the other hand, have emerged as powerful representation-learning methods. These approaches assemble simple yet nonlinear modules that map high-dimensional structured data into lower-dimensional representations, and the layers are learned from data rather than being developed by human engineers [23]. DL has gained popularity in various fields, including computer vision [24, 25], natural language processing [26], reinforcement learning [27], graph representation learning [28, 29], drug discovery [30], medical diagnosis [31], and different fields of engineering [32, 33, 34, 35].
Figure 1: A conceptual illustration of the reconfigurable metasurface, as well as the five-layer graphene-based meta-atoms used for training the proposed inverse design model.
In the field of EM design, DL approaches have demonstrated promise in designing a wide range of well-functioning EM devices by directly identifying key geometric parameters based on desired EM responses. Peurifoy et al. utilized fully connected Neural Networks (NNs) to simulate light interaction with nanoscale structures [36]. Similarly, Nadell et al. employed NNs with fully connected layers along with a fast-forward dictionary search method for the inverse design of all-dielectric metasurfaces [37]. However, the use of fully connected NNs limited the efficiency of the inverse design models to simple structure designs with restricted EM responses. In contrast, Convolutional Neural Networks (CNNs) have been successfully applied to design various types of metasurfaces, leading to the achievement of new functionalities and improved device performance [38, 39, 40, 41]. For instance, Liu et al. developed a Generative Adversarial Network (GAN) based on CNNs for the inverse design of metasurfaces [38]. However, these metasurface design schemes typically focused on either the amplitude or phase response of the metasurfaces [42, 43]. To address this limitation, Naseri et al. proposed a generative DL model based on a variational autoencoder for the inverse design of multi-layer metasurfaces, considering both the amplitude and phase responses, albeit for a single function, polarization conversion [44]. To tackle the challenge of generating metasurfaces with multiple functionalities, two inverse design models have been introduced. Kiani et al. utilized conditional GANs in the microwave regime to design multi-layer metal-dielectric metasurfaces with three different functions and full-space coverage [45]. In a similar vein, An et al. employed a combination of conditional and Wasserstein GANs for the inverse design of all-dielectric metasurfaces in photonics [46].
Notwithstanding the advancements in DL models for metasurface design, significant restrictions still persist. Firstly, these models are limited to designing passive metasurfaces with fixed functions, which further constrains their applicability to a limited frequency range. Secondly, the data collection process associated with these models is time-consuming, resulting in overall inefficiency. As a consequence, the DL-enabled design of reconfigurable metasurfaces, especially when the training dataset is significantly reduced, presents a formidable challenge that has remained unaddressed until now. In this study, a novel approach based on Transfer Learning (TL) is proposed for the inverse design of reconfigurable graphene metasurfaces. The approach utilizes two consecutive CNNs that leverage TL to train the networks and facilitate the design of tunable meta-atoms. In the proposed model, the first network designs the passive components of the tunable graphene meta-atoms, while the second network predicts the chemical potentials (control signals) of the graphene meta-atoms to achieve tunable responses. To train, validate, and test the networks, datasets are constructed comprising graphene meta-atoms represented as 16\(\times\)16 matrices. These datasets incorporate both graphene and vacuum square blocks with chemical potentials ranging from 100 \(meV\) to 1000 \(meV\), along with their corresponding phase responses encompassing a wide range of values. It is demonstrated that the incorporation of pre-trained CNNs in the inverse design model improves the feature extraction of EM phase responses, yielding satisfactory results even with a limited amount of training data samples. Finally, the effectiveness of the approach is showcased by successfully employing it in the inverse design of a tunable meta-atom. This meta-atom exhibits a wide range of desired phase responses by simply adjusting the chemical potentials. The presented approach offers several unique features in comparison to previous works in the literature. Firstly, the framework enables the design of tunable graphene metasurfaces without relying on initial designs, distinguishing it from the optimization-based approach presented in [22]. Secondly, this approach represents the first endeavor to design reconfigurable graphene metasurfaces using DL-based metasurface inverse design methods. Lastly, the proposed methodology leverages the power of TL, resulting in significant reductions in computational time and the cost associated with training data collection.
## 2 Meta-atom structure
This study employs reflective meta-atoms, comprising five layers, to manipulate EM waves. Figure 1 shows the schematic diagram of these meta-atoms, which are designed to control EM waves effectively. The first layer of the meta-atoms contains randomly arranged vacuum and graphene square blocks with a length of \(l=0.5\)\(\mu m\). The second and third layers consist of ultra-thin alumina and silicon layers that serve the sole purpose of electrostatically biasing the graphene layers and have a negligible impact on the EM responses [47]. The other layers of the meta-atoms consist of a silicon dioxide layer (\(\epsilon_{r}=2.1\)) serving as the primary substrate of the metasurface, with a thickness of \(h=2.0\)\(\mu m\), and a very thin gold layer. The meta-atoms for practical applications are incorporated into lattices, to decrease corner-related coupling effects. They are linked together through horizontal and perpendicular graphene ribbons with width \(w=0.1\)\(\mu m\) to simplify the biasing of the meta-atoms in lattices and use one electrostatic bias for them. The reflective meta-atoms employed in this study possess a periodicity of \(p=10\)\(\mu m\) in both the x and y directions. The graphene-based layer is subdivided into 16\(\times\)16 square blocks and exhibits twofold symmetry along the x- and y-axis in the x-o-y plane. Consequently, the first layer of the meta-atoms can be efficiently represented using 8\(\times\)8 coding sequences.
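As an illustration of this encoding (our own numpy sketch under the assumption that the 8\(\times\)8 coding sequence is mirrored along both axes; the paper does not spell out the exact mapping), a twofold-symmetric 16\(\times\)16 graphene/vacuum pattern can be built as follows.

```python
# Sketch: expand an 8x8 binary coding sequence (1 = graphene, 0 = vacuum) into
# a 16x16 pattern with twofold symmetry along the x- and y-axes (assumed mapping).
import numpy as np

rng = np.random.default_rng(0)
quarter = rng.integers(0, 2, size=(8, 8))                      # 8x8 coding sequence

half = np.concatenate([quarter, np.fliplr(quarter)], axis=1)    # mirror along x
meta_atom = np.concatenate([half, np.flipud(half)], axis=0)     # mirror along y

assert meta_atom.shape == (16, 16)
assert np.array_equal(meta_atom, np.fliplr(meta_atom))           # symmetric in x
assert np.array_equal(meta_atom, np.flipud(meta_atom))           # symmetric in y
```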
DL models rely on the initial data used to train the model. Even the most effective models may become worthless without a base of high-quality training data. Indeed, when trained on insufficient, incorrect, or irrelevant data, strong DL models can fail severely at generalizing.
Figure 2: The chemical potential distribution of the graphene meta-atoms in the training, validation, and test datasets.
To train a high-performing model for the inverse design of reconfigurable graphene-based metasurfaces, a dataset composed of 8\(\times\)8 coding sequences, which represent the geometry of the meta-atoms, graphene electrostatic biases, and the corresponding phase responses is collected. The coding sequence of the graphene material in the meta-atoms is randomly selected. Moreover, in order to preserve a balanced dataset, 10 chemical potentials between 100 \(meV\) and 1000 \(meV\) are chosen for each meta-atom structure. The first chemical potential is fixed at 100 \(meV\), while the others are randomly chosen in 100 \(meV\) intervals. For example, the last chemical potential is a random value between 900 \(meV\) and 1000 \(meV\). Figure 2 shows the distribution of the chemical potentials in different intervals. As clearly shown, the distribution has an almost uniform density over the whole 100 \(meV\) to 1000 \(meV\) range. The randomization of the meta-atoms in terms of graphene geometry and chemical potential significantly enhances the generalization performance of deep NNs for the inverse design of EM responses unseen in the training dataset. The EM responses of the generated meta-atoms are calculated by CST Microwave Studio (CST MWS) using the Frequency Domain Solver. In the CST MWS environment, periodic boundary conditions are activated in the x- and y-directions and Floquet ports are defined along the z-direction, resulting in the simulation of a transversely infinite array composed of graphene-based meta-atoms. In the following sections, DL models trained on this dataset of graphene-based metasurfaces are explored in order to design reconfigurable metasurfaces with desired EM functions in the THz regime.
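The chemical-potential sampling described above can be sketched as follows (our reading of the procedure: the first bias is fixed at 100 meV and each of the remaining nine is drawn uniformly within its 100 meV interval; the uniform draw is an assumption).

```python
# Sketch of sampling the 10 chemical potentials (in meV) for one meta-atom:
# 100 meV fixed, then one uniform draw per 100 meV interval up to 1000 meV.
import numpy as np

def sample_chemical_potentials(rng: np.random.Generator) -> np.ndarray:
    potentials = [100.0]                                   # first bias fixed at 100 meV
    for lower in range(100, 1000, 100):                    # 100-200, ..., 900-1000 meV
        potentials.append(rng.uniform(lower, lower + 100))
    return np.array(potentials)

print(sample_chemical_potentials(np.random.default_rng(0)).round(1))
```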
Figure 4: Performance comparison of Network 1, passive metasurface designer, during training and validation with and without TL. Training and validation (a) accuracy, (b) loss without TL, (c) accuracy, and (d) loss with TL.
## 3 Inverse design model
In 2016, Szegedy et al. suggested Inception v3, the third generation of Google Net, for assisting in image analysis and object detection [48]. The Inception v3 model is a highly robust and advanced CNN architecture that consists of 48 meticulously crafted layers. Its power lies in its innovative utilization of symmetric and asymmetric building blocks, including convolutions, average pooling, max pooling, concatenations, dropouts, and fully connected layers. To ensure stable training, batch normalization is extensively incorporated throughout the model and applied to activation inputs. With its intricate design, Inception v3 exhibits unparalleled accuracy and performance across various tasks, establishing itself as one of the most formidable models in the field of DL. Notably, it achieves an impressive 78.1% accuracy on the extensive ImageNet dataset, which encompasses over 14 million images across more than 20,000 categories. However, due to the processing of approximately 25 million parameters on a vast dataset, training Inception v3 demands substantial computational resources. This high cost makes it unsuitable for use in applications like metasurface design, where gathering a huge dataset is impossible. A way to short-cut this time-consuming process is to use TL. TL is a novel machine learning framework based on DL that provides for rapid progress or enhanced performance when modeling a new problem by transferring knowledge from a previously learned related task [49]. TL methods attain optimal performance faster than standard DL models, which require training from scratch and a significant quantity of data. In this paper, a model based on TL is employed for the inverse design of reconfigurable graphene-based metasurfaces at the THz frequency regime. This model consists of two consecutive CNNs in which pre-trained Inception v3 models are used as the starting point for reconfigurable metasurfaces design.
Each tunable graphene meta-atom of the metasurface can be described with two matrices: the first matrix, 8\(\times\)8 matrix or a binary vector with 64 elements, represents the passive structure of the meta-atom; while the second one is a 1-element matrix that shows the chemical potential of the meta-atom. The reflection phase of the graphene meta-atom is studied in 1024 frequency points from 3 THz to 5 THz, so it can be represented by a 32\(\times\)32 matrix (image with 1024 pixels). The inputs of the inverse design model are the desired reflection phases and the output of the model is the tunable meta-atom (geometrical structure as well as chemical potentials). To achieve an advanced inverse design platform for reconfigurable graphene-based metasurfaces, a groundbreaking model composed of two CNNs is developed. The workflow of the inverse design platform is presented in Figure 3.
The first CNN is designed to take in 32\(\times\)32 images of reflection phases as input and produce 64-element binary vectors that represent the geometrical structure of the meta-atoms. Its purpose is to design the passive components of the meta-atoms while maintaining a fixed chemical potential of 100 \(meV\). This choice of fixed chemical potential is made to ensure stable and controlled behavior for the passive meta-atoms, allowing the focus to be on manipulating the reflection phase by engineering the geometry. In the subsequent step, if a different reflection phase is desired for the meta-atom designed in the previous step, the second CNN comes into play. This network takes an image containing a desired reflection phase independent of the one inputted in Network 1, along with the passive meta-atom generated by the first network, and predicts the appropriate chemical potential needed to achieve the desired response. By learning the relationship between the reflection phase,
geometrical structure, and the required chemical potential, the second CNN enables the inverse design of reconfigurable metasurfaces with the desired EM functions. It is important to highlight that the model's capability is not limited to designing meta-atoms that exhibit only two desired EM responses. Instead, it can be used to inverse design meta-atoms that demonstrate multiple desired EM responses by simply adjusting the chemical potential. The architectures and performances of both networks are analyzed in detail in the subsequent sections.
### Network 1, passive metasurface designer
In the proposed methodology, the design process of tunable graphene meta-atoms, which exhibit different EM responses based on external control signals, is divided into two distinct parts. The first part involves designing the passive components of the meta-atoms, while the second part focuses on predicting the appropriate external control signals or chemical potential values.
To design the passive components of the tunable graphene meta-atoms, Network 1 is employed. The power of TL and CNNs is combined in this network to extract essential features from reflection phase images. The feature extractor in Network 1 utilizes the Inception v3 model up to the "mixed 5" layer, which is pre-trained with weights from the ImageNet database. By utilizing this pre-trained model, the network benefits from the extensive knowledge and expertise gained through training on a large and diverse image dataset. The advanced Keras library is used to import the pre-trained Inception v3 model. This feature extraction step is crucial for capturing the important characteristics and patterns present in the reflection phase images, facilitating the design of the passive components of the meta-atoms.
The powerful Inception v3 network is initially utilized to extract essential information from images in the proposed methodology. However, to optimize the network for the specific task of estimating the passive structure of tunable meta-atoms, the uppermost layers of the Inception v3 network are replaced with a single trainable layer. This new layer is carefully designed with an impressive node count of 1024, allowing for efficient estimation of the passive structure.
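A hedged Keras sketch of Network 1 is given below; it is a reconstruction under assumptions rather than the authors' code. In particular, the 32\(\times\)32 single-channel phase images are assumed to be upscaled and replicated to three channels to satisfy the Inception v3 input requirements, the dropout rate is assumed, and the multi-output sigmoid head described later in this section is included for completeness.

```python
# Reconstruction sketch of Network 1: Inception v3 truncated at "mixed5" as the
# feature extractor (ImageNet weights as the starting point, all layers trainable),
# followed by a 1024-node layer, dropout and 64 sigmoid outputs (the 8x8 coding).
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 1))                 # reflection-phase image
x = tf.keras.layers.Resizing(128, 128)(inputs)             # assumed upscaling
x = tf.keras.layers.Concatenate()([x, x, x])               # 1 -> 3 channels

inception = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(128, 128, 3))
backbone = tf.keras.Model(inception.input,
                          inception.get_layer("mixed5").output)

x = backbone(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1024, activation="relu")(x)      # single trainable top layer
x = tf.keras.layers.Dropout(0.5)(x)                        # assumed dropout rate
outputs = tf.keras.layers.Dense(64, activation="sigmoid")(x)

network1 = tf.keras.Model(inputs, outputs)
network1.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                 loss="binary_crossentropy", metrics=["binary_accuracy"])
```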
| \# Samples | Correlation Distance (\%) |
| --- | --- |
| 6,000 | 24.8 |
| 12,000 | 95.4 |
| 18,000 | 96.2 |
| 27,000 | 97.8 |
Table 1: The performance of Network 1 for different numbers of training data samples.
| Model | Trainable Parameters | Correlation Distance (\%) |
| --- | --- | --- |
| TL Inception V3 | 9,291,488 | 95.4 |
| MobileNet | 7,467,904 | 54.2 |
| DenseNet 121 | 8,069,056 | 59.5 |
| DL Inception V3 | 9,291,488 | 52.4 |
Table 2: The performance of different DL-based models in developing Network 1.
One potential challenge when training the new network is the risk of rapid overfitting, particularly when working with a limited number of training examples. To mitigate this issue and enhance the network's generalization ability, dropout regularization is implemented. Dropout regularization randomly deactivates a portion of the nodes during training, forcing the network to rely on the remaining nodes and preventing over-reliance on specific features. This regularization technique enhances the efficiency of the CNN in the inverse design of graphene meta-atoms that were not utilized during the model's training phase. The incorporation of this configuration, which combines Inception v3 with a trainable layer and dropout regularization, enables the training of the graphene meta-atom inverse design network with greater efficiency and effectiveness compared to traditional DL techniques, even when working with a limited amount of data.
As shown in Figure 3(a), the initial stage of the design process involves inputting 32\(\times\)32 matrices or 1024-pixel images representing reflection phases into Network 1. This network generates corresponding 64-pixel graphene meta-atoms, where each element is either 0 or 1 to indicate vacuum and graphene blocks, respectively. Since the inverse design of the passive components of graphene meta-atoms is a multi-output classification task, Network 1 maps a single input image to 64 distinct binary outputs. To accomplish this, sigmoid functions are used to predict the probability of each binary variable in the multi-output binary classification. Accuracy and loss metrics are employed to evaluate the performance of Network 1 in designing these meta-atoms, measuring the classification model's accuracy and loss. However, pixel arrangements in the graphene meta-atoms are crucial, and there are strong correlations between different pixels within the meta-atoms. Therefore, these two metrics alone are insufficient to fully represent the network's performance.
To address this issue, the correlation distance between the original vectors in the training and
Figure 5: Performance comparison of Network 2, chemical potential predictor, during training and validation with and without TL. Training and validation R2-score (a) without TL and (b) with TL.
validation datasets and the generated output vectors is measured and considered as the evaluation metric for Network 1's performance. The correlation distance is a statistical measure of dependence between two vectors, defined as the distance between two vectors based on the absolute value of their pairwise correlations. The correlation distance between vectors u and v is defined as:
\[dCor=1-\frac{(u-\bar{u})\cdot(v-\bar{v})}{\|u-\bar{u}\|_{2}\,\|v-\bar{v}\|_{2}} \tag{1}\]
where \(\bar{u}\) and \(\bar{v}\) represent the mean of vectors \(u\) and \(v\), respectively. Incorporating this metric allows for a more accurate evaluation of Network 1's performance in designing passive graphene meta-atoms, ensuring a more robust and reliable design process.
In order to optimize the trainable parameters in the TL-based CNN with respect to the correlation distance, the Adam optimizer is employed. The Adam optimizer is well-known for its rapid convergence speed and efficiency compared to traditional gradient descent methods. For this optimization, an initial learning rate of \(1\times 10^{-4}\) is utilized. Furthermore, the optimization process yields superior results when performed with a batch size of 32 over 200 epochs. These parameter choices have been carefully selected to ensure optimal performance and efficiency in optimizing the trainable parameters within the proposed TL-based CNN.
#### 3.1.1 Network 1 evaluation
In this part, the performance of the first network of the proposed methodology in designing the passive components of tunable meta-atoms is investigated. As previously stated, the pre-trained model is utilized as the starting point of the meta-atom inverse design model to save training time, improve CNN performance, and avoid the requirement for a large amount of data. In this regard, the accuracy and loss on both the training and validation datasets are studied in two cases: (1) training the network when the starting points are random numbers, and (2) training the network when the starting points are the weights of Inception v3 pre-trained on ImageNet. Figures 4(a) and 4(b) show the accuracy and loss for case 1. Upon reviewing the learning curves during training, it becomes apparent that the model quickly overfits the training data. Simultaneously, the validation loss demonstrates an increasing trend with significant spikes, while the validation accuracy remains stagnant with substantial fluctuations. As a result, in case 1, although the model is very specialized in predicting the training dataset, it cannot generalize to new data samples. This problem is known as overfitting in data science. To alleviate the overfitting problem, a large number of data samples should be added to the training dataset. However, for the metasurface design problem, data collection is expensive, time-consuming, and difficult. In addition, the training time of the model with the larger training set will increase drastically. In case 2, however, as seen in Figure 4(c), the training and validation accuracies increase to a point of stability with a minimal gap between the two final accuracy values. From the loss standpoint, the training and validation losses likewise converge (see Figure 4(d)).
Table 1 provides a comprehensive evaluation of the performance of Network 1 based on correlation distance for different sizes of training datasets. The results in Table 1 show that when the training dataset size exceeds 12,000 samples, the correlation distance exceeds 95%. Furthermore, the number
of trainable parameters and the correlation distance of several sample networks, including the Inception v3 TL network, MobileNet TL network, DenseNet 121 TL network, and Inception v3 DL network, are compared to demonstrate the superior performance of Network 1 (the Inception v3 TL network) in learning the reflection phases. It is worth noting that MobileNet and DenseNet are alternative CNN architectures commonly used in various DL tasks, including image classification, object detection, and natural language processing [50, 51]. As presented in Table 2, despite having the same architecture and trainable parameters, the Inception v3 DL and TL models exhibit vastly different performances in learning the mapping between the reflection phases and meta-atom structures. Specifically, the pre-trained weights of the Inception v3 TL network, which are used as starting points for Network 1, enable more efficient feature extraction, resulting in better dataset fitting. Conversely, the other two TL models, the MobileNet and DenseNet 121 TL networks, like the Inception v3 DL network, fail to extract the important features of the EM responses, indicating their inferior performance compared to Network 1. These findings highlight the potential of the proposed methodology for designing and optimizing meta-atoms, as well as the importance of appropriate network architecture and training strategies.
### Network 2, chemical potential predictor
The tunable properties of graphene meta-atoms make them highly desirable for use in reconfigurable metasurfaces. However, to achieve the desired EM responses from these meta-atoms, the chemical potentials must be carefully designed. This is where Network 2, based on TL and employing a mapping between the reflection phase and chemical potentials, comes in. As can be seen in Figure 3(b) using a set of 1088-pixel images composed of the meta-atoms designed by Network 1 and the desired reflection phases, Network 2 predicts the chemical potentials required for optimal tunability. By doing so, it identifies the ideal chemical potential match for obtaining the tunable response desired from the designed passive meta-atoms of Network 1. To accomplish this task, Network 2 utilizes Inception v3 up to layer "mixed 7" pre-trained with ImageNet as the feature extractor, with
| Model | Trainable Parameters | Training \(R^{2}\) score | Validation \(R^{2}\) score |
| --- | --- | --- | --- |
| TL Inception V3 | 6,977,825 | 0.905 | 0.870 |
| MobileNet | 5,305,153 | 0.900 | 0.652 |
| DenseNet 121 | 7,479,169 | 0.907 | 0.854 |
| DL Inception V3 | 6,977,825 | 0.900 | 0.358 |
Table 4: The performances of different DL-based models in developing Network 2.
| \# Samples | Training \(R^{2}\) score | Validation \(R^{2}\) score |
| --- | --- | --- |
| 10,000 | 0.898 | 0.803 |
| 20,000 | 0.900 | 0.808 |
| 30,000 | 0.899 | 0.833 |
| 36,000 | 0.905 | 0.870 |
Table 3: The performance of Network 2 for different numbers of training data samples.
one fully-connected trainable layer containing 256 nodes. Dropout regularization is also employed to prevent overfitting. The training process of Network 2 involves the Adam optimizer with an initial learning rate of \(1\times 10^{-4}\), which adapts the learning rate dynamically during training and incorporates momentum and adaptive learning rate scaling.
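Under the same assumptions as the Network 1 sketch (input upscaling, channel replication, dropout rate and the 34\(\times\)32 arrangement of the 1088-pixel input are all ours), Network 2 could be reconstructed in Keras as follows.

```python
# Reconstruction sketch of Network 2: Inception v3 truncated at "mixed7",
# a 256-node trainable layer with dropout, and a single linear output that
# regresses the chemical potential.
import tensorflow as tf

inputs = tf.keras.Input(shape=(34, 32, 1))          # 1088-pixel input (assumed layout)
x = tf.keras.layers.Resizing(128, 128)(inputs)
x = tf.keras.layers.Concatenate()([x, x, x])

inception = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(128, 128, 3))
backbone = tf.keras.Model(inception.input,
                          inception.get_layer("mixed7").output)

x = backbone(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.5)(x)                  # assumed dropout rate
output = tf.keras.layers.Dense(1)(x)                 # predicted chemical potential

network2 = tf.keras.Model(inputs, output)
network2.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
```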
#### 3.2.1 Network 2 evaluation
Building and deploying a successful regression model requires the selection of an appropriate metric for evaluating its performance, which can enable better performance optimization and fine-tuning and ultimately leads to improved results. While there are several metrics available to evaluate regression models, such as Mean Square Error (MSE), Root Mean Squared Logarithmic Error (RMSLE), and \(R^{2}\) score, the \(R^{2}\) score metric stands out as an effective metric for evaluating regression models' performance. This is because the \(R^{2}\) score accurately reflects the proportion of variation in real values captured by the model, providing a more precise and meaningful assessment of its quality on a scale of \(<\)1. Adopting the \(R^{2}\) score as the primary evaluation metric for regression models can significantly enhance the model's effectiveness, thereby leading to greater real-world impact and improved outcomes. The following relationship is used to express the \(R^{2}\) score:
\[R^{2}(y,\hat{y})=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}} \tag{2}\]
where \(y_{i}\) is the actual output value of the \(i\)th sample in the dataset, \(\hat{y}_{i}\) is the predicted output value of the \(i\)th sample by the regression model, \(\bar{y}\) is the mean of the output values in the dataset, and
Figure 6: The passive structure of tunable meta-atom designed by the TL-based inverse design model.
\(n\) is the total number of samples in the dataset.
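As a small numerical illustration, scikit-learn's `r2_score` implements this same expression; the values below are purely illustrative.

```python
# R^2 score of Eq. (2) computed with scikit-learn on a toy set of chemical
# potentials (values in meV are illustrative only).
from sklearn.metrics import r2_score

y_true = [250.0, 480.0, 730.0, 910.0]   # actual chemical potentials
y_pred = [265.0, 450.0, 755.0, 905.0]   # values predicted by the regression model

print(round(r2_score(y_true, y_pred), 3))
```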
Figure 5 shows the \(R^{2}\) score of Network 2 during training, both with and without TL. In Figure 5(a), the \(R^{2}\) score of Network 2 based on Inception v3 with pre-trained weights from ImageNet is displayed. The training and validation \(R^{2}\) scores converge to approximately 0.89 and 0.8, respectively, with a generalization gap. Despite this gap, these scores are acceptable for the task, especially
Figure 7: The target and simulated reflection phases of the tunable meta-atom designed by the inverse design model at the designed chemical potentials. (a) \(100\ meV\) (b) \(148\ meV\) (c) \(225\ meV\) (d) \(270\ meV\).
considering that the network is trained with only 10,000 samples. Figure 5(b) explores the training process of Network 2 without using TL. The training \(R^{2}\) score achieves similar stability as the TL-based network, but the validation \(R^{2}\) score exhibits large spikes, indicating the limited size of the training dataset. However, increasing the size of the training dataset is challenging and requires time-consuming computations for the reconfigurable metasurface design problem. Hence, utilizing TL to predict the necessary chemical potentials of tunable graphene meta-atoms significantly reduces computing costs.
Additionally, the comparison of the TL network's performance in predicting the chemical potential of tunable graphene-based meta-atoms, considering varying numbers of training data samples, is presented in Table 3. The results reveal that as the number of training samples increases, the model's ability to generalize to unseen data from the same distribution improves. Remarkably, even with a relatively small dataset of 10,000 samples, the network achieves a satisfactory \(R^{2}\) score of approximately 80%. Interestingly, when the dataset is expanded threefold, the score only shows a marginal increase of 7%. This highlights the diminishing returns associated with further increasing the dataset size.
The effectiveness of Network 2 in predicting chemical potentials is further demonstrated by comparing the number of trainable parameters and \(R^{2}\) scores of various sample networks, including the Inception v3 TL network, MobileNet TL network, DenseNet 121 TL network, and Inception v3 DL network (Table 4). Interestingly, despite having the same architecture and number of trainable parameters, the DL and TL models of Inception v3 exhibit vastly different performances in predicting the chemical potential of the designed meta-atoms for the desired EM responses. This suggests that integrating pre-trained layers to extract features from EM response images significantly enhances the architecture's efficiency in designing tunable meta-atoms. Moreover, while the pre-trained Inception v3 network is capable of deducing the salient features from the input images and producing satisfactory results, the other pre-trained models, MobileNet and DenseNet 121 networks are unable to achieve comparable performance. Therefore, the effectiveness of pre-trained models in designing tunable meta-atoms depends heavily on the specific architecture of the models.
## 4 Discussion
A reconfigurable graphene metasurface is presented in this section, utilizing the proposed TL-based model to test its efficiency and performance in inverse design. Four different reflection phases are considered as the target EM responses, and the corresponding tunable meta-atom is designed using the inverse design model. Figure 6 shows the pixelated structure of the graphene layer. This designed graphene meta-atom exhibits different phase resonances at different frequencies, depending on the applied chemical potential. The methodology developed for achieving the corresponding tunable meta-atom to the desired reflection phases consists of two steps. In the first step, Network 1 designs the passive part of the graphene meta-atom to exhibit the first desired reflection phase at a chemical potential of 100 \(meV\). In the second step, Network 2 predicts the required biases (chemical potentials) to reach the other three different phase responses using the meta-atom designed by Network 1. The predicted chemical potentials to realize the desired reflection phases are shown
in Table 5. The designed tunable meta-atom is then re-entered into CST MWS, and its reflection phases at the predicted chemical potentials are studied and compared with the target ones. It can be seen in Figure 7(a) that the simulated phase response of the tunable meta-atom designed by Network 1 is in total agreement with the target phase response, and the other simulated reflection phases are also in good agreement with the target ones, demonstrating the excellent performance of the methodology in the inverse design of reconfigurable metasurfaces (Figure 7 (b)-(d)).
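The two-step design flow described above can be summarized in pseudocode; the sketch below assumes hypothetical `network1` (passive-structure designer) and `network2` (chemical-potential predictor) objects with `predict` interfaces, which are stand-ins rather than the actual implementation:

```python
def design_tunable_meta_atom(target_phases, network1, network2, base_potential_meV=100.0):
    """Two-step inverse design of a reconfigurable graphene meta-atom."""
    # Step 1: design the passive pixelated graphene pattern that produces the
    # first target reflection phase at the base chemical potential (100 meV).
    structure = network1.predict(target_phases[0])
    # Step 2: keep the structure fixed and predict the chemical potential that
    # re-tunes it to each of the remaining target reflection phases.
    potentials = [base_potential_meV]
    for phase in target_phases[1:]:
        potentials.append(network2.predict(structure, phase))
    return structure, potentials
```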
## 5 Conclusion
In this paper, a DL-based approach is presented to solve the inverse problem of designing reconfigurable graphene metasurfaces based on the desired reflection responses. The proposed methodology leverages two distinct CNNs to the inverse design of graphene metasurfaces that present several EM functionalities with real-time reconfigurability. The developed CNNs are trained using TL which significantly mitigates the computational costs of gathering training datasets. The pre-trained Inception v3 by ImageNet database is applied in the CNNs as the feature extractor to transfer the knowledge of image recognition to tunable metasurface design. The first network of the methodology designs the passive components of graphene meta-atoms, while the second network predicts the required chemical potentials of the reconfigurable graphene meta-atoms.
The fully trained CNNs demonstrate their efficiency and capability in designing reconfigurable graphene meta-atoms via the design of a tunable metasurface at the terahertz regime. The results of numerical EM case study simulations confirm that the proposed TL-based methodology can be adopted as an efficient tool for designing tunable metasurfaces. This design technique is expected to be easily extended beyond graphene meta-atoms to other types of reconfigurable intelligent surfaces operating at multiple frequency bands suitable for next-generation wireless networks.
|
2303.13867 | Few Shot Medical Image Segmentation with Cross Attention Transformer | Medical image segmentation has made significant progress in recent years.
Deep learning-based methods are recognized as data-hungry techniques, requiring
large amounts of data with manual annotations. However, manual annotation is
expensive in the field of medical image analysis, which requires
domain-specific expertise. To address this challenge, few-shot learning has the
potential to learn new classes from only a few examples. In this work, we
propose a novel framework for few-shot medical image segmentation, termed
CAT-Net, based on cross masked attention Transformer. Our proposed network
mines the correlations between the support image and query image, limiting them
to focus only on useful foreground information and boosting the representation
capacity of both the support prototype and query features. We further design an
iterative refinement framework that refines the query image segmentation
iteratively and promotes the support feature in turn. We validated the proposed
method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI. Experimental
results demonstrate the superior performance of our method compared to
state-of-the-art methods and the effectiveness of each component. Code:
https://github.com/hust-linyi/CAT-Net. | Yi Lin, Yufan Chen, Kwang-Ting Cheng, Hao Chen | 2023-03-24T09:10:14Z | http://arxiv.org/abs/2303.13867v3 | # Few Shot Medical Image Segmentation with Cross Attention Transformer
###### Abstract
Medical image segmentation has made significant progress in recent years. Deep learning-based methods are recognized as data-hungry techniques, requiring large amounts of data with manual annotations. However, manual annotation is expensive in the field of medical image analysis, which requires domain-specific expertise. To address this challenge, few-shot learning has the potential to learn new classes from only a few examples. In this work, we propose a novel framework for few-shot medical image segmentation, termed CAT-Net, based on cross masked attention Transformer. Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information and boosting the representation capacity of both the support prototype and query features. We further design an iterative refinement framework that refines the query image segmentation iteratively and promotes the support feature in turn. We validated the proposed method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI. Experimental results demonstrate the superior performance of our method compared to state-of-the-art methods and the effectiveness of each component. We will release the source code of our method upon acceptance.
Keywords: Few Shot, Cross Attention, Iterative Refinement.
## 1 Introduction
Automatic segmentation of medical images is a fundamental step for a variety of medical image analysis tasks, such as diagnosis, treatment planning, and disease monitoring [8]. The emergence of deep learning (DL) has enabled the development of many medical image segmentation methods, which have achieved remarkable success [1, 19, 7, 6]. Most of the existing methods follow a fully-supervised learning paradigm, which requires a considerable amount of labeled data for training. However, the manual annotation of medical images is time-consuming and labor-intensive, limiting the application of DL in medical image segmentation. Specifically for the 3D volumetric medical images (_e.g._, CT, MRI), the manual annotation is even more challenging which requires the annotators to go through hundreds of 2D slices for each 3D scan.
To address the challenge of manual annotation, various label-efficient techniques have been explored, such as self-supervised learning [16], semi-supervised learning [15], and weakly-supervised learning [10]. Despite leveraging information from unlabeled or weakly-labeled data, these techniques still require a substantial amount of training data [18, 21, 2, 23], which may not be practical for novel classes with limited examples in the medical domain. This limitation encourages the few-shot learning paradigm [24, 26, 4, 29] to be applied to medical image segmentation. Specifically, the few-shot learning paradigm aims to learn a model from a small number of labeled data (denoted as _support_) and then apply it to a new task (denoted as _query_) with only a few labeled data without any retraining. Considering the hundreds of organs and countless diseases in the human body, FSL brings great potential to the various medical image segmentation tasks where a new task can be easily investigated in a data-efficient manner.
Most few-shot segmentation methods follow the learning-to-learn paradigm, which aims to learn a meta-learner to predict the segmentation of query images based on the knowledge of support images and their respective segmentation labels. The success of this paradigm depends on how effectively the knowledge can be transferred from the support prototype to the query images. Existing few-shot segmentation methods mainly focus on the following two aspects: (1) how to learn the meta-learner [17, 20, 12, 13]; and (2) how to better transfer the knowledge from the support images to the query images [25, 28, 31, 14, 3, 27]. Despite prototype-based methods having shown success, they typically ignore the interaction between support and query features during training.
In this paper, as shown in Fig. 1(a), we propose **CAT**-**Net**, a **C**ross **A**ttention **T**ransformer network for few-shot medical image segmentation, which aims to fully capture intrinsic class details while eliminating useless pixel information and to learn an interdependence between the support and query features. Different from the existing FSS methods that only focus on the single direction of knowledge transfer (_i.e._, from the support features to the query features), the proposed CAT-Net can boost the mutual interactions between the support and query features, benefiting the segmentation performance of both the support and query images. Additionally, we propose an iterative training framework that feeds the prior query segmentation into the attention transformer to effectively enhance and refine the features as well as the segmentation. Three publicly available datasets are adopted to evaluate our CAT-Net, _i.e._, Abd-CT [11], Abd-MRI [9], and Card-MRI [32]. Extensive experiments validate the effectiveness of each component in our CAT-Net, and demonstrate its state-of-the-art performance.
## 2 Method
### Problem Definition
Few-shot segmentation (FSS) aims to segment novel classes by just a few samples with densely-annotated samples. In FSS, the dataset is divided into the training set \(\mathbb{D}_{\mathrm{train}}\), containing the base classes \(\mathbb{C}_{\mathrm{train}}\), and the test set \(\mathbb{D}_{\mathrm{test}}\), containing the novel classes \(\mathbb{C}_{\mathrm{test}}\), where \(\mathbb{C}_{\mathrm{train}}\cap\mathbb{C}_{\mathrm{test}}=\emptyset\). To obtain the segmentation
model for FSS, the commonly used episode training approach is employed [30]. Each training/testing episode \((S_{i},Q_{i})\) instantiates an \(N\)-way \(K\)-shot segmentation learning task. Specifically, the support set \(S_{i}\) contains \(K\) samples of \(N\) classes, while the query set \(Q_{i}\) contains one sample from the same class. The FSS model is trained with episodes to predict the novel class for the query image, guided by the support set. During inference, the model is evaluated directly on \(\mathbb{D}_{\text{test}}\) without any re-training. In this paper, we follow the established practice in medical FSS [5, 16, 22] that considers the **1**-way **1**-shot task.
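As an illustration only, 1-way 1-shot episode construction reduces to sampling a support/query pair of the same class; the sketch below assumes a hypothetical `dataset` mapping each class to a list of (image, mask) pairs:

```python
import random

def sample_episode(dataset, classes):
    """Build one 1-way 1-shot episode: a (support, query) pair drawn from one class."""
    c = random.choice(classes)                         # pick a class for this episode
    support_img, support_mask = random.choice(dataset[c])
    query_img, query_mask = random.choice(dataset[c])  # query label used for loss/evaluation only
    return (support_img, support_mask), (query_img, query_mask)
```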
### Network Overview
The overview of our CAT-Net is illustrated in Fig. 1(a). It consists of three main components: 1) a mask incorporated feature extraction (MIFE) sub-net that extracts the initial query and support features as well as the query mask; 2) a cross masked attention Transformer (CMAT) module in which the query and support features boost each other and thus refine the query prediction; and 3) an iterative refinement framework that sequentially applies the CMAT modules to continually promote the segmentation performance. The whole framework can be trained in an end-to-end fashion.
### Mask Incorporated Feature Extraction
The Mask Incorporated Feature Extraction (MIFE) sub-net takes query and support images as input and generates their respective features, integrated with the support mask. A simple classifier is then used to predict the segmentation for the query image. Specifically, we first employ a feature extractor network (_i.e._, ResNet-50) to map the query and support image pair \(I^{q}\) and \(I^{s}\) into the feature space, producing multi-level feature maps \(F^{q}\) and \(F^{s}\) for the query and support image, respectively. Next, the support mask is pooled with \(F^{s}\) and then expanded and concatenated with both \(F^{q}\) and \(F^{s}\). Additionally, a prior mask is further concatenated with the query feature to strengthen the correlation between query and support features via a pixel-wise similarity map. Finally, the query feature is processed by a simple classifier to get the query mask. Further details of the MIFE architecture can be found in the supplementary material.

Figure 1: (a) Overview of the CAT-Net; (b) The architecture of the CMAT module.
### Cross Masked Attention Transformer
As shown in Fig. 1(b), the cross masked attention Transformer (CMAT) module comprises three main components: 1) a self-attention module for extracting global information from query and support features; 2) a cross masked attention module for transferring foreground information between query and support features while eliminating redundant background information, and 3) a prototypical segmentation module for generating the final prediction of the query image.
**Self-Attention Module.** To capture the global context information of every pixel in the query feature \(F_{0}^{q}\) and support feature \(F_{0}^{s}\), the initial features are first flattened into 1D sequences and fed into two identical self-attention modules. Each self-attention module consists of a multi-head attention (MHA) layer and a multi-layer perceptron (MLP) layer. Given an input sequence \(S\), the MHA layer first projects the sequence into three sequences \(K\), \(Q\), and \(V\) with different weights. The attention matrix \(A\) is then calculated as:
\[A(Q,K)=\frac{QK^{T}}{\sqrt{d}} \tag{1}\]
where \(d\) is the dimension of the input sequence. The attention matrix is then normalized by a softmax function and multiplied by the value sequence \(V\) to get the output sequence \(O\):
\[O=\text{softmax}(A)V \tag{2}\]
The MLP layer is a simple \(1\times 1\) convolution layer that maps the output sequence \(O\) to the same dimension as the input sequence \(S\). Finally, the output sequence \(O\) is added to the input sequence \(S\) and normalized using layer normalization (LN) to obtain the final output sequence \(X\). The output feature sequence of the self-attention alignment encoder is represented by \(X^{q}\in\mathbb{R}^{HW\times D}\) and \(X^{s}\in\mathbb{R}^{HW\times D}\) for query and support features, respectively.
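A minimal single-head PyTorch sketch of equations (1)-(2) is given below; the real module uses multi-head attention, and the exact layer shapes here are assumptions for illustration:

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """Single-head self-attention with a residual MLP and layer norm, Eqs. (1)-(2)."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.mlp = nn.Linear(dim, dim)    # stands in for the 1x1-convolution MLP
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (B, HW, D)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)  # A = QK^T / sqrt(d)
        out = attn.softmax(dim=-1) @ v                          # O = softmax(A) V
        return self.norm(x + self.mlp(out))                     # residual + LN
```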
**Cross Masked Attention Module.** We utilize cross masked attention to incorporate query features and support features with respect to their foreground information by constraining the attention region in the attention matrix with the support and query masks. Specifically, given the query feature \(X^{q}\) and support feature \(X^{s}\) from the aforementioned self-attention module, we first project each input sequence into three sequences \(K\), \(Q\), and \(V\) using different weights, resulting in \(K^{q}\), \(Q^{q}\), \(V^{q}\), and \(K^{s}\), \(Q^{s}\), \(V^{s}\), respectively. Taking the query features as an example, the cross attention matrix is calculated by:
\[\text{A}(K^{q},Q^{s})=\frac{(K^{q})^{T}Q^{s}}{\sqrt{d}} \tag{3}\]
We expand and flatten the binary query mask \(M^{q}\) to limit the foreground region in the attention map. The masked cross attention (MCA) map is computed as:
\[\text{MCA}(K^{q},Q^{s},V^{s},M^{q})=M^{q}\cdot V^{s}\,\text{softmax}(A(K^{q},Q^{s})) \tag{4}\]
Similar to self-attention, the query feature is processed by the MLP and LN layers to get the final enhanced query feature \(F_{1}^{q}\). Similarly, the enhanced support feature \(F_{1}^{s}\) is obtained with foreground information from the query feature.
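The sketch below illustrates the masked cross attention of equations (3)-(4) for the query-enhancement branch; it is a simplified single-head version in which the (dilated) query mask gates the attended support information, and the projection layout is an assumption rather than the paper's exact module:

```python
import torch
import torch.nn as nn

class CrossMaskedAttention(nn.Module):
    """Query feature attends to the support feature; updates are kept only at
    query foreground positions, Eqs. (3)-(4) (simplified, single head)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim, bias=False)   # from the query feature
        self.proj_k = nn.Linear(dim, dim, bias=False)   # from the support feature
        self.proj_v = nn.Linear(dim, dim, bias=False)   # from the support feature
        self.mlp = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_q, x_s, mask_q):  # x_q, x_s: (B, HW, D); mask_q: (B, HW, 1)
        q, k, v = self.proj_q(x_q), self.proj_k(x_s), self.proj_v(x_s)
        attn = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
        out = mask_q * (attn.softmax(dim=-1) @ v)       # discard background updates
        return self.norm(x_q + self.mlp(out))
```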
**Prototypical Segmentation Module.** Once the enhanced query and support features are obtained, the prototypical segmentation is used to obtain the final prediction. First, a prototype of class \(c\) is built by masked average pooling of the support feature \(F_{1}^{s}\) as follows:
\[p_{c}=\frac{1}{K}\sum_{k=1}^{K}\frac{\sum_{x,y}F_{1,(k,x,y)}^{s}m_{(k,x,y,c)}^{s}}{\sum_{x,y}m_{(k,x,y,c)}^{s}} \tag{5}\]
where \(K\) is the number of support images, and \(m_{(k,x,y,c)}^{s}\) is a binary mask that indicates whether the pixel at location \((x,y)\) in support feature \(k\) belongs to class \(c\). Next, we use the non-parametric metric learning method to perform segmentation. The prototype network calculates the distance between the query feature vector and the prototypes \(P=\{p_{c}|c\in C\}\). A softmax function is applied to produce probabilistic outputs for all classes, generating the query segmentation:
\[\hat{M}_{1,(x,y)}^{q}=\text{softmax}\big{(}\alpha\,\text{cos}(F_{1,(x,y)}^{q},p_{c})\big{)} \tag{6}\]
where \(\text{cos}(\cdot)\) denotes cosine distance, \(\alpha\) is a scaling factor that helps gradients to back-propagate in training. In our work, \(\alpha\) is set to 20, same as in [30].
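A compact sketch of equations (5)-(6) follows; treating the background as a second prototype computed from the complementary support mask is our assumption for making the class softmax concrete:

```python
import torch
import torch.nn.functional as F

def masked_avg_pool(feat, mask):
    """Eq. (5): prototype from masked average pooling. feat: (K, D, H, W); mask: (K, 1, H, W)."""
    return (feat * mask).sum(dim=(0, 2, 3)) / (mask.sum() + 1e-6)   # (D,)

def prototypical_prediction(feat_s, mask_s, feat_q, alpha: float = 20.0):
    """Eq. (6): per-pixel softmax over scaled cosine similarities to the prototypes."""
    p_fg = masked_avg_pool(feat_s, mask_s)
    p_bg = masked_avg_pool(feat_s, 1.0 - mask_s)        # assumed background prototype
    sims = torch.stack([F.cosine_similarity(feat_q, p.view(1, -1, 1, 1), dim=1)
                        for p in (p_bg, p_fg)], dim=1)  # (B, 2, H, W)
    return torch.softmax(alpha * sims, dim=1)           # channel 1 = foreground probability
```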
Additionally, we design a double threshold strategy to obtain query segmentation. Specifically, we set the first threshold \(\tau\) to 0.5 to obtain the binary query mask \(M^{q}\), which is used to calculate the Dice loss and update the model. Then, the second threshold \(\hat{\tau}\) is set to 0.4 to obtain the dilated query mask \(\hat{M}^{q}\), which is used to generate the enhanced query feature \(F_{2}^{q}\) in the next iteration. The second threshold \(\hat{\tau}\) is set lower than the first threshold \(\tau\) to prevent some foreground pixels from being mistakenly discarded. The query segmentation mask \(M^{q}\) and dilated mask \(\hat{M}^{q}\) are represented by:
\[M_{1}^{q}=\begin{cases}1,&M_{1,(x,y)}^{q}>\tau\\ 0,&M_{1,(x,y)}^{q}<\tau\end{cases}\qquad\hat{M}_{1}^{q}=\begin{cases}1,&M_{1,( x,y)}^{q}>\hat{\tau}\\ 0,&M_{1,(x,y)}^{q}<\hat{\tau}\end{cases} \tag{7}\]
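The double-threshold strategy of equation (7) is simply two binarizations of the same foreground probability map, e.g.:

```python
def double_threshold(prob_fg, tau: float = 0.5, tau_dilated: float = 0.4):
    """Eq. (7): strict mask for the Dice loss, looser (dilated) mask for the next iteration."""
    mask = (prob_fg > tau).float()                  # binary query mask M^q
    mask_dilated = (prob_fg > tau_dilated).float()  # keeps borderline foreground pixels
    return mask, mask_dilated
```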
### Iterative Refinement framework
As explained above, the CMAT module is designed to refine the query and support features, as well as the query segmentation mask. Thus, it's natural to iteratively apply this sub-net to get the enhanced features and refine the mask, resulting in a boosted segmentation result. The result after the \(i\)-th iteration is represented by:
\[(F_{i}^{s},F_{i}^{q},M_{i}^{q},\hat{M}_{i}^{q})=\text{CMAT}(F_{i-1}^{s},F_{i-1 }^{q},\hat{M}_{i-1}^{q},M^{s}) \tag{8}\]
Each iteration can be further decomposed into two steps:
\[(F_{i}^{s},F_{i}^{q})=\text{CMA}(F_{i-1}^{s},F_{i-1}^{q},\hat{M}_{i-1}^{q},M^{s}) \tag{9}\]
\[(M_{i}^{q},\hat{M}_{i}^{q})=\text{Proto}(F_{i}^{s},F_{i}^{q},M^{s},\tau,\hat{ \tau}) \tag{10}\]
where CMA(\(\cdot\)) indicates the self-attention and cross masked attention module, and Proto(\(\cdot\)) represents the prototypical segmentation module.
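The iterative refinement of equations (8)-(10) then amounts to a loop over CMAT blocks; the sketch below assumes hypothetical `cross_attention` and `prototype_segmentation` methods bundling the modules described above:

```python
def iterative_refinement(feat_s, feat_q, mask_s, mask_q_dilated, cmat_blocks):
    """Eqs. (8)-(10): repeatedly enhance support/query features and refine the query mask."""
    predictions = []
    for cmat in cmat_blocks:                     # e.g. four CMAT modules in the final model
        feat_s, feat_q = cmat.cross_attention(feat_s, feat_q, mask_q_dilated, mask_s)
        mask_q, mask_q_dilated = cmat.prototype_segmentation(feat_s, feat_q, mask_s)
        predictions.append(mask_q)               # intermediate prediction after each iteration
    return predictions[-1], predictions
```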
## 3 Experiment
### Dataset and Evaluation Metrics
We evaluate the proposed method on three public datasets, _i.e._, Abd-CT [11], Abd-MRI [9], and Card-MRI [32]. Abd-CT contains 30 abdominal CT scans with annotations of left and right kidney (LK and RK), spleen (Spl), liver (Liv). Abd-MRI contains 20 abdominal MRI scans with annotations of the same organs as Abd-CT. Card-MRI includes 35 cardiac MRI scans with annotations of left ventricular blood pool (LV-B), left ventricular myocardium (LV-M), and right ventricle (RV). We use the Dice score as the evaluation metric following [16, 22].
To ensure a fair comparison, all the experiments are conducted under the 1-way 1-shot scenario using 5-fold cross-validation. We follow [16] to remove all slices containing test classes during training to ensure that the test classes are all unseen during validation. In each fold, we follow [16, 5, 22] and take the last patient as the support image and the remaining patients as the query (setting I). We further propose a new validation setting (setting II) that takes every image in each fold as the support image alternately and the other images as the query. The averaged result of each fold is reported. This setting evaluates the generalization ability of the model by reducing the effect of support image selection.
### Implementation Details
The proposed method is implemented using PyTorch. Each 3D scan is sliced into 2D slices and reshaped into 256\(\times\)256 pixels. Common 3D image pre-processing techniques, such as intensity normalization and resampling, are applied to the training data. We apply episode training with \(20k\) iterations. SGD optimizer is adopted with a learning rate of 0.001 and a batch size of 1. Each episode training takes approximately 4 hours using a single NVIDIA RTX 3090 GPU.
### Comparison with State-of-the-Art Methods
We compare the proposed CAT-Net with state-of-the-art (SOTA) methods, including SE-Net [20], PANet [30], ALP-Net [16], AD-Net [5], and Q-Net [22]. PANet [30] is a typical prototypical FSS method in the natural image domain, while SE-Net [20], ALP-Net [16], AD-Net [5], and Q-Net [22] are the most representative works in the medical FSS task. Experimental results presented in Table 1
demonstrate that the proposed method outperforms SOTAs on all three datasets under both setting I and setting II. Under setting I, the proposed CAT-Net achieves 66.59% Dice on Abd-CT, 75.18% Dice on Abd-MRI, and 79.03% Dice on Card-MRI in Dice, outperforming SOTAs by 1.76%, 0.75%, and 0.45%, respectively. Under setting II, CAT-Net achieves 70.88% Dice on Abd-CT, 75.22% Dice on Abd-MRI, and 79.36% Dice on Card-MRI, outperforming SOTAs by 2.56%, 2.02% and 1.32%, respectively. The consistent superiority of our method to SOTAs on three datasets and under two evaluation settings indicates the effectiveness and generalization ability of the proposed CAT-Net. In addition, the qualitative results in Fig. 2 demonstrate that the proposed method is able to generate more accurate and detailed segmentation results compared to SOTAs.
\begin{table}
\begin{tabular}{c|l|c c c c c|c c c c c|c c c c}
\hline
 & & \multicolumn{5}{c|}{Abd-CT [11]} & \multicolumn{5}{c|}{Abd-MRI [9]} & \multicolumn{4}{c}{Card-MRI [32]} \\
 & Methods & LK & RK & Spl. & Liv. & Avg. & LK & RK & Spl. & Liv. & Avg. & LV-B & LV-M & RV & Avg. \\
\hline
\multirow{6}{*}{Setting I} & SE-Net [20] & 32.83 & 14.84 & 0.23 & 0.27 & 11.91 & 62.11 & 61.32 & 51.80 & 27.43 & 50.66 & 58.04 & 25.18 & 12.86 & 32.03 \\
 & PA-Net [30] & 37.58 & 34.69 & 43.73 & 61.71 & 44.42 & 47.71 & 47.95 & 58.73 & 64.99 & 54.85 & **70.43** & 46.79 & 69.52 & 62.25 \\
 & ALP-Net [16] & 63.34 & 54.82 & 60.25 & 73.65 & 63.02 & 73.63 & 78.39 & 67.02 & 73.05 & 73.02 & 61.89 & 87.54 & 76.71 & 75.38 \\
 & AD-Net [5] & **63.84** & 56.98 & 61.84 & 73.95 & 64.15 & 71.89 & 76.02 & 65.84 & 76.03 & 72.70 & 65.47 & 88.36 & 78.35 & 77.39 \\
 & Q-Net [22] & 63.26 & 58.37 & 63.36 & 74.36 & 64.83 & **74.05** & 77.52 & 67.43 & 78.71 & 74.43 & 66.87 & 89.63 & 79.25 & 78.58 \\
 & **Ours** & 63.36 & **60.05** & **67.65** & **75.51** & **66.59** & 74.01 & **78.90** & **68.83** & **78.98** & **75.18** & 66.85 & **90.54** & **79.71** & **79.03** \\
\hline
\multirow{4}{*}{Setting II} & ALP-Net [16] & 65.99 & 59.49 & 65.02 & 73.50 & 66.05 & 70.17 & 77.05 & 67.71 & 72.45 & 71.85 & 61.61 & 87.13 & 77.35 & 75.36 \\
 & AD-Net [5] & 67.35 & 59.88 & 64.35 & 76.78 & 67.09 & 72.26 & 76.57 & **67.89** & 73.96 & 72.67 & 65.08 & 86.26 & 76.50 & 75.95 \\
 & Q-Net [22] & 66.25 & 62.36 & **67.35** & 77.33 & 68.32 & 73.96 & 81.07 & 65.39 & 72.36 & 73.20 & 66.35 & 88.40 & 79.37 & 78.04 \\
 & **Ours** & **68.82** & **64.56** & 66.02 & **80.51** & **70.88** & **75.31** & **83.23** & 67.31 & **75.02** & **75.22** & **67.21** & **90.54** & **80.34** & **79.36** \\
\hline
\end{tabular}
\end{table}
Table 1: Comparison with state-of-the-art methods in Dice coefficient (%) on the Abd-CT, Abd-MRI, and Card-MRI datasets under setting I & II.
Figure 2: Qualitative results of our method on Abd-CT and Abd-MRI.
### Ablation Study
We conduct an ablation study to investigate the effectiveness of each component in CAT-Net. All ablation studies are conducted on Abd-MRI under setting II.
#### 3.4.1 Effectiveness of CMAT Block:
To demonstrate the importance of the proposed cross masked attention in narrowing the information gap between the query and support images and obtaining enhanced features, we conduct an ablation study (Table 2). Specifically, we compare variants that learn foreground information only from the support image (\(S\!\!\rightarrow\!Q\)) or only from the query image (\(Q\!\!\rightarrow\!S\)), each producing a single enhanced feature, against the full bidirectional cross attention (\(S\!\!\leftrightarrow\!Q\)). Using only the enhanced query feature (\(S\!\!\rightarrow\!Q\)) achieves 66.72% in Dice, outperforming using only the enhanced support feature (\(Q\!\!\rightarrow\!S\)) by 0.74%. With our CMAT block, the mutually boosted support and query features (\(S\!\!\leftrightarrow\!Q\)) improve the Dice by 1.90%. Moreover, the iterative refinement framework consistently promotes the above three variations by 0.96%, 0.56%, and 2.26% in Dice, respectively.
#### 3.4.2 Influence of Iterative Mask Refinement Block:
To determine the optimal number of iterative refinement CMAT blocks, we experiment with different numbers of blocks. In Fig. 3, we observe that increasing the number of blocks results in improved performance, with a maximum improvement of 2.26% in Dice when using 5 blocks. Considering that the performance gain between using 4 and 5 CMAT blocks is insignificant, we opt to use four CMAT blocks in our final model to strike a balance between efficiency and performance.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(S\!\!\rightarrow\!Q\) & \(Q\!\!\rightarrow\!S\) & \(S\!\!\leftrightarrow\!Q\) & Iter & Dice & Improve \\ \hline ✓ & & & & 66.72 & - \\ & ✓ & & & 65.98 & -0.74 \\ & & ✓ & & 68.62 & +1.90 \\ ✓ & & & ✓ & 67.68 & +0.96 \\ & ✓ & & ✓ & 66.54 & +0.56 \\ & & ✓ & ✓ & **70.88** & +2.26 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effectiveness of each component. \(S\!\!\rightarrow\!Q\) and \(Q\!\!\rightarrow\!S\) denote one branch CAT-Net to enhance support or query feature, respectively. \(S\!\!\leftrightarrow\!Q\) indicates applying cross attention to both \(S\) and \(Q\).
Figure 3: The influence of different numbers of iteration CMAT modules.
## 4 Conclusion
In this paper, we propose CAT-Net, a Cross Attention Transformer network for few-shot medical image segmentation. Our CAT-Net enables mutual interaction between the query and support features through the cross masked attention module, enhancing the representation abilities of both. Additionally, the proposed CMAT module can be iteratively applied to continually boost the segmentation performance. Experimental results demonstrate the effectiveness of each module and the superior performance of our model over the SOTA methods. In the future, we plan to extend our CAT-Net from 2D to 3D networks, explore the application of our model to other medical image segmentation tasks, and extend it to other medical-related tasks.
|
2305.16449 | Threshold and laser-conversion in nanostructured-resonator parametric
oscillators | We explore optical parametric oscillation (OPO) in nanophotonic resonators,
enabling arbitrary, nonlinear phase-matching and nearly lossless control of
energy conversion. Such pristine OPO laser converters are determined by
nonlinear light-matter interactions, making them both technologically flexible
and broadly reconfigurable. We utilize a nanostructured inner-wall modulation
in the resonator to achieve universal phase-matching for OPO-laser conversion,
but coherent backscattering also induces a counterpropagating pump laser. This
depletes the intra-resonator optical power in either direction, increasing the
OPO threshold power and limiting laser-conversion efficiency, the ratio of
optical power in target signal and idler frequencies to the pump. We develop an
analytical model of this system that emphasizes an understanding of optimal
laser conversion and threshold behaviors, and we use the model to guide
experiments with nanostructured-resonator OPO laser-conversion circuits, fully
integrated on chip and unlimited by group-velocity dispersion. Our work
demonstrates the fundamental connection between OPO laser-conversion efficiency
and the resonator coupling rate, subject to the relative phase and power of
counterpropagating pump fields. We achieve $(40\pm4)$ mW of on-chip power,
corresponding to $(41\pm4)$% conversion efficiency, and discover a path toward
near-unity OPO laser conversion efficiency. | Haixin Liu, Grant M. Brodnik, Jizhao Zang, David R. Carlson, Jennifer A. Black, Scott B. Papp | 2023-05-25T19:54:03Z | http://arxiv.org/abs/2305.16449v1 | # Threshold and laser-conversion in nanostructured-resonator parametric oscillators
###### Abstract
We explore optical parametric oscillation (OPO) in nanophotonic resonators, enabling arbitrary, nonlinear phase-matching and nearly lossless control of energy conversion. Such pristine OPO laser converters are determined by nonlinear light-matter interactions, making them both technologically flexible and broadly reconfigurable. We utilize a nanostructured inner-wall modulation in the resonator to achieve universal phase-matching for OPO-laser conversion, but coherent backscattering also induces a counterpropagating pump laser. This depletes the intra-resonator optical power in either direction, increasing the OPO threshold power and limiting laser-conversion efficiency, the ratio of optical power in target signal and idler frequencies to the pump. We develop an analytical model of this system that emphasizes an understanding of optimal laser conversion and threshold behaviors, and we use the model to guide experiments with nanostructured-resonator OPO laser-conversion circuits, fully integrated on chip and unlimited by group-velocity dispersion. Our work demonstrates the fundamental connection between OPO laser-conversion efficiency and the resonator coupling rate, subject to the relative phase and power of counterpropagating pump fields. We achieve \((40\pm 4)\) mW of on-chip power, corresponding to \((41\pm 4)\%\) conversion efficiency, and discover a path toward near-unity OPO laser conversion efficiency.
_Introduction._ Optical parametric oscillation (OPO) features behaviors that are observed in many physical systems. The intensity distribution of the optical field shows a kind of Turing pattern, which is similar to those in biological systems [1] and sand dunes [2]. The patterns in these systems arise from nonlinearity of the reaction-diffusion equation. With sand dunes there is a nonlinear surface velocity profile [3] while for nonlinear optics it is the nonlinear refractive index. Since OPO is a coherent process, it is subject to a narrow range of phase-matching solutions. Still, we can trace the coherent oscillating output of the OPO to its constituent nonlinear dynamics [4; 5]. For example, accidental [6] and controllable [7; 8; 9] mode frequency shifts have been used to balance group-velocity dispersion (GVD) and Kerr shifts in microresonator OPOs. More recently, photonic-crystal resonators (PhCR) have provided a route to phase-match OPO in nearly any dispersion regime [10]. The accessibility of these controls for nonlinear dynamics in OPO makes the system both interesting to search for novel phenomena and to understand similar dynamics in related physical systems.
OPO laser conversion also has numerous applications in engineering. Degenerate OPO works as a converter of the pump-laser frequency to a tunable signal and idler frequency, which provides a coherent source with designable wavelength [11]. With the help of microresonators, OPO laser-converters can be chip integrated. An intrinsic condition of OPO is phase-matching, which is traditionally achieved by designing anomalous GVD to balance Kerr frequency shifts. With GVD engineering, OPO in microresonators has been realized in silica [12; 13], aluminum nitride [14], silicon nitride [15; 16], and tantalum pentoxide [17] platforms. Microresonator OPO has also been explored with novel bound states in the continuum to tailor wavelength-dependent coupling conditions [18]. With PhCR OPOs [10], nanostructuring the microresonator waveguide induces coherent backscattering and a controllable frequency splitting of one (or more) azimuthal modes. This provides for direct phase-matching between three modes of the device. However, backscattering the pump also depletes the available power in either counterpropagating direction.
While access to tunable phase matching is important for a laser converter, the conversion efficiency (CE), defined as the ratio between the signal and idler output power and the pump laser power, is essential as well. According to previous research, the highest CE of microresonator OPO with standard couplers is <40%, and the highest reported on-chip output is \(\approx\) 20 mW [12; 19; 20; 21; 22]. In the case of a PhCR, backscattering reduces the power available for OPO threshold and conversion efficiency in a single propagation direction. Despite this, CE in PhCRs exceeding 10% has been demonstrated by operating in the over-coupled regime [10]. On the other hand, for frequency-comb generation, which has a similar operating principle as OPO [23; 24; 25], a pump-to-comb conversion efficiency as high as 83% has recently been demonstrated by placing a pump reflector to maximize the pump intensity in a PhCR [26; 27]. However, the upper limit of CE and the optimal system integration of the PhCR with a pump reflector remain open research questions. Moreover, a physical understanding of this system will enable future devices with access to large parametric gain for broadly reconfigurable wavelength access and the highest output power. Such chip-scale technologies would be useful for applications as diverse as optical telecommunications [28; 29], spectroscopy [30], and optical sensors [31; 32].
Here, we develop an analytical framework to describe OPO laser converters with counterpropagating pump fields, and we derive formulas for CE and threshold power. Moreover, we develop and implement the experimental infrastructure for an integrated PhCR with a pump reflector in the bus waveguide. The PhCR induces coherent backscattering within the resonator, and the pump reflector transforms the pump into counterpropagating fields. Thereby, we control the phase between the counterpropagating pump fields in the bus waveguide and the backscattered pump mode inside the PhCR, enabling suppression of the unused pump power in one direction and optimal use of the pump power. Our experiments systematically explore the interaction of counterpropagating pump laser fields in the PhCR, which we find to be in good agreement with our analytical model of the nonlinear system. We demonstrate \((41\pm 4)\%\) CE of OPO by measuring the output power, which is the sum of the idler and signal power in both directions, and we calculate CE as the ratio between the on-chip output power and the on-chip pump power. This measurement corresponds to \((40\pm 4)\) mW on-chip output power with idler and signal wavelengths at \((1620.0\pm 0.4)\) nm and \((1498.7\pm 0.4)\) nm. The corresponding spectra are in the Supplemental Material. This work illuminates a regime of OPO in which we unleash universal phase matching and high CE through nanophotonic design.
_Theory._ To derive the CE formula in the pump-reflected PhCR case, we need to understand the dynamics of the system, which are two-fold: a linear process describing the coupling between the bus waveguide and the resonator and a nonlinear process within the resonator due to resonant field enhancement. The nonlinear process is described by the normalized, modified Lugiato-Lefever Equation (LLE) [33]:
\[\begin{split}\frac{\partial E_{t\mu}}{\partial t}=& -(1+i(\alpha+D_{\mu}))E_{t\mu}\\ &+i(\sum_{\mu_{1},\mu_{2}}E_{t\mu_{1}}E_{t\mu_{2}}E_{t(\mu_{1}+ \mu_{2}-\mu)}^{*}+2E_{t\mu}\sum_{\mu_{3}}I_{r\mu_{3}})\\ &+(F_{t}-i\frac{\xi}{2}E_{r\mu})\delta_{\mu,0}\\ \frac{\partial E_{r\mu}}{\partial t}=&-(1+i(\alpha+ D_{\mu}))E_{r\mu}\\ &+i(\sum_{\mu_{1},\mu_{2}}E_{r\mu_{1}}E_{r\mu_{2}}E_{r(\mu_{1}+ \mu_{2}-\mu)}^{*}+2E_{r\mu}\sum_{\mu_{3}}I_{t\mu_{3}})\\ &+(F_{r}-i\frac{\xi}{2}E_{t\mu})\delta_{\mu,0}.\end{split} \tag{1}\]
The diagram in Fig. 1(a) shows the components of the field in our devices. The fields in the bus waveguide propagate in both transmitted and reflected directions, denoted by subscript \(t\) and \(r\), respectively. The fields in the PhCR also have two counterpropagating directions: clockwise (CW) and counterclockwise (CCW). The CW wave in the ring couples with the transmitted direction light in the bus waveguide, while the CCW wave in the ring will couple with the light in the reflected direction within the bus waveguide. Due to the periodic boundary of the microresonator, the field inside a PhCR can be decomposed into different modes in each direction, denoted by \(E_{t\mu}\) (CW) and \(E_{r\mu}\) (CCW) where \(\mu\) represents the mode number relative to the pump mode, \(\mu=0\). The field here is normalized such that the nonlinearity thresholds when \(E_{t\mu}(E_{r\mu})\sim 1\). The intensity of each mode is denoted by \(I_{t\mu}\) and \(I_{r\mu}\), equal to the square norm of the field. The parameters \(F_{t}\) and \(F_{r}\) represent the effective driving force inside the resonator of both transmitted and reflected directions, \(\delta_{\mu,0}\) is the Kronecker delta, \(\alpha\) is the detuning of the pump laser and \(D_{\mu}\) is the integrated dispersion defined by \(D_{\mu}=\nu_{\mu}-(\nu_{0}+\text{FSR}\,\mu)\), where \(\nu_{\mu}\) is the cold cavity resonance frequency of mode \(\mu\) and FSR is the free spectral range of the resonator [4; 10]. Both \(\alpha\) and \(D_{\mu}\) are normalized by the halfwidth of the resonator \(\Delta\nu/2=\frac{\kappa}{4\pi}\), and the time \(t\) is normalized by \(\frac{2}{\kappa}\), where \(\kappa\) is the overall loss rate of the ring.
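To make the normalization concrete, the sketch below integrates only the pump-mode (\(\mu=0\)) terms of equation (1) with a crude forward-Euler step; \(\xi\) is the backscattering coupling introduced in the next paragraph, and a realistic calculation would retain all modes and use a proper split-step solver:

```python
import numpy as np

def pump_mode_steady_state(F_t, F_r, alpha, xi, t_max=200.0, dt=1e-3):
    """March the two counterpropagating pump modes of Eq. (1) to steady state,
    ignoring all sideband modes (normalized units: loss rate = 1, D_0 = 0)."""
    E_t, E_r = 0j, 0j
    for _ in range(int(t_max / dt)):
        dE_t = (-(1 + 1j * alpha) * E_t
                + 1j * (abs(E_t) ** 2 + 2 * abs(E_r) ** 2) * E_t  # self- and cross-phase modulation
                + F_t - 0.5j * xi * E_r)                          # drive and coherent backscattering
        dE_r = (-(1 + 1j * alpha) * E_r
                + 1j * (abs(E_r) ** 2 + 2 * abs(E_t) ** 2) * E_r
                + F_r - 0.5j * xi * E_t)
        E_t, E_r = E_t + dt * dE_t, E_r + dt * dE_r
    return E_t, E_r
```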
The interaction between the CW and CCW pump mode inside the PhCR due to coherent backscattering induced by the nanostructured inner-wall modulation is characterized by \(\xi\). For a PhCR without a pump reflector, \(\xi\) is approximately equal to the pump mode frequency split normalized by \(\Delta\nu/2\) when this split is much larger than \(\Delta\nu/2\); see the Supplemental Material. While the PhCR provides universal phase matching, the CE depends greatly on the linear coupling dynamics [10]. The overall loss rate can be divided into two parts: \(\kappa=\kappa_{i}+\kappa_{c}\), where \(\kappa_{i}\) is the intrinsic loss rate while \(\kappa_{c}\) is the rate of energy exchange between the bus waveguide and the ring. The ratio between them \(K=\kappa_{c}/\kappa_{i}\) is called the coupling coefficient. It is essential to CE since when \(K\) is high, more light is coupled out with the same energy dissipation in the resonator. Due to this coupling, there is a difference between the fields on the input and output side of the PhCR in the bus waveguide. Moreover, due to the presence of the pump reflector, the input and output to the resonator are counterpropagating in relation to the transmitted and reflected directions. Therefore, we create a notation here to clarify this difference. The fields with superscript i and o in Fig. 1(a) denote input and output fields relative to the PhCR. For the transmitted direction, the field input to resonator \(E_{t\mu}^{i}\) and coming
out of the resonator \(E^{o}_{t\mu}\) are
\[\begin{split} E^{i}_{t\mu}&=\sqrt{\frac{K+1}{2K}}F_{t} \delta_{\mu,0}\\ E^{o}_{t\mu}&=E^{i}_{t\mu}-\sqrt{\frac{2K}{K+1}}E_{t \mu}=\sqrt{\frac{K+1}{2K}}(F_{t}\delta_{\mu,0}-r_{\rm EF}E_{t\mu})\\ r_{\rm EF}&=\frac{2K}{K+1},\end{split} \tag{2}\]
where \(r_{\rm EF}\) is a conversion coefficient between the field in the bus waveguide and the normalized driving force \(F_{t}\) or \(F_{r}\) inside the resonator, and is of utmost importance to optimize CE. The proof of these formulas can be found in the Supplemental Material.
Due to the PhC, there is a CCW propagating pump mode inside the ring. With the addition of a pump reflector in the bus waveguide, the pump mode field \(E^{o}_{t0}\) is reflected back, becoming \(E^{i}_{r0}=rE^{o}_{t0}\), further converted into the CCW driving force \(F_{r}=r(F_{t}-r_{\rm EF}E_{t0})\), and thus reduces the waste and improves the CE. We assume the pump reflector only reflects the pump frequency. Here \(r\) is the reflection coefficient for pump mode of the reflector \(r=\sqrt{R}e^{i\Phi}\), where \(R\) is the reflectivity and \(\Phi\) the reflector phase. The reflected wave inside the bus waveguide can be similarly written as: \(E^{i}_{r\mu}=\sqrt{\frac{K+1}{2K}}F_{r}\delta_{\mu,0}\) and \(E^{o}_{r\mu}=E^{i}_{r\mu}-\sqrt{\frac{2K}{K+1}}E_{r\mu}\). Since the input of the whole system is proportional to \(|F_{t}|^{2}\), we will replace \(F_{t}\) with \(F\) and assume it to be a real number for convenience in the text below. Then, the input power \(P\) of the system can be written as
\[P=\eta F^{2} \tag{3}\]
where \(\eta\) is the conversion coefficient between P and \(F^{2}\), and originates from the normalization of the modified LLE (Eqn (1)). Therefore, \(\eta\) depends on the halfwidth of the resonator which is related to \(K\), the linear and nonlinear refractive index of the PhCR and the mode volume [34].
With the preparation above, we derive the CE formula for a PhCR with pump reflector. The derivation is based on energy conservation, so we study the energy flow of the system. The input field is \(E^{i}_{t0}=\sqrt{\frac{K+1}{2K}}F\). For the output, we investigate the power in the pump mode and other modes separately. Consider first the pump mode. The transmitted wave is \(\sqrt{1-|r|^{2}}E^{o}_{t0}=\sqrt{1-|r|^{2}}\sqrt{\frac{K+1}{2K}}(F-r_{\rm EF }E_{t0})\), ignoring the phase, which is not related to the intensity. The reflected wave is \(E^{o}_{r0}=\sqrt{\frac{K+1}{2K}}(r(F-r_{\rm EF}E_{t0})-r_{\rm EF}E_{r0})\). For non-pump modes \(\mu\neq 0\), the driving forces and the pump reflector have no effect. Therefore, the transmitted and reflected fields are \(E^{o}_{t\mu}=-\sqrt{\frac{2K}{K+1}}E_{t\mu}\) and \(E^{o}_{r\mu}=-\sqrt{\frac{2K}{K+1}}E_{r\mu}\), and their total power equals \(r_{\rm EF}\sum_{\mu\neq 0}(|E_{t\mu}|^{2}+|E_{r\mu}|^{2})\frac{\omega_{\mu}}{ \omega_{0}}\), where \(\omega_{\mu}\) denotes the measured angular frequency of mode \(\mu\). Note that for four-wave mixing, the idler (with mode number \(\mu_{i}\)) and signal (with mode number \(\mu_{s}\)) are generated in pairs and their angular frequencies satisfy \(\omega_{\mu_{i}}+\omega_{\mu_{s}}=2\omega_{0}\). The total power can be further simplified to \(r_{\rm EF}I_{c}\), where \(I_{c}=\sum_{\mu\neq 0}(|E_{t\mu}|^{2}+|E_{r\mu}|^{2})\). Due to energy conservation, for the steady state, the input power equals the total power in both directions plus the intrinsic loss, which we write as \(\sum_{\mu}\frac{2}{K+1}(I_{t\mu}+I_{r\mu})\), according to definition of \(K\). The factor 2 arises due to the loss normalization to the half linewidth. Then, the energy conservation equation becomes
\[\begin{split}|E^{i}_{t0}|^{2}=&|\sqrt{1-|r|^{2}}E^{ o}_{t0}|^{2}+|E^{o}_{r0}|^{2}+r_{\rm EF}I_{c}\\ &+\sum_{\mu}\frac{2}{K+1}(I_{t\mu}+I_{r\mu}).\end{split} \tag{4}\]
The solution of this equation is
\[\begin{split} I_{c}=F{\rm Re}[E_{t0}+r^{*}E_{r0}]-(I_{t0}+I_{r0}+r _{\rm EF}{\rm Re}[rE_{t0}E^{*}_{r0}]).\end{split} \tag{5}\]
According to the definition of CE,
\[\begin{split}{\rm CE}=\frac{r_{\rm EF}I_{c}}{|E^{i}_{t0}|^{2}}=r _{\rm EF}{}^{2}(\frac{{\rm Re}[E_{t0}+r^{*}E_{r0}]}{F}\\ -\frac{I_{t0}+I_{r0}+r_{\rm EF}{\rm Re}[rE_{t0}E^{*}_{r0}]}{F^{2}} ).\end{split} \tag{6}\]
We further simplify the formula above for some specific cases. For an ordinary resonator without pump reflector (\(E_{r0}=0\)), and equation (6) becomes \({\rm CE}=(\frac{2K}{K+1})^{2}(\sqrt{\frac{I_{t0}}{F^{2}}}-\frac{I_{t0}}{F^{2}})\)[19; 35]. However, PhCR are quite different. To achieve wide span OPO with normal GVD, the mode split is much larger than the halfwidth \(\xi\gg 1\), which we call the large mode split approximation (see Supplemental Material). The strong interaction between the CW and CCW wave of the pump mode establishes coherence between them for the red-shifted resonance mode (\(\alpha>0\)), hence \(E_{t0}\approx-E_{r0}\), which suggests a standing wave inside the resonator; see Supplemental Material. Then, equation (6) reduces to
\[\begin{split}{\rm CE}&=r_{\rm EF}{}^{2}(\frac{{\rm Re }[E_{t0}(1-r^{*})]}{F}-\frac{I_{t0}(2-r_{\rm EF}{\rm Re}[r])}{F^{2}})\\ &\leq r_{\rm EF}{}^{2}(\sqrt{\frac{I_{t0}}{F^{2}}}|1-r|-\frac{I_{t0 }}{F^{2}}(2-r_{\rm EF}{\rm Re}[r])).\end{split} \tag{7}\]
On the other hand, the \(I_{t0}\) when OPO exists is actually determined by the phase matching condition [19]. According to CE formula above, a smaller \(I_{t0}\) enables the device to achieve the same CE at smaller \(F\). The minimum of \(I_{t0}\) is 1 and the inequality will become equality when \(E_{t0}\) has the same complex angle as \(1-r\). This can be satisfied by optimizing \(\xi\) and sweeping \(\alpha\), which we call optimal phase matching (OPhM); see the Supplemental
Material. Then, CE reduces to
\[\text{CE}=r_{\text{EF}}{}^{2}(\frac{1}{F}|1-r|-\frac{1}{F^{2}}(2-r_{\text{EF}} \text{Re}[r])). \tag{8}\]
Further, \(F\) at threshold is the solution of \(\text{CE}=0\), which we write as
\[F_{\text{thre}}=\frac{2-r_{\text{EF}}\text{Re}[r]}{|1-r|}. \tag{9}\]
The threshold power in the experiment \(P_{\text{thre}}\) can be expressed as
\[P_{\text{thre}}=\eta F_{\text{thre}}{}^{2}. \tag{10}\]
If we combine equation (10) with equation (3), we can cancel \(\eta\) and express \(F\) as
\[F=F_{\text{thre}}\sqrt{\frac{P}{P_{\text{thre}}}} \tag{11}\]
by which we can control \(F\) in our experiment through the input optical power. Equation (8) depends on 4 variables: \(F\), \(\Phi\), \(R\) and \(K\) (included in \(r_{\text{EF}}\)). When we increase input power, CE increases first, and then it drops. The maximum of CE happens at \(F=2F_{\text{thre}}\) (namely \(P=4P_{\text{thre}}\)), which we call saturation CE and denote by \(\text{CE}_{\text{sat}}\):
\[\text{CE}_{\text{sat}}=\frac{r_{\text{EF}}{}^{2}}{4}\frac{|1-r|^{2}}{2-r_{ \text{EF}}\text{Re}[r]}. \tag{12}\]
For a PhCR without a pump reflector, \(F_{\text{thre}}=2\) and \(\text{CE}_{\text{sat}}=\frac{1}{2}(\frac{K}{K+1})^{2}\). For the fixed reflectivity \(R\), \(F_{\text{thre}}\) can be smaller than 2 at some \(\Phi\), and \(\text{CE}_{\text{sat}}\) is maximized when \(\Phi=\pi\). The maximum of \(\text{CE}_{\text{sat}}\) is \(\frac{K^{2}}{(K+1)(K+1/2)}\) when \(r=-1\), which surprisingly is even greater than the CE limit of ordinary resonators.
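The closed-form results above are straightforward to evaluate; a short sketch of equations (8), (9), and (12) (the function and variable names are ours):

```python
import numpy as np

def r_ef(K):
    """Out-coupling factor r_EF = 2K / (K + 1)."""
    return 2 * K / (K + 1)

def F_threshold(K, R, Phi):
    """Eq. (9): normalized threshold pump field for reflectivity R and reflector phase Phi."""
    r = np.sqrt(R) * np.exp(1j * Phi)
    return (2 - r_ef(K) * r.real) / abs(1 - r)

def conversion_efficiency(F, K, R, Phi):
    """Eq. (8): CE at optimal phase matching; it peaks (Eq. (12)) at F = 2 * F_threshold."""
    r = np.sqrt(R) * np.exp(1j * Phi)
    return r_ef(K) ** 2 * (abs(1 - r) / F - (2 - r_ef(K) * r.real) / F ** 2)
```

For example, `conversion_efficiency(2 * F_threshold(6, 0.8, np.pi), 6, 0.8, np.pi)` evaluates to approximately 0.75, the saturation CE for \(K=6\), \(R=0.8\), and \(\Phi=\pi\).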
_Experiment._ We explore and measure OPO laser converters with a PhCR and a pump reflector, demonstrating the close connection between lasers we implement with integrated photonics and our predictive model of the system. Hence, our work opens up a robust platform to realize exceptionally designable laser converters with high efficiency and access to broad wavelength bands. Figure 1 presents the foundations of our system, including the \(\Phi\) dependence of threshold power and CE. The pump reflector has a structure of a series of teeth with tapering period and amplitude, which controls the width and the location of the reflection band. We model the reflector using finite element method software. The PhCR enables deterministic single mode splitting by periodically modulating the resonator inner wall with a nanostructured period that determines the split resonant wavevector. The amplitude of the modulation is proportional to \(\xi\) [10]. We implement the PhCR with pump reflector by use of the tantala integrated photonics platform that we have developed [17; 36]. Tantala films of 570 nm thickness are deposited by a commercial vendor, FiveNine Optics, on a thermally oxidized silicon wafer, and we realize our PhCR and pump reflector designs via electron-beam lithography and a fluorine inductively coupled plasma reactive-ion etch. Our 2-day fabrication period yields >40 chips with 100 resonators per chip and includes an overnight 500\({}^{\circ}\)C thermal anneal in air to reduce oxygen vacancies in the tantala film. We test our devices by use of a widely tunable external cavity diode laser at 1550 nm that is further amplified with an erbium-doped fiber. We convey pump light to the device by lensed optical fibers, which couple light into and out of our chips using an on-chip inverse taper for optimal mode-matching to the fibers.

Figure 1: \(\Phi\) **dependence.** (a) A diagram and a transmission trace of a PhCR with pump reflector. The blue curve is the normalized transmittance from experiment. The orange dashed line plots the fit. (b) \(P_{\text{thre}}\) vs \(\Phi\) with \(K=6.0\pm 0.6\), \(R=0.8\pm 0.1\). The solid curve shows the analytic \(P_{\text{thre}}\) with \(\eta=29.7\) mW and the squares are experimental data. (c) CE vs \(\Phi\) with \(K=6.0\pm 0.6\), \(R=0.8\pm 0.1\). The solid and dashed curves plot the analytic CE at OPhM for \(F=\)1.8 and 2.3, respectively. Empty circles are simulation results with the same conditions as the curves. Squares are experimental results with \(F=1.80\pm 0.05\).
To test our analytic model, we compare the results with experiment and numerical simulation using equation (1). By measuring the loss between the lensed fibers and our chips, we infer the on-chip power from the output power measured by an optical spectrum analyzer. Additionally, we use a circulator before the device to collect the reflected light; see Supplemental Material. We calculate reflectivity from the ratio between the reflected spectrum of devices with pump reflectors and the transmitted spectrum of devices without reflector. We find the measured reflectivity varies from 72% to 83% due to the uncertainty of the measurement and the tolerance of fabrication between different chips. By rotating the nanostructured inner-wall resonator modulation, we equivalently change the relative phase \(\Phi\) between reflected light in the bus waveguide and the field inside the PhCR. We measure \(\Phi\) as well as \(K\) and \(\xi\) by fitting the transmission, \(T\); see Fig. 1(a) and the Supplemental Material. Figure 1(b) shows the \(\Phi\) dependent measurement of \(P_{\rm{thre}}\) (black squares; Fig. 1(b)), which is consistent with equation (9) and (10) (solid curve; Fig. 1(b)) after fitting the conversion coefficient, \(\eta=30\) mW.
Figure 1(c) shows the \(\Phi\) dependent measurement of CE (black squares; Fig. 1(c)). We control \(F\) to be 1.80 \(\pm\) 0.05 by comparing the input power with \(P_{\rm{thre}}\) for each device, according to equation (11). We calculate CE from the inferred on-chip output and input power, with an uncertainty of 0.4 dB. The solid and dashed curves are the analytic CE (equation (8)) at OPhM with \(F=1.8\) and 2.3, respectively. Also, we perform a numerical simulation based on equation (1) to verify our analytic CE formula (grey empty circles in Fig. 1(c)). Both simulation and experiment results match our analytical CE model. Figure 1 demonstrates that the behavior of the PhCR is sensitive to \(\Phi\), which is due to the strong coherence between \(E_{t0}\) and \(E_{r0}\) when \(\xi\gg 1\) and the phase shift that the pump reflector adds to \(F_{r}\). Although the device with \(\Phi=\pi\) reflector has the maximum saturation CE, for small input power, the devices with reflectors of other phases achieved higher CE first due to lower \(F_{\rm{thre}}\).
The coupling coefficient \(K\) is the critical factor to increase CE of OPO extracted from the resonator. We experimentally vary this parameter by adjusting the gap, \(g_{c}\), between the bus waveguide and the PhCR. Figure 2(a) shows the dependence of \(K\) on \(g_{c}\). The black squares are experimental results and the empty circles are simulation results, using a finite element method software. They show exponential dependence of \(K\) on \(g_{c}\) with a shift in prefactor, which is likely due to the tolerance or incomplete etch of gaps in fabrication. Figure 2(b) shows the dependence of CE on \(K\). The solid curve plots the analytic CE at OPhM with \(F=1.85\), \(\Phi=1.715\pi\) and \(R=0.8\). The black squares and the empty circles are the corresponding measured results and the simulation results with the same parameters obtained from the modified LLE equation (1). The results are nearly overlapping, demonstrating the validity of the derived analytic expressions. The dashed curve is the analytic CE\({}_{\rm{sat}}\) with the same reflectivity but optimal phase \(\Phi=\pi\). It shows that even without modification on the reflector structure, CE can reach nearly 80% with \(K=6\) when \(\Phi\) is optimized and the input power is increased to saturation, \(P=4P_{\rm{thre}}\). In addition, according to equation (12), with an ideal reflectivity \(R=100\%\), CE can reach the theoretical upper boundary \(\frac{K^{2}}{(K+1)(K+1/2)}\), which asymptotically approaches unity when \(K\rightarrow+\infty\).
Figure 2: \(K\) **dependence.** (a) \(K\) vs \(g_{c}\). The black squares are the measured \(K\) while the empty circles are simulation results using Lumerical FDTD. (b) CE vs \(K\). The solid curve is the analytic CE at OPhM with \(F=1.85\), \(\Phi=1.715\pi\), \(R=0.8\). The dashed curve is the analytic saturation CE with \(\Phi=\pi\), \(R=0.8\). The squares are experimental results under approximately the same conditions. The empty circles are the simulation results with the same parameters.

We explore the CE dependence on OPhM and the corresponding output signal and idler frequencies that are generated. As stated previously, our simplified CE formula (equation (8)) is valid only when OPhM is satisfied. In order to achieve OPhM, we sweep \(\xi\) by adjusting the modulation amplitude of the PhCR's nanostructured inner wall. This leaves the broader dispersion unchanged, while enabling different phase-matching conditions for the pump mode split by \(\xi\). Figure 3(a) shows the CE dependence on \(\xi\) with all other parameters fixed. We find CE spans \(\approx\)20-40%, where the experimental results (black squares) are in good agreement with our numerical simulation based on equation (1) (grey circles). The small decrease in CE corresponds to a change in the generated signal and idler frequencies, whose span in FSR is \(\delta\mu\), as seen in Fig. 3(b). The jump in \(\delta\mu\) denotes that OPhM is satisfied with a new pair of PhCR modes, thereby enabling tunable output frequencies through OPO laser conversion while maintaining high CE.
We also investigate the dependence of CE on F, the pump driving field which is controlled by the input pump optical power. Figure 4 summarizes our results. The solid curve and empty circles are the analytic CE (equation (8)) and simulation results at OPhM with \(R=0.72\), \(\Phi=1.372\pi\), \(K=2.8\). The black squares are the experimental results with near parameters. The results are consistent at small \(F\), but diverge at larger \(F\). We attribute this divergence to two reasons. First, when \(F\) is large, more than one pair of modes can achieve phase matching, which we call cascade, and decentralize the pump mode power. Second, the imperfect environment in the experiment is not fully described by either our analytical model or numerical simulation. The reflectors in our devices actually have some minor effect on the signal and idler modes and \(K\) at the idler mode is larger than \(K\) at the pump and signal modes. When CE is high, the reflected idler light by the reflector might excite other four wave mixing processes, further decentralizing the pump mode power. These imperfections might be further improved by incorporating higher-order dispersion terms to the simulation and by mitigating higher-order mode crossings, which disrupt the ideal dispersion. The reflectivity on the signal and idler can be reduced with a better reflector. The wavelength dependence of K can be suppressed by redesigning the width of bus waveguide and the length of its coupling region (see the diagram in Fig. 1(a)). Moreover, cascading can be avoided by utilizing other dispersion regimes and values of \(\xi\), or operating with a larger FSR. However, as a trade-off, a finer sweep of \(\xi\) will be required in order to reach OPhM at the targeted modes.
## Discussion and Conclusion
In this article, we show that a pump reflector can reduce the OPO threshold power and increase CE by controlling the coupling of counterpropagating pump lasers. We develop a theory to explain this mechanism and derive analytic formulas for the CE and threshold power of nanostructured-resonator OPO. Our analytical model has been systematically verified in our experiments, and we have obtained \((41\pm 4)\%\) CE and \((40\pm 4)\) mW on-chip power for OPO laser conversion. Higher CE is limited by the non-ideal reflector spectrum and by cascading in our experiment. Through nanophotonic design, we highlight a regime of OPO with high CE and universal phase matching and realize a robust laser platform.
Figure 3: \(\xi\)**dependence** (a), (b) CE vs \(\xi\) and \(\delta\mu\) vs \(\xi\). The black squares are the experimental results and the empty circles are the simulation results.
## Acknowledgments
We thank Yan Jin and Charles McLemore for carefully reviewing the manuscript. This research has been funded by NIST, the DARPA LUMOS programs as well as AFOSR FA9550-20-1-0004 Project Number 19RT1019 and NSF Quantum Leap Challenge Institute Award OMA - 2016244. This work is a contribution of the US Government and is not subject to copyright. Mention of specific companies or trade names is for scientific communication only and does not constitute an endorsement by NIST.
## Supplemental Material
### Coupling theory
Here we derive equation (2) in the main text. Since the coupler is independent of the PhCR and the pump reflector, we can restrict the derivation to simplified cases. We omit the subscripts t and r below since the theory is the same for both directions. Notice that \(\frac{\partial I_{\mu}}{\partial t}=2\mathrm{Re}[E_{\mu}^{*}\frac{\partial E_{\mu}}{\partial t}]\). For each mode and direction, according to equation (1) in the main text, \(\frac{\partial I_{\mu}}{\partial t}=-2I_{\mu}\). Due to the definition of \(K\), the photon flux that goes into the waveguide is \(\frac{2K}{K+1}I_{\mu}\), so \(E_{\mu}^{o}=e^{i\gamma}\sqrt{\frac{2K}{K+1}}E_{\mu}\), where \(\gamma\) denotes the phase difference. Adding the driving force \(F\delta_{\mu,0}\), \(E_{\mu}^{i}\) can be written as \(xF\delta_{\mu,0}\), where \(x\) is an unknown coefficient, and \(E_{\mu}^{o}=e^{i\gamma}\sqrt{\frac{2K}{K+1}}E_{\mu}+e^{i\epsilon}xF\delta_{\mu,0}\) by superposition, where \(\epsilon\) denotes another unknown phase. The power that goes from the bus waveguide into the resonator is \(|E_{\mu}^{i}|^{2}-|E_{\mu}^{o}|^{2}=-2\mathrm{Re}[e^{i(\epsilon-\gamma)}\sqrt{\frac{2K}{K+1}}E_{\mu}^{*}xF\delta_{\mu,0}]-\frac{2K}{K+1}I_{\mu}\). On the other hand, from equation (1) in the main text, we have \(\frac{\partial I_{\mu}}{\partial t}=-(\frac{2}{K+1}+\frac{2K}{K+1})I_{\mu}+2\mathrm{Re}[F\delta_{\mu,0}E_{\mu}^{*}]\). Due to energy conservation, \(\frac{\partial I_{\mu}}{\partial t}=|E_{\mu}^{i}|^{2}-|E_{\mu}^{o}|^{2}-\frac{2}{K+1}I_{\mu}\), where the last term is due to intrinsic loss. This should hold for arbitrary \(E_{\mu}\), so \(x=-e^{i(-\epsilon+\gamma)}\sqrt{\frac{K+1}{2K}}\). Since only the magnitudes of \(E_{\mu}^{i}\) and \(E_{\mu}^{o}\) can be measured, we can add phases \(-e^{i(\epsilon-\gamma)}\) and \(-e^{-i\gamma}\) and rewrite them as \(E_{\mu}^{i}=\sqrt{\frac{K+1}{2K}}F\delta_{\mu,0}\), \(E_{\mu}^{o}=\sqrt{\frac{K+1}{2K}}(F\delta_{\mu,0}-r_{\mathrm{EF}}E_{\mu})\), with \(r_{\mathrm{EF}}=\frac{2K}{K+1}\).
### Large mode split approximation
Here we prove that \(E_{t0}\approx-E_{r0}\) under the large mode split approximation for the red resonance mode (\(\alpha>0\)). In the steady state, equation (1) in the main text for the pump mode in both directions can be written as:
\[0 =-(1+i\alpha)E_{t0}-i\frac{\xi}{2}E_{r0}\] \[\quad+i(\sum_{\mu_{1},\mu_{2}}E_{t\mu_{1}}E_{t\mu_{2}}E_{t(\mu_{1 }+\mu_{2})}^{*}+2E_{t0}\sum_{\mu_{3}}I_{r\mu_{3}})+F_{t}\] \[0 =-(1+i\alpha)E_{r0}-i\frac{\xi}{2}E_{t0}\] \[\quad+i(\sum_{\mu_{1},\mu_{2}}E_{r\mu_{1}}E_{r\mu_{2}}E_{r(\mu_{1 }+\mu_{2})}^{*}+2E_{r0}\sum_{\mu_{3}}I_{t\mu_{3}})+F_{r}. \tag{13}\]
Summing these two equalities, we have
\[0 =-(1+i(\alpha+\frac{\xi}{2}))(E_{t0}+E_{r0})\] \[\quad+i(\sum_{\mu_{1},\mu_{2}}(E_{t\mu_{1}}E_{t\mu_{2}}E_{t(\mu_{ 1}+\mu_{2})}^{*}+E_{r\mu_{1}}E_{r\mu_{2}}E_{r(\mu_{1}+\mu_{2})}^{*})\] \[\quad+2E_{t0}\sum_{\mu_{3}}I_{r\mu_{3}}+2E_{r0}\sum_{\mu_{3}}I_{t \mu_{3}})+F_{t}+F_{r}.\]
Notice that \(F_{t}\), \(F_{r}\), \(E_{t0}\), \(E_{r0}\sim 1\) and the magnitudes of the other modes are even smaller. Therefore, \(E_{t0}=-E_{r0}+O(1/\xi)\) when \(\xi\gg 1\), where \(O(\cdot)\) describes the asymptotic behavior. With this conclusion and equation (1) in the main text for the transmitted pump mode, we further obtain:
\[0 =-(1+i(\alpha-\frac{\xi}{2}))E_{t0}+O(1)\] \[\quad+i(\sum_{\mu_{1},\mu_{2}}E_{t\mu_{1}}E_{t\mu_{2}}E_{t(\mu_{ 1}+\mu_{2})}^{*}+2E_{t0}\sum_{\mu_{3}}I_{r\mu_{3}})+F_{t}. \tag{15}\]
Therefore, \(E_{t0}\) is small unless \(\alpha=\xi/2\,(1+O(1/\xi))\), which implies \(\alpha\gg 1\) as well. Moreover, it follows from symmetry that the blue resonance mode (\(\alpha<0\)) occurs at \(\alpha=-\xi/2\,(1+O(1/\xi))\), where the field satisfies \(E_{t0}=E_{r0}+O(1/\xi)\). Thus, \(\xi\) is approximately equal to the normalized mode split in this case.
### Optimal phase matching (OPhM)
Here, we give an argument that OPhM can be achieved by sweeping \(\alpha\) and \(\xi\). We first prove that the minimum of \(I_{t0}\) when OPO exists is 1. Since higher-order OPO usually has a much smaller intensity than first-order OPO, we can assume there are only three modes: idler, pump, and signal, with mode numbers \(-\mu\), 0, and \(\mu\), respectively. Then, equation (1) in the main text for the
CW idler and signal modes can be simplified to:
\[\begin{split}\frac{\partial E_{t\mu}}{\partial t}=&-(1+i (\alpha+D_{\mu}+I_{t\mu}-2I))E_{t\mu}\\ &+i{E_{t0}}^{2}E_{t(-\mu)}^{*}\\ \frac{\partial E_{t(-\mu)}^{*}}{\partial t}=&-(1-i( \alpha+D_{-\mu}+I_{t(-\mu)}-2I))E_{t(-\mu)}^{*}\\ &-i{E_{t0}}^{*}{}^{2}E_{t\mu}\end{split} \tag{16}\]
where \(I=\sum_{\mu_{3}}(I_{t\mu_{3}}+I_{r\mu_{3}})\) is the total intensity in the ring. The eigenvalue of this differential equation with the larger real part is
\[\begin{split}\lambda=&-1\\ &+\sqrt{{I_{t0}}^{2}-(\alpha+\frac{D_{\mu}+D_{-\mu}}{2}+\frac{I_{ t\mu}+I_{t(-\mu)}}{2}-2I)^{2}}\\ &-i\frac{D_{\mu}-D_{-\mu}+I_{t\mu}-I_{t(-\mu)}}{2}.\end{split} \tag{17}\]
For the steady state of OPO, the real part of \(\lambda\) is zero, which implies
\[I_{t0}=\sqrt{1+(\alpha+\frac{D_{\mu}+D_{-\mu}}{2}+\frac{I_{t\mu}+I_{t(-\mu)}} {2}-2I)^{2}}. \tag{18}\]
Its minimum occurs when \(\alpha\) equals
\[\alpha_{t}^{\text{opt}}=-\frac{D_{\mu}+D_{-\mu}}{2}-\frac{I_{t\mu}+I_{t(-\mu)} }{2}+2I. \tag{19}\]
The eigensolution of equation (16) satisfies:
\[\begin{split}& I_{t\mu}(-1+i(\alpha-\alpha_{t}^{\text{opt}}))=i{E_{t0} }^{*}{}^{2}E_{t\mu}E_{t(-\mu)}\\ & I_{t\mu}=I_{t(-\mu)}.\end{split} \tag{20}\]
The conclusions above can also be applied to the reflected direction:
\[\begin{split}& I_{r0}=\sqrt{1+(\alpha+\frac{D_{\mu}+D_{-\mu}}{2}+ \frac{I_{r\mu}+I_{r(-\mu)}}{2}-2I)^{2}}\\ &\alpha_{r}^{\text{opt}}=-\frac{D_{\mu}+D_{-\mu}}{2}-\frac{I_{r \mu}+I_{r(-\mu)}}{2}+2I\\ & I_{r\mu}(-1+i(\alpha-\alpha_{r}^{\text{opt}}))=i{E_{r0}}^{*}{} ^{2}E_{r\mu}E_{r(-\mu)}\\ & I_{r\mu}=I_{r(-\mu)}.\end{split} \tag{21}\]
On the other hand, when \(\xi\gg 1\), the large mode split approximation yields \(I_{t0}\approx I_{r0}\). Since the only difference between equation (18) and the first equality in equation (21) is the dependence on \(I_{t\mu}\) and \(I_{r\mu}\), this suggests that OPO can be generated either in only one direction or in both directions with \(I_{t\mu}\approx I_{r\mu}\). In our experiment, we find that the ratio between the converted powers in the two directions varies substantially while the total converted power is relatively stable during the detuning sweep. We attribute differences between the analytic model predictions and the experiment to chip-facet reflections and to the assumption that only pump light is reflected.
Next, we prove that by sweeping \(\xi\), we can align the complex angle of \(E_{t0}\) with \(1-r\). Ignoring higher order OPO, equation (13) can be simplified to:
\[\begin{split} 0=&-(1+i(\alpha+I_{t0}-2I))E_{t0}-i \frac{\xi}{2}E_{r0}\\ &+2iE_{t\mu}E_{t(-\mu)}E_{t0}^{*}+F\\ 0=&-(1+i(\alpha+I_{r0}-2I))E_{r0}-i\frac{\xi}{2}E_{t0}\\ &+2iE_{r\mu}E_{r(-\mu)}E_{r0}^{*}+r(F-r_{\text{EF}}E_{t0}).\end{split} \tag{22}\]
Combining the above equations with equations (20) and (21):
\[\begin{split}\frac{F}{E_{r0}}=& i\frac{\xi}{2}+ \frac{E_{t0}}{E_{r0}}(1+\frac{2I_{t\mu}}{I_{t0}}\\ +& i(\alpha+I_{t0}-2I-\frac{2I_{t\mu}}{I_{t0}}( \alpha-\alpha_{t}^{\text{opt}})))\\ \frac{rF}{E_{t0}}=& rr_{\text{EF}}+i\frac{\xi}{2}+ \frac{E_{r0}}{E_{t0}}(1+\frac{2I_{r\mu}}{I_{r0}}\\ +& i(\alpha+I_{r0}-2I-\frac{2I_{r\mu}}{I_{r0}}( \alpha-\alpha_{r}^{\text{opt}}))).\end{split} \tag{23}\]
With the large mode split approximation, \(E_{t0}=-E_{r0}+O(1/\xi)\) and
\[\begin{split}&\frac{I_{r0}}{I_{t0}}=(1+O(1/\xi))\\ &\frac{E_{t0}}{E_{r0}}+\frac{E_{r0}}{E_{t0}}=-2+O(1/\xi^{2}). \end{split} \tag{24}\]
Summing the two equalities in equation (23):
\[\begin{split}\frac{F(1-r)}{E_{t0}}=& O(1/\xi)+2-rr_{ \text{EF}}+\frac{I_{c}}{I_{t0}}\\ +& 2i(\alpha-\frac{\xi}{2}+I_{t0}-2I\\ &-\frac{I_{r\mu}(\alpha-\alpha_{r}^{\text{opt}})+I_{t\mu}( \alpha-\alpha_{t}^{\text{opt}})}{I_{t0}}).\end{split} \tag{25}\]
By adjusting \(\xi\), we can eliminate the imaginary part in the bracket on the right-hand side of the equation above, and thus align \(E_{t0}\) with \(1-r\).
### Transmission trace fitting
As discussed in the main text, the pump reflector phase is measured by fitting the transmission trace. In the experiment, the signal \(V\) on the oscilloscope over the unnormalized detuning \(\Gamma\) can be measured directly as \(V(\Gamma)\). \(V\) is proportional to \(T\) and can be written as \(V=CT+B\), where \(B\) is the background noise. On the other hand, for low input power, the nonlinear terms in the modified LLE
can be ignored, and the steady-state equation for the pump mode can be written as:
\[\begin{split} 0=&-(1+i\alpha)E_{t0}+(F-i\frac{\xi}{2}E_{r0}) \\ 0=&-(1+i\alpha)E_{r0}+(r(F-r_{\rm EF}E_{t0})-i \frac{\xi}{2}E_{t0}).\end{split} \tag{26}\]
By solving the equation above, \(T\) can be expressed as:
\[\begin{split} T=&(1-R)|\frac{F-r_{\rm EF}E_{t0}}{F} |^{2}\\ =&(1-R)|\frac{(1+i\alpha)^{2}+(\xi/2)^{2}-r_{\rm EF} (1+i\alpha)}{(1+i\alpha)^{2}+(\xi/2)^{2}-ir_{\rm EF}r\xi/2}|^{2}.\end{split} \tag{27}\]
We assume that the devices on the same chip have approximately the same \(R\) and \(\kappa_{i}\), and that these can be measured in the experiment. Then, the unknown variables in the equation above can be expressed as:
\[\begin{split} r_{\rm EF}&=\frac{2K}{K+1}=2(1- \frac{\kappa_{i}}{2\pi\Delta\nu})\\ r&=Re^{i\Phi}\\ \alpha&=2\Gamma/\Delta\nu.\end{split} \tag{28}\]
Then, \(V\) can be rewritten as \(V=CT(\Gamma,\xi,\Delta\nu,\Phi)+B\). By nonlinear fitting of the measured \(V(\Gamma)\), we obtain the unknown parameters \(\xi\), \(\Delta\nu\), and \(\Phi\).
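The fit described above is a standard nonlinear least-squares problem. The following is a minimal sketch (not the authors' code) of how it could be set up with SciPy, assuming the measured sweep is available as arrays `Gamma` and `V_meas`, that \(R\) and \(\kappa_{i}\) have been measured separately, and that the numerical values and initial guesses shown are placeholders.

```python
# Sketch of the transmission-trace fit of equations (27)-(28): V = C*T + B,
# with fit parameters xi, dnu (Delta nu), Phi, plus the scale C and offset B.
import numpy as np
from scipy.optimize import curve_fit

R = 0.8                       # assumed reflector power reflectivity (measured separately)
kappa_i = 2 * np.pi * 50e6    # assumed intrinsic loss rate; placeholder value

def model(Gamma, xi, dnu, Phi, C, B):
    alpha = 2.0 * Gamma / dnu                        # eq. (28)
    r_EF = 2.0 * (1.0 - kappa_i / (2 * np.pi * dnu))
    r = R * np.exp(1j * Phi)
    num = (1 + 1j * alpha) ** 2 + (xi / 2) ** 2 - r_EF * (1 + 1j * alpha)
    den = (1 + 1j * alpha) ** 2 + (xi / 2) ** 2 - 1j * r_EF * r * xi / 2
    T = (1.0 - R) * np.abs(num / den) ** 2           # eq. (27)
    return C * T + B

# Example call, with placeholder initial guesses p0 = [xi, dnu, Phi, C, B]:
# popt, _ = curve_fit(model, Gamma, V_meas, p0=[10.0, 1e9, np.pi, 1.0, 0.0])
# xi_fit, dnu_fit, Phi_fit = popt[:3]
```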
### Experimental setup and output measurement
In our experiment, we use a circulator before the device to collect the reflected light, and the output spectra of both directions are measured by the same optical spectrum analyzer (OSA) by using an optical switch. Figure 5 (a) shows a diagram of our setup. Figure 5 (b) is a scanning electron microscope (SEM) image of our PhCR with bus reflector device. The teeth on the ring are the nanostructured side wall modulations, and the teeth on the bus waveguide are the pump reflector.
Figures 5(c) and (d) show the measured spectra in the transmitted (blue) and reflected (red) directions with the highest CE we achieved in the experiment (\((41\pm 4)\%\)). Here, the powers of the idler and signal in each direction are labeled by subscripts i and s, and the transmitted and reflected directions are labeled by subscripts t and r, respectively. The power in these spectra is the on-chip power, which is calculated from the off-chip spectra measured by the OSA. We calculate CE by
\[\rm{CE}=(P_{\rm{ti}}+P_{\rm{ts}}+P_{\rm{ri}}+P_{\rm{rs}})/P_{p}, \tag{29}\]
where \(P_{p}\) is the on-chip input pump power, which is calculated from the off-chip power measured by a power meter before the circulator.
Figure 5: (a) A diagram of our experimental setup. (b) A SEM image of a PhCR with pump reflector device. (c),(d). The output spectra in transmitted (blue) and reflected (red) directions with CE = \((41\pm 4)\%\). The power of idler and signal is labeled by subscripts i and s, and subscripts t and r denote the transmitted and reflected directions, respectively. |
2304.02715 | Learning Knowledge-Rich Sequential Model for Planar Homography
Estimation in Aerial Video | This paper presents an unsupervised approach that leverages raw aerial videos
to learn to estimate planar homographic transformation between consecutive
video frames. Previous learning-based estimators work on pairs of images to
estimate their planar homographic transformations but suffer from severe
over-fitting issues, especially when applying over aerial videos. To address
this concern, we develop a sequential estimator that directly processes a
sequence of video frames and estimates their pairwise planar homographic
transformations in batches. We also incorporate a set of spatial-temporal
knowledge to regularize the learning of such a sequence-to-sequence model. We
collect a set of challenging aerial videos and compare the proposed method to
the alternative algorithms. Empirical studies suggest that our sequential model
achieves significant improvement over alternative image-based methods and the
knowledge-rich regularization further boosts our system performance. Our codes
and dataset could be found at https://github.com/Paul-LiPu/DeepVideoHomography | Pu Li, Xiaobai Liu | 2023-04-05T19:28:58Z | http://arxiv.org/abs/2304.02715v1 | # Learning Knowledge-Rich Sequential Model for Planar Homography Estimation in Aerial Video
###### Abstract
This paper presents an unsupervised approach that leverages raw aerial videos to learn to estimate planar homographic transformation between consecutive video frames. Previous learning-based estimators work on pairs of images to estimate their planar homographic transformations but suffer from severe over-fitting issues, especially when applying over aerial videos. To address this concern, we develop a sequential estimator that directly processes a sequence of video frames and estimates their pairwise planar homographic transformations in batches. We also incorporate a set of spatial-temporal knowledge to regularize the learning of such a sequence-to-sequence model. We collect a set of challenging aerial videos and compare the proposed method to the alternative algorithms. Empirical studies suggest that our sequential model achieves significant improvement over alternative image-based methods and the knowledge-rich regularization further boosts our system performance. Our codes and dataset could be found at [https://github.com/Paul-LiPu/DeepVideoHomography](https://github.com/Paul-LiPu/DeepVideoHomography)
## I Introduction
Estimating planar homographic transformations between consecutive video frames is a fundamental image analysis task and an essential part of many multimedia applications, including video management, robot navigation, content-based video retrieval, intelligent drones, self-driving vehicles, etc. The estimated homography matrix, for example, can be used for image stitching [1], image completion [2], monocular SLAM [3], camera calibration [4], 3D scene reconstruction [5], and camera tracking [6]. The matrix can also be used to approximate camera poses in aerial videos, where downward-facing cameras are positioned far from the ground and objects on the ground can reasonably be assumed to lie on the same plane.
In the literature, classical estimators usually build feature-level [7, 8] or pixel-level [9, 10, 11] correspondences between camera views and employ perspective geometry to recover their planar transformations. Each feature or pixel is represented as a feature descriptor (e.g., histograms) and is matched across camera views. Robust matching methods, e.g., RANSAC [12], might be used to eliminate incorrect or noisy correspondences. The feature-based methods are robust to some extent because the detected feature points and feature descriptors are invariant to rotation and scaling, and the RANSAC algorithm can potentially prune outlier correspondences. However, such a stage-wise pipeline involves many hand-crafted parameters, including thresholds for detecting feature points, choices of feature descriptor design, and algorithms for matching feature descriptors. These hyper-parameters can be tricky to calibrate in real-world multimedia applications.
A recent research stream aims to take advantage of the representative power of deep neural networks to develop end-to-end learning-based estimators. Detone et al. [13] propose to regress planar homography parameters from a pair of input images. A multi-layer convolutional neural network (CNN) is trained using synthetic data which includes multiple pairs of images transformed with randomly generated homography matrices. Nguyen et al. [14] introduce an unsupervised method that does not need labels of homography matrices to train the deep networks, as reviewed in Section III-A. Such an unsupervised method has better adaptability and stronger performance than the supervised one [13] but may suffer from severe over-fitting issues.
To address the above concerns, in this work, we develop a knowledge-rich sequence-to-sequence model to regress homography parameters for aerial videos. Figure 1 shows a typical result of the proposed method, which takes a sequence of video frames as inputs and generates a stitched image by a sequence of estimated homography matrices between consecutive video frames. For each pair of input images, our model employs a deep neural network to extract the feature representation of video frame pairs, regresses homography parameters, and warps one input image into the other image. A Long Short-Term Memory (LSTM) model is also employed to directly incorporate the temporal dependencies while estimating the sequence of homography matrices. Our method does
Fig. 1: Planar homography estimation and image stitching for aerial videos. Left: a sequence of video frames; right: the scene image stitched by the estimated homographic transformations.
not require any annotations and can be trained from raw video frames using standard gradient-based methods.
We also introduce a set of prior knowledge to regularize the learning of the proposed sequential estimator. This knowledge explicitly imposes consistency constraints that should be satisfied while estimating the sequence of homography matrices from the input video sequence. One of the key observations is that for most aerial videos, the camera motion is smooth over time. The homographic transformations between consecutive frames are not arbitrarily different from each other and should be estimated in a coordinated fashion. This leads to a set of temporal knowledge. Moreover, we might extract multiple image regions of different resolutions from video frames and estimate the homographic transformations from these image regions. Similarly, the estimations across image regions or scales in the same video frames should be consistent with each other, which leads to a set of spatial and scale knowledge. Incorporating these different types of knowledge is capable of suppressing over-fitting issues while learning the deep model from raw videos.
## II Relationships to Previous Works
This work is closely related to three research streams in the areas of multimedia retrieval and video analysis.
**Geometry-based Homography Estimation** As aforementioned, classical homography estimators need to match features of interest or pixels across camera views and employ perspective geometry to recover the transformation parameters. The feature-based methods [1, 15, 16, 17] require fine-tuning many hand-crafted parameters for different scenes and may fail when there are few feature points. The pixel-based algorithms [18, 19] are usually of high complexity, which makes them unsuitable for time-critical video applications. For example, the classical ECC estimator [11] takes around 600 ms, while the RANSAC [12] with SIFT [7] method takes 45 ms to process a pair of input images of 320 by 180 pixels on our experiment computer. In this work, we present an end-to-end network that is robust to different situations and can be implemented on high-performance parallel computing platforms (e.g., graphics processing units) to boost system efficiency. It only takes around 15 ms for our network to process a pair of input images (experiment settings introduced later).
**Learning-based Homography Estimation** The recently developed network-based methods [13, 20, 21, 14] present a promising direction for estimating homographic transformations. To train these networks, it is common practice to generate a homography matrix and apply it to warp one image into another, which leads to a pair of training images with known homography parameters. These deep models can be effectively trained from the generated samples without a human in the loop. The major concern, however, is the known over-fitting issue: a model might overfit the synthetic samples so that it can hardly generalize to unseen realistic image pairs. In practice, the training scenarios are often very different from the testing scenarios, which makes the situation even worse. To address the above concerns, we present a sequence-to-sequence LSTM model to estimate a sequence of homography matrices in batches and employ a set of prior knowledge to regularize the learning of such a deep model. Our method can be effectively trained from raw video sequences rather than static images, while satisfying various temporal, spatial and scale knowledge.
**Sequence-to-Sequence Models** Recurrent neural networks can process sequential inputs and propagate internal states of certain inputs to their sequential neighbors. They have been successfully used to model different types of sequential data, including language [22] and skeleton-based action signals [23]. Among different RNN models, Long Short-Term Memory (LSTM) is a popular choice for its ability to capture long-term dependencies along with short-term memory on sequential data [24, 25, 26, 27], including video data [28]. For aerial videos, which mostly undergo smooth camera motions, there are strong correlations among the planar transformations on video frames in the same sequence. In this work, we extend the previous network-based estimators using LSTM techniques, which is the first piece of work of its kind.
The **Contributions** of this work are twofold. Firstly, we reformulate homography estimation for aerial videos as a sequence-to-sequence task and develop an LSTM network to estimate the sequence of homography parameters, which is the first of its kind in the literature. Secondly, we employ a set of spatial-scale-temporal knowledge to regularize the training of the LSTM model and empirically validate its superior performance over alternative methods on challenging aerial videos.
## III Our Approach
This section presents the proposed learning-based method for estimating homography parameters in aerial videos. In the rest of this section, we first review the previous deep homography method [14] in Section III-A, and then introduce our deep sequence-to-sequence estimator in Section III-B. In Section III-C, we present a set of prior knowledge and discuss how they can be leveraged to guide the learning of our sequential model.
### _Background: Deep Homography_
The unsupervised network-based method proposed by Nguyen et al. [14] takes a pair of images as inputs. It first crops a pair of image patches from the input images, and then employs a network to regress the transformation parameters between the two input images. The network includes three major components: (i) A network backbone is used to regress the offsets between the corner coordinates of the two corresponding image patches. This module outputs a 2 by 4 offset matrix \(H_{4pt}\). (ii) A Tensor Direct Linear Transform module [14] is introduced to calculate the corresponding 3 by 3 homography matrix \(\mathrm{H}\) from the offset matrix \(H_{4pt}\) and the corner coordinates of the image patches. (iii) A Spatial Transformation Layer is used to register one image onto the other using \(\mathrm{H}\). With the registered image pair, a pixel-wise photometric loss is calculated to
guide the training of such a deep model. The network is fully differentiable and can be trained using standard gradient-based methods. The detailed network architecture is shown in Table I.
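To make the direct linear transform step concrete, the following is a minimal NumPy sketch (not the authors' implementation) of how a \(3\times 3\) homography can be recovered from the four patch corners and the regressed offsets \(H_{4pt}\) via a standard SVD-based DLT; the batched, differentiable Tensor DLT layer in the network performs the analogous computation.

```python
# Sketch: recover H from four corner correspondences (corners, corners + offsets)
# by solving the standard 8x9 DLT system with an SVD.
import numpy as np

def dlt_homography(corners, offsets):
    """corners: (4, 2) patch corners in the first image.
       offsets: (4, 2) regressed corner displacements (rows of H_4pt)."""
    src = np.asarray(corners, dtype=float)
    dst = src + np.asarray(offsets, dtype=float)
    A = []
    for (u, v), (x, y) in zip(src, dst):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]        # normalize (assumes H[2, 2] is nonzero)
```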
This deep homography network [14] is trained on a collection of raw aerial images and clearly outperforms the previous supervised methods [13] on images with illumination noise. A significant feature of this method is that neither human annotations nor synthetic data [13] are needed for the training procedure. This unsupervised method is designed for dealing with pairs of static images. In this work, we extend it by introducing a deep sequence-to-sequence model to directly deal with aerial videos.
### _Deep Sequence-to-Sequence Estimator_
Our sequence-to-sequence network takes a sequence of video frames as inputs and aims to output a homography matrix between every pair of consecutive video frames. Figure 2 summarizes the proposed network. For every two consecutive video frames, we sample multiple image patches from each frame and employ a feature extractor to convert a pair of patches into a 1024-dim feature vector. We set the size of the input image patch to 128 by 128 pixels in this work. A one-hidden-layer LSTM module is used to recurrently integrate the features of a patch pair with those of its preceding video frames. A regression layer then maps the 1024-dim features from the LSTM layer into the corner offset matrices \(\mathbf{H}_{4pt}\). Similar to [14], a Tensor Direct Linear Transformation (DLT) module is employed to estimate the homography **H** and a Spatial Transform Layer is used to register the two input images together.
We employ a pixel-wise photometric loss function defined over the two registered input images [14]. Denote **I** as the input video sequence, \(I\) and \(J\) as two image patches sampled from two consecutive video frames. Let \(I(\mathbf{x})\) return the image intensity at the homogeneous coordinate \(\mathbf{x}=[u,v,1]^{T}\), and \(\mathbf{H}_{i,j}\) denote the estimated homography matrix between the image patches \(I\) and \(J\). The pixel-wise loss function is defined as:
\[L(\mathbf{I})=\sum_{I}\sum_{J}\sum_{\mathbf{x}\in I}\left\|I(\mathbf{x})-J( \mathbf{\hat{H}}_{i,j}\cdot\mathbf{x})\right\|_{1} \tag{1}\]
where \(\cdot\) indicates the matrix-vector multiplication between a homography matrix and a coordinate vector.
It is noteworthy that the above loss function is fully unsupervised, as it does not require annotations of homography matrices between video frames. The homography matrix is directly regressed from the input image pair and is used to warp one image into the other. In testing, however, we can output the estimated homography matrix for each pair of images. Our LSTM model explicitly incorporates the frame-to-frame correlations while estimating the sequence of homography matrices. This is crucial because, for example, two consecutive homography matrices over time are not arbitrarily different but are closely dependent on each other.
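For illustration, the following is a minimal NumPy sketch (not the paper's implementation) of the pixel-wise photometric loss in Eq. (1) for a single patch pair, assuming grayscale patches `I` and `J` of equal size and an estimated \(3\times 3\) homography `H`; nearest-neighbour sampling is used here for brevity, whereas the network relies on a differentiable spatial transformer.

```python
# Sketch: L1 photometric loss between I(x) and J(H . x), eq. (1), for one patch pair.
import numpy as np

def photometric_l1(I, J, H):
    h, w = I.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))            # pixel coordinates
    x = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])      # homogeneous coords [u, v, 1]^T
    y = H @ x
    y = y[:2] / y[2]                                          # perspective divide
    ju = np.rint(y[0]).astype(int)                            # nearest-neighbour sampling in J
    jv = np.rint(y[1]).astype(int)
    valid = (ju >= 0) & (ju < w) & (jv >= 0) & (jv < h)       # ignore pixels warped outside J
    return np.abs(I[v.ravel()[valid], u.ravel()[valid]].astype(float)
                  - J[jv[valid], ju[valid]].astype(float)).sum()
```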
### _Knowledge-rich Regularization_
We develop a set of regularization terms to enforce knowledge-compatible consistency among the sequence of homography matrices estimated by the proposed sequence-to-sequence model. The previous network-based methods [14] that work on image pairs have severe overfitting problems when applied to aerial videos. Similarly, we employ an
Fig. 2: Network architecture of the proposed sequence-to-sequence model. The network takes as inputs one reference image and one target image. **Feature extractor**: the same VGG-like convolutional layers as the implementation in [14], which converts two input patches to a one-dimensional vector of features. **LSTM**: LSTM layers that integrate the extracted 1D features over time. **Regressor**: a fully connected layer that aims to regress the offset matrix \(\mathbf{H}_{4pt}\), which includes 4-corner offsets between the two input patches. **Tensor DLT**: a differentiable layer [14] that calculates homography matrix from the patch corner coordinates and offset matrix. **Spatial Transform**: spatial transform layers that warp one input image into the other one using the estimated homography matrix. **Photometric loss**: \(\ell\)-1 loss between patches in reference frame and warped target frame.
LSTM model that bears a higher model complexity than the network in [14] and is more prone to overfitting issues. We therefore derive a set of knowledge in the temporal, spatial and scale spaces, respectively, and use it to regularize the loss function. These types of knowledge play important roles in preventing overfitting while training our model.
**Spatial regularization** Our method randomly samples multiple image patches from each video frame pair to form training samples. Each image patch is of 128 by 128 pixels. Multiple pairs of image patches could be generated from a single pair of video frames. This sampling strategy, like the data augmentation methods [29], can largely increase the size of training samples and boost system performance without access to extra aerial videos.
We introduce consistency constraints between the different patch pairs drawn from the same video frame pair. It is reasonable to expect that these patch pairs lead to exactly the same homography matrix. These consistency constraints are specified in the image space and are referred to as spatial knowledge in this work. Let \(a,b\) index the homography matrices estimated from two patch pairs drawn from the same video frame pair, and let \(H_{a,t,t+1}\) denote the a-th homography matrix estimated for frames t and t+1. We have the following regularization term to encode the spatial knowledge:
\[R_{p}(\textbf{I})=\sum_{t}\sum_{a\neq b}\|H_{a,t,t+1}-H_{b,t,t+1}\|_{1} \tag{2}\]
which calculates the absolute differences between the estimated homography matrices. The above \(\ell\)-1 term is empirically more robust than other norms (e.g., the \(\ell\)-2 norm), especially when dealing with outliers or noise.
**Scale regularization** We employ multiple patch resolutions while drawing image patches from video frames, which results in a multi-scale representation of the input aerial video. Image patches of different scales can provide rich representations of the input image, as described by image scaling theory [30]. This multi-scale information is critical for regressing homography parameters as well. In this work, we employ two scales while sampling image patches from the input videos. We first sample image patches of 128 by 128 pixels and divide each patch into four non-overlapping patches of 64 by 64 pixels. Once drawn, each image patch is resized to 128 by 128 pixels and fed into the deep LSTM model. Let \(m,n\) index two homography matrices such that the m-th matrix is estimated from a parent patch and the n-th matrix from one of the related child patches. Let \(H_{m,t,t+1}\) denote the m-th homography matrix. We have the following equation to encode the scale consistency:
\[R_{s}(\textbf{I})=\sum_{t}\sum_{<m,n>}\|H_{m,t,t+1}-H_{n,t,t+1}\|_{1} \tag{3}\]
which calculates the cross-scale differences between the estimated matrices. Such an \(\ell\)-1 regularization is imposed across scales, while Eq. (2) only involves image patches at the same scale.
**Temporal regularization** We employ two types of temporal knowledge to regularize the learning of the proposed LSTM model. The first type of temporal knowledge is derived from the fact that consecutive video frames tend to undergo similar transformations in aerial videos. Taking an aerial video sequence of 24 FPS for instance, the duration of three video frames is as short as 83 ms. Therefore, we introduce a regularization term to minimize the absolute differences between the estimated homography parameters for consecutive video frame pairs. Let \(k,l\) index the homography matrices estimated for consecutive video frames, and let \(H_{k,t,t+1}\) indicate the k-th matrix estimated for frames t and t+1. We have the regularization term
\[R_{t1}(\textbf{I})=\sum_{t}\sum_{<k,l>}\|H_{k,t,t+1}-H_{l,t+1,t+2}\|_{1} \tag{4}\]
Minimizing the above term encourages the predictions of homography parameters not to change dramatically over time and thus encodes the smoothness constraint on camera motion in aerial videos.
The second type of temporal knowledge is used to encode the internal consistency between the homographic transformations in a sequence. For any given pixel **x** at time \(t\), to obtain its projection \(\hat{\textbf{x}}\) on the frame at time \(s\), where \(s>t\), we can sequentially apply the homography matrices from time t to s. Let \(\textbf{H}_{[t,s]}=\textbf{H}_{s-1,s}\cdot\ldots\cdot\textbf{H}_{t+1,t+2}\cdot\textbf{H}_{t,t+1}\); we then have \(\hat{\textbf{x}}=\textbf{H}_{[t,s]}\cdot\textbf{x}\). The intensity values \(\textbf{I}_{t}(\textbf{x})\) and \(\textbf{I}_{s}(\hat{\textbf{x}})\) are expected to be similar if the sequence of homography estimates is accurate enough. This leads to another temporal consistency constraint. Let K denote the length of the sequence used for our LSTM model; we have the following equation to encode such a temporal constraint,
\[R_{t2}(\textbf{I})=\sum_{t}\sum_{s=t+2}^{t+K-1}\sum_{\textbf{x}\in I_{t}}\big{\|} I_{t}(\textbf{x})-I_{s}(\textbf{H}_{[t,s]}\cdot\textbf{x})\big{\|}_{1} \tag{5}\]
The above equation is defined over an episode of \(K\) video frames and is complementary to the cross-frame loss function in Eq. (1). Note that we choose to use an episode of \(K=16\) video frames in this work. Using longer episodes might capture stronger temporal constraints but also increases the computational complexity.
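As a small illustration of the cumulative transformation used in Eq. (5), the sketch below (with an assumed data layout, not the authors' code) composes per-frame homographies into \(\textbf{H}_{[t,s]}\).

```python
# Sketch: H_[t,s] = H_{s-1,s} @ ... @ H_{t+1,t+2} @ H_{t,t+1}, where H_seq[t]
# is the 3x3 homography estimated between frames t and t+1.
import numpy as np
from functools import reduce

def cumulative_homography(H_seq, t, s):
    return reduce(lambda acc, Hk: Hk @ acc, H_seq[t:s], np.eye(3))
```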
In summary, our LSTM aims to minimize the following loss function with knowledge-rich regularization terms.
\[\mathrm{Loss}=L(\textbf{I})+\lambda_{p}R_{p}(\textbf{I})+\lambda_{s}R_{s}(\textbf{ I})+\lambda_{t1}R_{t1}(\textbf{I})+\lambda_{t2}R_{t2}(\textbf{I}) \tag{6}\]
where \(\lambda_{p},\lambda_{s},\lambda_{t1},\lambda_{t2}\) are weighting constants. We set \(\lambda_{p},\lambda_{s},\lambda_{t1}\) to be \(1/(KN)\) where \(K=9\) is the number of elements in a homography matrix, and \(N\) is the number of samples involved in each associated regularization term. Similarly, we set \(\lambda_{t2}\) to be \(1/N\). Minimizing the various regularization terms is capable of narrowing down the feasible space in optimization and suppressing the effects of over-fitting, which is a major concern for training an unsupervised homography network.
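For concreteness, the regularization terms of Eqs. (2)-(4) amount to sums of element-wise \(\ell\)-1 distances between estimated homography matrices. The sketch below (in NumPy, with an assumed nested-list layout `H[t][a]` for the a-th homography estimated between frames t and t+1, and an assumed list `pairs_scale` of parent/child index pairs) is illustrative only; Eq. (5) additionally requires the frames themselves and is omitted.

```python
# Sketch of the knowledge-rich regularization terms in eqs. (2)-(4).
import numpy as np

def l1(Ha, Hb):
    return np.abs(Ha - Hb).sum()

def spatial_reg(H):                       # eq. (2): different patch pairs, same frame pair
    return sum(l1(H[t][a], H[t][b])
               for t in range(len(H))
               for a in range(len(H[t]))
               for b in range(a + 1, len(H[t])))

def scale_reg(H, pairs_scale):            # eq. (3): parent patch vs. child patch
    return sum(l1(H[t][m], H[t][n])
               for t in range(len(H))
               for (m, n) in pairs_scale)

def temporal_reg(H):                      # eq. (4): consecutive frame pairs
    return sum(l1(H[t][k], H[t + 1][l])
               for t in range(len(H) - 1)
               for k in range(len(H[t]))
               for l in range(len(H[t + 1])))
```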
## IV Experiment
**Datasets1** We collect a set of aerial videos for training and testing purposes. The training set includes 141 video clips, each lasting between 20 and 40 seconds. All videos share the same aspect ratio of 16:9, and the frames are resized to 320 by 180 pixels before training. We randomly sample image patches at two scales: 128 by 128 and 64 by 64 pixels. In addition to these training videos, we collect another 22 video clips of 1280 x 720 pixels for testing purposes.
Footnote 1: The dataset and source codes of this work are available under this link: [https://github.com/Paul-LiPu/DeepVideoHomography](https://github.com/Paul-LiPu/DeepVideoHomography)
**Evaluation Metrics** We evaluate the proposed methods in both qualitative and quantitative ways. To quantify the estimation results, we report the MACE (Mean Average Corner Error) [13, 19, 21, 20, 31]. For each testing video, we identify multiple landmark points (e.g., corners of buildings) and annotate their image locations every 30 frames. We denote
Fig. 3: Visualization of neuron activation for the two trained networks: **BASE** (the first and third rows) and **REG-ALL** (the second and fourth rows) over five input images (one per column). Rows 1-2: activation on Conv2 layer; Row 3-4: activation on Conv4 layer. Pixels with warmer colors indicate higher activation for estimating the target homography parameters.
**M** as the number of testing sequences, \(N_{i}\) as the total number of annotated frames for the i-th testing video, \(K_{t}\) as the number of annotated landmark points between the (t-1)-th and t-th annotated frames, and \(\hat{x_{j}}^{t}\) and \(x_{j}^{t}\) as the j-th predicted and ground-truth landmark coordinates, respectively. The MACE is calculated as:
\[MACE=\frac{1}{\sum_{i=1}^{M}(N_{i}-1)}\sum_{m=1}^{M}\sum_{t=2}^{N_{i}}(\frac{1} {K_{t}}\sum_{j=1}^{K_{t}}\|\hat{x_{j}}^{t}-x_{j}^{t}\|_{2}) \tag{7}\]
where \(\|\cdot\|_{2}\) is the \(\ell\)-2 norm of a vector. These coordinates are registered to the original video frame of 1280 by 720 pixels. The predicted landmark coordinates are obtained from the last annotated landmark locations and the sequential homographic transformations over the 30 frames.
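The metric itself is straightforward to compute. Below is a minimal NumPy sketch (with an assumed nested-list layout, not the authors' evaluation code), where `pred[i][t]` and `gt[i][t]` are arrays of shape \((K_{t},2)\) holding the predicted and ground-truth landmark coordinates of the t-th annotated frame of the i-th video.

```python
# Sketch of the MACE metric of eq. (7): average, over all annotated frames after
# the first one, of the mean L2 landmark error.
import numpy as np

def mace(pred, gt):
    total_err, total_frames = 0.0, 0
    for p_vid, g_vid in zip(pred, gt):                # loop over testing videos
        for p_t, g_t in zip(p_vid[1:], g_vid[1:]):    # annotated frames t = 2 .. N_i
            err = np.linalg.norm(p_t - g_t, axis=1)   # per-landmark L2 error
            total_err += err.mean()                   # average over the K_t landmarks
            total_frames += 1
    return total_err / total_frames
```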
**Baseline methods** We compare the proposed sequence-to-sequence model to the original unsupervised method [14]. We apply the image-based method [14] over every pair of consecutive video frames separately and assemble all the estimated matrices together for evaluation purposes. We also compare to the traditional feature-based method that extracts ORB [8] features and applies RANSAC [12] to obtain homographic transformations. We apply the ORB-RANSAC method over every pair of consecutive video frames. We also evaluate the identity matrix as the estimated homography and include its results for comparison.
**Implementation Variants** We implement seven variants of the proposed method for analyzing the individual contributions of the proposed techniques. (1) BASE, the previous unsupervised method [14] with the loss function in Eq. (1); (2) REG-P, learning the baseline network BASE with the spatial regularization term Eq. (2); (3) REG-S, learning BASE with the scale regularization Eq. (3); (4) REG-T, learning BASE with the two temporal regularization terms Eq. (4) and Eq. (5); (5) REG-ALL, learning BASE with all regularization terms; (6) LSTM, the proposed LSTM model with the loss function in Eq. (1) not regularized by any knowledge; (7) LSTM-REG-ALL, the LSTM model that employs all the three types of knowledge.
We train the BASE network for 300k iterations with a mini-batch size of 64 (each sample contains a patch pair from a frame pair). The initial learning rate is \(0.001\), and the learning rate is multiplied by 0.1 every 100k iterations. The networks REG-T, REG-S, REG-P and REG-ALL are fine-tuned from the BASE model with a batch size of \(32\) for 90k iterations. The LSTM networks are fine-tuned for 90k iterations with a batch size of 8. The learning rate is initially \(0.001\) and decayed by a factor of 0.1 every 30k iterations for those experiments. All networks are implemented using PyTorch [32] and are initialized as introduced in [33]. The Adam optimizer [34] with default parameters is used for optimization.
**Efficiency** We implemented the proposed algorithms on a computer with an Intel Core i5-4690 CPU and an NVIDIA GTX 1080 Ti GPU. It takes 13.4 ms for LSTM-REG-ALL to process a video frame. The processing time includes image reading and preprocessing, and is averaged over all testing frames. The average run time of BASE, REG-T, REG-P, REG-S, and REG-ALL is about 13.2 ms per frame. Note that the proposed regularization terms affect neither network inference nor testing efficiency. The LSTM models, with additional LSTM layers, involve about 0.7% more computation than the other networks. Moreover, the BASE and LSTM models have FLOPs of 1.27G and 1.28G respectively, which are very similar to each other.
**Network Visualization**. Once the BASE and REG-ALL networks are trained, we first visualize the importance of each pixel for regressing the target homography matrix. For a testing image pair, we feed it to the network layer by layer, calculate its loss by Eq. (1), and back-propagate the loss to each layer. We then visualize the neuron activation weighted by gradients over every point in each feature map. More details about the visualization method can be found in the previous work [35]. Figure 3 visualizes feature maps from the Conv2 (top two rows) and Conv4 (bottom two rows) layers of the networks BASE (Rows 1 and 3) and REG-ALL (Rows 2 and 4), respectively. Five testing images are used for visualization purposes. From these comparisons, we can observe that enforcing the various knowledge significantly enhances the local distinctness of the feature maps at different layers. Taking the layer Conv2 and the first image for example, the highlighted pixels for the BASE model (row 1) spread over the whole image while those for the REG-ALL model (row 2) tend to appear in a few clustered regions. Similar results can be observed for the other examples.
**Loss Distribution** We examine how the proposed knowledge-rich regularization terms affect the photometric loss values over testing images. In this experiment, we randomly select 1000 pairs of image patches from a single pair of video frames, and apply the trained networks BASE, REG-T, REG-P, and REG-S over each image pair. Figure 4 plots the histogram of loss values for these four networks in (a), (b), (c) and (d), respectively. From the results, we can observe that the proposed spatial-scale-temporal knowledge can effectively
Fig. 4: Distributions of photometric loss values over a set of 1000 pairs of image patches sampled from one pair of video frames. (a) **BASE**; (b) **REG-T**; (c) **REG-P**; (d) **REG-S**.
reduce the expected loss value compared to the baseline method.
**Quantitative result**. Table II reports the numerical results of the proposed methods and the baseline methods. IDENTITY uses the identity matrix as the estimated homography matrix, while ORB2+RANSAC extracts ORB2 features and applies RANSAC to estimate the homography matrix. BASE, REG-T, REG-P, REG-S, REG-ALL, LSTM, and LSTM-REG-ALL predict the homography parameters with our trained deep models. We have two major observations. Firstly, the proposed spatial knowledge (REG-P), temporal knowledge (REG-T) and scale knowledge (REG-S) significantly boost the unsupervised method BASE with good margins. These comparisons clearly demonstrate the effectiveness of the proposed knowledge-compatible consistency constraints. Secondly, the proposed LSTM models achieve the best performance among all baseline methods and implementation variants of our method. Notably, the LSTM-REG-ALL model that employs spatial, temporal and scale knowledge outperforms the base LSTM model by a margin of 5.33 in terms of MACE.
**Qualitative result** Figure 5 plots three exemplar results of image stitching using the homographic transformations estimated by three models: BASE, REG-ALL and LSTM-REG-ALL. For each video sequence, we apply the three models to estimate the sequence of homography matrices, which are then used to register the video frames together. For the video in the first row, both REG-ALL and LSTM-REG-ALL generate decent results, whereas BASE fails to work. For the second video, although all three models work well, the LSTM model generates fewer distorted pixels on the left side of the stitched image. For the third video sequence, BASE and REG-ALL do not recover the geometry correctly, whereas the proposed method correctly reconstructs the scene structure. These comparisons clearly validate the effectiveness of the proposed sequence-to-sequence model and the various knowledge.
## V Conclusion
This paper presents a deep sequence-to-sequence method for estimating homographic transformations from raw aerial videos without human annotations. Our method significantly outperforms previous image-based homography estimators on challenging aerial video data. Our results highlight two insights: (i) The proposed sequence-to-sequence model is capable of leveraging sequential information for
Fig. 5: Image stitching results for three testing aerial videos by three methods: **BASE** (left column), **REG-ALL** (middle column) and **LSTM-REG-All** (right column).
video homography estimation; (ii) the various knowledge-compatible constraints can significantly improve system generalization on unseen video data. The proposed techniques have wide potential in other video tasks, including camera pose estimation, 3D scene reconstruction, etc.
The proposed method opens a new direction in the area of video registration. Our current method is, however, limited by the fact that it manually samples image patches from video frames and estimates their transformation parameters. This strategy could be largely improved by a network-based method that learns to select the image patches that best represent the target scenario, e.g., image regions with distinctive features, without object movements, and/or without repetitive patterns. We will continue to investigate this direction in the future.
|
2305.15140 | Polynomial-Time Pseudodeterministic Construction of Primes | A randomized algorithm for a search problem is *pseudodeterministic* if it
produces a fixed canonical solution to the search problem with high
probability. In their seminal work on the topic, Gat and Goldwasser posed as
their main open problem whether prime numbers can be pseudodeterministically
constructed in polynomial time.
We provide a positive solution to this question in the infinitely-often
regime. In more detail, we give an *unconditional* polynomial-time randomized
algorithm $B$ such that, for infinitely many values of $n$, $B(1^n)$ outputs a
canonical $n$-bit prime $p_n$ with high probability. More generally, we prove
that for every dense property $Q$ of strings that can be decided in polynomial
time, there is an infinitely-often pseudodeterministic polynomial-time
construction of strings satisfying $Q$. This improves upon a
subexponential-time construction of Oliveira and Santhanam.
Our construction uses several new ideas, including a novel bootstrapping
technique for pseudodeterministic constructions, and a quantitative
optimization of the uniform hardness-randomness framework of Chen and Tell,
using a variant of the Shaltiel--Umans generator. | Lijie Chen, Zhenjian Lu, Igor C. Oliveira, Hanlin Ren, Rahul Santhanam | 2023-05-24T13:35:57Z | http://arxiv.org/abs/2305.15140v1 | # Polynomial-Time Pseudodeterministic Construction of Primes
###### Abstract
A randomized algorithm for a search problem is _pseudodeterministic_ if it produces a fixed canonical solution to the search problem with high probability. In their seminal work on the topic, Gat and Goldwasser [1] posed as their main open problem whether prime numbers can be pseudodeterministically constructed in polynomial time.
We provide a positive solution to this question in the infinitely-often regime. In more detail, we give an _unconditional_ polynomial-time randomized algorithm \(B\) such that, for infinitely many values of \(n\), \(B(1^{n})\) outputs a canonical \(n\)-bit prime \(p_{n}\) with high probability. More generally, we prove that for every dense property \(Q\) of strings that can be decided in polynomial time, there is an infinitely-often pseudodeterministic polynomial-time construction of strings satisfying \(Q\). This improves upon a subexponential-time construction of Oliveira and Santhanam [1].
Our construction uses several new ideas, including a novel bootstrapping technique for pseudodeterministic constructions, and a quantitative optimization of the uniform hardness-randomness framework of Chen and Tell [1], using a variant of the Shaltiel-Umans generator [1].
###### Contents
* 1 Introduction
* 1.1 Our Results
* 1.2 Proof Ideas
* 1.3 Technical Overview
* 1.3.1 Infinitely-Often Pseudodeterministic Polynomial-Time Constructions
* 1.3.2 Improving the Chen-Tell Targeted Hitting Set Generator
* 1.3.3 Modified Shaltiel-Umans Generator with Uniform Learning Reconstruction
* 2 Preliminaries
* 2.1 Finite Fields
* 2.2 Bounded-Space Turing Machines
* 2.3 Circuits Generated by Bounded-Space Turing Machines
* 2.4 Pseudorandom Generators and Hitting Set Generators
* 3 Polynomial-Time Pseudodeterministic Constructions for Dense Properties
* 4 Modified Shaltiel-Umans Generator with Uniform Learning Reconstruction
* 4.1 Technical Tools
* 4.1.1 Error-Correcting Codes
* 4.1.2 Generator Matrices
* 4.1.3 Random Self-Reducibility for Discrete Logarithm
* 4.1.4 Pseudorandom Generators from One-Way Permutations
* 4.1.5 Self-Correction for Polynomials
* 4.2 The Shaltiel-Umans Generator
* 4.3 Modified Shaltiel-Umans Generator: Proof of Theorem 4.1
* 5 Improved Chen-Tell Targeted Hitting Set Generator
* 5.1 Layered-Polynomial Representation
* 5.1.1 Construction of a Highly Uniform Circuit \(D\)
* 5.1.2 Arithmetization of \(D\)
* 5.1.3 Complexity of the Polynomials
* 5.2 Improved Chen-Tell Generator: Proof of Theorem 3.1
Introduction
How hard is it to construct an \(n\)-bit prime1? This is a fundamental problem in number theory and in complexity theory. Under reasonable assumptions, the problem is solvable in deterministic polynomial time. In more detail, Cramer's conjecture [14] in number theory asserts that the largest prime gap in any consecutive sequence of \(n\)-bit numbers is \(O(n^{2})\). Assuming this conjecture, we can solve the prime construction problem efficiently by testing the first \(O(n^{2})\) integers greater than \(2^{n-1}\) for primality and outputting the first one, where the primality tests are done efficiently using the algorithm of Agrawal, Kayal and Saxena [1]. An independent source of evidence for the efficiency of prime construction is the complexity-theoretic conjecture that \(\mathsf{DTIME}(2^{O(n)})\) requires Boolean circuits of exponential size on almost all input lengths. Under this conjecture, we can use the Impagliazzo-Wigderson pseudorandom generator [15] to _derandomize_ the simple randomized algorithm that outputs a random \(n\)-bit number, using the facts that primality testing is in polynomial time and that an \(\Omega(1/n)\) fraction of \(n\)-bit numbers are prime.
Footnote 1: Recall that a positive integer \(q\) is an \(n\)-bit prime if \(q\) is a prime number and \(2^{n-1}\leq q\leq 2^{n}-1\).
However, we seem very far from either settling Cramer's conjecture or proving strong complexity lower bounds. The best upper bound we can prove on the gap between consecutive \(n\)-bit primes is \(2^{(0.525+o(1))n}\)[1], and no super-linear circuit lower bounds are known for \(\mathsf{DTIME}(2^{O(n)})\)[13]. Indeed, the best unconditional result we have so far is that deterministic prime construction can be done in time \(2^{(0.5+o(1))n}\)[12], which is very far from the polynomial-time bound we seek. The Polymath 4 project (see [10]) sought to improve this upper bound using number-theoretic techniques but did not achieve an unconditional improvement.
In contrast to the situation with deterministic prime construction, it is easy to generate an \(n\)-bit prime _randomly_, as mentioned above: simply generate a random \(n\)-bit number, test it for primality in polynomial time, and output it if it is a prime. This algorithm has success probability \(\Omega(1/n)\) by the Prime Number Theorem, and the success probability can be amplified to be exponentially close to \(1\) by repeating the process \(\operatorname{poly}(n)\) times independently, and outputting the first of these \(\operatorname{poly}(n)\) numbers that is verified to be prime, assuming that there is at least one.
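For concreteness, the following is a minimal Python sketch of the simple randomized construction described above (it is, of course, not pseudodeterministic: different runs typically output different primes). A Miller-Rabin test with many rounds stands in for a polynomial-time primality test here, purely for brevity.

```python
# Sketch: sample random n-bit integers and output the first one that passes a
# primality test; each trial succeeds with probability Omega(1/n), so poly(n)
# independent trials succeed except with exponentially small probability.
import random

def is_probable_prime(m, rounds=40):
    if m < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):                 # standard Miller-Rabin rounds
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False                    # a witnesses that m is composite
    return True

def random_n_bit_prime(n):
    for _ in range(n * n):                  # poly(n) independent trials
        q = random.getrandbits(n - 1) | (1 << (n - 1))   # force an n-bit number
        if is_probable_prime(q):
            return q
    return None    # fails only with probability exponentially small in n
```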
Gat and Goldwasser [11] asked whether it is possible to generate primes efficiently by a randomized process, such that the output is essentially _independent_ of the randomness of the algorithm. In other words, is there a polynomial-time randomized algorithm, which on input \(1^{n}\), constructs a _canonical_ prime of length \(n\) with high probability? They call such an algorithm a _pseudodeterministic_ algorithm, since the output of the algorithm is (almost) deterministic even though the algorithm might use random bits in its operation. Note that the randomized algorithm for prime generation we described in the previous paragraph is very far from being pseudodeterministic, as different runs of the algorithm are unlikely to produce the same prime. It is easy to see that a pseudodeterministic construction serves as an intermediate notion between a randomized construction (which is trivial for primes) and a deterministic construction (where little progress has been made so far).
[11] initiate a general theory of pseudodeterminism for search problems, motivated by applications in cryptography and distributed computing. Since then, there have been a number of papers on pseudodeterminism in various contexts, such as query complexity [11, 12, 13], streaming algorithms [11, 12], parallel computation [11, 12], learning algorithms [10], Kolmogorov complexity [13, 14], space-bounded computation [10], proof systems [11, 12], number theory and computational algebra [11, 12], approximation algorithms [10], and many other settings. However, the main question posed by Gat and Goldwasser has
remained open: Is there a pseudodeterministic polynomial-time algorithm for prime construction? They describe this problem as "the most intriguing" and "perhaps the most compelling challenge for finding a unique output".
Unlike in the case of deterministic construction, number-theoretic techniques have so far not proven useful for the pseudodeterministic construction problem for primes. Using complexity-theoretic techniques, Oliveira and Santhanam [14] (see also [13]) showed that for any \(\varepsilon>0\), there is an algorithm that runs in time \(2^{n^{\varepsilon}}\) and succeeds on infinitely many input lengths.
### Our Results
In this paper, we design a significantly faster algorithm and provide an affirmative answer to the question posed by Gat and Goldwasser in the infinitely-often regime. Our main result can be stated in full generality as follows.
**Theorem 1.1** (Infinitely-Often Polynomial-Time Pseudodeterministic Constructions).: _Let \(Q\subseteq\{0,1\}^{*}\) be a language with the following properties:_
**(Density.)**: _there is a constant_ \(\rho\geq 1\) _such that for every_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(Q_{n}\triangleq Q\cap\{0,1\}^{n}\) _satisfies_ \(|Q_{n}|\geq 2^{n}\cdot n^{-\rho}\)_; and_
**(Easiness.)**: _there is a deterministic polynomial-time algorithm_ \(A_{Q}\) _that decides whether an input_ \(x\in\{0,1\}^{*}\) _belongs to_ \(Q\)_._
_Then there exist a probabilistic polynomial-time algorithm \(B\) and a sequence \(\{x_{n}\}_{n\in\mathbb{N}_{\geq 1}}\) of \(n\)-bit strings in \(Q\) such that the following conditions hold:_
1. _On every input length_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(\Pr_{B}[B(1^{n})\notin\{x_{n},\perp\}]\leq 2^{-n}\)_._
2. _On infinitely many input lengths_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(\Pr_{B}[B(1^{n})=x_{n}]\geq 1-2^{-n}\)_._
Interestingly, our construction is non-black-box, in the sense that changing the _code_ of the algorithm \(A_{Q}\) deciding property \(Q\) affects the canonical output of the corresponding algorithm \(B\). We will revisit this point when we discuss our techniques (see the remark at the end of Section1.3.2).
Letting \(Q\) be the set of prime numbers and noticing that \(Q\) is both dense (by the Prime Number Theorem) and easy (by the AKS primality test [1]), we immediately obtain the following corollary of Theorem1.1.
**Corollary 1.2** (Infinitely-Often Polynomial-Time Pseudodeterministic Construction of Primes).: _There is a randomized polynomial-time algorithm \(B\) such that, for infinitely many values of \(n\), \(B(1^{n})\) outputs a canonical \(n\)-bit prime \(p_{n}\) with high probability._
Corollary1.2 improves upon the subexponential-time infinitely-often pseudodeterministic construction of primes from [14] mentioned above. Note that the result for prime construction is a corollary of a far more general result about properties that are dense and easy. This is evidence of the surprising power of complexity theory when applied to a problem which seems to be about number theory (but where number-theoretic techniques have not so far been effective). The famous efficient primality testing algorithm of [1] similarly applied complexity-theoretic derandomization ideas to solve a longstanding open problem in computational number theory, though their argument does require more information about primes.
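To make the two hypotheses of Theorem 1.1 concrete for \(Q\) equal to the set of primes, the following Python sketch implements a deterministic membership test and an empirical density check. It is purely illustrative and not part of the construction: trial division stands in for the AKS test (it is deterministic, but unlike AKS it is not polynomial-time in the bit length \(n\)), and all function names are ours.

```python
# A minimal illustrative sketch for Q = primes (not part of the construction).
# Trial division stands in for the AKS test A_Q: deterministic, but unlike AKS
# it does not run in time polynomial in the bit length n.
from math import isqrt

def A_Q(x: str) -> bool:
    """Decide whether the n-bit string x (read as an integer) is prime."""
    v = int(x, 2)
    return v > 1 and all(v % d for d in range(2, isqrt(v) + 1))

def density(n: int) -> float:
    """Fraction of n-bit strings lying in Q_n; Theorem 1.1 asks for at least n^(-rho)."""
    return sum(A_Q(format(v, f"0{n}b")) for v in range(2 ** n)) / 2 ** n

if __name__ == "__main__":
    for n in range(4, 14):
        d = density(n)
        print(n, round(d, 3), ">= 1/n:", d >= 1 / n)
```

For small \(n\), the printed densities track the \(\approx 1/(n\ln 2)\) rate given by the Prime Number Theorem, so the density hypothesis holds with \(\rho=1\).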
For a string \(w\in\{0,1\}^{*}\) and \(t\colon\mathbb{N}\to\mathbb{N}\), we let \(\mathsf{rK}^{t}(w)\) denote the length of the smallest randomized program that runs for at most \(t(|w|)\) steps and outputs \(w\) with probability at least \(2/3\).
(We refer to [10] for a formal definition and for an introduction to probabilistic notions of time-bounded Kolmogorov complexity.) By encoding the (constant-size) randomized polynomial-time algorithm \(B\) and each good input length \(n\) using \(O(1)+\log n\) bits in total, the following result holds.
**Corollary 1.3** (Infinitely Many Primes with Efficient Succinct Descriptions).: _There is a constant \(c\geq 1\) such that, for \(t(n)=n^{c}\), the following holds. For every \(m\geq 1\), there is \(n>m\) and an \(n\)-bit prime \(p_{n}\) such that \(\mathsf{rK}^{t}(p_{n})\leq\log(n)+O(1)\)._
In other words, there are infinitely many primes that admit very short efficient descriptions. The bound in Corollary 1.3 improves upon the sub-polynomial bound on \(\mathsf{rK}^{\mathsf{poly}}(p_{n})\) from [11].
In the next subsection, we describe at a high level the ideas in the proof of Theorem1.1, and how they relate to previous work.
### Proof Ideas
The proof of Theorem1.1 relies on _uniform hardness-randomness tradeoffs_[20, 21]. For concreteness, assume that \(Q=\{Q_{n}\}_{n\in\mathbb{N}_{\geq 1}}\), with each \(Q_{n}\subseteq\{0,1\}^{n}\) consisting of the set of \(n\)-bit prime numbers. Let \(A_{Q}\) be a deterministic polynomial-time algorithm that decides \(Q\) (_e.g._, \(A_{Q}\) is the AKS primality test algorithm [1]). Before we present our algorithm and the main ideas underlying our result, it is instructive to discuss the approach of [10], which provides a subexponential-time pseudodeterministic construction that succeeds on infinitely many input lengths.
Subexponential-time constructions [10]. We first recall how uniform hardness-randomness tradeoffs work. Given a presumed hard language \(L\), a uniform hardness-randomness tradeoff for \(L\) states that either \(L\) is easy for probabilistic polynomial-time algorithms, or else we can build a _pseudorandom set_ \(G_{n}\subseteq\{0,1\}^{n}\), computable in subexponential time (and thus also of subexponential size), which fools probabilistic polynomial-time algorithms on inputs of length \(n\) (for infinitely many \(n\)). In particular, Trevisan and Vadhan [21] give a uniform hardness-randomness tradeoff for a \(\mathsf{PSPACE}\)-complete language \(L_{\mathsf{TV}}\) they construct, which has certain special properties tailored to uniform hardness-randomness tradeoffs.2
Footnote 2: For the pseudorandomness experts, these special properties are _downward self-reducibility_ and _random self-reducibility_.
The subexponential-time construction in [10] uses a _win-win_ argument to derive an _unconditional_ pseudodeterministic algorithm from the uniform hardness-randomness tradeoff of [21]. There are two cases: either \(L_{\mathsf{TV}}\in\mathsf{BPP}\), or it is not. If the former is the case, then \(\mathsf{PSPACE}\subseteq\mathsf{BPP}\) by the \(\mathsf{PSPACE}\)-completeness of \(L_{\mathsf{TV}}\). Now, since we can in _polynomial space_ test all \(n\)-bit numbers using \(A_{Q}\) until we find the lexicographically first prime number, we can also do it in _randomized polynomial time_, _i.e._, there is a randomized algorithm \(B(1^{n})\) that runs in polynomial time and outputs the lexicographically first \(n\)-bit prime with high probability. Thus, in this case, the lexicographically first \(n\)-bit prime is the "canonical" output of the pseudodeterministic algorithm, and the algorithm works on _every_ input length \(n\).
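Before turning to the second case, the following toy sketch makes the canonical output of this first case explicit: the lexicographically first \(n\)-bit prime, found by exhaustive search (here \(n\)-bit strings are read as integers, with leading zeros allowed). The search runs in exponential time and is shown only to pin down the canonical value; `A_Q` is again a trial-division stand-in for a real deterministic primality test.

```python
# Toy sketch of the canonical output: the lexicographically first n-bit prime.
# Exponential-time exhaustive search, shown only to make the canonical value concrete.
from math import isqrt

def A_Q(x: str) -> bool:                       # trial-division stand-in
    v = int(x, 2)
    return v > 1 and all(v % d for d in range(2, isqrt(v) + 1))

def first_prime(n: int) -> str:
    for v in range(2 ** n):                    # lexicographic order over {0,1}^n
        x = format(v, f"0{n}b")
        if A_Q(x):
            return x
    raise ValueError("no n-bit prime found")

print(first_prime(8))                          # "00000010", i.e. the prime 2 padded to 8 bits
```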
Suppose, on the other hand, that \(L_{\mathsf{TV}}\not\in\mathsf{BPP}\). Using the uniform hardness-randomness tradeoff of [21], we have that for each \(\varepsilon>0\), there is a pseudorandom set \(G=\{G_{n}\}\), where each \(G_{n}\subseteq\{0,1\}^{n}\) is of size at most \(2^{n^{\varepsilon}}\), such that for infinitely many \(n\), \(G_{n}\) fools the algorithm \(A_{Q}\) on inputs of length \(n\). Since \(A_{Q}\) accepts an \(\Omega(1/n)\) fraction of strings of length \(n\) by the Prime Number Theorem, we have that the fraction of strings in \(G_{n}\) that are prime is \(\Omega(1/n)\) (by choosing the error parameter of the uniform hardness-randomness tradeoff to be small enough). In particular, there
must exist an element of \(G_{n}\) that is prime. Since \(G_{n}\) is computable in subexponential time, we can define a subexponential time _deterministic_ algorithm that enumerates elements of \(G_{n}\) and tests each one for primality until it finds and outputs one that is prime. This algorithm is deterministic but it runs in subexponential time, and is only guaranteed to be correct for infinitely many \(n\).
Thus, in either case, we have a pseudodeterministic algorithm for constructing primes that runs in subexponential time and works infinitely often. Note that we do not know a priori which of the two cases above holds, and therefore the argument is somewhat non-constructive. By exploiting further properties of the uniform hardness-randomness tradeoff, [1] manage to give an explicit construction algorithm that runs in subexponential time infinitely often.
Win-win arguments.The above argument gives a subexponential-time construction, but the win-win structure of the argument seems incapable of giving an optimal polynomial-time construction. Indeed, this is the case for many win-win arguments used in complexity theory:
* A win-win argument based on the Karp-Lipton theorem [13] gives that \(\Sigma_{2}\mathsf{EXP}\) requires super-polynomial size Boolean circuits [12], but seems incapable of giving truly exponential (\(2^{\Omega(n)}\)) Boolean circuit lower bounds.
* A win-win argument based on uniform hardness-randomness tradeoffs gives that either \(\mathsf{E}\subseteq\mathsf{BPP}\) or \(\mathsf{BPP}\) can be simulated infinitely often in deterministic subexponential time on average [14], but it remains unknown if such a tradeoff holds at the "high end", _i.e._, whether it is the case that either \(\mathsf{E}\) is in probabilistic subexponential-time or else \(\mathsf{BPP}\) can be simulated infinitely often in deterministic polynomial time on average.
* A win-win argument based on the Easy Witness Lemma gives that if \(\mathsf{NEXP}\subseteq\mathsf{SIZE}(\text{poly})\), then \(\mathsf{NEXP}=\mathsf{MA}\)[15], but it is unknown if any interesting uniform collapse follows from the simulation of \(\mathsf{NEXP}\) by subexponential-size Boolean circuits.
In each of these cases, the win-win argument seems to have inherent limitations that prevent us from getting optimal lower bounds or tradeoffs. Indeed, a paper by Miltersen, Vinodchandran and Watanabe [13] studies the "fractional exponential" lower bounds that seem to be the best provable using win-win arguments in the context of Boolean circuit lower bounds for exponential-time classes.3
Footnote 3: For example, a function \(f:\mathbb{N}\to\mathbb{N}\) is _sub-half-exponential_ if \(f(f(n)^{c})^{c}\leq O(2^{n})\) for every constant \(c\). (The exact definition of sub-half-exponential functions may be different in different papers.) Functions such as \(n^{k}\) and \(2^{\log^{k}n}\) are sub-half-exponential, while \(2^{\varepsilon n}\) and \(2^{n^{\varepsilon}}\) are not. It is known that \(\Sigma_{2}\mathsf{EXP}\) cannot be computed by \(f(n)\)-size circuits for every sub-half-exponential \(f\), but it remains open to show that \(\Sigma_{2}\mathsf{EXP}\) requires circuit complexity \(2^{n^{\varepsilon}}\) for any constant \(\varepsilon>0\).
Thus, in order to obtain a polynomial-time pseudodeterministic algorithm for primality, it seems that we need to go beyond win-win arguments. One natural idea is to apply uniform hardness-randomness tradeoffs _recursively_. However, this seems hard to do with the uniform hardness-randomness tradeoff of [13]. Their tradeoff applies only to the special language \(L_{\mathsf{TV}}\). If we argue based on the hardness or other properties of \(L_{\mathsf{TV}}\), then in the case where \(L_{\mathsf{TV}}\in\mathsf{BPP}\), we get a pseudodeterministic polynomial-time algorithm for constructing primes, but in the case where \(L_{\mathsf{TV}}\not\in\mathsf{BPP}\), we get a subexponential-time constructible pseudorandom set, and it is unclear how to apply the uniform hardness-randomness tradeoff to the algorithm for constructing this set.
Recursive application of uniform hardness-randomness tradeoffs.One of our main ideas is to exploit very recent work on uniform hardness-randomness tradeoffs [10] which applies
to _generic_ computations, as long as they satisfy certain mild properties. These tradeoffs yield _hitting sets_ rather than pseudorandom sets based on hardness -- a hitting set \(\mathsf{H}\subseteq\{0,1\}^{M}\) is a set that has non-empty intersection with every \(Q_{M}\subseteq\{0,1\}^{M}\) that is dense (_i.e._, contains at least a \(1/\mathrm{poly}(M)\) fraction of strings) and is efficiently decidable. It turns out that for our application to pseudodeterministic algorithms, uniform hardness-randomness tradeoffs that yield hitting sets are sufficient.
Specifically, Chen and Tell [21] show that for any multi-output function \(f\colon\{1^{n}\}\to\{0,1\}^{n}\) computed by uniform Boolean circuits of size \(T=T(n)\) and depth \(d=d(n)\), either there is a hitting set \(\mathsf{H}\subseteq\{0,1\}^{M}\) computable in time \(\mathrm{poly}(T)\), or \(f(1^{n})\) can be computed with high probability in time \((d+n)\cdot\mathrm{poly}(M)\) (which could be much less than \(T\)). Note that this tradeoff is applicable to _any_ multi-output function \(f\) given bounds on its uniform circuit complexity.
Our key idea is that this more generic uniform hardness-randomness tradeoff can be applied _recursively_. Indeed, we apply it to multi-output functions which capture the very task we are trying to solve, _i.e._, constructing a prime! In our base case, we use the function \(f\) which does a brute-force search over \(n\)-bit numbers and outputs the lexicographically first one which is prime. This function can be computed by uniform Boolean circuits of size \(2^{O(n)}\) and depth \(\mathrm{poly}(n)\), and hence we can apply the Chen-Tell tradeoff to it. We set \(M=n^{\beta}\) for some large enough constant \(\beta>1\) in the tradeoff. If we have that \(f(1^{n})\) is computable with high probability in time \((d+n)\cdot\mathrm{poly}(M)\), then we are done, since this gives us a pseudodeterministic algorithm for primes at length \(n\). If not, we have that there is a hitting set \(\mathsf{H}\subseteq\{0,1\}^{n^{\beta}}\) computable in time \(2^{O(n)}\). In particular, by iterating over the elements of \(\mathsf{H}\) and outputting the first one that is prime, we gain over the naive brute-force search algorithm, since we are now outputting a prime of length \(n^{\beta}\) in time \(2^{O(n)}\). Now _this_ new algorithm can be captured by a multi-output function with output length \(n^{\beta}\) to which we apply the Chen-Tell tradeoff again. In each recursive step, we either obtain a pseudodeterministic polynomial-time construction of primes, or we obtain a significantly faster deterministic construction of primes (of a larger input length). Intuitively, analyzing this process after \(O(\log n)\) steps of recursion, we can hope to show that at least one of the steps leads to a polynomial-time pseudodeterministic algorithm at the input length considered at that step.
This doesn't quite work as stated because the Chen-Tell tradeoff uses the Nisan-Wigderson generator [14], which is not known to have optimal parameters for all levels of hardness.4 Our recursive process explores essentially all possible levels of hardness for the uniform hardness-randomness tradeoff, since each recursive step corresponds to a different level of hardness. Using the original Chen-Tell tradeoff gives a _quasi-polynomial-time_ pseudodeterministic construction, but in order to get a polynomial-time pseudodeterministic construction, we need to work harder.
Footnote 4: Informally speaking, given a “hard truth table” of length \(T\), we want to construct a hitting set \(\mathsf{H}\subseteq\{0,1\}^{M}\) in \(\mathrm{poly}(T)\) time; however, the Nisan–Wigderson generator requires \(2^{\Theta(\log^{2}T/\log M)}\) time to construct.
Another crucial idea for us is to optimize the Chen-Tell tradeoff by using the Shaltiel-Umans generator [14] rather than the Nisan-Wigderson generator. This idea comes with its own implementation challenges, since the Shaltiel-Umans generator is not known to have a crucial learnability property that is required for the uniform hardness-randomness tradeoff. We sidestep this issue using a further win-win analysis, together with some other tricks; see Section 1.3.3 for details. This enables us to achieve an optimal polynomial-time pseudodeterministic construction on infinitely many input lengths, and thereby establish Theorem 1.1. We note that the subexponential-time construction of [14] also only works for infinitely many input lengths, and it is still open even to get a subexponential-time construction that works on all input lengths.
The intuitive description here does not address several subtleties that arise in the proof, such as maintaining the right uniformity and depth conditions when recursively applying the uniform hardness-randomness tradeoff. We refer to Section1.3 for a more detailed discussion of such matters.
### Technical Overview
As explained above, we consider a chain of \(t=O(\log n)\) recursively defined (candidate) HSGs \(\mathsf{H}_{0},\mathsf{H}_{1},\ldots,\mathsf{H}_{t}\) operating over different input lengths. These HSGs are obtained from the recent construction of Chen and Tell [14], which we informally describe next. Recall that we use \(Q_{M}\) to denote the easy and dense property over inputs of length \(M\).
The Chen-Tell [14] targeted HSG ("ideal version").Let \(c\geq 1\) be a large enough constant, and let \(f\colon\{1^{n}\}\to\{0,1\}^{n}\) be a family of unary functions computed by (uniform) Boolean circuits of size \(T=T(n)\) and depth \(d=d(n)\). Then, for every \(\log T\leq M\leq T\) there is a set \(\mathsf{H}\subseteq\{0,1\}^{M}\) computable in
\[\text{time }\widetilde{T}\triangleq T^{c}\ \text{ and }\ \text{depth }\widetilde{d} \triangleq d\cdot\log(T)+M^{c}\]
such that, if \(Q_{M}\subseteq\{0,1\}^{M}\)_avoids_\(\mathsf{H}\), (_i.e._, \(Q_{M}\) is dense but \(Q_{M}\cap\mathsf{H}=\varnothing\)), then we can compute \(f(1^{n})\) with high probability in time \((d+n)\cdot M^{c}\).
In other words, if \(f\) admits _low-depth_ circuits, we can construct a candidate HSG \(\mathsf{H}\) over length-\(M\) inputs such that breaking the generator \(\mathsf{H}\) allows us to compute \(f(1^{n})\) in time \(\operatorname{poly}(n,d,M)\). For \(d,M\ll T\), this can be much faster than the original time \(T\) required to compute \(f\).
The statement above differs from the results in [14] (stated for unary functions) in two important ways. First, the claimed upper bound on \(\widetilde{T}\) (the running time of the HSG) is not obtained by [14] for all choices of \(M\). Secondly, we have not formally specified the _uniformity_ of the family of circuits computing \(f\). While these are crucial points in [14] and when proving our result, for simplicity we will assume for now that this upper bound can be achieved and omit the discussion on uniformity.
Bootstrapping the win-win argument.We now review the idea discussed in Section1.2, using notations that will be more convenient for the remainder of this technical overview. Fix an arbitrary \(n\in\mathbb{N}_{\geq 1}\), and consider the corresponding property \(Q_{n}\subseteq\{0,1\}^{n}\) decided by \(A_{Q}(x)\) on inputs of length \(n\). Our initial \(\mathsf{H}_{0}\) is trivial and set to \(\{0,1\}^{n}\). (Intuitively, this corresponds to the first case of the [13] argument sketched above where \(L_{\mathsf{TV}}\in\mathsf{BPP}\).) Consider now a "brute-force" algorithm \(\mathsf{BF}(1^{n})\) that computes the first \(x\in\mathsf{H}_{0}\) such that \(A_{Q}(x)=1\). We let \(f(1^{n})\triangleq\mathsf{BF}(1^{n})\) in the Chen-Tell HSG. Note that \(f(1^{n})\) can be uniformly computed in time \(T=2^{O(n)}\) and depth \(d=\operatorname{poly}(n)\), since \(A_{Q}(x)\) runs in polynomial time and all elements of \(\mathsf{H}_{0}\) can be tested in parallel. We set \(M(n)\triangleq n^{\beta}\), where \(\beta>1\) is a large enough constant. Let \(\mathsf{H}_{1}\subsetneqq\{0,1\}^{M}\) be the candidate HSG provided by Chen-Tell. Note that \(\mathsf{H}_{1}\) can be computed in time \(\widetilde{T}=2^{O(n)}\) and depth \(\widetilde{d}=\operatorname{poly}(n)\).
Next, we consider a win-win argument based on whether \(Q_{M}\) avoids \(\mathsf{H}_{1}\). If this is the case, then Chen-Tell guarantees that we can compute \(f(1^{n})=\mathsf{BF}(1^{n})\in Q_{n}\) with high probability in time \((d+n)\cdot M^{c}=\operatorname{poly}(n)\). In other words, we can pseudodeterministically produce a string in \(Q_{n}\) in polynomial time. On the other hand, if \(\mathsf{H}_{1}\cap Q_{M}\neq\varnothing\), we now have a set \(\mathsf{H}_{1}\) of strings of length \(M=n^{\beta}\) that contains a string in \(Q_{M}\) and that can be deterministically computed in time \(2^{O(n)}\). That is, we are back to the former case, except that we can compute \(\mathsf{H}_{1}\) (a set containing at least
one \(M\)-bit prime) in time much faster than \(2^{O(M)}\). Crucially, in contrast to the approach of [13], the Chen-Tell HSG does not limit us to the use of the special language \(L_{\mathsf{TV}}\), effectively allowing us to reapply the same argument (with a speedup) over a larger input length.
In the next subsection, we discuss the "bootstrapping" and its parameters in more detail and explain how it gives a polynomial-time pseudodeterministic construction, assuming we have the ideal version of [10] described above.
#### 1.3.1 Infinitely-Often Pseudodeterministic Polynomial-Time Constructions
Let \(n_{0}\in\mathbb{N}\) be an "initial" input length, and \(t=O(\log n_{0})\) be a parameter. For each \(1\leq i\leq t\), we define the \(i\)-th input length to be \(n_{i}\triangleq n_{i-1}^{\beta}\), for a large enough constant \(\beta>1\). Our goal is to design a pseudodeterministic algorithm for finding elements in \(Q\) that will be correct on _at least one of the input lengths_\(n_{0},n_{1},\ldots,n_{t}\). On each input length \(n_{i}\) we will have:
1. the property \(Q_{n_{i}}\) that we want to hit;
2. a candidate hitting set generator \(\mathsf{H}_{i}\subseteq\{0,1\}^{n_{i}}\); and
3. the brute-force algorithm \(\mathsf{BF}_{i}:\{1^{n_{i}}\}\to\{0,1\}^{n_{i}}\), which iterates through all elements in \(\mathsf{H}_{i}\) and outputs the first element that is in \(Q_{n_{i}}\).
Note that \(\mathsf{BF}_{i}\) is completely defined by \(\mathsf{H}_{i}\). If \(\mathsf{H}_{i}\) can be computed (deterministically) in time \(T_{i}\) and depth \(d_{i}\), then \(\mathsf{BF}_{i}\) can also be computed (deterministically) in time \(T_{i}^{\prime}\triangleq T_{i}\cdot\operatorname{poly}(n_{i})\) and depth \(d_{i}^{\prime}\triangleq d_{i}\cdot\operatorname{poly}(n_{i})\). As discussed above, initially, \(\mathsf{H}_{0}\triangleq\{0,1\}^{n_{0}}\) is the trivial hitting set generator, \(T_{0}\triangleq 2^{O(n_{0})}\), and \(d_{0}\triangleq\operatorname{poly}(n_{0})\).
For each \(0\leq i<t\), we let \(f(1^{n_{i}})\triangleq\mathsf{BF}_{i},M\triangleq n_{i+1}\), and invoke the Chen-Tell HSG to obtain the HSG \(\mathsf{H}_{i+1}\subseteq\{0,1\}^{n_{i+1}}\). Recall that Chen-Tell guarantees the following: Suppose that \(Q_{M}=Q_{n_{i+1}}\) avoids the HSG \(\mathsf{H}_{i+1}\), then one can use \(Q_{n_{i+1}}\) to compute \(f(1^{n_{i}})\) with high probability in time \(\operatorname{poly}(d_{i}^{\prime},n_{i},M)\leq\operatorname{poly}(d_{i},n_{i})\), by our choice of parameters. Recall that if \(\mathsf{H}_{i}\) indeed hits \(Q_{n_{i}}\), then \(f(1^{n_{i}})\) implements the brute-force algorithm and outputs the first element in \(\mathsf{H}_{i}\cap Q_{n_{i}}\) (_i.e._, a _canonical_ element in \(Q_{n_{i}}\)). To reiterate, Chen-Tell gives us the following win-win condition:
* _either_\(Q_{n_{i+1}}\) avoids \(\mathsf{H}_{i+1}\), in which case we obtain a probabilistic algorithm that outputs a canonical element in \(Q_{n_{i}}\) (thus a pseudodeterministic algorithm) in \(\operatorname{poly}(d_{i},n_{i})\) time;
* _or_\(\mathsf{H}_{i+1}\) hits \(Q_{n_{i+1}}\), in which case we obtain a hitting set \(\mathsf{H}_{i+1}\) that hits \(Q_{n_{i+1}}\), thereby making progress on input length \(n_{i+1}\).
The HSG \(\mathsf{H}_{i+1}\) can be computed in time \(T_{i+1}\triangleq(T_{i}^{\prime})^{c}\) and depth \(d_{i+1}\triangleq d_{i}^{\prime}\cdot\log T_{i}^{\prime}+n_{i+1}^{c}\). Crucially, although \(T_{0}\) is exponential in \(n_{0}\), it is possible to show by picking a large enough \(\beta>1\) that the sequence \(\{n_{i}\}_{i\in\mathbb{N}}\) grows faster than the sequence \(\{T_{i}\}_{i\in\mathbb{N}}\), and eventually when \(i=t=O(\log n_{0})\), it will be the case that \(T_{t}\leq\operatorname{poly}(n_{t})\) and we can apply the brute-force algorithm to find the first element in \(\mathsf{H}_{t}\) that is in \(Q_{n_{t}}\) in time polynomial in \(n_{t}\).
A more precise treatment of the growth of the two sequences \(\{n_{i}\}\) and \(\{T_{i}\}\) is as follows. There is some absolute constant \(\alpha\geq 1\) such that \(T_{0}\leq 2^{\alpha n_{0}}\) and
\[T_{i+1}\leq T_{i}^{\alpha}\text{ (for each $0\leq i<t$)}.\]
We set \(\beta\triangleq 2\alpha\) (recall that each \(n_{i+1}=n_{i}^{\beta}\)). It follows from induction that for each \(0\leq i\leq t\),
\[T_{i+1}\leq T_{0}^{\alpha^{i}}=2^{\alpha^{i+1}n_{0}}\ \text{ and }\ n_{i+1}=n_{i}^{\beta}=n_{0}^{\beta^{i+1}}=n_{0}^{(2\alpha)^{i+1}}.\]
Since
\[\frac{\log T_{t}}{\log n_{t}}\leq\frac{\alpha^{t}n_{0}}{(2\alpha)^{t}\log n_{0}}= \frac{n_{0}}{2^{t}\log n_{0}},\]
it follows that when \(t\approx\log(n_{0}/\log n_{0})\), \(T_{t}\) will be comparable to \(n_{t}\) (rather than \(2^{n_{t}}\)). Similarly, one can show that \(d_{i}\leq\operatorname{poly}(n_{i})\) for every \(i\leq t\).
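The following small numerical sketch traces the two sequences in log scale; the constant \(\alpha\) and the polynomial threshold used to declare \(T_{i}\leq\operatorname{poly}(n_{i})\) are made up for illustration and are not the constants of the actual construction.

```python
# Numerical sketch of the parameter recursion in log scale (illustrative constants only).
from math import log2

def crossover(n0, alpha=2.0, poly_exponent=10):
    beta = 2 * alpha
    log_T, log_n = alpha * n0, log2(n0)               # T_0 <= 2^{alpha * n0}
    i = 0
    while log_T > poly_exponent * log_n:               # until T_i <= n_i^{poly_exponent}
        log_T, log_n = alpha * log_T, beta * log_n     # T_{i+1} <= T_i^alpha, n_{i+1} = n_i^beta
        i += 1
    return i

print(crossover(1000))    # a handful of rounds, on the order of log2(n0 / log2(n0))
```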
Informal description of the algorithm and correctness. To wrap up, we arrive at the following pseudodeterministic algorithm that is correct on at least one of the input lengths \(n_{0},n_{1},\ldots,n_{t}\). On input length \(n_{i}\), if \(i=t\), then we use \(\operatorname{poly}(T_{t})\leq\operatorname{poly}(n_{t})\) time to find the first string in \(\mathsf{H}_{i}\) that is also in \(Q_{n_{i}}\) (_i.e._, simulate \(\mathsf{BF}_{i}\)); otherwise, use \(Q_{n_{i+1}}\) as a distinguisher for the Chen-Tell hitting set \(\mathsf{H}_{i+1}\) and print the output of \(\mathsf{BF}_{i}\) in \(\operatorname{poly}(n_{i},d_{i})\leq\operatorname{poly}(n_{i})\) time. To see that our algorithm succeeds on at least one \(n_{i}\), consider the following two cases:
1. Suppose that \(\mathsf{H}_{t}\) indeed hits \(Q_{n_{t}}\). Then clearly, our algorithm succeeds on input length \(n_{t}\).
2. On the other hand, suppose that \(\mathsf{H}_{t}\) does not hit \(Q_{n_{t}}\). Since our trivial HSG \(\mathsf{H}_{0}\) hits \(Q_{n_{0}}\), there exists an index \(0\leq i<t\) such that \(\mathsf{H}_{i}\) hits \(Q_{n_{i}}\) but \(Q_{n_{i+1}}\) avoids \(\mathsf{H}_{i+1}\). Since \(Q_{n_{i+1}}\) avoids \(\mathsf{H}_{i+1}\), Chen-Tell guarantees that we can speed up the computation of \(\mathsf{BF}_{i}\) using \(Q_{n_{i+1}}\) as an oracle. Since \(\mathsf{H}_{i}\) hits \(Q_{n_{i}}\), the output of \(\mathsf{BF}_{i}\) is indeed a canonical element in \(Q_{n_{i}}\). It follows that our algorithm succeeds on input length \(n_{i}\).
This completes the sketch of the algorithm and its correctness. We note that while this exposition explains how the second bullet of Theorem 1.1 is achieved, it does not address the behavior of the algorithm on other input lengths (_i.e._, the first bullet in the same statement). For simplicity, we omit this here6 and refer to the formal presentation in Section 3.
Footnote 6: Alternatively, the guarantee from the first bullet of Theorem1.1 can always be achieved via a general argument. We refer to [13, Proposition 2] for the details.
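The following Python toy mirrors only the control flow of the above case analysis. The Chen-Tell generator and its reconstruction procedure are replaced by trivial stand-ins (a hash-derived candidate set, and a direct computation of \(\mathsf{BF}_{i}\)), so no speedup is modelled and the stand-in sets carry no hitting guarantee; the sketch merely illustrates how the stages are chained and where the canonical element comes from.

```python
# Purely schematic toy of the chain H_0, ..., H_t and the win-win dispatch above.
# toy_hsg is a stand-in for the candidate HSG (no hitting guarantee), and the
# "reconstructed" value is simply BF_i computed directly, so no speedup is modelled.
import hashlib
from math import isqrt

def A_Q(x):                                    # trial-division stand-in for a primality test
    v = int(x, 2)
    return v > 1 and all(v % d for d in range(2, isqrt(v) + 1))

def toy_hsg(prev_desc, m, size=64):            # stand-in for the candidate HSG H_{i+1}
    digest = lambda j: hashlib.sha256(f"{prev_desc}|{j}".encode()).digest()
    return [format(int.from_bytes(digest(j), "big") % 2 ** m, f"0{m}b") for j in range(size)]

def BF(H):                                     # first element of H lying in Q (may be None)
    return next((x for x in H if A_Q(x)), None)

lengths = [4, 8, 16, 32]                       # toy stand-ins for n_0, ..., n_t
H = [[format(v, "04b") for v in range(2 ** 4)]]    # H_0 = {0,1}^{n_0}
for i in range(len(lengths) - 1):
    H.append(toy_hsg(f"stage {i}", lengths[i + 1]))

for i, n in enumerate(lengths):                # at least one stage prints a canonical element
    last = i == len(lengths) - 1
    avoided = (not last) and all(not A_Q(x) for x in H[i + 1])   # does Q avoid H_{i+1}?
    if last or avoided:
        print(f"stage {i} (length {n}): canonical element {BF(H[i])}")
        break
```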
While the aforementioned construction conveys the gist of our approach, there are two important issues with our presentation. Firstly, as explained before, the results of [11] do not achieve the _ideal parameters_ of the HSG stated above. Secondly, we have only vaguely discussed the _circuit uniformity_ of the function \(f(1^{n})\). The uniformity of \(f\) is critical for the reconstruction procedure of [11] to run in time comparable to the circuit depth of \(f\). On the other hand, since our HSGs and functions \(f\) (corresponding to the algorithm \(\mathsf{BF}\)) are recursively defined, the circuit uniformity of the [11] generator itself becomes another critical complexity measure in the proof.
In the next subsection, we discuss the Chen-Tell generator in more detail and explain how to obtain an improved generator construction satisfying our requirements.
#### 1.3.2 Improving the Chen-Tell Targeted Hitting Set Generator
The uniform hardness-to-randomness framework of Chen-Tell builds on two important ingredients:7
Footnote 7: Below we will focus on the high-level picture of the Chen–Tell framework without diving into too many details. Our presentation is also somewhat different from the original presentation in [11].
1. A _layered-polynomial representation_ of a shallow uniform circuit.
2. A hitting set generator with a _uniform learning reconstruction_ algorithm.
Layered-polynomial representation.We now discuss the first ingredient. Let \(f\colon\{0,1\}^{n}\to\{0,1\}^{n}\) be a logspace-uniform circuit family of size \(T(n)\) and depth \(d(n)\).8 Let \(M\colon\mathbb{N}\to\mathbb{N}\) be the parameter for output length. Building on the doubly efficient interactive proof system by [1] (and its subsequent simplification by [12]), for any \(z\in\{0,1\}^{n}\), [13] showed that there is a sequence of polynomials \(\{P_{i}^{z}\}_{i\in[d^{\prime}]}\) for \(d^{\prime}=d\cdot\operatorname{polylog}(T)\) with the following nice properties:
Footnote 8: Intuitively, a circuit family is logspace-uniform if each circuit in the family can be printed by a fixed machine that runs in space that is of logarithmic order in the size of the circuits. See Section2.3 for the precise definition of logspace-uniform circuits.
* (**Arithmetic setting.**) Let \(\mathbb{F}\) be a finite field of size \(M^{c}\) for a large universal constant \(c>1\), and let \(m\) be of order \(\frac{\log T}{\log M}\). All the \(P_{i}^{z}\) map \(\mathbb{F}^{m}\) to \(\mathbb{F}\) and have total degree at most \(M\).
* (**Base case.**) There is an algorithm \(\mathsf{Base}\) such that, given the input \(z\in\{0,1\}^{n}\) and \(\vec{w}\in\mathbb{F}^{m}\), computes \(P_{1}^{z}(\vec{w})\) in \(\operatorname{poly}(M)\) time.
* (**Downward self-reducibility.**) There is an oracle algorithm \(\mathsf{DSR}\) that, given input \(i\in\{2,\ldots,d^{\prime}\}\) and \(\vec{w}\in\mathbb{F}^{m}\), together with the oracle access to \(P_{i-1}^{z}(\cdot)\), computes \(P_{i}^{z}(\vec{w})\) in \(\operatorname{poly}(M)\) time.
* (**Faithful representation.**) There is an oracle algorithm \(\mathsf{OUT}\) that, given input \(i\in[n]\) and oracle access to \(P_{d^{\prime}}^{z}\), outputs \(f(z)_{i}\) in \(\operatorname{poly}(M)\) time.
Intuitively, these polynomials form an _encoded_ version of the computation of \(f\) in the sense that they admit both _downward self-reducibility_ and _random self-reducibility_: every \(P_{i}^{z}\) has low degree and hence admits error correction properties; downward self-reducibility follows from definition.
We note that the proof of this result depends in a crucial way on the logspace-uniformity of the circuit family computing \(f\). (This allows one to arithmetize a formula of bounded size that computes the direct connection language of the circuit, while also controlling the circuit uniformity of the resulting polynomials.)
Hitting set generators with a uniform learning reconstruction algorithm.The second ingredient of [13] is the Nisan-Wigderson generator combined with Reed-Muller codes [14, 15]. The most important property of this generator is that it supports a uniform learning reconstruction algorithm. In more detail, for a polynomial \(P\colon\mathbb{F}^{m}\to\mathbb{F}\), the generator \(\mathsf{NW}^{P}\) takes \(s=O\Big{(}\frac{\log^{2}T}{\log M}\Big{)}\) bits as seed, such that there is a uniform oracle algorithm \(R\) (for "reconstruction") where the following holds. Given oracle access to both \(P\) and an oracle \(D\colon\{0,1\}^{M}\to\{0,1\}\) that distinguishes \(\mathsf{NW}^{P}(U_{s})\) from the uniform distribution, \(R^{P,D}\) runs in \(\operatorname{poly}(M)\) time and with high probability outputs a polynomial-size \(D\)-oracle circuit that computes \(P\).
Now, the hitting set \(H_{f}(z)\) is defined as
\[H_{f}(z)\triangleq\bigcup_{i\in[d^{\prime}]}\mathsf{NW}^{P_{i}^{z}}\;.\]
The uniform reconstruction algorithm. One key observation here is that if a distinguisher \(D\colon\{0,1\}^{M}\to\{0,1\}\) avoids \(H_{f}(z)\), meaning that \(D\) accepts a large fraction of inputs from \(\{0,1\}^{M}\) but rejects all strings in \(H_{f}(z)\), then clearly \(D\) also distinguishes all \(\mathsf{NW}^{P_{i}^{z}}(U_{s})\) from the uniform distribution. Following [14], [13] then shows that there is a uniform oracle algorithm \(R_{f}\) that takes input \(z\in\{0,1\}^{n}\) and any "avoider" \(D\) of \(H_{f}(z)\) as oracle, and outputs \(f(z)\) with high probability. In more detail, \(R_{f}\) works as follows (a toy illustration of these steps appears after the list):
1. It is given input \(z\in\{0,1\}^{n}\) and oracle access to an avoider \(D\colon\{0,1\}^{M}\to\{0,1\}\) of \(H_{f}(z)\).
2. For every \(i\in\{2,\ldots,d^{\prime}\}\): 1. The goal of the \(i\)-th step is to construct a \(\operatorname{poly}(M)\)-size \(D\)-oracle circuit \(C_{i}\) that computes \(P_{i}^{z}\). 2. It runs the learning reconstruction algorithm \(R^{P_{i}^{z},D}\) to obtain a \(\operatorname{poly}(M)\)-size \(D\)-oracle circuit. To answer queries to \(P_{i}^{z}\), we first run the algorithm \(\mathsf{DSR}\) to convert them into queries to \(P_{i-1}^{z}\). Next, when \(i=2\), we answer these queries by calling \(\mathsf{Base}\) directly, and when \(i>2\) we answer these queries by evaluating our \(D\)-oracle circuit \(C_{i-1}\).
3. For every \(i\in[n]\), output \(\mathsf{OUT}^{C^{D}_{d^{\prime}}}(i)\).
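The toy sketch below mirrors the control flow of steps 1-3 above. The field \(\mathbb{F}_{17}\), the functions `Base`, `DSR`, and `OUT`, and the number of layers are made-up toy choices, and the NW-based learner is replaced by brute-force memoisation over a tiny domain (so the distinguisher \(D\) plays no role here); only the layer-by-layer structure of the reconstruction is represented.

```python
# Schematic toy of the layer-by-layer reconstruction R_f (toy field and functions;
# the learner is replaced by brute-force memoisation, so D does not appear).
P = 17                                         # a toy prime field
DOMAIN = range(P)
NUM_LAYERS = 5                                 # plays the role of d'

def Base(w):                                   # computes the bottom polynomial P_1
    return (3 * w + 1) % P

def DSR(i, w, prev):                           # computes P_i(w) from oracle access to P_{i-1}
    return (prev(w) ** 2 + prev((w + i) % P)) % P

def OUT(i, top):                               # reads a toy "output bit" from the top layer
    return top(i) % 2

def learn(oracle):                             # stand-in for the uniform learner R^{P_i, D}
    table = {w: oracle(w) for w in DOMAIN}     # here: memoise the whole toy domain
    return lambda w: table[w]

C = learn(Base)                                # the circuit C_1
for i in range(2, NUM_LAYERS + 1):
    C = learn(lambda w, i=i, prev=C: DSR(i, w, prev))   # C_i built from C_{i-1}
print([OUT(i, C) for i in range(8)])           # toy output recovered layer by layer
```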
**Issue with the original Chen-Tell construction: Super-logarithmic seed length of \(\mathsf{NW}\).** The main issue with the construction above is that \(\mathsf{NW}^{P_{i}^{z}}\) has seed length \(O\!\left(\frac{\log^{2}T}{\log M}\right)\). In particular, this means that when \(\log M\leq o(\log T)\), the hitting set \(H_{f}(z)\) has super-polynomial size, and therefore cannot be computed in \(\operatorname{poly}(T)\) time as in the "ideal version" of [12] stated above.9 Hence, to improve the computation time of \(H_{f}(z)\) to \(\operatorname{poly}(T)\), we need an HSG with seed length \(O(\log T)\) for all possible values of \(M\), together with a uniform learning reconstruction, when it is instantiated with polynomials. Jumping ahead, we will replace \(\mathsf{NW}\) with the Shaltiel-Umans Hitting Set Generator [13], obtaining an optimized version of the Chen-Tell generator with better parameters. However, the original generator from [13] does not provide a uniform learning reconstruction procedure. By a clever use of the classical construction of a _cryptographic pseudorandom generator from a one-way permutation_ and of another idea, we managed to modify their construction to allow a uniform learning reconstruction. See the next subsection for more details.
Footnote 9: Indeed, if we rely on the original Chen–Tell construction to implement the bootstrapping method described above, we would only obtain a quasi-polynomial-time pseudodeterministic construction, instead of a polynomial-time one.
**Controlling the circuit uniformity of the optimized Chen-Tell generator.** As stressed above, in order to construct a layered-polynomial representation for \(f\) with the aforementioned parameters, it is crucial that \(f\) admits a logspace-uniform circuit family. Since we will rely on multiple applications of the generator, and each new function \(\mathsf{BF}\) on which the result is invoked contains as a subroutine the code of the previous generator, we must _upper bound the circuit uniformity_ of our optimized Chen-Tell generator. This turns out to require a delicate manipulation of all circuits involved in the proof and of the Turing machines that produce them, including the components of the Shaltiel-Umans generator. For this reason, whenever we talk about a Boolean circuit in the actual proof, we also bound the description length and space complexity of its corresponding machine. Additionally, as we manipulate a super-constant number of circuits (and their corresponding machines) in our construction, we will also consider the complexity of producing the code of a machine \(M_{2}\) encoding a circuit \(C_{2}\) from the code of a machine \(M_{1}\) encoding a circuit \(C_{1}\) (see, e.g., the "Moreover" part in the statement of Theorem 3.1). The details are quite tedious, but they are necessary for verifying the correctness and running time of our algorithm. In order to provide some intuition for it, we notice that as we move from the HSG \(\mathsf{H}_{i}\) to \(\mathsf{H}_{i+1}\), we also increase the corresponding input length parameter from \(n_{i}\) to \(n_{i+1}=n_{i}^{\beta}\). While there is an increase in the uniformity complexity, it remains bounded relative to the new input length. We omit the details in this proof overview.
Non-black-box behavior.We note that the recursive application of the Chen-Tell generator is responsible for the _fully non-black-box_ behavior of our pseudodeterministic construction. Indeed, since we invoke the Chen-Tell generator on each function \(\mathsf{BF}\) (which contains the code of the algorithm \(A_{Q}\) deciding property \(Q\) as a subroutine), the collection of strings in the hitting set generator depends on the layered-polynomial representation that is obtained from the _code_ of \(\mathsf{BF}\). As a consequence, our construction has the unusual feature that the canonical outputs of the algorithm \(B\) in Theorem1.1 are affected by the code of \(A_{Q}\). In other words, by using a different primality test algorithm (or by making changes to the code implementing the AKS routine), one might get a different \(n\)-bit prime!
The parameters of our hitting set generator appear in Section3. The proof of the result is given in Section5.
#### 1.3.3 Modified Shaltiel-Umans Generator with Uniform Learning Reconstruction
As explained above, in order to complete the proof of Theorem1.1 we need to design a variant of the Shaltiel-Umans generator [10] with a _uniform learning reconstruction_ procedure.
The Shaltiel-Umans generator takes as input a low-degree polynomial \(P:\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) (in our case \(p\) will be a power of \(2\)) and produces a set of binary strings (which is supposed to be a hitting set). The construction of this generator also relies on "generator matrices". A matrix \(A\in\mathbb{F}_{p}^{m\times m}\) is a _generator matrix_ if it satisfies \(\{A^{i}\cdot\vec{1}\}_{1\leq i<p^{m}}=\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\). Roughly put, the matrix \(A\) can be thought of as performing multiplication with a generator of the multiplicative group of \(\mathbb{F}_{p^{m}}\).
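As a tiny concrete example (with parameters chosen by us purely for illustration), the following snippet finds a generator matrix by brute force in the case \(p=2\) and \(m=2\), where \(p^{m}-1=3\).

```python
# Brute-force search for a generator matrix in the tiny case p = 2, m = 2.
from itertools import product

def mat_vec(A, v):                             # matrix-vector product over F_2
    return tuple(sum(A[r][c] * v[c] for c in range(2)) % 2 for r in range(2))

for entries in product([0, 1], repeat=4):
    A = [list(entries[:2]), list(entries[2:])]
    v, orbit = (1, 1), set()
    for _ in range(3):                         # collect A^1 * 1, A^2 * 1, A^3 * 1
        v = mat_vec(A, v)
        orbit.add(v)
    if orbit == {(0, 1), (1, 0), (1, 1)}:      # all nonzero vectors of F_2^2
        print("generator matrix:", A)
        break
```

The matrix found has rows \((0,1)\) and \((1,1)\); it is the companion matrix of the primitive polynomial \(x^{2}+x+1\) over \(\mathbb{F}_{2}\).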
Recall that a generator has a uniform learning reconstruction algorithm if the following holds. Given an algorithm \(D\) that avoids the output of the generator constructed using \(P\), as well as \(P\) itself, we can _uniformly_ and _efficiently_ generate (with high probability) a \(D\)-oracle circuit that computes the polynomial \(P\). (In other words, we can query \(P\) while producing the circuit, but the circuit itself does not have access to \(P\).)
However, the reconstruction procedure provided by the original Shaltiel-Umans generator only guarantees the following: If the generator is constructed using \(P\) and some generator matrix \(A\), then using an algorithm \(D\) that avoids the output of the generator, and _given the matrix_ \(A\) and oracle access to \(P\), one can obtain a (\(D\)-oracle) circuit \(C:[p^{m}-1]\to\mathbb{F}_{p}\) such that \(C(i)=P(A^{i}\cdot\vec{1})\).10 (For the precise statement, see Theorem 4.9.) That is, this reconstruction is not a uniform learning algorithm in the following sense:
Footnote 10: In fact, the circuit only computes \(P(A^{i}\cdot\vec{v})\) for some \(\vec{v}\) output by the reconstruction algorithm. We assume \(\vec{v}=\vec{1}\) here for simplicity.
1. It needs to know the matrix \(A\) (which can be viewed as non-uniform advice).
2. Given oracle access to \(P\), it only learns a circuit that computes the mapping \(i\mapsto P(A^{i}\cdot\vec{1})\), instead of a circuit that computes \(P(\vec{x})\) on a given \(\vec{x}\in\mathbb{F}_{p}^{m}\).
We now describe how to modify the Shaltiel-Umans generator to make its reconstruction a uniform learning algorithm.
For the first issue, our idea is that, instead of using a generator matrix that is obtained by brute-force search as in the original construction (we note that the reconstruction cannot afford to perform the brute-force search due to its time constraints), we will use a generator matrix that is from a small set of matrices that can be constructed _efficiently_. More specifically, using results about finding primitive roots of finite fields (_e.g._, [11]), we show that one can efficiently and deterministically construct a set \(S\) of matrices that contains at least one generator matrix. The
advantage is that the reconstruction algorithm can still afford to compute this set \(S\). Note that although we don't know which matrix in \(S\) is a valid generator matrix (as verifying whether a matrix is a generator matrix requires too much time), we can try all the matrices from \(S\), and one of them will be the correct one. This allows us to obtain a list of candidate circuits, one of which computes \(P\) (provided that we can also handle the second issue, which will be discussed next). Then by selecting from the list a circuit that is sufficiently close to \(P\) (note that given oracle access to \(P\), we can easily test whether a circuit is close to \(P\) by sampling) and by using the _self-correction_ property of low-degree polynomials, we can obtain a circuit that computes \(P\) exactly.
With the above idea, we may now assume that in the reconstruction we know the generator matrix \(A\) used by the Shaltiel-Umans generator. Next, we describe how to handle the second issue. Recall that the reconstruction algorithm of the Shaltiel-Umans generator gives a circuit \(C\) such that \(C(i)=P(A^{i}\cdot\vec{1})\), for \(i\in[p^{m}-1]\), and we want instead a circuit that given \(\vec{x}\in\mathbb{F}_{p}^{m}\) computes \(P(\vec{x})\). Now suppose given \(\vec{x}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\), we can also _efficiently_ compute the value \(i\in[p^{m}-1]\) such that \(A^{i}\cdot\vec{1}=\vec{x}\). Then we would be able to combine this with \(C\) to get a circuit \(E\) that computes \(P\), _i.e._, if \(\vec{x}=\vec{0}\) then \(E\) outputs \(P(\vec{0})\) (where the value \(P(\vec{0})\) can be hardcoded); otherwise, \(E\) computes \(i\) for \(\vec{x}\) as described above and then outputs \(C(i)\). However, the task of finding such \(i\) given \(A\) and \(\vec{x}\) is essentially the _discrete logarithm problem_, for which no efficient algorithm is known!
A classical result in cryptography is that one can construct a pseudorandom generator based on the hardness of the discrete logarithm problem (see, _e.g._, [1, 2]). More generally, given a permutation \(f\) whose inverse admits _random self-reducibility_11, one can construct a generator \(G\) based on \(f\) so that if there is a distinguisher \(D\) that breaks \(G\), then it can be used to invert \(f\) via a uniform reduction. Our idea is to consider the bijection \(f:[p^{m}-1]\to\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\) such that for each \(i\in[p^{m}-1]\), \(f(i)=A^{i}\cdot\vec{1}\) (where the random self-reducibility of \(f^{-1}\) follows easily from that of the discrete logarithm problem), and try to construct a pseudorandom generator \(G\) based on \(f\). We then combine the output of \(G\) with that of the Shaltiel-Umans generator constructed with the polynomial \(P\) and the generator matrix \(A\). Now if there is an algorithm \(D\) that avoids this combined generator, which means \(D\)_simultaneously_ avoids both the Shaltiel-Umans generator and the generator \(G\), then \(D\) can be used to obtain
Footnote 11: Roughly speaking, a function has random self-reducibility if computing the function on a given instance can be efficiently reduced to computing the function for uniformly random instances.
* a circuit \(C\) such that \(C(i)=P(A^{i}\cdot\vec{1})\) for every \(i\in[p^{m}-1]\), and
* a circuit \(C^{\prime}\) that inverts \(f\), _i.e._, \(C^{\prime}(\vec{x})\) outputs \(i\) such that \(A^{i}\cdot\vec{1}=\vec{x}\) for every \(\vec{x}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\).
Then it is easy to combine \(C\) and \(C^{\prime}\) to obtain a circuit that computes \(P\).
A careful implementation of these ideas allows us to obtain a variant of the Shaltiel-Umans generator with uniform learning reconstruction, as needed in our optimized Chen-Tell generator. We refer to Theorem 4.1 in Section 4 for more details.
This completes the sketch of the proof of Theorem 1.1.
## 2 Preliminaries
For a positive integer \(k\), we use \([k]\) to denote the set \(\{1,2,\dots,k\}\). We use \(\mathbb{N}\) to denote all non-negative integers and \(\mathbb{N}_{\geq 1}\) to denote all positive integers.
For \(x,y\in\{0,1\}^{*}\), we use \(x\circ y\) to denote their concatenation.12 For a function \(f\colon\{0,1\}^{\ell}\to\{0,1\}\)
we use \(\mathsf{tt}(f)\) to denote the \(2^{\ell}\)-length truth-table of \(f\) (_i.e._, \(\mathsf{tt}(f)=f(w_{1})\circ f(w_{2})\circ\ldots\circ f(w_{2^{\ell}})\), where \(w_{1},\ldots,w_{2^{\ell}}\) is the enumeration of all strings from \(\{0,1\}^{\ell}\) in the lexicographical order).
Unless explicitly stated otherwise, we assume that all circuits are comprised of Boolean NAND gates of fan-in two. In several places in the paper we will need the following notion, which strengthens the standard notion of a time-computable function by requiring the function to be computable in logarithmic space. The depth of a circuit is defined to be the maximum length (measured by the number of edges) of any input-to-output path.
**Definition 2.1** (Logspace-Computable Functions).: We say that a function \(T\colon\mathbb{N}\to\mathbb{N}\) is logspace-computable if there exists an algorithm that gets input \(1^{n}\), runs in space \(O(\log(T(n)))\), and outputs \(T(n)\).
For convenience, we consider circuit families indexed by a tuple of parameters. Specifically, a circuit family with \(k\) input parameters \(\vec{\ell}=(\ell_{1},\ell_{2},\ldots,\ell_{k})\in\mathbb{N}^{k}\) is defined as \(\{C_{\vec{\ell}}\}_{\vec{\ell}\in\mathbb{N}^{k}}\), where each \(C_{\vec{\ell}}\) is a circuit.
### Finite Fields
Throughout this paper, we will only consider finite fields of the form \(\operatorname{GF}(2^{2\cdot 3^{\lambda}})\) for some \(\lambda\in\mathbb{N}\) since they enjoy simple representations that will be useful for us. We say \(p=2^{r}\) is a _nice power_ of \(2\), if \(r=2\cdot 3^{\lambda}\) for some \(\lambda\in\mathbb{N}\).
Let \(\ell\in\mathbb{N}\) and \(n=2\cdot 3^{\ell}\). In the following, we use \(\mathbb{F}\) to denote \(\mathbb{F}_{2^{n}}\) for convenience. We will always represent \(\mathbb{F}_{2^{n}}\) as \(\mathbb{F}_{2}[\mathbf{x}]/(\mathbf{x}^{n}+\mathbf{x}^{n/2}+1)\).13 That is, we identify an element of \(\mathbb{F}_{2^{n}}\) with an \(\mathbb{F}_{2}[\mathbf{x}]\) polynomial with degree less than \(n\). To avoid confusion, given a polynomial \(P(\mathbf{x})\in\mathbb{F}_{2}[\mathbf{x}]\) with degree less than \(n\), we will use \((P(\mathbf{x}))_{\mathbb{F}}\) to denote the unique element in \(\mathbb{F}\) identified with \(P(\mathbf{x})\).
Footnote 13: \(\mathbf{x}^{2\cdot 3^{\ell}}+\mathbf{x}^{3^{\ell}}+1\in\mathbb{F}_{2}[\mathbf{x}]\) is irreducible, see [22, Theorem 1.1.28].
Let \(\kappa^{(n)}\) be the natural bijection between \(\{0,1\}^{n}\) and \(\mathbb{F}=\operatorname{GF}(2^{n})\): for every \(a\in\{0,1\}^{n}\), \(\kappa^{(n)}(a)=\left(\sum_{i\in[n]}a_{i}\cdot\mathbf{x}^{i-1}\right)_{\mathbb{ F}}\). We always use \(\kappa^{(n)}\) to encode elements from \(\mathbb{F}\) by Boolean strings. That is, whenever we say that an algorithm takes an input from \(\mathbb{F}\), we mean it takes a string \(x\in\{0,1\}^{n}\) and interprets it as an element of \(\mathbb{F}\) via \(\kappa^{(n)}\). Similarly, whenever we say that an algorithm outputs an element from \(\mathbb{F}\), we mean it outputs a string \(\{0,1\}^{n}\) encoding that element via \(\kappa^{(n)}\). For simplicity, sometimes we use \((a)_{\mathbb{F}}\) to denote \(\kappa^{(n)}(a)\). Also, when we say the \(i\)-th element in \(\mathbb{F}\), we mean the element in \(\mathbb{F}\) encoded by the \(i\)-th lexicographically smallest Boolean string in \(\{0,1\}^{n}\).
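For concreteness, here is a small sketch of this representation for \(\lambda=1\) (so \(n=6\) and the modulus is \(\mathbf{x}^{6}+\mathbf{x}^{3}+1\)); elements are stored as \(6\)-bit masks via \(\kappa^{(6)}\), and the multiplication routine is a straightforward schoolbook implementation written by us for illustration.

```python
# Sketch of the field representation for lambda = 1 (n = 6), written for illustration:
# elements of F_{2^6} = F_2[x]/(x^6 + x^3 + 1) are stored as 6-bit masks via kappa.
N = 6
MODULUS = (1 << 6) | (1 << 3) | 1              # the polynomial x^6 + x^3 + 1

def kappa(a):                                  # a is a 0/1 tuple of length 6
    return sum(bit << i for i, bit in enumerate(a))

def gf_add(u, v):                              # addition = XOR of coefficient vectors
    return u ^ v

def gf_mul(u, v):                              # schoolbook product, then reduction
    r = 0
    for i in range(N):
        if (v >> i) & 1:
            r ^= u << i
    for i in range(2 * N - 2, N - 1, -1):      # eliminate degrees >= 6 via the modulus
        if (r >> i) & 1:
            r ^= MODULUS << (i - N)
    return r

x = kappa((0, 1, 0, 0, 0, 0))                  # the field element x
y = x
for _ in range(5):
    y = gf_mul(y, x)                           # y = x^6
print(bin(y))                                  # 0b1001, i.e. x^6 = x^3 + 1
```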
### Bounded-Space Turing Machines
Our argument is robust to specific details about the computational model, but in order to estimate the relevant bounds, we must fix a model. We use the standard model of space-bounded computation (see [1, Section 5] or [1, Section 4]). A deterministic space-bounded Turing machine has three tapes: an input tape (that is read-only); a work tape (that is read/write) and an output tape (that is write-only and uni-directional). We assume that the machine's alphabet is \(\Sigma\triangleq\{0,1\}\). The space complexity of the machine is the number of used cells on the work tape. For concreteness, we assume that the work tape contains initially only \(\square\) ("blank") symbols, and that the machine writes symbols from \(\Sigma\) in the tape.
Throughout the paper, we will describe a space-bounded Turing machine by fixing a universal Turing machine \(U\) that has an additional read-only _program tape_ such that \(\mathsf{TM}(x)\) is defined to be
the output of \(U\) with the program tape initialized as \(\mathsf{TM}\).14 Abusing the notation, we often use \(\mathsf{TM}\) to denote both the Turing machine and a binary string description of the Turing machine. Without loss of generality, we also assume our description is _paddable_ meaning that for every \(\mathsf{TM}\in\{0,1\}^{*}\) and \(k\in\mathbb{N}\), \(\mathsf{TM}\) and \(\mathsf{TM}\circ 0^{k}\) represent the same machine. To avoid certain technicalities, we will always assume that the space bound of a Turing machine \(\mathsf{TM}\) is greater than its description size.
Footnote 14: The advantage of fixing a universal Turing machine is that now our Turing machine always has a constant number of states, which is helpful when bounding the number of configurations of a Turing machine of super-constant size.
Configurations of space-bounded machines.On a fixed input \(x\in\{0,1\}^{n}\), a space-\(s\) Turing machine \(\mathsf{TM}\) has \(2^{s^{\prime}}\) possible configurations, where \(s^{\prime}=s^{\prime}(s,n)=s+O(\log s)+\log n\). Each configuration can be described by \(s^{\prime}\) bits. Here, \(s\) measures the space used by the universal Turing machine \(U\) that simulates \(\mathsf{TM}\) on input \(x\). In more detail, it can be described by the content of \(U\)'s work tape, \(U\)'s current state, and the location of \(U\)'s heads, including the head on the input/program tape. (Note that a configuration does not include the content of the output tape, which does not affect the next step of the machine.)
We will need the following fact for determining the relationship between configurations of a Turing machine. Recall that a sequence \(\{D_{n}\}_{n\geq 1}\) of size-\(T(n)\) computational devices is _logspace-uniform_ if there is a machine \(M(1^{n})\) that runs in space \(O(\log T(n))\) and outputs \(D_{n}\) (or equivalently, decides the direct connection language of \(D_{n}\)).
**Fact 2.2**.: _Given a description of Turing machine \(\mathsf{TM}\in\{0,1\}^{*}\), a space bound \(s\in\mathbb{N}\), an input \(x\in\{0,1\}^{n}\), and two configurations \(\gamma,\gamma^{\prime}\in\{0,1\}^{s^{\prime}}\), there is an algorithm \(\mathbb{A}_{\mathsf{mxt}}\) that determines whether \(\gamma^{\prime}\) is the next configuration obtained by running \(\mathsf{TM}\) for one step on input \(x\). Moreover, \(\mathbb{A}_{\mathsf{mxt}}\) can be computed by a logspace-uniform \(O(m^{3})\)-size \(O(\log m)\)-depth formula and by an \(O(m)\)-space algorithm, where \(m\) is the total number of input bits. (Here, we assume that if \(\gamma\) is the accepting state or the rejecting state, then the next configuration of \(\gamma\) is always \(\gamma\) itself.)_
### Circuits Generated by Bounded-Space Turing Machines
In this paper we often use the following two representations of a circuit (recall that throughout this paper all circuits consist entirely of fan-in two \(\mathtt{NAND}\) gates).
* (**Adjacency relation tensor.**) A circuit \(C\) of size \(T\) is given as a tensor \(T_{C}\in\{0,1\}^{T\times T\times T}\) such that for every tuple \((u,v,w)\in[T]^{3}\), \(T_{C}(u,v,w)=1\) if and only if the gates in \(C\) indexed by \(v\) and by \(w\) feed into the gate in \(C\) indexed by \(u\).
* (**Layered adjacency relation tensor.**) A circuit \(C\) of width \(T\) and depth \(d\) is given as a list of \(d\) tensors \(T_{C}^{(i)}\in\{0,1\}^{T\times T\times T}\), where \(i\in[d]\), such that for every layer \(i\in[d]\) and tuple \((u,v,w)\in[T]^{3}\), \(T_{C}^{(i)}(u,v,w)=1\) if and only if the gates in the \((i-1)\)-th layer of \(C\) indexed by \(v\) and by \(w\) feed into the gate in the \(i\)-th layer of \(C\) indexed by \(u\). Here, the input gates are on the \(0\)-th layer, and the output gates are on the \(d\)-th layer. Without loss of generality we can assume all layers have exactly \(T\) gates.
In both cases above, when evaluating \(C\) in a context, we will also specify two integers \(n_{\mathsf{in}}\) and \(n_{\mathsf{out}}\) to denote the number of input/output gates; see the definition of \(\mathsf{Circuit}[T,s,n_{\mathsf{in}},n_{\mathsf{out}}](\mathsf{TM})\) given below for details.
While we will mostly use the (unlayered) adjacency relation tensor representation, the layered variant will be very convenient in Section 5.1.
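The following illustrative snippet evaluates a small NAND circuit given by its layered adjacency relation, with each tensor \(T_{C}^{(i)}\) stored sparsely as a set of triples \((u,v,w)\); the two-layer example computing AND is our own and is not taken from the paper.

```python
# Illustrative evaluation of a NAND circuit given by its layered adjacency relation.
def nand(a, b):
    return 1 - (a & b)

def evaluate(layers, inputs):
    """layers[i] holds triples (u, v, w): gates v, w of the previous layer feed gate u."""
    values = list(inputs)                      # layer 0 holds the input gates
    for triples in layers:
        width = max(u for u, _, _ in triples) + 1
        nxt = [0] * width
        for u, v, w in triples:
            nxt[u] = nand(values[v], values[w])
        values = nxt
    return values

layers = [
    {(0, 0, 1)},                               # layer 1: gate 0 = NAND(in_0, in_1)
    {(0, 0, 0)},                               # layer 2: negate the previous gate
]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", evaluate(layers, [a, b]))    # the AND truth table
```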
We define next a more general notion of a space-uniform circuit family with input parameters. This will be useful in some situations where we need to compute explicit space bounds for uniformity and index circuits by a tuple of parameters.
**Definition 2.3** (\(\alpha\)-Space-Uniform Circuits).: Let \(k\in\mathbb{N}\) and \(\alpha,T\colon\mathbb{N}^{k}\to\mathbb{N}\). We say that a circuit family with \(k\) input parameters \(\left\{C_{\vec{\ell}}\right\}_{\vec{\ell}\in\mathbb{N}^{k}}\) of size \(T=T(\vec{\ell}\,)\) is \(\alpha\)-space-uniform if there exists an algorithm \(A\) such that:
1. (**Decides the adjacency relation.**) The algorithm gets \(\vec{\ell}\in\mathbb{N}^{k}\) and \((u,v,w)\in\{0,1\}^{3\log(T)}\) as input and accepts if and only if the gates in \(C_{\vec{\ell}}\,\)indexed by \(v\) and by \(w\) feed into the gate in \(C_{\vec{\ell}}\,\)indexed by \(u\). (That is, the algorithm computes the adjacency relation tensor of \(C_{\vec{\ell}}\,\))
2. (**Runs in \(\alpha(\vec{\ell})\) space.**) For input parameters \(\vec{\ell}\in\mathbb{N}^{k}\), the algorithm runs in space \(\alpha(\vec{\ell}\,)\).
We say \(\left\{C_{\vec{\ell}}\right\}_{\vec{\ell}\in\mathbb{N}^{k}}\) is logspace-uniform if it is \(\mu\log T\)-space-uniform for some constant \(\mu\).
Circuit determined by a Turing machine through the adjacency relation tensor. We will also consider the circuit determined by a Turing machine in the non-asymptotic setting. More specifically, given a Turing machine \(\mathsf{TM}\in\{0,1\}^{*}\), parameters \(T,s,n_{\mathsf{in}},n_{\mathsf{out}}\in\mathbb{N}\), we use \(\mathsf{Circuit}[T,s,n_{\mathsf{in}},n_{\mathsf{out}}](\mathsf{TM})\) to denote the circuit whose adjacency relation is determined by running \(\mathsf{TM}\) with space bound \(s\) over all triples \((u,v,w)\in\{0,1\}^{3\log T}\) with \(u>v>w\). The first \(n_{\mathsf{in}}\) out of \(T\) gates are the input gates, and the last \(n_{\mathsf{out}}\) out of \(T\) gates are the output gates. If \(\mathsf{TM}\) fails to halt on some triple using \(s\) bits of space, or the resulting circuit is invalid (_i.e._, the input gates are not sources, or the output gates are not sinks), we let \(\mathsf{Circuit}[T,s,n_{\mathsf{in}},n_{\mathsf{out}}](\mathsf{TM})=\bot\).
Given two circuits \(C_{1}\colon\{0,1\}^{n_{1}}\to\{0,1\}^{n_{2}}\) and \(C_{2}\colon\{0,1\}^{n_{2}}\to\{0,1\}^{n_{3}}\), one can compose them into a single circuit \(C_{2}\circ C_{1}\colon\{0,1\}^{n_{1}}\to\{0,1\}^{n_{3}}\) in a natural way (_i.e._, by identifying the outputs of \(C_{1}\) with the inputs of \(C_{2}\)). Suppose \(C_{1}\) is a circuit of size \(T_{1}\) and depth \(d_{1}\), and \(C_{2}\) is a circuit of size \(T_{2}\) and depth \(d_{2}\), then \(C_{2}\circ C_{1}\) has size \(T_{1}+T_{2}\) and depth \(d_{1}+d_{2}\). Also, if \(C_{1},C_{2}\) are given by two Turing machines \(\mathsf{TM}_{1}\) and \(\mathsf{TM}_{2}\), we can easily generate another Turing machine \(\mathsf{TM}_{3}\) that specifies \(C_{2}\circ C_{1}\). Formally, we will pick a universal machine such that we have the following simple fact on the description length of \(\mathsf{TM}_{3}\), whose proof we omit.
**Fact 2.4** (Turing Machine Description of Circuit Composition).: _There is a universal constant \(c_{\mathsf{comp}}\in\mathbb{N}\) such that the following holds. Given the descriptions of Turing machines \(\mathsf{TM}_{1}\) and \(\mathsf{TM}_{2}\), parameters_
\[\vec{\ell}_{1}=(T_{1},s_{1},n_{1},n_{2}),\qquad\vec{\ell}_{2}=(T_{2},s_{2},n_{ 2},n_{3})\in\mathbb{N}^{4},\]
_and letting_
\[C_{1}=\mathsf{Circuit}[\vec{\ell}_{1}](\mathsf{TM}_{1}),\ C_{2}=\mathsf{Circuit }[\vec{\ell}_{2}](\mathsf{TM}_{2}),\ \ \text{and}\ \ \vec{\ell}_{3}=(T_{1}+T_{2},2\cdot(s_{1}+s_{2})+c_{\mathsf{comp}},n_{1},n_{3}),\]
_there is a polynomial-time algorithm \(\mathbb{A}_{\mathsf{comp}}\) that given \(\mathsf{TM}_{1},\mathsf{TM}_{2},\vec{\ell}_{1},\vec{\ell}_{2}\) as input, outputs the description of a Turing machine \(\mathsf{TM}_{3}\) such that_15
Footnote 15: We note that if either \(C_{1}=\bot\) or \(C_{2}=\bot\), then there is no guarantee on \(\mathbb{A}_{\mathsf{comp}}\)’s behavior.
\[(C_{2}\circ C_{1})=\mathsf{Circuit}[\vec{\ell}_{3}](\mathsf{TM}_{3})\ \ \text{and}\ \ |\mathsf{TM}_{3}|\leq 2\cdot(|\mathsf{TM}_{1}|+|\mathsf{TM}_{2}|+\log n_{2})+c_{ \mathsf{comp}}.\]
### Pseudorandom Generators and Hitting Set Generators
**Definition 2.5** (Avoiding and Distinguishing).: Let \(m,t\in\mathbb{N}\), \(D\colon\{0,1\}^{m}\to\{0,1\}\), and \(Z=(z_{i})_{i\in[t]}\) be a list of strings from \(\{0,1\}^{m}\). Let \(\varepsilon\in(0,1)\). We say that \(D\)_\(\varepsilon\)-distinguishes_\(Z\), if
\[\bigg{|}\Pr_{r\leftarrow\{0,1\}^{m}}[D(r)=1]-\Pr_{i\leftarrow[t]}[D(z_{i})=1 ]\bigg{|}\geq\varepsilon.\]
We say that \(D\)_\(\varepsilon\)-avoids_\(Z\), if \(\Pr_{r\leftarrow\{0,1\}^{m}}[D(r)=1]\geq\varepsilon\) and \(D(z_{i})=0\) for every \(i\in[t]\).
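For tiny \(m\), both notions can be checked by brute force; the following sketch is only meant to fix the semantics of Definition 2.5 (here \(D\) is any Python predicate on \(m\)-bit tuples, and \(Z\) is a list of such tuples).

```python
from itertools import product

def distinguishes(D, Z, m, eps):
    """Check whether D eps-distinguishes the list Z of m-bit strings (brute force)."""
    uniform_acc = sum(D(bits) for bits in product((0, 1), repeat=m)) / 2 ** m
    list_acc = sum(D(z) for z in Z) / len(Z)
    return abs(uniform_acc - list_acc) >= eps

def avoids(D, Z, m, eps):
    """Check whether D eps-avoids Z: D accepts an eps-fraction of {0,1}^m but no z in Z."""
    uniform_acc = sum(D(bits) for bits in product((0, 1), repeat=m)) / 2 ** m
    return uniform_acc >= eps and all(D(z) == 0 for z in Z)
```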
## 3 Polynomial-Time Pseudodeterministic Constructions for Dense Properties
In this section, we prove our main result, restated below for convenience.
**Theorem 1.1** (Infinitely-Often Polynomial-Time Pseudodeterministic Constructions).: _Let \(Q\subseteq\{0,1\}^{*}\) be a language with the following properties:_
**(Density.)** _there is a constant_ \(\rho\geq 1\) _such that for every_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(Q_{n}\triangleq Q\cap\{0,1\}^{n}\) _satisfies_ \(|Q_{n}|\geq 2^{n}\cdot n^{-\rho}\)_; and_
**(Easiness.)** _there is a deterministic polynomial-time algorithm_ \(A_{Q}\) _that decides whether an input_ \(x\in\{0,1\}^{*}\) _belongs to_ \(Q\)_._
_Then there exist a probabilistic polynomial-time algorithm \(B\) and a sequence \(\{x_{n}\}_{n\in\mathbb{N}_{\geq 1}}\) of \(n\)-bit strings in \(Q\) such that the following conditions hold:_
1. _On every input length_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(\Pr_{B}[B(1^{n})\notin\{x_{n},\bot\}]\leq 2^{-n}\)_._
2. _On infinitely many input lengths_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(\Pr_{B}[B(1^{n})=x_{n}]\geq 1-2^{-n}\)_._
We will need the following theorem, which is obtained by combining [15] and [16]. The proof is presented in Section 5.
**Theorem 3.1** (Improved Chen-Tell Hitting Set Generator).: _There exists a universal \(c\in\mathbb{N}_{\geq 1}\), a deterministic algorithm \(\mathsf{H}^{\mathsf{ct}}\), and a probabilistic oracle algorithm \(\mathsf{R}^{\mathsf{ct}}\) such that the following holds. Let \(\kappa,\rho\in\mathbb{N}\). Let \(T,d,M,n\in\mathbb{N}\) all be sufficiently large such that \(n\leq T\), \(d\leq T\), and \(c\cdot\log T\leq M\leq T^{1/(c\rho)}\). Denote \(\vec{\ell}\triangleq(n,T,d,M,\kappa,\rho)\) as the input parameters._
_For a Turing machine \(\mathsf{TM}\) with description size \(|\mathsf{TM}|=\kappa\cdot\log T\), we let_
\[C_{\mathsf{TM}}\triangleq\mathsf{Circuit}[T,\kappa\cdot\log T,n,n](\mathsf{TM }).\]
_Assume the circuit \(C_{\mathsf{TM}}\neq\bot\) and \(C_{\mathsf{TM}}\) has depth at most \(d\)._
**(Generator.)**: _The generator_ \(\mathsf{H}^{\mathsf{ct}}_{\vec{\ell}}\)__(_we write_ \(\mathsf{H}^{\mathsf{ct}}_{\vec{\ell}}\) _to denote that_ \(\mathsf{H}^{\mathsf{ct}}\) _takes_ \(\vec{\ell}\) _as input parameters_) _takes the description of a Turing machine_ \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\) _as input, and outputs a list of_ \(M\)_-bit strings. We assume that the list has exactly_ \(T^{(c\cdot\kappa)/2}\) _entries._ _Let_ \(\widetilde{T}\triangleq T^{c\cdot\kappa}\) _and_ \(\widetilde{d}\triangleq c\cdot(d\log T+\kappa^{2}\log^{2}T)+M^{c}\)_. There is a Turing machine_ \(\mathsf{TM}_{\mathsf{H}}\) _with description length_ \(c\log\widetilde{T}\) _such that for_
\[C_{\mathsf{H}}\triangleq\mathsf{Circuit}\bigg{[}\widetilde{T},c\cdot\kappa \log T,n,\left(\widetilde{T}\right)^{1/2}\cdot M\bigg{]}(\mathsf{TM}_{\mathsf{ H}}),\]
_it holds that (_1_)_ \(C_{\mathsf{H}}(1^{n})=\mathsf{H}_{\widetilde{\ell}}^{\mathsf{ct}}(\mathsf{TM})\) _and (_2_)_ \(C_{\mathsf{H}}\) _has depth_ \(\widetilde{d}\)_. Moreover, there is a polynomial-time_16 _algorithm_ \(\mathbb{A}^{\mathtt{ct}}\) _that on inputs_ \(\widetilde{\ell}\) _and_ \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\)_, outputs the description of_ \(\mathsf{TM}_{\mathsf{H}}\)_._
Footnote 16: In this paper, whenever we say an algorithm \(\mathbb{A}\) that generates Turing machines or other succinct descriptions _runs in polynomial time_, we mean the running time is polynomial in the total number of input bits. In this case, the time bound is polynomial in the description length of \(\widetilde{\ell}\) and \(\mathsf{TM}\), _i.e._, \(\operatorname{poly}(\kappa\log T)\).
**(Reconstruction.)**: _The reconstruction algorithm_ \(\mathsf{R}^{\mathtt{ct}}\) _takes the description of a Turing machine_ \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\) _as input, receives an oracle_ \(D\colon\{0,1\}^{M}\to\{0,1\}\)_, and satisfies the following:_
**(Soundness.)**: _For every oracle_ \(D\colon\{0,1\}^{M}\to\{0,1\}\)_,_ \((\mathsf{R}^{\mathtt{ct}})_{\widetilde{\ell}}^{D}\left(\mathsf{TM}\right)\) _runs in time_ \((d+n)\cdot M^{c\rho}\) _and with probability at least_ \(1-2^{-M}\)_, its output is either_ \(C_{\mathsf{TM}}(1^{n})\) _or_ \(\bot\)_._
**(Completeness.)**: _If_ \(D\)__\((1/M^{\rho})\)_-avoids_ \(\mathsf{H}_{\widetilde{\ell}}^{\mathtt{ct}}(\mathsf{TM})\)_, then_ \((\mathsf{R}^{\mathtt{ct}})_{\widetilde{\ell}}^{D}\left(\mathsf{TM}\right)\) _outputs_ \(C_{\mathsf{TM}}(1^{n})\) _with probability at least_ \(1-2^{-M}\)_._
We are now ready to prove Theorem1.1.
Proof of Theorem1.1.: We start with some notations.
Notation.Let \(n_{0}\in\mathbb{N}\) be sufficiently large. We define \(n_{0}^{(0)}=n_{0}\), and for every \(\ell\in\mathbb{N}_{\geq 1}\),
\[n_{0}^{(\ell)}=2^{2^{n_{0}^{(\ell-1)}}}.\]
Now, fix \(\ell\in\mathbb{N}\). For simplicity of notation, in the following we will use \(n_{i},\mathsf{H}_{i},t\) to denote \(n_{i}^{(\ell)},\mathsf{H}_{i}^{(\ell)},t^{(\ell)}\), which will be defined later.
Construction of hitting sets.For some parameter \(t\) that we set later, we will define a sequence of input lengths \(n_{1},\ldots,n_{t}\), with the hope that we can construct a string in \(Q\) pseudodeterministically on at least one of the input lengths.
Let \(\beta\in\mathbb{N}_{\geq 1}\) be a sufficiently large constant to be chosen later. For every \(i\in[t]\), we set \(n_{i}=(n_{i-1})^{\beta}\). For each \(i\in\{0,\ldots,t\}\), we will construct a hitting set \(\mathsf{H}_{i}\subseteq\{0,1\}^{n_{i}}\), which is computable by a logspace-uniform \(T_{i}\)-size \(d_{i}\)-depth circuit. As the base case, we set \(\mathsf{H}_{0}\) as the whole set \(\{0,1\}^{n_{0}}\). We note that there is a logspace-uniform \(T_{0}\)-size \(d_{0}\)-depth circuit that outputs all elements in \(\mathsf{H}_{0}\), where \(T_{0}=2^{2n_{0}}\) and \(d_{0}=2n_{0}\).
Let \(\kappa\in\mathbb{N}\) be a large enough constant to be specified later. Let \(c\) be the universal constant from Theorem3.1.
Informal description.We will first give a somewhat informal description of the construction of the \(\mathsf{H}_{i}\), in particular, we will omit details about the uniformity of the circuits (whose analysis is rather tedious). We hope this can help the reader to gain some intuition first. Later we will carefully analyze the uniformity of the circuits for \(\mathsf{H}_{i}\).
For each \(i\in[t]\), we construct \(\mathsf{H}_{i}\) as follows:
1. We define \(\mathsf{BF}_{i-1}\) as the circuit implementing the following algorithm: enumerate every element in \(\mathsf{H}_{i-1}\subseteq\{0,1\}^{n_{i-1}}\), and output the first element that is in \(Q_{n_{i-1}}\); if no such element exists, \(\mathsf{BF}_{i-1}\) outputs \(\bot\) (a minimal sketch of this brute-force step is given after this list). Using the assumed polynomial-time algorithm \(A_{Q}\) for deciding membership in \(Q\), \(\mathsf{BF}_{i-1}\) can be implemented by a \(T_{i-1}^{\prime}\)-size \(d_{i-1}^{\prime}\)-depth circuit, where \[T_{i-1}^{\prime}=T_{i-1}\cdot\operatorname{poly}(n_{i-1})\ \text{ and }\ d_{i-1}^{\prime}=d_{i-1}+\operatorname{poly}(n_{i-1}).\]
2. We then set \(\mathsf{H}_{i}\) as the hitting set from Theorem3.1 constructed with the Turing machine describing the circuit \(\mathsf{BF}_{i-1}\) and output length \(n_{i}\).17 By Theorem3.1, \(\mathsf{H}_{i}\) can be implemented by a \(T_{i}\)-size \(d_{i}\)-depth circuit, where \[T_{i}=\operatorname{poly}(T_{i-1}^{\prime})\ \text{ and }\ d_{i}=O(d_{i-1}^{\prime}\cdot\log T_{i-1}^{\prime}+\log^{2}T_{i-1}^{ \prime})+\operatorname{poly}(n_{i}).\] (Here we are being informal, see below for a more precise description.)
Footnote 17: We do not discuss how to construct the Turing machine here, the details can be found in the formal construction below.
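The brute-force step in item 1 is conceptually simple; the following sketch (with placeholder names `H_prev` and `A_Q`) captures it while ignoring the circuit-complexity bookkeeping that the actual construction needs.

```python
def brute_force_first_element(H_prev, A_Q):
    """BF-style search: return the first element of the candidate list H_prev
    accepted by the membership test A_Q, or None (standing in for bottom)."""
    for x in H_prev:
        if A_Q(x):
            return x
    return None
```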
Formal construction.Next we carefully detail the construction. Let \(\mu\in\mathbb{N}_{\geq 1}\) be a large enough constant. First, we define a Turing machine \(\mathsf{TM}_{\mathsf{H}_{0}}\) of description size \(\mu\) that describes a \(T_{0}\)-size \(d_{0}\)-depth circuit \(C_{\mathsf{H}_{0}}\) for \(\mathsf{H}_{0}\) on input \(1^{n_{0}}\) in \(\mu\log T_{0}\) space. Formally
\[\mathsf{Circuit}[T_{0},\mu\cdot\log T_{0},n_{0},\sqrt{T_{0}}\cdot n_{0}]( \mathsf{TM}_{\mathsf{H}_{0}})=C_{\mathsf{H}_{0}}.\]
Let \(\tau\in\mathbb{N}\) be a large enough constant such that the running time of \(A_{Q}\) on \(n\)-bit inputs is bounded by \(n^{\tau/3}\).
We will make sure that every \(\mathsf{H}_{i}\) has exactly \(\sqrt{T_{i}}\) elements. (This is satisfied for \(i=0\) since \(T_{0}=2^{2n_{0}}\).)
Now, for each \(i\in[t]\), we will define a Turing machine \(\mathsf{TM}_{\mathsf{H}_{i}}\) such that
\[\mathsf{Circuit}[T_{i},\mu\cdot\log T_{i},n_{i},\sqrt{T_{i}}\cdot n_{i}]( \mathsf{TM}_{\mathsf{H}_{i}})=C_{\mathsf{H}_{i}},\]
where \(C_{\mathsf{H}_{i}}\) has depth at most \(d_{i}\). We will also maintain the invariant that \(|\mathsf{TM}_{\mathsf{H}_{i}}|\leq\mu\cdot\log T_{i}\). By our choice of \(\mu\), the above is satisfied when \(i=0\). The machine \(\mathsf{TM}_{\mathsf{H}_{i}}\) is defined in two steps: In the first step we define a machine \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) describing the circuit \(\mathsf{BF}_{i-1}\), and in the second step we plug \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) into Theorem 3.1 to obtain the machine \(\mathsf{TM}_{\mathsf{H}_{i}}\).
A Turing machine \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) for \(\mathsf{BF}_{i-1}\).We first define a Turing machine \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) that describes a circuit for the algorithm \(\mathsf{BF}_{i-1}\). Recall that \(\mathsf{BF}_{i-1}\) works as follows: enumerate every element in \(\mathsf{H}_{i-1}\subseteq\{0,1\}^{n_{i-1}}\) and output the first element that is in \(Q_{n_{i-1}}\); if no such element exists, \(\mathsf{BF}_{i-1}\) outputs \(\bot\).
Using the assumed polynomial-time algorithm \(A_{Q}\) for deciding membership in \(Q\), we first construct a Turing machine \(\mathsf{TM}_{\mathsf{test}}\) with description size \(\mu\) such that
\[C_{\mathsf{test}}=\mathsf{Circuit}\Big{[}T_{i-1}\cdot(n_{i-1})^{\tau/2},\mu \cdot\log T_{i-1},\sqrt{T_{i-1}}\cdot n_{i-1},n_{i-1}\Big{]}(\mathsf{TM}_{ \mathsf{test}})\]
has depth \((n_{i-1})^{\tau/2}\), takes a list of \((T_{i-1})^{1/2}\) strings from \(\{0,1\}^{n_{i-1}}\), and outputs the lexicographically first one in \(Q_{n_{i-1}}\) (if no such string exists, outputs \(\bot\) instead).
Applying Fact 2.4 to compose \(C_{\mathsf{H}_{i-1}}\) and \(C_{\mathsf{test}}\), we obtain the desired Turing machine \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) that constructs a circuit \(C_{\mathsf{BF}_{i-1}}\) computing \(\mathsf{BF}_{i-1}\). Noting that \(\mu\) is sufficiently large, we have that \(\mathsf{TM}_{\mathsf{BF}_{i-1}}\) takes
\[2\cdot\big{(}\big{|}\mathsf{TM}_{\mathsf{H}_{i-1}}\big{|}+\mu+\log n_{i-1}+ \log T_{i-1}\big{)}\leq 3\mu\cdot\log T_{i-1}\]
bits to describe and uses
\[2\cdot(\mu\cdot\log T_{i-1}+\mu\cdot\log T_{i-1}+\log T_{i-1})+\mu\leq 5\mu\cdot \log T_{i-1}\]
space. We now set \(T_{i-1}^{\prime}=T_{i-1}\cdot n_{i-1}^{\tau}\) and \(d_{i-1}^{\prime}=d_{i-1}+n_{i-1}^{\tau}\), and we have
\[\mathsf{Circuit}\big{[}T_{i-1}^{\prime},5\mu\cdot\log T_{i-1},n_{i-1},n_{i-1} \big{]}(\mathsf{TM}_{\mathsf{BF}_{i-1}})=C_{\mathsf{BF}_{i-1}},\]
where \(C_{\mathsf{BF}_{i-1}}\) has depth at most \(d_{i-1}^{\prime}\).
The Turing machine \(\mathsf{TM}_{\mathsf{H}_{i}}\) for \(\mathsf{H}_{i}\).Recall that \(\mathsf{H}_{i}\) is defined as the hitting set \(\mathsf{H}^{\mathsf{ct}}\) of Theorem3.1 constructed with the circuit \(\mathsf{BF}_{i-1}\) and output length \(n_{i}\) in the informal argument. We now formally define \(\mathsf{H}_{i}\) as the hitting set
\[\mathsf{H}_{n_{i-1},T_{i-1}^{\prime},d_{i-1}^{\prime},n_{i},\kappa,\rho}^{ \mathsf{ct}}\big{(}\mathsf{TM}_{\mathsf{BF}_{i-1}}\big{)}.\]
To apply Theorem3.1, we first need to ensure that
\[5\mu\cdot\log T_{i-1}\leq\kappa\log T_{i-1}^{\prime},\]
which is satisfied by setting \(\kappa\geq 5\mu\). We also need to ensure that
\[n_{i-1}\leq T_{i-1}^{\prime},\ \ d_{i-1}^{\prime}\leq T_{i-1}^{\prime},\ \ \text{and}\ \ c\cdot\log T_{i-1}^{\prime}\leq n_{i}\leq(T_{i-1}^{\prime})^{1/(c\rho)}. \tag{1}\]
By Theorem3.1, we know that
\[\mathsf{TM}_{\mathsf{H}_{i}}=\mathbb{A}_{n_{i-1},T_{i-1}^{\prime},d_{i-1}^{ \prime},n_{i},\kappa,\rho}^{\mathsf{ct}}\big{(}\mathsf{TM}_{\mathsf{BF}_{i-1}} \big{)}\]
describes a \(T_{i}\)-size, \(d_{i}\)-depth circuit \(C_{\mathsf{H}_{i}}\) such that \(C_{\mathsf{H}_{i}}(1^{n_{i-1}})\) computes \(\mathsf{H}_{i}\). Moreover, \(\mathsf{TM}_{\mathsf{H}_{i}}\) takes \(c\cdot\kappa\cdot\log T_{i-1}^{\prime}\leq\mu\cdot\log T_{i}\) space and \(c\cdot\log T_{i}\) bits to describe, where
\[T_{i}=(T_{i-1}^{\prime})^{c\cdot\kappa}\ \ \text{and}\ \ d_{i}=c\cdot(d_{i-1}^{ \prime}\log T_{i-1}^{\prime}+\kappa^{2}\cdot\log^{2}T_{i-1}^{\prime})+n_{i}^{ c}.\]
Formally, we have
\[C_{\mathsf{H}_{i}}=\mathsf{Circuit}[T_{i},\mu\cdot\log T_{i},n_{i},\sqrt{T_{i }}\cdot n_{i}](\mathsf{TM}_{\mathsf{H}_{i}})\]
as desired. Our invariant on \(|\mathsf{TM}_{\mathsf{H}_{i}}|\) is maintained by setting \(\mu>c\).
Analysis of \(T_{i}\) and \(d_{i}\) and justification of (1).We set \(t\) to be the first integer such that
\[n_{t+1}>T_{t}^{1/(c\rho)}.\]
In the following we first show that \(t\leq\log n_{0}\).
We first analyze the growth of \(T_{i}\) and \(T_{i}^{\prime}\). For every \(i<t\), by our choice of \(t\), we have that \(n_{i}<n_{i+1}\leq T_{i}^{1/(c\rho)}<T_{i}\) and hence \(T_{i}^{\prime}=T_{i}\cdot n_{i}^{\tau}\leq T_{i}^{\tau+1}\). Then, from \(T_{i+1}=(T_{i}^{\prime})^{c\cdot\kappa}\), we have \(T_{i+1}\leq T_{i}^{c\cdot(\tau+1)\cdot\kappa}\) and consequently \(\log T_{i+1}\leq c\cdot(\tau+1)\cdot\kappa\cdot\log T_{i}\). Letting \(\lambda=c\cdot(\tau+1)\cdot\kappa\), we have
\[\log T_{i}\leq\lambda^{i}\cdot\log T_{0}=\lambda^{i}\cdot 2n_{0}\]
for every \(i\leq t\).
Recalling that \(n_{i}=n_{i-1}^{\beta}\), we have \(\log n_{i}=\beta^{i}\cdot\log n_{0}\). For \(T_{i}<n_{i}\) to hold, it suffices to ensure the following:
\[\lambda^{i}\cdot 2n_{0}<\beta^{i}\cdot\log n_{0}\] \[\iff 2n_{0}/\log n_{0}<(\beta/\lambda)^{i}.\]
Now we will set \(\beta\geq 100\lambda\). Let \(\bar{t}\leq\log n_{0}\) be the first integer satisfying the above. We claim that \(t\leq\bar{t}\): otherwise \(\bar{t}<t\), and by our choice of \(\bar{t}\) we would have \(n_{\bar{t}}>T_{\bar{t}}\) (which certainly implies \(n_{\bar{t}+1}>T_{\bar{t}}^{1/(c\rho)}\)), contradicting our choice of \(t\). Therefore, we have established that \(t\leq\log n_{0}\).
Now we turn to analyze \(d_{i}\) for \(i\leq t\). Note that \(d_{0}=2n_{0}\), and for \(i\geq 1\), we have
\[d_{i}=O\big{(}(d_{i-1}+n_{i-1}^{\tau})\cdot\log T_{i-1}^{\prime}+\log^{2}T_{i- 1}^{\prime}\big{)}+n_{i}^{c}.\]
We will show that for every \(i<t\), \(d_{i}\leq 2n_{i}^{c}\). Clearly this holds for \(i=0\).
Since \(\log T_{i-1}^{\prime}\leq\log T_{i-1}+O(\log n_{i-1})\leq\lambda^{i-1}\cdot 2n_{0 }+O(\log n_{i-1})\leq n_{i-1}\) (recall here that \(n_{i-1}=(n_{0})^{\beta^{i-1}}\) and \(\beta=100\lambda\)), we have
\[d_{i}\leq O\big{(}(n_{i-1}+n_{i-1}^{\tau})\cdot n_{i-1}+n_{i-1}^{2}\big{)}+n_{i }^{c}.\]
We can set \(\beta\) large enough so that \(d_{i}\leq(n_{i-1})^{\beta}+n_{i}^{c}\leq 2\cdot n_{i}^{c}\). From definition, we also have \(d_{i}^{\prime}\leq 2n_{i}^{c}+n_{i}^{\tau}\) for every \(i<t\).
Now we are ready to justify that the conditions from (1) are satisfied for \(i\in[t]\). By our choice of \(t\) and the definition of \(T_{i-1}^{\prime}\), we have \(n_{i-1}\leq T_{i-1}\leq T_{i-1}^{\prime}\). To see that \(d_{i-1}^{\prime}\leq T_{i-1}^{\prime}\) holds, recall that \(T_{i-1}^{\prime}=T_{i-1}\cdot n_{i-1}^{\tau}\), and we have \(d_{i-1}^{\prime}\leq 2n_{i-1}^{c}+n_{i-1}^{\tau}\leq T_{i-1}\cdot n_{i-1}^{\tau}=T_{i-1}^{\prime}\) by setting \(\tau>c\). We also have that \(c\log T_{i-1}^{\prime}=c(\log T_{i-1}+\tau\log n_{i-1})\leq c(\lambda^{i}\cdot 2n_{0}+\tau\log n_{i-1})<n_{i}\) since \(n_{0}<(n_{i})^{1/\beta}\) and \(\lambda^{i}\leq\log_{n_{0}}n_{i}\). Finally, by our choice of \(t\), we have \(n_{i}\leq T_{i-1}^{1/(c\rho)}<\big{(}T_{i-1}^{\prime}\big{)}^{1/(c\rho)}\).
Informal argument of the correctness.We first give a somewhat informal argument below, and then give the precise argument later.
We will argue that for every \(\ell\in\mathbb{N}\), there exists an \(i\in\{0,1,\ldots,t^{(\ell)}\}\) that our polynomial-time pseudodeterministic algorithm for constructing an element from \(Q\) works on input length \(n_{i}^{(\ell)}\).
Let \(i\geq 0\) be the largest integer such that \(\mathsf{H}_{i}\subseteq\{0,1\}^{n_{i}}\) is a hitting set of \(Q_{n_{i}}\). (Note that such \(i\) exists, since \(\mathsf{H}_{0}=\{0,1\}^{n_{0}}\) is a hitting set of \(Q_{n_{0}}\).) If \(i=t\), then we can simply run \(\mathsf{BF}_{t}\) to obtain an element in \(Q_{n_{t}}\) deterministically. Note that this takes time \(\operatorname{poly}(T_{t})=\operatorname{poly}(n_{t})\), since by our choice of \(t\), \(T_{t}\leq n_{t}^{c\cdot\beta\cdot\rho}\).
Otherwise, we have \(i<t\). In this case, we know that \(Q_{n_{i+1}}\) avoids the hitting set \(\mathsf{H}_{i+1}\) (here we use the fact that \(Q_{n_{i+1}}\) accepts more than an \(n_{i+1}^{-\rho}\) fraction of strings from \(\{0,1\}^{n_{i+1}}\)). By the reconstruction part of Theorem 3.1, there is a randomized algorithm running in time \(\operatorname{poly}(n_{i+1})\cdot d_{i}^{\prime}\) that simulates \(\mathsf{BF}_{i}\) with probability at least \(1-2^{-n_{i+1}}\). Since \(\mathsf{H}_{i}\) is a hitting set for \(Q_{n_{i}}\), this gives us a pseudodeterministic algorithm with \(\operatorname{poly}(n_{i+1})\) time that finds a canonical element in \(Q_{n_{i}}\). Since \(n_{i+1}=\operatorname{poly}(n_{i})\), our pseudodeterministic algorithm runs in polynomial time.
Formal description of the algorithm \(B\).First, note that by our choice of \(t\) and \(\beta\), it holds that \(n_{0}^{(\ell+1)}>n_{t^{(\ell)}}^{(\ell)}\). On an input length \(n\in\mathbb{N}_{\geq 1}\), our algorithm \(B\) is defined as follows:
1. Given input \(1^{n}\) for \(n\in\mathbb{N}_{\geq 1}\).
2. Compute the largest \(\ell\in\mathbb{N}\) such that \(n_{0}^{(\ell)}\leq n\), then compute the largest \(i\) such that \(n_{i}^{(\ell)}\leq n\). Output \(\bot\) and abort immediately if \(n_{i}^{(\ell)}\neq n\). From now on we use \(n_{i},T_{i},d_{i}\), etc. to denote \(n_{i}^{(\ell)},T_{i}^{(\ell)},d_{i}^{(\ell)}\), etc.
3. For every \(j\in\{0,1,\ldots,i\}\), compute \(T_{j},T_{j}^{\prime},d_{j},d_{j}^{\prime},\mathsf{TM}_{\mathsf{H}_{j}},\mathsf{TM}_{\mathsf{BF}_{j}}\). There are two cases: * Case I: \(n_{i+1}\leq T_{i}^{1/(c\rho)}\): In this case, we have that \(i<t\). Run \[\big{(}\mathsf{R}^{\mathsf{ct}}\big{)}_{n_{i},T_{i}^{\prime},d_{i}^{\prime},n_{i+1},\kappa,\rho}^{Q_{n_{i+1}}}(\mathsf{TM}_{\mathsf{BF}_{i}})\] and set \(z_{n}\) to be its output. * Case II: \(n_{i+1}>T_{i}^{1/(c\rho)}\): In this case, we have that \(t\leq i\). Compute \(t\) first (recall that \(t\) is the first integer such that \(n_{t+1}>T_{t}^{1/(c\rho)}\)). Output \(\bot\) and abort immediately if \(i>t\). Otherwise, construct \(C_{\mathsf{BF}_{i}}\) from \(\mathsf{TM}_{\mathsf{BF}_{i}}\) and set \(z_{n}=C_{\mathsf{BF}_{i}}(1^{n})\).
4. Output \(z_{n}\) if \(A_{Q}(z_{n})=1\) and \(\bot\) otherwise.
From our choice of parameters and Theorem3.1, the algorithm \(B\) runs in \(\operatorname{poly}(n)\) time.
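The control flow of \(B\) can be summarized by the following schematic sketch; the helpers `locate`, `reconstruct`, `brute_force`, and `A_Q` are placeholders for the corresponding ingredients above, and no attempt is made to model the precise parameter computations.

```python
def B(n, locate, c_rho, reconstruct, brute_force, A_Q):
    """Schematic control flow of algorithm B; all helper names are placeholders.

    locate(n) returns (i, t, T_i, n_next) when n equals some n_i^(ell) and None
    otherwise; reconstruct(n) stands for running the randomized reconstruction
    R^ct with the density test for Q as its distinguisher, and brute_force(n)
    for evaluating the circuit C_{BF_i}(1^n) directly.
    """
    info = locate(n)
    if info is None:                     # invalid input length: output bottom
        return None
    i, t, T_i, n_next = info
    if n_next <= T_i ** (1.0 / c_rho):   # Case I: i < t, reconstruction is fast
        z = reconstruct(n)
    else:                                # Case II: brute force is affordable
        if i > t:
            return None
        z = brute_force(n)
    return z if (z is not None and A_Q(z)) else None
```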
Analysis of the algorithm \(B\).Finally we show that the algorithm \(B\) satisfies our requirements. We call an input length \(n\in\mathbb{N}_{\geq 1}\)_valid_ if there exist \(\ell\in\mathbb{N}\) and \(i\in\{0,\ldots,t^{(\ell)}\}\) such that \(n=n_{i}^{(\ell)}\), and we call \(n\)_invalid_ otherwise.18 For every \(n\in\mathbb{N}_{\geq 1}\), let \(y_{n}\) be the lexicographically first element in \(Q_{n}\).
Footnote 18: By our choice of parameters, such pair \((\ell,i)\) is unique for a valid \(n\).
For every invalid \(n\in\mathbb{N}_{\geq 1}\), we simply set \(x_{n}=y_{n}\). For every valid \(n\in\mathbb{N}_{\geq 1}\), we set \(x_{n}\) as follows:
\[x_{n}=\begin{cases}C_{\mathtt{BF}_{i}}(1^{n_{i}}),&\text{if }C_{\mathtt{BF}_{i}}( 1^{n_{i}})\in Q_{n_{i}},\\ y_{n},&\text{if otherwise.}\end{cases}\]
We first observe that for all invalid \(n\in\mathbb{N}_{\geq 1}\), it holds that \(B(1^{n})=\bot\) with probability \(1\). Now we are ready to show that for every \(n\in\mathbb{N}_{\geq 1}\), \(\Pr_{B}[B(1^{n})\notin\{x_{n},\bot\}]\leq 2^{-n}\). Clearly we only need to consider valid \(n\).
Fix a valid \(n\in\mathbb{N}_{\geq 1}\). From the soundness of the reconstruction part of Theorem3.1, it follows that \(z_{n}\in\{C_{\mathtt{BF}_{i}}(1^{n}),\bot\}\) with probability at least \(1-2^{-n}\) (if \(i=t\), then \(z_{n}=C_{\mathtt{BF}_{i}}(1^{n})\) with probability \(1\)). If \(C_{\mathtt{BF}_{i}}(1^{n_{i}})\in Q_{n_{i}}\), then \(x_{n}=C_{\mathtt{BF}_{i}}(1^{n_{i}})\) and \(z_{n}\in\{x_{n},\bot\}\) with high probability; otherwise we have \(z_{n}=\bot\). In both cases the soundness of \(B\) holds.
Next, we show that for infinitely many \(n\in\mathbb{N}_{\geq 1}\), we have \(\Pr_{B}[B(1^{n})=x_{n}]\geq 1-2^{-n}\). Following the informal argument, for every \(\ell\in\mathbb{N}\), let \(i\geq 0\) be the largest integer such that \(\mathsf{H}_{i}\subseteq\{0,1\}^{n_{i}^{(\ell)}}\) is a hitting set of \(Q_{n_{i}^{(\ell)}}\). Letting \(n=n_{i}^{(\ell)}\), we will show that \(B(1^{n})\) outputs \(x_{n}\) with probability at least \(1-2^{-n}\), which would finish the proof.
If \(i=t\), since \(\mathsf{H}_{i}\) is a hitting set for \(Q_{n}\), it follows that \(z_{n}=C_{\mathtt{BF}_{i}}(1^{n})\in Q_{n}\), and we have \(B(1^{n})=x_{n}\) with probability \(1\). If \(i<t\), we know that \(Q_{n_{i+1}}\)\((1/n_{i+1}^{\rho})\)-avoids the hitting set \(\mathsf{H}_{i+1}\). By the completeness of the reconstruction part of Theorem 3.1, we have that \(z_{n}\) equals \(C_{\mathtt{BF}_{i}}(1^{n})\) with probability at least \(1-2^{-n}\). Moreover, in this case, since \(\mathsf{H}_{i}\) is a hitting set of \(Q_{n}\), we know \(z_{n}\in Q_{n}\) and \(z_{n}=x_{n}\), which completes the proof.
Let \(B\) be the algorithm given by Theorem1.1. We note that, by using \(1\) bit of advice to encode if a given input length \(n\) satisfies \(\Pr_{B}[B(1^{n})=x_{n}]\geq 1-2^{-n}\), we can obtain an efficient algorithm that outputs a canonical answer with high probability (_i.e._, satisfies the promise of a pseudodeterministic algorithm) _on all input lengths_ and is correct on infinitely many of them. We state the result below as it might be useful in future work.
**Corollary 3.2** (Pseudodeterministic Polynomial-Time Construction with \(1\) Bit of Advice that Succeeds Infinitely Often).: _Let \(Q\) be a dense and easy language. There exist a polynomial-time probabilistic algorithm \(A\) and a sequence of advice bits \(\{\alpha_{i}\in\{0,1\}\}_{i\in\mathbb{N}_{\geq 1}}\) such that_
* _for all_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(A(1^{n},\alpha_{n})\) _outputs a canonical_ \(x_{n}\in\{0,1\}^{n}\) _with probability at least_ \(1-2^{-n}\)_, and_
* _for infinitely many_ \(n\in\mathbb{N}_{\geq 1}\)_,_ \(x_{n}\in Q\cap\{0,1\}^{n}\)_._
## 4 Modified Shaltiel-Umans Generator with Uniform Learning Reconstruction
In order to prove Theorem3.1, we will need the following result.
**Theorem 4.1** (A HSG with Uniform Learning Reconstruction).: _There exist an algorithm \(\mathsf{H}\) and a probabilistic oracle algorithm \(\mathsf{R}^{(-)}\) such that the following holds. Let \(p\) be a nice power of \(2\), \(m\) be a power of \(3\), \(\Delta,M\in\mathbb{N}\) with \(p>\Delta^{2}m^{7}M^{9}\), and let \(\vec{\ell}\triangleq(p,m,M,\Delta)\) be the input parameters._
* _The generator_ \(\mathsf{H}_{\vec{\ell}}\) _takes as input a polynomial_ \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) _with total degree at most_ \(\Delta\)_, specified as a list of_ \(p^{m}\) _evaluations of_ \(P\) _on all points from_ \(\mathbb{F}_{p}^{m}\) _in the lexicographic order, and outputs a set of strings in_ \(\{0,1\}^{M}\)_. Moreover,_ \(\mathsf{H}_{\vec{\ell}}\) _can be implemented by a logspace-uniform circuit of size_ \(\operatorname{poly}(p^{m})\) _and depth_ \(\operatorname{poly}(\log p,m,M)\)_._
* _The reconstruction algorithm_ \(\mathsf{R}_{\vec{\ell}}^{D,P}\)_, where_ \(D\colon\{0,1\}^{M}\to\{0,1\}\) _is any function that_ \((1/M)\)_-avoids_ \(\mathsf{H}_{\vec{\ell}}(P)\)_, runs in time_ \(\operatorname{poly}(p,m)\) _and outputs, with probability at least_ \(1-1/p^{m}\)_, a_ \(D\)_-oracle circuit that computes_ \(P\)_._
The rest of this section is dedicated to the proof of Theorem4.1.
### Technical Tools
#### 4.1.1 Error-Correcting Codes
**Theorem 4.2** (List-Decoding Reed-Solomon Codes [14]).: _Let \(b\), \(a\), and \(d\) be integers such that \(a>\sqrt{2d\cdot b}\). Given \(b\) distinct pairs \((x_{i},y_{i})\) in a field \(\mathbb{F}\), there are at most \(2\cdot b/a\) polynomials \(g\) of degree \(d\) such that \(g(x_{i})=y_{i}\) for at least \(a\) pairs. Furthermore, a list of all such polynomials can be computed in time \(\operatorname{poly}(b,\log|\mathbb{F}|)\)._
In particular, if \(a=\alpha\cdot b\) for some \(0<\alpha\leq 1\), provided that \(\alpha>\sqrt{2d/b}\), there are at most \(O(1/\alpha)\) degree-\(d\) polynomials that agree with an \(\alpha\)-fraction of the \(b\) pairs.
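For intuition, the guarantee of Theorem 4.2 can be checked on tiny instances by exhaustive search over all low-degree polynomials; the sketch below does exactly that over a prime field for simplicity, and is of course completely different from the efficient list-decoding algorithm referenced in the theorem.

```python
from itertools import product

def list_decode_brute_force(pairs, d, a, p):
    """All polynomials of degree <= d over F_p (p prime here, for simplicity)
    that agree with at least `a` of the given (x, y) pairs. Exponential in d;
    only meant to illustrate the statement of the list-decoding bound."""
    def evaluate(coeffs, x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    out = []
    for coeffs in product(range(p), repeat=d + 1):   # coeffs of 1, x, ..., x^d
        agree = sum(1 for (x, y) in pairs if evaluate(coeffs, x) == y % p)
        if agree >= a:
            out.append(coeffs)
    return out
```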
#### 4.1.2 Generator Matrices
**Definition 4.3** (Generator Matrices).: Let \(p\) be a power of \(2\) and \(m\in\mathbb{N}\). We say that \(A\in\mathbb{F}_{p}^{m\times m}\) is a _generator matrix_ for \(\mathbb{F}_{p}^{m}\) if \(A\) is invertible, \(A^{p^{m}-1}=I\), and \(\{A^{i}\cdot\vec{v}\}_{1\leq i<p^{m}}=\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\) for any nonzero \(\vec{v}\in\mathbb{F}_{p}^{m}\).19
Footnote 19: In fact, it is not hard to see that the third condition implies the first two. We include those two conditions in this definition as they will be useful later.
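As a sanity check, Definition 4.3 can be verified by brute force for very small parameters. The sketch below works over a prime field for simplicity (the construction itself uses characteristic-2 fields) and checks the orbit of a single nonzero vector, which suffices because a full orbit for one nonzero vector yields a full orbit for every nonzero vector.

```python
import numpy as np

def is_generator_matrix(A, p):
    """Brute-force check that A generates F_p^m in the sense of Definition 4.3,
    over a prime field F_p; feasible only for tiny p and m."""
    A = np.array(A) % p
    m = A.shape[0]
    v = np.zeros(m, dtype=int)
    v[0] = 1                              # a fixed nonzero starting vector
    seen = set()
    x = v.copy()
    for _ in range(p ** m - 1):
        x = (A @ x) % p                   # after i steps, x = A^i v
        seen.add(tuple(x))
    # the orbit of v must be all of F_p^m \ {0}, and A^(p^m - 1) v must equal v
    return len(seen) == p ** m - 1 and tuple(x) == tuple(v)
```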
**Theorem 4.4** ([15]).: _Let \(n\in\mathbb{N}\). Given any irreducible polynomial \(f\) of degree \(n\) over \(\mathbb{F}_{2}\), one can deterministically construct in time \(\operatorname{poly}(n)\) a set \(S_{n}\) that contains at least one primitive root of the multiplicative group of \(\mathbb{F}_{2}[\mathbf{x}]/(f)\)._
We need the following lemma to deterministically construct generator matrices. Note that it is unclear how to deterministically construct a single generator matrix. Instead, we reduce the task of constructing such matrices to the task of constructing primitive roots of \(\mathbb{F}_{p^{m}}\). Then, we invoke Theorem4.4 to construct a _set_ of matrices that contains at least one generator matrix. It turns out that this set of matrices suffices for our purposes.
**Lemma 4.5**.: _Let \(p\) be a nice power of \(2\) and \(m\) be a power of \(3\). One can deterministically construct in time \(\operatorname{poly}(\log p,m)\) a set of matrices in \(\mathbb{F}_{p}^{m\times m}\) that contains at least one generator matrix for \(\mathbb{F}_{p}^{m}\)._
Proof Sketch.: Let \(p=2^{2\cdot 3^{\alpha}}\) and \(m=3^{\beta}\), where \(\alpha,\beta\in\mathbb{N}\). First recall that
\[\mathbb{F}_{p^{m}}=\frac{\mathbb{F}_{2}[\mathbf{x}]}{\left(\mathbf{x}^{2\cdot 3^{ \alpha+\beta}}+\mathbf{x}^{3^{\alpha+\beta}}+1\right)}\quad\text{ and }\quad\mathbb{F}_{p}=\frac{\mathbb{F}_{2}[\mathbf{y}]}{\left(\mathbf{y}^{2 \cdot 3^{\alpha}}+\mathbf{y}^{3^{\alpha}}+1\right)}.\]
We view the field \(\mathbb{F}_{p^{m}}\) as an \(m\)-dimensional vector space over \(\mathbb{F}_{p}\) with a basis \((1,\mathbf{x},\mathbf{x}^{2},\ldots,\mathbf{x}^{3^{\beta}-1})\), using the following (bijective) mapping. Let \(v\in\mathbb{F}_{p^{m}}\), then we can write
\[v \triangleq\sum_{t=0}^{2\cdot 3^{\alpha+\beta}-1}\hat{v}_{t} \cdot\mathbf{x}^{t}\] (where \[\hat{v}_{t}\in\mathbb{F}_{2}\] ) \[=\sum_{i=0}^{2\cdot 3^{\alpha}-1}\sum_{j=0}^{3^{\beta}-1}\hat{v}_{i,j }\cdot\mathbf{x}^{i\cdot 3^{\beta}+j}\] \[=\sum_{j=0}^{3^{\beta}-1}\ \mathbf{x}^{j}\cdot\Bigg{(}\sum_{i=0}^{ 2\cdot 3^{\alpha}-1}\hat{v}_{i,j}\cdot\mathbf{x}^{i\cdot 3^{\beta}}\Bigg{)}.\]
By mapping \(\mathbf{x}^{3^{\beta}}\) to \(\mathbf{y}\), we get that for every \(j\in[3^{\beta}-1]\),
\[\sum_{i=0}^{2\cdot 3^{\alpha}-1}\hat{v}_{i,j}\cdot\mathbf{x}^{i\cdot 3^{\beta} }=\sum_{i=0}^{2\cdot 3^{\alpha}-1}\hat{v}_{i,j}\cdot\mathbf{y}^{i},\]
which represents an element in \(\mathbb{F}_{p}\), so \(v\) corresponds under the mapping to an element in the vector space \(\mathbb{F}_{p}^{m}\) over \(\mathbb{F}_{p}\).
Next, analogously to [12, Lemma 4.4], we observe that:
1. multiplication by a fixed element \(g\in\mathbb{F}_{p^{m}}\) within the field corresponds to a linear transformation \(A_{g}\in\mathbb{F}_{p}^{m\times m}\) within the vector space \(\mathbb{F}_{p}^{m}\) (with respect to the above map and its inverse);
2. \(A_{g}\in\mathbb{F}_{p}^{m\times m}\) can be obtained in time \(\operatorname{poly}(\log p,m)\) given \(g\in\mathbb{F}_{p^{m}}\);
3. if \(g\) is a primitive root of \(\mathbb{F}_{p^{m}}\), then \(A_{g}\) is a generator matrix for \(\mathbb{F}_{p}^{m}\).
The lemma now follows from these observations and Theorem4.4.
#### 4.1.3 Random Self-Reducibility for Discrete Logarithm
**Lemma 4.6**.: _There is a probabilistic polynomial-time oracle algorithm \(\mathsf{DLCorr}^{(-)}\) such that the following holds. Let \(p\) be a power of \(2\), \(m\in\mathbb{N}\), \(\varepsilon>0\), \(A\) be a generator matrix for \(\mathbb{F}_{p}^{m}\), and let \(g\) be any probabilistic procedure that satisfies_
\[\Pr_{\vec{v}\leftarrow\mathbb{F}_{p}^{m}\setminus\{\vec{0}\},\;g}\!\!\left[g( \vec{v})\text{ outputs }i\in[p^{m}-1]\text{ such that }A^{i}\cdot\vec{1}=\vec{v}\right]\geq\varepsilon.\]
_Then for every \(\vec{u}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\), \(\mathsf{DLCorr}^{g}(p,m,1^{\lceil 1/\varepsilon\rceil},A,\vec{u})\) outputs \(\ell\in[p^{m}-1]\) such that \(A^{\ell}\cdot\vec{1}=\vec{u}\) with probability at least \(2/3\)._
Proof Sketch.: The algorithm is an adaptation of the worst-case to average-case reduction for the discrete logarithm problem. Given \(\vec{u}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\), we pick a random \(j\in[p^{m}-1]\) and set \(\vec{v}\triangleq A^{j}\cdot\vec{u}\). Let \(i\triangleq g(\vec{v})\). Since \(\vec{v}\) is uniformly distributed, with probability at least \(\varepsilon\) we have \(A^{i}\cdot\vec{1}=\vec{v}\). We check if this is the case in polynomial time (note that we can compute \(A^{i}\) in polynomial time by repeated squaring). Suppose this is indeed the case; then \(A^{i}\cdot\vec{1}=\vec{v}=A^{j}\cdot\vec{u}\). Recall that \(A\) is invertible. If \(i>j\), we output \(\ell\triangleq i-j\). If \(i=j\), we have \(\vec{u}=\vec{1}\), and in this case we output \(\ell\triangleq p^{m}-1\). Finally, if \(j>i\), we output \(\ell\triangleq(p^{m}-1)-(j-i)\).
By sampling \(O(1/\varepsilon)\) many values of \(j\), with probability at least \(2/3\), there is at least one invocation \(i\triangleq g(\vec{v})\) such that \(A^{i}\cdot\vec{1}=\vec{v}\) indeed holds. Therefore, the success probability of our algorithm is at least \(2/3\).
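The reduction in the proof sketch is straightforward to spell out. Below is a minimal Python sketch over a prime field (again a simplification of the characteristic-2 setting of the lemma), where `A` is a numpy integer generator matrix and `g` is a possibly unreliable discrete-log oracle returning either a candidate exponent or `None`.

```python
import random
import numpy as np

def mat_pow(A, e, p):
    """A^e mod p by repeated squaring (A is an m x m integer matrix)."""
    m = A.shape[0]
    R = np.eye(m, dtype=int)
    B = A % p
    while e:
        if e & 1:
            R = (R @ B) % p
        B = (B @ B) % p
        e >>= 1
    return R

def dl_corr(A, p, u, g, trials):
    """Worst-case to average-case reduction for discrete log w.r.t. a generator
    matrix A (a simplified analogue of Lemma 4.6)."""
    m = A.shape[0]
    order = p ** m - 1
    one = np.ones(m, dtype=int)
    for _ in range(trials):
        j = random.randrange(1, order + 1)
        v = (mat_pow(A, j, p) @ u) % p        # random shift of the instance
        i = g(v)
        if i is None:
            continue
        if not np.array_equal((mat_pow(A, i, p) @ one) % p, v):
            continue                          # the oracle's answer was wrong
        ell = (i - j) % order
        return order if ell == 0 else ell     # A^order = I handles the 0 case
    return None
```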
#### 4.1.4 Pseudorandom Generators from One-Way Permutations
**Theorem 4.7** ([14, 15, 16]).: _There exist a deterministic oracle algorithm \(\mathsf{CryptoG}^{(-)}\) and a probabilistic oracle algorithm \(\mathsf{Invert}^{(-)}\) such that the following holds. Let \(s,M\in\mathbb{N}\) be the input parameters, and let \(f\colon\{0,1\}^{s}\to\{0,1\}^{s}\) be a permutation._
1. \(\mathsf{CryptoG}^{f}_{s,M}\) _outputs a set of_ \(2^{2s}\)__\(M\)_-bit strings. Moreover,_ \(\mathsf{CryptoG}^{f}_{s,M}\) _can be implemented by a logspace-uniform_ \(f\)_-oracle circuit of size_ \(\mathrm{poly}(2^{s},M)\) _and depth_ \(\mathrm{poly}(s,M)\)_._
2. \(\mathsf{Invert}^{(-)}_{s,M}\) _takes_ \(x\in\{0,1\}^{s}\) _as input and runs in_ \(\mathrm{poly}(s,M)\) _time. For any function_ \(D\colon\{0,1\}^{M}\to\{0,1\}\) _that_ \(\varepsilon\)_-distinguishes_ \(\mathsf{CryptoG}^{f}_{s,M}\) _from_ \(\{0,1\}^{M}\)_, we have_ \[\Pr_{x\leftarrow\{0,1\}^{s}}\Bigl{[}\mathsf{Invert}^{f,D}_{s,M}(x)=f^{-1}(x) \Bigr{]}\geq\frac{\varepsilon}{\mathrm{poly}(M)}.\]
Proof Sketch.: The generator \(\mathsf{CryptoG}^{(-)}\) follows from the well-known construction of pseudorandom generators from one-way permutations using the Goldreich-Levin Theorem [16]. More specifically,
\[\mathsf{CryptoG}^{f}_{s,M}\triangleq\bigcup_{x,r\in\{0,1\}^{s}}\Bigl{(} \langle x,r\rangle,\langle f(x),r\rangle,\langle f(f(x)),r\rangle,\ldots, \Bigl{\langle}f^{(M-1)}(x),r\Bigr{\rangle}\Bigr{)},\]
where \(\langle\cdot\rangle\) denotes the inner product mod \(2\) function and \(f^{(i)}\) denotes the composition of \(f\) with itself \(i\) times.
The "inverting" algorithm \(\mathsf{Invert}^{(-)}\) and its correctness rely on standard techniques in pseudorandomness such as the hybrid argument, Yao's theorem on the equivalence between pseudorandomness and unpredictability [15], and the Goldreich-Levin decoding algorithm [16]. (See _e.g._, [1, Section 9.3].)
Finally, to see that \(\mathsf{CryptoG}^{f}_{s,M}\) can be implemented by a logspace-uniform \(f\)-oracle circuit of size \(\mathrm{poly}(2^{s},M)\) and depth \(\mathrm{poly}(s,M)\), we first note that there is a Turing machine that given \(s,M\in\mathbb{N}\) and \(x,r\in\{0,1\}^{s}\), computes the \(M\)-bit string \(\langle x,r\rangle,\langle f(x),r\rangle,\langle f(f(x)),r\rangle,\ldots,\bigl{\langle}f^{M-1}(x),r\bigr{\rangle}\) in \(\mathrm{poly}(s,M)\) time using \(f\) as an oracle. Then by the fact that any time-\(t\) Turing machine can be simulated by a logspace-uniform circuit of size \(O(t^{2})\), computing a single \(M\)-bit string in \(\mathsf{CryptoG}^{f}_{s,M}\) can be done using a logspace-uniform \(f\)-oracle circuit of size \(\mathrm{poly}(s,M)\). The desired conclusion follows from the observation that we can compute these \(2^{2s}\)\(M\)-bit strings in parallel.
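The generator itself is easy to enumerate directly from the displayed formula; the following sketch does so for a permutation `f` on \(s\)-bit tuples (feasible only for very small \(s\), and purely illustrative).

```python
from itertools import product

def inner_product_mod2(x_bits, r_bits):
    return sum(a & b for a, b in zip(x_bits, r_bits)) & 1

def crypto_g(f, s, M):
    """Enumerate the output set of CryptoG^f_{s,M}: for every pair of seeds
    (x, r) in {0,1}^s, output the M bits <x,r>, <f(x),r>, ..., <f^{M-1}(x),r>.
    Here f maps s-bit tuples to s-bit tuples."""
    outputs = []
    for x in product((0, 1), repeat=s):
        for r in product((0, 1), repeat=s):
            y, bits = x, []
            for _ in range(M):
                bits.append(inner_product_mod2(y, r))
                y = f(y)                 # advance to the next iterate of f
            outputs.append(tuple(bits))
    return outputs
```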
#### 4.1.5 Self-Correction for Polynomials
**Theorem 4.8** (A Self-Corrector for Polynomials, cf. [12, 13]).: _There is a probabilistic oracle algorithm \(\mathsf{PCorr}^{(-)}\) such that the following holds. Let \(p\) be a power of \(2\), \(m,\Delta\in\mathbb{N}\) such that \(\Delta<p/3\). Let \(g\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) be such that there exists a polynomial \(P\) of total degree at most \(\Delta\) for which_
\[\Pr_{\vec{x}\in\mathbb{F}_{p}^{m}}[g(\vec{x})\neq P(\vec{x})]\leq 1/4.\]
_Then for all \(\vec{x}\in\mathbb{F}_{p}^{m}\), \(\mathsf{PCorr}^{g}(p,m,\Delta,\vec{x})\) runs in time \(\operatorname{poly}(\Delta,\log p,m)\) and outputs \(P(\vec{x})\) with probability at least \(2/3\)._
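A simplified version of such a self-corrector queries \(g\) along a random line through \(\vec{x}\) and interpolates the univariate restriction. The sketch below works over a prime field and, unlike Theorem 4.8, only tolerates an error rate around \(1/(4(\Delta+1))\); matching the constant \(1/4\) requires decoding a noisy Reed-Solomon codeword along the line rather than plain interpolation.

```python
import random

def poly_self_correct(g, x, p, m, delta):
    """Naive random-line self-corrector over a prime field F_p.

    Queries g on x + t*y for a random direction y and delta+1 nonzero values
    of t, interpolates the degree-<=delta univariate restriction, and returns
    its value at t = 0 (which is P(x) when all queried values are correct)."""
    y = [random.randrange(p) for _ in range(m)]
    ts = random.sample(range(1, p), delta + 1)
    values = [g(tuple((xi + t * yi) % p for xi, yi in zip(x, y))) for t in ts]
    # Lagrange interpolation of the line restriction, evaluated at t = 0
    result = 0
    for i, ti in enumerate(ts):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if i != j:
                num = (num * (-tj)) % p
                den = (den * (ti - tj)) % p
        result = (result + values[i] * num * pow(den, p - 2, p)) % p
    return result
```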
### The Shaltiel-Umans Generator
We state a version of the hitting set generator constructed by Shaltiel and Umans [13] that will be convenient for our purposes.
**Theorem 4.9** (Implicit in [13]).: _There exist a deterministic algorithm \(\mathsf{HSU}\) and a probabilistic oracle algorithm \(\mathsf{RSU}^{(-)}\) such that the following holds. Let \(p\) be a power of \(2\), \(m,M,\Delta\in\mathbb{N}\) with \(p>\Delta^{2}m^{7}M^{9}\), \(\vec{\ell}\triangleq(p,m,M,\Delta)\) be the input parameters, and let_
* \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) _be a polynomial with total degree at most_ \(\Delta\)_, given as a list of_ \(p^{m}\) _evaluations of_ \(P\) _on all points from_ \(\mathbb{F}_{p}^{m}\) _in lexicographic order, and_
* \(A\) _be a generator matrix for_ \(\mathbb{F}_{p}^{m}\)_._
_Then_
1. _The generator_ \(\mathsf{HSU}_{\vec{\ell}}(P,A)\) _outputs a set of strings in_ \(\{0,1\}^{M}\)_. Moreover,_ \(\mathsf{HSU}_{\vec{\ell}}\) _can be implemented by a logspace-uniform circuit of size_ \(\operatorname{poly}(p^{m})\) _and depth_ \(\operatorname{poly}(\log p,m)\)_._
2. _The reconstruction algorithm_ \(\mathsf{RSU}_{\vec{\ell}}^{D,P}(A)\)_, where_ \(D\colon\{0,1\}^{M}\to\{0,1\}\) _is any function that_ \((1/M)\)_-avoids_ \(\mathsf{HSU}_{\vec{\ell}}(P,A)\)_, runs in_ \(\operatorname{poly}(p,m)\) _time and outputs, with probability at least_ \(1-1/p^{2m}\)_, a vector_ \(\vec{v}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\) _and a_ \(D\)_-oracle circuit_ \(C:[p^{m}-1]\to\mathbb{F}_{p}\) _such that_ \[C(i)=P(A^{i}\cdot\vec{v})\text{ for every }i\in[p^{m}-1].\]
The statement of Theorem 4.9 and the HSG result of [13] differ in two aspects:
* First, we use a _polynomial_ instead of a Boolean function to construct the HSG, which fits more naturally into the framework of Chen-Tell [10] (see also Section 5).
* Second, we explicitly calculated a circuit depth upper bound for computing the HSG, which is not stated in [13].
Nevertheless, Theorem 4.9 easily follows from the arguments in [13]. For completeness, we review the construction of [13] and present a proof sketch of Theorem 4.9 in this subsection.
The generator.We first construct \(m\) candidate "\(p\)-ary PRGs" \(G^{(0)}_{\text{$p$-ary}},G^{(1)}_{\text{$p$-ary}},\cdots,G^{(m-1)}_{\text{$p$-ary}} :\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}^{M}\); note that the inputs and outputs of these "\(p\)-ary PRGs" are elements in \(\mathbb{F}_{p}\). In particular:
\[G^{(j)}_{\text{$p$-ary}}(\vec{x})=\Big{(}P(A^{p^{j}\cdot 1}\vec{x}),P(A^{p^{j} \cdot 2}\vec{x}),\cdots,P(A^{p^{j}\cdot M}\vec{x})\Big{)}.\]
Then we convert each \(p\)-ary PRG into a (usual binary) PRG by invoking [15, Lemma 5.6]. More precisely, for each \(0\leq j<m\), we interpret \(G^{(j)}_{\text{$p$-ary}}\) as a PRG that takes a binary seed of length \(m\log p\) and outputs \(M\) elements in \(\{0,1\}^{\log p}\), using the canonical bijection \(\kappa^{(\log p)}\) between \(\mathbb{F}_{p}\) and \(\{0,1\}^{\log p}\). Then, for \(G^{(j)}_{\text{$p$-ary}}:\{0,1\}^{m\log p}\to(\{0,1\}^{\log p})^{M}\), given seeds \(x\in\{0,1\}^{m\log p}\) and \(r\in\{0,1\}^{\log p}\), we define
\[G^{(j)}(x,r)=\Big{(}\langle G^{(j)}_{\text{$p$-ary}}(x)_{1},r\rangle,\langle G ^{(j)}_{\text{$p$-ary}}(x)_{2},r\rangle,\ldots,\langle G^{(j)}_{\text{$p$-ary} }(x)_{M},r\rangle\Big{)}.\]
Here, \(\langle\cdot\rangle\) denotes the inner product mod 2 function. In other words, we combine \(G^{(j)}_{\text{$p$-ary}}\) with the _Hadamard code_ to obtain a Boolean PRG \(G^{(j)}\colon\{0,1\}^{m\log p+\log p}\to\{0,1\}^{M}\).
Finally, our HSG will be the union of all PRGs \(G^{(j)}\). That is, our algorithm \(\mathsf{HSU}_{\vec{\ell}}(P,A)\) enumerates every \(0\leq j<m\), \(x\in\{0,1\}^{m\log p}\), \(r\in\{0,1\}^{\log p}\), and prints the string \(G^{(j)}(x,r)\).
To see that \(\mathsf{HSU}_{\vec{\ell}}\) can be computed by a logspace-uniform low-depth circuit, we argue that given appropriate indexes \(j\) and \(i\), the \(i\)-th bit of \(G^{(j)}(x,r)\) can be computed by a logspace-uniform low-depth circuit. The bit we want to compute is
\[G^{(j)}(x,r)_{i}=\langle G^{(j)}_{\text{$p$-ary}}(x)_{i},r\rangle=\langle P(A ^{p^{j}\cdot i}\vec{x}),r\rangle,\]
where \(\vec{x}\) is the vector in \(\mathbb{F}_{p}^{m}\) encoded by \(x\). By repeated squaring, we can output a (logspace-uniform) circuit of size and depth \(\operatorname{poly}(\log p,m)\) that computes \(A^{p^{j}\cdot i}\). Multiplying \(A^{p^{j}\cdot i}\) with \(\vec{x}\), indexing (_i.e._, finding the \((A^{p^{j}\cdot i}\vec{x})\)-th entry of \(P\)), and computing Boolean inner product have logspace-uniform circuits of size \(\operatorname{poly}(M,p^{m})=\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(m,\log p,\log M)=\operatorname{poly}(\log p,m)\). Since we need to output \(m\cdot p^{m+1}\) strings of length \(M\) and each output bit can be computed by a logspace-uniform circuit of size \(\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(\log p,m)\), the complexity upper bounds for computing \(\mathsf{HSU}_{\vec{\ell}}\) follows.
Now we consider the reconstruction algorithm. Suppose there is an adversary \(D:\{0,1\}^{M}\to\{0,1\}\) that \((1/M)\)-avoids \(\mathsf{HSU}_{\vec{\ell}}(P,A)\). It follows that \(D\)\((1/M)\)-distinguishes every binary PRG \(G^{(j)}\).
From distinguishers to next-element predictors.For each \(0\leq j<m\), we use \(D\) to build a "next-element predictor" \(D^{(j)}\) for \(G^{(j)}_{\text{$p$-ary}}\). Since \(D\)\((1/M)\)-distinguishes \(G^{(j)}\), it can be used to build a next-_bit_ predictor \(D^{(j)}_{\mathsf{bit}}\) such that
\[\Pr_{i\leftarrow[M],x\leftarrow\{0,1\}^{m\log p},r\leftarrow\{0,1\}^{\log p}} \Bigl{[}D^{(j)}_{\mathsf{bit}}\Bigl{(}G^{(j)}(x,r)_{1},\ldots,G^{(j)}(x,r)_{i-1 }\Bigr{)}=G^{(j)}(x,r)_{i}\Bigr{]}\geq 1/2+1/M^{2}.\]
Therefore, with probability \(\geq 1/(2M^{2})\) over \(i\leftarrow[M]\) and \(x\leftarrow\{0,1\}^{m\log p}\), the probability over \(r\leftarrow\{0,1\}^{\log p}\) that \(D^{(j)}_{\mathsf{bit}}\) correctly predicts the \(i\)-th bit of \(G^{(j)}(x,r)\) given its first \(i-1\) bits is at least \(1/2+1/(2M^{2})\). In this case, using the list-decoding algorithm for Hadamard code [10], we can find a list of \(O(M^{4})\) elements that contains \(G^{(j)}_{\text{$p$-ary}}(x)_{i}\). (In fact, the trivial list-decoding algorithm suffices here, since it runs in time \(\operatorname{poly}(p)\).) We call this procedure the _next-element predictor_\(D^{(j)}\); it takes as input
\[u_{M-1}\triangleq P(A^{-(M-1)p^{j}}\vec{x}),u_{M-2}\triangleq P(A^{-(M-2)p^{j}} \vec{x}),\ldots,u_{1}\triangleq P(A^{-p^{j}}\vec{x}),\]
where \(\vec{x}\leftarrow\mathbb{F}_{p}^{m}\) is a random vector. It randomly selects \(i\leftarrow[M]\), invokes \(D^{(j)}_{\mathsf{bit}}\) and the list-decoding algorithm for the Hadamard code, and outputs a list of \(O(M^{4})\) elements. With probability \(\Omega(1/M^{2})\) over \(\vec{x}\leftarrow\mathbb{F}_{p}^{m}\) and the internal randomness of \(D^{(j)}_{\mathsf{bit}}\), this list will contain \(P(\vec{x})\).
We repeat \(D^{(j)}\) \(O(m\log p)\) times and fix its internal randomness, so that in what follows we can assume \(D^{(j)}\) is deterministic. With probability at least \(1-1/(10p^{2m})\), for every \(0\leq j<m\), \(D^{(j)}\) will be correct in the following sense: for some \(\rho\triangleq 1/\Theta(M^{2}m\log p)\), \(D^{(j)}\) outputs \(\rho^{-2}\) elements, and
\[\Pr_{\vec{x}\leftarrow\mathbb{F}_{p}^{m}}\Bigl{[}P(\vec{x})\in D^{(j)}(u_{M-1 },u_{M-2},\ldots,u_{1})\Bigr{]}>\rho.\]
Learn Next Curve.We will use the following notation from [13]. Let \(r\triangleq O(m\log p)\) be a parameter denoting the number of reference points, and \(v\triangleq(m+1)r-1\) denotes the degree of curves.20 A _curve_ is a polynomial \(C:\mathbb{F}_{p}\rightarrow\mathbb{F}_{p}^{m}\) with degree \(v\). (That is, each coordinate of \(C\) is a univariate polynomial of degree \(v\) over \(\mathbb{F}_{p}\).) Recall that \(A\in\mathbb{F}_{p}^{m\times m}\) is the generator matrix. We use \(AC\) to denote the curve where for each \(t\in\mathbb{F}_{p}\), \(AC(t)=A\cdot C(t)\) (note that \(AC\) is still a degree-\(v\) polynomial). We also use \(P(C)\) to denote the function such that for every \(t\in\mathbb{F}_{p}\), \(P(C)(t)=P(C(t))\); the _evaluation table_ of \(P(C)\) is the length-\(p\) vector where for every \(t\in\mathbb{F}_{p}\), the \(t\)-th entry of the vector is \(P(C(t))\).
Footnote 20: The parameter \(v\) is set in the proof of [13, Lemma 5.14].
Now, we recall the implementation of an important subroutine called Learn Next Curve as defined in [13, Section 5.5]. Learn Next Curve takes as input a next curve \(C:\mathbb{F}_{p}\rightarrow\mathbb{F}_{p}^{m}\), a set of reference points\(R\subseteq\mathbb{F}_{p}\) of size \(r\), a stride\(0\leq j<m\), as well as input evaluations; the input evaluations consist of two parts, namely the evaluation tables of \(P(A^{-ip^{j}}C)\) for every \(1\leq i<M\) and the values of \(P(C(t))\) for every \(t\in R\). The intended output evaluations consist of the evaluation table of \(P(C)\).
In particular, Learn Next Curve starts by obtaining a set of \(\rho^{-2}\) values
\[S_{t}\triangleq D^{(j)}\Bigl{(}P(A^{-(M-1)p^{j}}C(t)),P(A^{-(M-2)p^{j}}C(t)), \ldots,P(A^{-p^{j}}C(t))\Bigr{)}\]
for each \(t\in\mathbb{F}_{p}\). Then it calls the algorithm from Theorem4.2 on the pairs \(\{(t,e)\}_{t\in\mathbb{F}_{p},e\in S_{t}}\) to obtain the list of all polynomials \(Q\) such that \(Q(t)\in S_{t}\) for many coordinates \(t\). (This takes \(\operatorname{poly}(p\rho^{-2},\log p)\leq\operatorname{poly}(p,m)\) time.) If this list contains a unique polynomial \(Q\) such that \(Q(t)=P(C(t))\) for every \(t\in R\), then we output this polynomial; otherwise we output \(\bot\). It is clear that Learn Next Curve runs in \(\operatorname{poly}(p,m)\) time.
We say Learn Next Curve_succeeds_ (on next curve, reference points, and stride), if whenever the input evaluations are the intended values, the output evaluations are also the intended values. Let
\[\varepsilon_{\mathrm{LNC}}\triangleq O(v\rho^{-1}/p)^{v/2}+(8\rho^{-3})(v \deg(P)/p)^{r}.\]
It is proven in [13, Lemma 5.12] that, assuming \(p>32\deg(P)v/\rho^{4}\), if the next curve and reference points are chosen uniformly at random, Learn Next Curve succeeds with probability \(1-\varepsilon_{\mathrm{LNC}}\). Since \(\deg(P)=\Delta\), \(\rho^{-1}=\Theta(M^{2}m\log p)\), \(v=O(m^{2}\log p)\), and \(p>\Delta^{2}m^{7}M^{9}\), it is indeed the case that \(p>32\deg(P)v/\rho^{4}\). Also note that
\[\varepsilon_{\mathrm{LNC}}\leq O(\rho^{3}/32\deg(P))^{v/2}+(8\rho^{-3})(\rho^{ 4}/32)^{r}\leq(1/2)^{r-1}\ll 1/(10p^{4m}).\]
A first attempt for the reconstruction algorithm would be the following. Let \(i\in[p^{m}-1]\), and suppose that we want to compute \(P(A^{i}\vec{1})\). We write \(i\) in \(p\)-ary as \(i=\sum_{j=0}^{m-1}i_{j}p^{j}\) (where each
\(i_{j}\in\{0,1,\ldots,p-1\}\)). Recall that for each next curve\(C\) and stride\(j\), given the evaluation tables of \(P(A^{-kp^{j}}C)\) for every \(1\leq k<M\), we can learn the evaluation table of \(P(C)\) in one invocation of Learn Next Curve. Therefore, we proceed in \(m\) rounds, where for each \(0\leq l<m\), the \(l\)-th round performs the following computation:
* Let \(i^{\prime}\triangleq\sum_{j=0}^{l-1}i_{j}p^{j}\). Suppose that at the beginning of the \(l\)-th round, we already know the evaluation tables of \(P(A^{kp^{l}+i^{\prime}}C)\) for each \(1\leq k<M\). (For \(l=0\), these values can be hardcoded as advice; for \(l\geq 1\), they should be obtained from the previous round.) We invoke Learn Next Curve\(M(p-1)\) times with stride \(l\) to obtain the evaluation tables of \(P(A^{kp^{l}+i^{\prime}}C)\) for each \(1\leq k<M\cdot p\). The \(l\)-th round ends here; note that we have obtained the evaluation tables required in the \((l+1)\)-th round (namely \(P(A^{kp^{l+1}+i_{l}p^{l}+i^{\prime}}C)\) for every \(1\leq k<M\)).
However, there is one issue with this approach: to learn a curve \(C\), we also need to provide Learn Next Curve with the evaluations of some reference points on \(C\). To resolve this issue, [15] introduced an _interleaved learning_ procedure that involves two curves \(C_{1}\) and \(C_{2}\). These two curves possess nice intersection properties that for certain choices of \(k\) and \(l\), \(A^{k}C_{1}\) and \(A^{l}C_{2}\) intersect on at least \(r\) points. This enables us for example to learn the evaluation table of \(P(A^{l}C_{2})\) whenever we know the evaluation table of \(P(A^{k}C_{1})\), by using the evaluations of \(P(A^{k}C_{1})\) at reference points\(R\), where \(R\) is the intersection of \(A^{k}C_{1}\) and \(A^{l}C_{2}\).
Interleaved learning.In what follows, we use \([C_{1}\cap C_{2}]\) to denote the set \(\{t\in\mathbb{F}_{p}:C_{1}(t)=C_{2}(t)\}\). We say two curves \(C_{1}\) and \(C_{2}\) are _good_ if they satisfy the following properties:
* \(C_{1}(1)\neq\vec{0}\);
* for all \(1\leq i<p^{m}\) and all \(0\leq j<m\), \([A^{i+p^{j}}C_{1}\cap A^{i}C_{2}]\) and \([A^{i}C_{1}\cap A^{i}C_{2}]\) are of size \(\geq r\);
* for all \(1\leq i<p^{m}\) and all \(0\leq j<m\), Learn Next Curve succeeds given next curve\(A^{i+p^{j}}C_{1}\), reference points\([A^{i+p^{j}}C_{1}\cap A^{i}C_{2}]\), and stride\(j\); and
* for all \(1\leq i<p^{m}\) and all \(0\leq j<m\), Learn Next Curve succeeds given next curve\(A^{i}C_{2}\), reference points\([A^{i}C_{1}\cap A^{i}C_{2}]\), and stride\(j\).
By [15, Lemma 5.14], there is a \(\operatorname{poly}(v,p)\)-time randomized algorithm that, with probability \(1-2mp^{m}\cdot\varepsilon_{\text{LNC}}\geq 1-1/(10p^{2m})\), outputs two curves \(C_{1}\) and \(C_{2}\) that are good.
The basic step in the reconstruction algorithm is called _interleaved learning_ in [15]. This step has the following guarantee: For a stride\(j\), given the correct evaluation tables of \(P(A^{i-kp^{j}}C_{1})\) and \(P(A^{i-kp^{j}}C_{2})\) for every \(1\leq k<M\), we can compute the correct evaluation tables of \(P(A^{i}C_{1})\) and \(P(A^{i}C_{2})\). In particular, _interleaved learning_ consists of the following two steps:
* first, we invoke Learn Next Curve with next curve \(A^{i}C_{1}\), reference points\([A^{i-p^{j}}C_{2}\cap A^{i}C_{1}]\), and stride\(j\);
* then, we invoke Learn Next Curve with next curve \(A^{i}C_{2}\), reference points\([A^{i}C_{1}\cap A^{i}C_{2}]\), and stride\(j\).
Note that we assume that all invocations of Learn Next Curve succeed, as this happens with high probability \((1-1/(10p^{2m}))\).
The reconstruction algorithm.Recall that our reconstruction algorithm needs to output two elements: a vector \(\vec{v}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\) and a \(D\)-oracle circuit \(C:[p^{m}-1]\to\mathbb{F}_{p}\) such that \(C(i)=P(A^{i}\cdot\vec{v})\) for every \(i\in[p^{m}-1]\).
We first compute the curves \(C_{1}\) and \(C_{2}\) that are good with probability \(1-1/(10p^{2m})\). Our reconstruction algorithm will be correct provided that \(C_{1}\) and \(C_{2}\) are good (and that we fixed good internal randomness of our next-element predictors \(D^{(j)}\)); this happens with probability \(\geq 1-1/(10p^{2m})-1/(10p^{2m})\geq 1-1/p^{2m}\). The vector we output will be \(\vec{v}\triangleq C_{1}(1)\) (which is non-zero if \(C_{1}\) and \(C_{2}\) are good). It remains to output a circuit \(C\) such that for every \(i\in[p^{m}-1]\), \(C(i)=P(A^{i}\cdot\vec{v})\).
Given an integer \(i\), our circuit \(C\) first writes \(i\) in \(p\)-ary as \(i=\sum_{j=0}^{m-1}i_{j}p^{j}\). Then, it proceeds in \(m\) rounds, where for each \(0\leq l<m\), the \(l\)-th round performs the following:
* Let \(i^{\prime}\triangleq\sum_{j=0}^{l-1}i_{j}p^{j}\). Suppose that at the beginning of the \(l\)-th round, we already know the evaluation tables of \(P(A^{kp^{l}+i^{\prime}}C_{1})\) and \(P(A^{kp^{l}+i^{\prime}}C_{2})\) for each \(1\leq k<M\). We perform interleaved learning \(M(p-1)\) times with stride \(l\) to obtain the evaluation tables of \(P(A^{kp^{l}+i^{\prime}}C_{1})\) and \(P(A^{kp^{l}+i^{\prime}}C_{2})\) for each \(1\leq k<M\cdot p\). The \(l\)-th round ends here; note that we have obtained the evaluation tables required to perform the \((l+1)\)-th round (namely, \(P(A^{kp^{l+1}+i_{l}p^{l}+i^{\prime}}C_{1})\) and \(P(A^{kp^{l+1}+i_{l}p^{l}+i^{\prime}}C_{2})\) for every \(1\leq k<M\)).
Finally, after the \((m-1)\)-th round, we have obtained the evaluation table of \(P(A^{i}C_{1})\), and we can simply output \(P(A^{i}C_{1}(1))=P(A^{i}\vec{v})\) as the answer.
Note that the interleaved learning procedure needs to invoke the next-element predictor, therefore our circuit \(C\) will be a \(D\)-oracle circuit. Also, at the beginning of the first (0-th) round, we need the evaluation tables of \(P(A^{k}C_{1})\) and \(P(A^{k}C_{2})\) for each \(0\leq k<M\). Our reconstruction algorithm can simply query the polynomial \(P\) to obtain these values and hardcode them into our circuit \(C\). It is clear that our reconstruction algorithm runs in \(\operatorname{poly}(p,m)\) time and succeeds with probability \(\geq 1-1/p^{2m}\).
### Modified Shaltiel-Umans Generator: Proof of Theorem 4.1
In this subsection, we prove Theorem4.1, which is restated below.
**Theorem 4.1** (A HSG with Uniform Learning Reconstruction).: _There exist an algorithm \(\mathsf{H}\) and a probabilistic oracle algorithm \(\mathsf{R}^{(-)}\) such that the following holds. Let \(p\) be a nice power of \(2\), \(m\) be a power of \(3\), \(\Delta,M\in\mathbb{N}\) with \(p>\Delta^{2}m^{7}M^{9}\), and let \(\vec{\ell}\triangleq(p,m,M,\Delta)\) be the input parameters._
* _The generator_ \(\mathsf{H}_{\vec{\ell}}\) _takes as input a polynomial_ \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) _with total degree at most_ \(\Delta\)_, specified as a list of_ \(p^{m}\) _evaluations of_ \(P\) _on all points from_ \(\mathbb{F}_{p}^{m}\) _in the lexicographic order, and outputs a set of strings in_ \(\{0,1\}^{M}\)_. Moreover,_ \(\mathsf{H}_{\vec{\ell}}\) _can be implemented by a logspace-uniform circuit of size_ \(\operatorname{poly}(p^{m})\) _and depth_ \(\operatorname{poly}(\log p,m,M)\)_._
* _The reconstruction algorithm_ \(\mathsf{R}_{\vec{\ell}}^{D,P}\)_, where_ \(D\colon\{0,1\}^{M}\to\{0,1\}\) _is any function that_ \((1/M)\)_-avoids_ \(\mathsf{H}_{\vec{\ell}}(P)\)_, runs in time_ \(\operatorname{poly}(p,m)\) _and outputs, with probability at least_ \(1-1/p^{m}\)_, a_ \(D\)_-oracle circuit that computes_ \(P\)_._
Proof.: One difference between our generator and the Shaltiel-Umans generator (Theorem4.9) is that the reconstruction procedure in the latter only learns a circuit \(C_{0}\) that computes the mapping \(i\mapsto P(A^{i}\cdot\vec{v})\) (for some \(\vec{v}\) output by the reconstruction procedure), where \(A\) is the generator matrix used in the Shaltiel-Umans construction, instead of a circuit that computes \(P\) itself. Let us assume for simplicity that the circuit \(C_{0}\) computes \(i\mapsto P(A^{i}\cdot\vec{1})\). Note that if given \(\vec{x}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\) (which
is the input on which we intend to evaluate \(P\)), we could _efficiently_ compute the value \(i\in[p^{m}-1]\) such that \(A^{i}\cdot\vec{1}=\vec{x}\), then we would be able to combine this with the circuit \(C_{0}\) to compute \(P\) (roughly speaking, by first computing \(i\) and then outputting \(C_{0}(i)\)). However, there are two issues with this approach:
1. First, we do not know the generator matrix \(A\), as we need our reconstruction algorithm to be uniform and thus we cannot hardcode \(A\).
2. Second, the task of finding such \(i\) given \(\vec{x}\) and \(A\) is essentially the _discrete logarithm problem_, for which no efficient algorithm is known.
To handle the first issue, we will construct our generator by using the Shaltiel-Umans construction based on a generator matrix that is from a small set \(S\) given by Lemma4.5. Then in the reconstruction, we will try all the matrices from \(S\), which can be generated efficiently, to obtain a list of candidate circuits. We then select from the list a circuit that is sufficiently close to \(P\) and use a self-corrector to compute \(P\) everywhere. For the second issue, we first observe that the mapping \(f\colon i\mapsto A^{i}\cdot\vec{1}\) is a _permutation_. Treating \(f\) as a "cryptographic one-way permutation" and invoking Theorem4.7, we can construct a "cryptographic pseudorandom generator", which has a uniform reconstruction algorithm. We can then combine the output of this "cryptographic pseudorandom generator" with that of the Shaltiel-Umans generator so that if there is an algorithm \(D\) that avoids this combined generator, then \(D\) can also be used to invert \(f\) efficiently! Details follow.
The construction of \(\mathsf{H}\).For a matrix \(A\in\mathbb{F}_{p}^{m\times m}\), let \(f_{A}\colon[p^{m}-1]\cup\{0\}\to\mathbb{F}_{p}^{m}\) be such that
\[f_{A}(i)\triangleq\begin{cases}\vec{0}&\text{if }i=0\\ A^{i}\cdot\vec{1}&\text{if }1\leq i<p^{m}.\end{cases}\]
We will also view \(f_{A}\) as a function mapping \(s\) bits to \(s\) bits, where \(s\triangleq m\cdot\log p\). Also note that if \(A\) is a generator matrix for \(\mathbb{F}_{p}^{m}\), then \(f_{A}\) is a permutation.
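To make the role of \(f_{A}\) concrete, the following minimal Python sketch (not part of the construction) brute-forces a matrix of maximal multiplicative order over a toy field and checks that \(i\mapsto A^{i}\cdot\vec{1}\), together with \(0\mapsto\vec{0}\), is a permutation. We take \(p=3\) prime and \(m=2\) purely for illustration, whereas the theorem works over fields of characteristic \(2\) at much larger sizes; the brute-force search is only feasible at this toy scale.

```python
import itertools
import numpy as np

p, m = 3, 2  # toy parameters; the theorem uses a (nice) power of 2 and much larger m

def mat_pow(A, k):
    """A^k over F_p, computed by repeated squaring."""
    R, B = np.eye(m, dtype=np.int64), A.copy()
    while k:
        if k & 1:
            R = (R @ B) % p
        B = (B @ B) % p
        k >>= 1
    return R

def order(A):
    """Smallest k >= 1 with A^k = I over F_p, or None if no such k < p^m exists."""
    I, B = np.eye(m, dtype=np.int64), A % p
    for k in range(1, p**m):
        if np.array_equal(B, I):
            return k
        B = (B @ A) % p
    return None

# A matrix of maximal order p^m - 1, so the orbit of any nonzero vector is everything.
gen = next(A for A in (np.array(c, dtype=np.int64).reshape(m, m)
                       for c in itertools.product(range(p), repeat=m * m))
           if order(A) == p**m - 1)

ones = np.ones(m, dtype=np.int64)
f = {0: (0,) * m}
f.update({i: tuple(int(x) for x in mat_pow(gen, i) @ ones % p) for i in range(1, p**m)})
assert sorted(f.values()) == sorted(itertools.product(range(p), repeat=m))
print("f_A is a permutation of F_p^m; generator matrix:\n", gen)
```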
Let \(\mathsf{HSU}\) be the generator from Theorem 4.9 and \(\mathsf{CryptoG}^{(-)}\) be the generator from Theorem 4.7. Also, let \(S\subseteq\mathbb{F}_{p}^{m\times m}\) be the set of matrices constructed using Lemma 4.5. We define
\[\mathsf{H}_{\vec{\ell}}(P)\triangleq\bigcup_{A\in S}\left(\mathsf{HSU}_{\vec{ \ell}}(P,A)\bigcup\mathsf{CryptoG}^{f_{A}}_{s,M}\right).\]
The complexity of \(\mathsf{H}\).We argue that \(\mathsf{H}_{\vec{\ell}}\) can be implemented by a logspace-uniform circuit of size \(\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(\log p,m,M)\).
First note that given \(A\), \(f_{A}\) can be computed in \(\operatorname{poly}(\log p,m)\) time. Then again by the fact that every time-\(t\) Turing machine can be simulated by a logspace-uniform circuit of size \(O(t^{2})\), \(f_{A}\) can be computed by a logspace-uniform circuit of size \(\operatorname{poly}(\log p,m)\). This means given \(A\), \(\mathsf{CryptoG}^{f_{A}}_{s,M}\), which by Theorem4.7 has a logspace-uniform \(f_{A}\)-oracle circuit of size \(\operatorname{poly}(2^{s},M)\) and depth \(\operatorname{poly}(s,M)\), can be implemented by a logspace-uniform circuit of size \(\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(\log p,m,M)\), where we have used that \(s=m\cdot\log p\) and \(M\leq p^{1/9}\). Also, by Theorem4.9, \(\mathsf{HSU}_{\vec{\ell}}\) has a logspace-uniform circuit of size \(\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(\log p,m,M)\). To compute \(\mathsf{H}_{\vec{\ell}}(P)\), we just need to compute \(\mathsf{HSU}_{\vec{\ell}}(P,A)\) and \(\mathsf{CryptoG}^{f_{A}}_{s,M}\) for all \(A\in S\) in parallel, where \(S\) can also be computed in time \(\operatorname{poly}(\log p,m)\) and hence has logspace-uniform circuit of size \(\operatorname{poly}(\log p,m)\). This yields a logspace-uniform circuit of size \(\operatorname{poly}(p^{m})\) and depth \(\operatorname{poly}(\log p,m,M)\) computing \(\mathsf{H}_{\vec{\ell}}\).
The reconstruction.Given oracle access to the polynomial \(P\) and a function \(D\) that \((1/M)\)-avoids \(\mathsf{H}_{\vec{\ell}}(P)\), we want to output a \(D\)-circuit that computes \(P\). We do this in two stages. In the first stage, we obtain a list of candidate circuits, one for each \(A\in S\), that (with high probability) contains at least one circuit that computes \(P\). In the second stage, we will select, from the list of candidate circuits, one that is sufficiently close to \(P\) and combine it with a self-corrector to obtain a circuit that computes \(P\) on all inputs.
We now describe the first stage. Let \(A^{*}\) be the lexicographically first matrix in \(S\) that is a generator matrix for \(\mathbb{F}_{p}^{m}\), and consider the two sets
\[\mathsf{HSU}_{\vec{\ell}}(P,A^{*})\quad\text{and}\quad\mathsf{Crypto}\mathsf{ G}_{\mathsf{s},M}^{f_{A^{*}}},\]
which are subsets of \(\mathsf{H}_{\vec{\ell}}(P)\). Since \(D\) avoids \(\mathsf{H}_{\vec{\ell}}(P)\), it also avoids _both_ \(\mathsf{HSU}_{\vec{\ell}}(P,A^{*})\) and \(\mathsf{CryptoG}_{s,M}^{f_{A^{*}}}\).
Assume for a moment that we are given the matrix \(A^{*}\). We will construct a circuit \(C_{A^{*}}\) as follows. Let \(\mathsf{RSU}^{(-)}\) and \(\mathsf{Invert}^{(-)}\) be the oracle algorithms from Theorem4.9 and Theorem4.7 respectively. We first run \(\mathsf{RSU}_{\vec{\ell}}^{D,P}(A^{*})\) to obtain a \(D\)-oracle circuit \(C_{A^{*}}^{\prime}\) and some \(\vec{v}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\). By the property of \(\mathsf{RSU}^{(-)}\) (item2 of Theorem4.9) and the fact that \(D\) avoids \(\mathsf{HSU}_{\vec{\ell}}(P,A^{*})\), we get that, with probability at least \(1-1/p^{2m}\), for every \(i\in[p^{m}-1]\),
\[C_{A^{*}}^{\prime}(i)=P((A^{*})^{i}\cdot\vec{v}). \tag{2}\]
Similarly, by the property of \(\mathsf{Invert}^{(-)}\) (item 2 of Theorem 4.7) and the fact that \(D\) avoids \(\mathsf{CryptoG}_{s,M}^{f_{A^{*}}}\), we get that
\[\Pr_{x\leftarrow\{0,1\}^{s}}\Bigl{[}\mathsf{Invert}^{f_{A^{*}},D}_{s,M}(x)=f _{A^{*}}^{-1}(x)\Bigr{]}\geq\frac{1}{\operatorname{poly}(M)}.\]
By combining
\[g\triangleq\mathsf{Invert}^{f_{A^{*}},D}_{s,M}\]
with the algorithm \(\mathsf{DLCorr}^{(-)}\) from Lemma 4.6, we get that for every \(\vec{x}\in\mathbb{F}_{p}^{m}\), with probability at least \(2/3\) over the internal randomness of \(\mathsf{DLCorr}^{g}\),
\[\mathsf{DLCorr}^{g}\Bigl{(}p,m,1^{\operatorname{poly}(M)},A^{*},\vec{x}\Bigr{)} =f_{A^{*}}^{-1}(\vec{x}).\]
By using standard error reduction techniques (to reduce the error from \(2/3\) to \(1/(10p^{2m})\)) and by fixing the internal randomness (that hopefully works correctly for all \(p^{m}\) inputs), we can obtain, in time \(\operatorname{poly}(p,m)\) and with probability at least \(1-1/(10p^{m})\), a \(D\)-oracle circuit \(C_{A^{*}}^{\prime\prime}\) such that for every \(\vec{x}\in\mathbb{F}_{p}^{m}\),
\[C_{A^{*}}^{\prime\prime}(\vec{x})=f_{A^{*}}^{-1}(\vec{x}). \tag{3}\]
That is, given \(\vec{x}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\), \(C_{A^{*}}^{\prime\prime}(\vec{x})\) outputs \(i\in[p^{m}-1]\) such that \((A^{*})^{i}\cdot\vec{1}=\vec{x}\). This is almost what we need except that we want the circuit to output \(i\) such that \((A^{*})^{i}\cdot\vec{v}=\vec{x}\). We further construct such a circuit \(C_{A^{*}}^{\prime\prime\prime}\) as follows. Given \(\vec{x}\in\mathbb{F}_{p}^{m}\), we first compute
\[j\triangleq C_{A^{*}}^{\prime\prime}(\vec{v})\quad\text{and}\quad k\triangleq C _{A^{*}}^{\prime\prime}(\vec{x}).\]
That is, \(\vec{v}=(A^{*})^{j}\cdot\vec{1}\) and \(\vec{x}=(A^{*})^{k}\cdot\vec{1}\). We then output \(i\) depending on the values of \(j\) and \(k\). On the one hand, if \(j<k\), we let \(i\triangleq k-j\). Then
\[(A^{*})^{i}\cdot\vec{v}=(A^{*})^{k-j}\cdot(A^{*})^{j}\cdot\vec{1}=(A^{*})^{k} \cdot\vec{1}=\vec{x}.\]
On the other hand, if \(k\leq j\), we let \(i\triangleq p^{m}-1-(j-k)\), which yields
\[(A^{*})^{i}\cdot\vec{v}=(A^{*})^{p^{m}-1-j+k}\cdot(A^{*})^{j}\cdot\vec{1}=I\cdot( A^{*})^{k}\cdot\vec{1}=\vec{x}.\]
Now we have a circuit \(C^{\prime\prime\prime}_{A^{*}}\) that given \(\vec{x}\in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}\), outputs \(i\in[p^{m}-1]\) such that \((A^{*})^{i}\cdot\vec{v}=\vec{x}\) and a circuit \(C^{\prime}_{A^{*}}\) that given \(i\in[p^{m}-1]\), computes \(P((A^{*})^{i}\cdot\vec{v})\). We then construct the circuit
\[C_{A^{*}}(\vec{x})\triangleq\begin{cases}P(\vec{0})&\text{if }\vec{x}=\vec{0} \\ C^{\prime}_{A^{*}}(C^{\prime\prime\prime}_{A^{*}}(\vec{x}))&\text{if }\vec{x} \in\mathbb{F}_{p}^{m}\setminus\{\vec{0}\}.\end{cases}\]
Note that we can hardwire the value of \(P(\vec{0})\). Also notice that if both Equations 2 and 3 are true (which happens with probability at least \(1-1/(9p^{m})\)) we will get that for all \(\vec{x}\in\mathbb{F}_{p}^{m}\),
\[C_{A^{*}}(\vec{x})=P(\vec{x}).\]
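The index arithmetic in the two cases above is easy to sanity-check in isolation. The following toy Python sketch replaces the matrix group generated by \(A^{*}\) by the multiplicative group of \(\mathbb{Z}_{q}\) (also cyclic), uses a brute-force table in place of the circuit \(C^{\prime\prime}_{A^{*}}\), and verifies \(i=k-j\) versus \(i=(\text{order})-(j-k)\) for every pair of nonzero elements; the modulus \(q=101\) and generator \(g=2\) are arbitrary toy choices.

```python
# Toy check of the index arithmetic above, in the cyclic group Z_q^* (q prime).
q, g = 101, 2                # 2 generates Z_101^*, a cyclic group of order q - 1
order = q - 1

# Stand-in for C''_{A*}: discrete log with respect to the fixed base g
# (brute-forced here; in the reconstruction it comes from Invert + DLCorr).
dlog = {pow(g, i, q): i for i in range(1, order + 1)}

def index_wrt(v, x):
    """Return i in [order] with g^i * v == x (mod q), using only base-g dlogs."""
    j, k = dlog[v], dlog[x]  # v = g^j, x = g^k
    return k - j if j < k else order - (j - k)

for v in range(1, q):
    for x in range(1, q):
        i = index_wrt(v, x)
        assert 1 <= i <= order and (pow(g, i, q) * v) % q == x
print("index arithmetic verified for all", (q - 1) ** 2, "pairs")
```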
However, we don't know the matrix \(A^{*}\). Instead, we will run the above procedure for each \(A\in S\) to obtain a list \(\mathcal{C}\triangleq\{C_{A}\}_{A\in S}\) of candidate circuits \(C_{A}\). Then, with probability at least \(1-1/(9p^{m})\), \(\mathcal{C}\) contains at least one circuit (in particular, \(C_{A^{*}}\)) that computes the polynomial \(P\).
Given the list of candidate circuits \(\mathcal{C}\), we now describe the second stage. First of all, given a circuit \(C_{A}\in\mathcal{C}\), we want to check if \(C_{A}\) is sufficiently close to \(P\).
**Claim 4.10**.: _There is a randomized algorithm \(\mathsf{IsClose}\) that, given a circuit \(B\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\), \(\delta\in(0,1]\), and oracle access to the polynomial \(P\), runs in time \(\mathrm{poly}(|B|)\cdot\log(1/\delta)\) such that_
* _if_ \(\Pr_{\vec{x}}[B(\vec{x})=P(\vec{x})]=1\)_, the algorithm accepts with probability_ \(1\)_, and_
* _if_ \(\Pr_{\vec{x}}[B(\vec{x})=P(\vec{x})]\leq 3/4\)_, the algorithm rejects with probability at least_ \(1-\delta\)_._
Proof of Claim 4.10.: The algorithm picks \(3\log(1/\delta)\) points uniformly at random from \(\mathbb{F}_{p}^{m}\) and checks if \(B\) and \(P\) agree on all those points. If so, the algorithm accepts; otherwise it rejects. Note that if \(\Pr_{\vec{x}}[B(\vec{x})=P(\vec{x})]\leq 3/4\), then the probability that it accepts is at most \((3/4)^{3\log(1/\delta)}<\delta\). \(\diamond\)
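A direct, runnable rendering of this test is short; the sketch below is a hedged illustration over a generic finite domain. The concrete functions \(B\) and \(P\) and the \(0.7\)-agreement example are hypothetical stand-ins chosen only to exercise the two cases of the claim, not objects from the proof.

```python
import math
import random

def is_close(B, P, domain, delta):
    """Sample 3*log2(1/delta) points of `domain`; accept iff B and P agree on all of them."""
    t = max(1, math.ceil(3 * math.log2(1 / delta)))
    return all(B(x) == P(x) for x in (random.choice(domain) for _ in range(t)))

# Toy usage: P is the "true" function, B agrees with it on only a 0.7 fraction of
# the domain, so the test should reject with probability at least 1 - delta.
domain = list(range(100))
P = lambda x: (x * x) % 101
B = lambda x: P(x) if x < 70 else P(x) + 1
rejected = sum(not is_close(B, P, domain, delta=0.01) for _ in range(1000))
print(f"rejected {rejected}/1000 runs; exact copy accepted: {is_close(P, P, domain, 0.01)}")
```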
For each \(C_{A}\in\mathcal{C}\), we run \(\mathsf{IsClose}^{P}(C_{A},\delta\triangleq 1/(4|\mathcal{C}|p^{m}))\) and pick the first one that the algorithm accepts. By the fact that \(\mathcal{C}\) contains at least one circuit that computes \(P\) and by the property of the algorithm \(\mathsf{IsClose}\) (Claim 4.10), with probability at least \(1-1/(4p^{m})\), we will obtain some \(D\)-oracle circuit \(C_{\mathsf{close}}\) such that
\[\Pr_{\vec{x}\leftarrow\mathbb{F}_{p}^{m}}[C_{\mathsf{close}}(\vec{x})=P( \vec{x})]>3/4. \tag{4}\]
Conditioned on Equation 4, by combining \(C_{\mathsf{close}}\) with the self-corrector \(\mathsf{PCor}^{(-)}\) from Theorem 4.8, we get that for every \(\vec{x}\in\mathbb{F}_{p}^{m}\), \(\mathsf{PCor}^{C_{\mathsf{close}}}(p,m,\Delta,\vec{x})=P(\vec{x})\) with probability at least \(2/3\) (over the internal randomness of \(\mathsf{PCor}^{C_{\mathsf{close}}}\)). Again, by using standard error reduction techniques and by picking a randomness uniformly at random, we can obtain in time \(\mathrm{poly}(p,m)\), with probability at least \(1-1/(4p^{m})\), a \(D\)-oracle circuit \(C\) that computes \(P\).
By a union bound, the above procedure gives, with probability at least \(1-1/p^{m}\), a \(D\)-oracle circuit that computes the polynomial \(P\).
Finally, it is easy to verify that the running time is \(\mathrm{poly}(p,m)\).
## 5 Improved Chen-Tell Targeted Hitting Set Generator
In this section, we prove Theorem 3.1, showing how to build a reconstructive hitting set generator from any uniform low-depth circuit.
### Layered-Polynomial Representation
The first step is to "arithmetize" our low-depth circuit into a _layered-polynomial representation_. Roughly speaking, given a (uniform) circuit \(C\) of depth \(d\) and size \(T\), we will produce a table of size \(d^{\prime}\times T^{\prime}\) where \(d^{\prime}\approx d\) and \(T^{\prime}=\operatorname{poly}(T)\), such that the following key properties hold:
**(Low-degree.)**: Each row is the "truth table" of a low-degree polynomial (thus admits self-correction properties).
**(Faithful representation.)**: Given oracle access to the \(d^{\prime}\)-th row, we can compute the output of \(C(1^{n})\) quickly.
**(Downward self-reducibility.)**: For each \(2\leq i\leq d^{\prime}\), given oracle access to the \((i-1)\)-th polynomial, we can quickly compute the output of the \(i\)-th polynomial on a given input. Moreover, the entries of the first row (corresponding to \(i=1\)) can be computed quickly.
Later, we will use these properties of the layered-polynomial representation to compile them into a reconstructive HSG.
We now formally describe our layered-polynomial representation, which can be proved by modifying the construction in [11]. In the following, letting \(p\) be a power of \(2\), and \(f\colon\mathbb{F}_{p}^{\ell}\to\mathbb{F}_{p}\), we use \(\mathtt{tt}(f)\) to denote the length-\((p^{\ell}\cdot\log p)\) Boolean string that consists of \(p^{\ell}\) blocks, where the \(i\)-th block is the Boolean encoding of the value of \(f\) on the \(i\)-th element of \(\mathbb{F}_{p}^{\ell}\) (in lexicographic order).
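For concreteness, the sketch below writes out \(\mathtt{tt}(f)\) for a toy choice \(p=4\), \(\ell=2\), so that each block is \(\log p=2\) bits; identifying field elements with \(\{0,\ldots,p-1\}\) via their bit encoding, bitwise XOR plays the role of addition in \(\mathbb{F}_{4}\). The concrete function is a hypothetical example chosen only to illustrate the encoding.

```python
import itertools

p, ell = 4, 2                              # p = 2^2, so each block has log p = 2 bits
f = lambda x: x[0] ^ x[1]                  # XOR = addition in F_4 under the bit encoding

def tt(f):
    """p^ell blocks of log p bits each, inputs enumerated in lexicographic order."""
    return "".join(format(f(x), "02b") for x in itertools.product(range(p), repeat=ell))

s = tt(f)
assert len(s) == p**ell * 2                # total length p^ell * log p
print(s)                                   # e.g. the block for input (1, 2) encodes 1 ^ 2 = 3
```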
**Theorem 5.1** (Layered-Polynomial Representation).: _There exist universal constants \(c,c^{\prime},\beta>1\) such that the following holds. Let \(\kappa\in\mathbb{N}\) and let \(T,d,n,h,p\in\mathbb{N}\) all be sufficiently large such that (1) \(d\leq T\) and \(n\leq T\), and (2) \(h,p\) are both nice powers of \(2\) and \(\log T\leq h<p\leq h^{27}\leq T\)._ (_Recall that \(p\) is a nice power of \(2\) if \(p=2^{2\cdot 3^{\lambda}}\) for some \(\lambda\in\mathbb{N}\)._)__
_Let \(\vec{\ell}\triangleq(\kappa,T,d,n,h,p)\) be the input parameters, and let \(\mathbb{F}\triangleq\mathbb{F}_{p}\). For a Turing machine \(\mathsf{TM}\) with description size \(|\mathsf{TM}|=\kappa\cdot\log T\), let_
\[C_{\mathsf{TM}}\triangleq\mathsf{Circuit}[T,\kappa\cdot\log T,n,n](\mathsf{ TM}).\]
_Assuming \(C_{\mathsf{TM}}\neq\bot\) and \(C_{\mathsf{TM}}\) has depth at most \(d\), there are \(d^{\prime}\triangleq c\kappa\cdot\log^{2}T\cdot(d+\kappa^{2}\log T)\) polynomials \(\left(P_{i}^{\vec{\ell},\mathsf{TM}}\right)_{i\in[d^{\prime}]}\) such that the following hold_ (_below we write \(P_{i}^{\vec{\ell},\mathsf{TM}}\) as \(P_{i}\) for simplicity_)_:_
1. **(Arithmetic setting.)** _Let_ \(H\subset\mathbb{F}\) _be the first_ \(h\) _elements of_ \(\mathbb{F}\)_, and let_ \(m\) _be the smallest power of_ \(3\) _such that_ \(h^{m}\geq T^{\beta\kappa}\)_. Each polynomial is from_ \(\mathbb{F}^{3m}\) _to_ \(\mathbb{F}\) _and has total degree at most_ \(\Delta=c\cdot h\cdot\log^{3}(T)\)_._
2. **(Faithful representation.)** _Fix an injective function_ \(\mathsf{id}\colon[n]\to H^{m}\) _in an arbitrary but canonical way._21 _For every_ \(i\in[n]\)_,_ \(\left(C_{\mathsf{TM}}(1^{n})\right)_{i}=P_{d^{\prime}}(\mathsf{id}(i),0^{2m})\)_._ Footnote 21: For simplicity, we will ignore the complexity of computing \(\mathsf{id}\) and its inverse since it is negligible.
3. **(Complexity of the polynomials.)** _Let_ \(T_{\mathsf{poly}}\triangleq T^{c\cdot\kappa}\) _and_ \(d_{\mathsf{poly}}\triangleq c\cdot(d\log T+\kappa^{2}\log^{2}T)\)_. There is a Turing machine_ \(\mathsf{TM}_{\mathsf{poly}}\) _of description length_ \(\log T_{\mathsf{poly}}\) _such that for_ \[C_{\mathsf{poly}}\triangleq\mathsf{Circuit}\big{[}T_{\mathsf{poly}},\log T_{ \mathsf{poly}},\log d^{\prime},|\mathbb{F}|^{3m}\cdot\log|\mathbb{F}|\big{]}( \mathsf{TM}_{\mathsf{poly}}),\] _it holds that (_1_) for every_ \(i\in[d^{\prime}]\)__\(C_{\mathsf{poly}}(i)=\mathtt{tt}(P_{i})\) _and (_2_)_ \(C_{\mathsf{poly}}\) _has depth_ \(d_{\mathsf{poly}}\)_. Moreover, there is a polynomial-time algorithm_ \(\mathbb{A}_{\vec{\ell}}^{\mathsf{poly}}\) _that takes_ \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\) _as input, and outputs the description of_ \(\mathsf{TM}_{\mathsf{poly}}\)_._
4. **(Downward self-reducibility.)** _There is a_ \(\max(n,h)\cdot h^{c^{\prime}}\)_-time algorithm_ Base _that takes inputs_ \(\vec{\ell}\)_,_ \(\mathsf{TM}\in\{0,1\}^{\kappa\cdot\log T}\)_, and_ \(\vec{w}\in\mathbb{F}^{3m}\)_, outputs_ \(P_{1}(\vec{w})\)_._ _Also, there is an_ \(h^{c^{\prime}}\)_-time oracle algorithm_ DSR _that takes inputs_ \(\vec{\ell}\)_,_ \(\mathsf{TM}\in\{0,1\}^{\kappa\cdot\log T}\)_,_ \(i\in\{2,\ldots,d^{\prime}\}\)_, and_ \(\vec{w}\in\mathbb{F}^{3m}\)_, and oracle access to a polynomial_ \(\widetilde{P}\colon\mathbb{F}^{3m}\to\mathbb{F}\)_, such that when it is given_ \(P_{i-1}\) _as the oracle, it outputs_ \(P_{i}(\vec{w})\)_._
Proof.: Recall that we use \(\vec{\ell}\) to denote the input parameters \((\kappa,T,d,n,h,p)\). We will follow the proof of [11, Proposition 4.7], which in turn follows [10] (see also [10]). In the following, we will simply use \(C\) to denote the (low-depth) circuit \(C_{\mathsf{TM}}=\mathsf{Circuit}[T,\kappa\cdot\log T,n,n](\mathsf{TM})\) for notational convenience, but we stress that \(C\) depends on both \(\vec{\ell}\) and \(\mathsf{TM}\) (and so do the later circuits constructed from \(C\)).
#### 5.1.1 Construction of a Highly Uniform Circuit \(D\)
We first construct a circuit \(D\) that has better uniformity and preserves the functionality of \(C\), i.e., \(D(1^{n})=C(1^{n})\). Given input \(1^{n}\), \(D\) first computes a description of \(C=\mathsf{Circuit}[T,\kappa\cdot\log T,n,n](\mathsf{TM})\) (represented as a \(T\times T\times T\) tensor) and then computes the \(\mathsf{Eval}\) function \(\langle\langle C\rangle,n,d\rangle\mapsto C(1^{n})\). Let \(s\triangleq\kappa\cdot\log T\) and \(s^{\prime}\triangleq O(s+\log(3\log T))\) be such that each configuration of \(\mathsf{TM}\) on \(3\log T\)-bit inputs can be described by \(s^{\prime}\) bits.
The circuit \(D\) is constructed by composing the following three subcircuits. Let \(\mu\in\mathbb{N}\) be a sufficiently large universal constant. We will describe and analyze their complexities (or state the complexity bounds proved in [11, 10]).
1. (**Computing the adjacency matrices for configurations.**) The first circuit \(D^{(1)}\) takes \(n\) bits as input (which are supposed to be \(1^{n}\)), outputs a list of \(T^{3}\) matrices from \(\{0,1\}^{2^{s^{\prime}}\times 2^{s^{\prime}}}\), such that the \((u,v,w)\)-th matrix \(M^{(u,v,w)}\) satisfies the following condition: for every \(\gamma,\gamma^{\prime}\in\{0,1\}^{s^{\prime}}\), \(M^{(u,v,w)}[\gamma,\gamma^{\prime}]=1\) if and only if \(\mathbb{A}_{\mathsf{nxt}}(\mathsf{TM},s,(u,v,w),\gamma,\gamma^{\prime})\) (_i.e._, \(\gamma^{\prime}\) is the configuration obtained by running \(\mathsf{TM}\) for one step on configuration \(\gamma\) and input \((u,v,w)\) with space bound \(s\)). Recall we assumed that if \(\gamma\) is the accepting or the rejecting configuration, then its next configuration is \(\gamma\) itself. **Complexity of \(D^{(1)}\).** \(D^{(1)}\) can be implemented by a projection (_i.e._, depth \(d_{D^{(1)}}=2\) and size \(T_{D^{(1)}}=T^{3}\cdot 2^{2s^{\prime}}\)). Moreover, from Fact 2.2, given \(\vec{\ell}\) and \(\mathsf{TM}\), in polynomial time we can compute a Turing machine \(\mathsf{TM}_{D^{(1)}}\in\{0,1\}^{(\kappa+\mu)\cdot\log T}\) such that \[\mathsf{Circuit}\Big{[}T_{D^{(1)}},s_{D^{(1)}},n,T^{3}\cdot 2^{2s^{\prime}}\Big{]}(\mathsf{TM}_{D^{(1)}})=D^{(1)},\] where \(s_{D^{(1)}}=\mu\cdot s^{\prime}\). Footnote 22: We use \((u,v,w)\in[T]^{3}\) to denote the integer \((u-1)T^{2}+(v-1)T+w\in[T^{3}]\). Footnote 23: Note that we can implement projections and restrictions of input bits to \(0\) and \(1\) using two layers of \(\mathsf{NAND}\) gates.
2. (**Computing the adjacency relation tensor of \(C\) via matrix multiplication.**) The second circuit \(D^{(2)}\) takes a list of \(T^{3}\) matrices from \(\{0,1\}^{2^{s^{\prime}}\times 2^{s^{\prime}}}\) as input, and outputs a tensor from \(\{0,1\}^{T\times T\times T}\) followed by the encoding of a pair \((n,d)\). In more detail, given the output of \(D^{(1)}(1^{n})\), for every \((u,v,w)\in[T]^{3}\), it determines whether \(\mathsf{TM}(u,v,w)=1\) by computing \((M^{(u,v,w)})^{2^{s^{\prime}}}\), which can be done by repeated squaring \(s^{\prime}\) times. This gives the adjacency relation tensor of \(C\) (a short illustrative sketch of this repeated-squaring step follows the formal composition of \(D\) below).
**Complexity of \(D^{(2)}\).**\(D^{(2)}\) can be implemented by a circuit of depth \(d_{D^{(2)}}=\mu\cdot(s^{\prime})^{2}\) and size \(T_{D^{(2)}}=T^{3}\cdot 2^{\mu s^{\prime}}\). Moreover, from [11, 12] (note that \(D^{(2)}\) does not depend on \(\mathsf{TM}\)), given \(\vec{\ell}\), in polynomial time we can compute a Turing machine \(\mathsf{TM}_{D^{(2)}}\in\{0,1\}^{\mu\cdot\log T}\) such that \[\mathsf{Circuit}\Big{[}T_{D^{(2)}},s_{D^{(2)}},T^{3}\cdot 2^{2s^{\prime}},T^{3 }+|(n,d)|\Big{]}\big{(}\mathsf{TM}_{D^{(2)}})=D^{(2)},\] where \(s_{D^{(2)}}=\mu\cdot s^{\prime}\).
3. (**Computing** Eval.) The final circuit \(D^{(3)}\) takes \(\langle\langle C\rangle,n,d\rangle\) as input, and outputs \(\mathsf{Eval}(\langle C\rangle,n,d)\). **Complexity of \(D^{(3)}\).**\(D^{(3)}\) can be implemented by a circuit of depth \(d_{D^{(3)}}=\mu\cdot d\cdot\log T\) and size \(T_{D^{(3)}}=T^{\mu}\). Moreover, from [11, 12] (note that \(D^{(3)}\) does not depend on \(\mathsf{TM}\)), given \(\vec{\ell}\), in polynomial-time we can compute a Turing machine \(\mathsf{TM}_{D^{(3)}}\in\{0,1\}^{\mu\cdot\log T}\) such that \[\mathsf{Circuit}\big{[}T_{D^{(3)}},s_{D^{(3)}},T^{3}+|(n,d)|,n\big{]}(\mathsf{ TM}_{D^{(3)}})=D^{(3)},\] where \(s_{D^{(3)}}=\mu\cdot s^{\prime}\).
Formally, we have
\[D=D^{(3)}\circ D^{(2)}\circ D^{(1)}.\]
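The repeated-squaring step used by \(D^{(2)}\) is standard; the following sketch runs it on a hypothetical eight-configuration graph (deterministic, with the accepting configuration looping on itself, mirroring the convention above), so the \((\text{start},\text{accept})\) entry of the \(2^{s}\)-th Boolean matrix power records acceptance within \(2^{s}\) steps. All concrete numbers are illustrative only.

```python
import numpy as np

n_conf = 8
succ = {0: 1, 1: 2, 2: 4, 4: 7, 7: 7, 3: 3, 5: 3, 6: 5}   # hypothetical one-step map
M = np.zeros((n_conf, n_conf), dtype=bool)
for u, v in succ.items():
    M[u, v] = True                                          # one-step adjacency matrix

def bool_power_2s(M, s):
    """M^(2^s) as a Boolean matrix, computed with s squarings."""
    R = M.copy()
    for _ in range(s):
        R = (R.astype(np.uint8) @ R.astype(np.uint8)) > 0
    return R

# One successor per configuration and a self-loop at the accepting configuration 7,
# so R[u, 7] = 1 exactly when configuration u accepts within 2^s steps.
R = bool_power_2s(M, s=3)
print("0 accepts within 8 steps:", bool(R[0, 7]))   # True: 0 -> 1 -> 2 -> 4 -> 7 -> ...
print("6 accepts within 8 steps:", bool(R[6, 7]))   # False: 6 -> 5 -> 3 -> 3 -> ...
```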
Let \(\beta\in\mathbb{N}\) be a sufficiently large constant such that \(\beta\geq 100\mu\). The following claim summarizes the required properties of \(D\) for us.
**Claim 5.2**.: _The following properties about the circuit \(D\) are true._
1. _The depth of_ \(D\) _is_ \(d_{D}=\beta\cdot(\kappa^{2}\cdot\log^{2}T+d\cdot\log T)\) _and the width of_ \(D\) _is_ \(T^{\prime}_{D}=T^{\beta\kappa}\)_._
2. _The layered adjacency relation function_ \(\Phi^{\prime}\colon[d_{D}]\times\{0,1\}^{3\log(T^{\prime}_{D})}\to\{0,1\}\) _of_ \(D\) _can be decided by a formula of depth_ \(O(\log\log T)\) _and size_ \(O(\log^{3}T)\)_. Moreover, there is an algorithm_ \(\mathbb{A}_{\Phi^{\prime}}\) _that given_ \(\vec{\ell}\) _and_ \(\mathsf{TM}\) _as input, outputs the formula above in_ \(O(\kappa\log T)\) _space._
3. _There is a Turing machine_ \(\mathsf{TM}_{D}\in\{0,1\}^{\beta\kappa\log T}\) _such that_ \[\mathsf{Circuit}[T_{D},s_{D},n,n](\mathsf{TM}_{D})=D,\] _where_ \(T_{D}=T^{\prime}_{D}\cdot(d_{D}+1)\) _and_ \(s_{D}=\beta\kappa\log T\)_._ (_Note that_ \(\mathsf{Circuit}\) _generates the unlayered version of_ \(D\) _of size_ \(T^{\prime}_{D}\cdot(d_{D}+1)\)_; without loss of generality we can assume the first_ \(T^{\prime}_{D}\) _gates are on the first layer, the next_ \(T^{\prime}_{D}\) _gates are on the second layer, and so on._) _Moreover, given_ \(\vec{\ell}\) _and_ \(\mathsf{TM}\) _as input, the description of_ \(\mathsf{TM}_{D}\) _can be computed in polynomial time._
Proof of Claim 5.2.: By construction, the size of \(D\) is bounded by \(\operatorname{poly}(T)\cdot 2^{O(s^{\prime})}\leq T^{O(\kappa)}\) (recall that \(s^{\prime}=O(s+\log(3\log T))\) and \(s=\kappa\log T\)), and its depth is bounded by \(O(s^{2}+d\cdot\log T)\). The first bullet then follows directly from the fact that \(\beta\) is sufficiently large.
Recall that the \(D^{(1)}\) part of \(D\) has depth \(d_{D^{(1)}}=2\). To see the complexity of computing \(\Phi^{\prime}(i,-,-,-)\) for \(i>2\), we note that the layers corresponding to \(D^{(2)}\) and \(D^{(3)}\) _do not_ depend on \(\mathsf{TM}\). Hence the complexity of computing their layered adjacency relation function follows directly from [11, Claim 4.7.1]. Also, the complexity of computing \(\Phi^{\prime}(i,-,-,-)\) for \(i\in\{1,2\}\) follows
directly from Fact 2.2. To see the moreover part, again, the case for \(i>2\) follows from [11, Claim 4.7.1], and the case for \(i\in\{1,2\}\) follows from Fact 2.2.26
Footnote 26: Strictly speaking we need to combine the formulas for two cases to obtain a single formula for \(\Phi^{\prime}\). The overhead of doing so is negligible so we omit this discussion here.
Finally, to obtain the algorithm that computes \(\mathsf{TM}_{D}\), we simply apply the composition \(\mathbb{A}_{\mathsf{comp}}\) (from Fact 2.4) twice to compose the circuits \(D^{(1)},D^{(2)},D^{(3)}\) in order and add some dummy gates to the circuit. The space bound and the description size bound can also be verified easily. \(\diamond\)
#### 5.1.2 Arithmetization of \(D\)
The construction of the polynomials and their corresponding algorithms can then be done in the same way as in [11]. We only state the necessary changes to establish our theorem.
Note that \(|\mathbb{F}|^{m}=p^{m}\leq\operatorname{poly}(h^{27m})\leq T^{O(\beta\kappa)} \leq T^{O(\kappa)}\) (\(\beta\) is a universal constant), from our assumption that \(p\leq h^{27}\) and our choice of \(m\).
First, we need an arithmetization of each \(\Phi^{\prime}_{i}\triangleq\Phi^{\prime}(i,-,-,-)\).
**Claim 5.3**.: _For \(i\in[d_{D}]\) there exists a polynomial \(\hat{\Phi}_{i}\colon\mathbb{F}^{3m}\to\mathbb{F}\) that satisfies the following:_
1. _For every_ \((\vec{w},\vec{u},\vec{v})=z_{1},...,z_{3m}\in H^{3m}\) _we have that_ \(\hat{\Phi}_{i}(\vec{w},\vec{u},\vec{v})=1\) _if gate_ \(\vec{w}\) _in the_ \(i^{th}\) _layer of_ \(D\) _is fed by gates_ \(\vec{u}\) _and_ \(\vec{v}\) _in the_ \((i-1)^{th}\) _layer of_ \(D\)_, and_ \(\hat{\Phi}_{i}(\vec{w},\vec{u},\vec{v})=0\) _otherwise._
2. _The degree of_ \(\hat{\Phi}_{i}\) _is at most_ \(O(h\cdot\log^{3}T)\)_. Moreover, there exists an algorithm that on input_ \(\vec{\ell},\mathsf{TM},i,\vec{w},\vec{u},\vec{v}\)_, computes_ \(\hat{\Phi}_{i}(\vec{w},\vec{u},\vec{v})\) _in_ \(\operatorname{poly}(h)\) _time._
3. _For a universal constant_ \(c_{1}>1\)_, there exists a circuit_ \(C_{\hat{\Phi}}\) _of size_ \(T_{\hat{\Phi}}\triangleq T^{c_{1}\kappa}\) _and depth_ \(c_{1}\kappa\cdot\log T\) _that on input_ \(i\in[d_{D}]\) _outputs_ \(\mathsf{tt}(\hat{\Phi}_{i})\in\mathbb{F}^{|\mathbb{F}|^{3m}}\)__(_represented as a Boolean string_)_. Moreover, there is a polynomial-time algorithm_ \(\mathbb{A}_{\hat{\Phi}}\) _that takes_ \(\vec{\ell}\) _and_ \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\) _as input, and outputs the description of a Turing machine_ \(\mathsf{TM}_{\hat{\Phi}}\in\{0,1\}^{c_{1}\kappa\log T}\) _such that_ \[C_{\hat{\Phi}}=\mathsf{Circuit}\big{[}T_{\hat{\Phi}},c_{1}\cdot\kappa\log T, \log(d_{D}+1),|\mathbb{F}|^{3m}\log|\mathbb{F}|\big{]}(\mathsf{TM}_{\hat{\Phi }}).\]
Proof Sketch of Claim 5.3.: We first define \(\hat{\Phi}_{i}\) and then establish each item separately.
Construction of \(\hat{\Phi}_{i}\).Let \(F_{\Phi^{\prime}}\) be the \(O(\log\log T)\)-depth \(O(\log^{3}T)\)-size formula computing \(\Phi^{\prime}\colon[d_{D}]\times\{0,1\}^{3\cdot\log T_{D}^{\prime}}\to\{0,1\}\) from Claim 5.2. For every \(i\in[d_{D}]\), let \(F_{i}\) be the restriction of \(F_{\Phi^{\prime}}\) by fixing the first input to be \(i\). Then, we arithmetize \(F_{i}\) by replacing every \(\mathtt{NAND}\) gate in \(F_{i}\) by an arithmetic gate \(\widehat{\mathtt{NAND}}:\mathbb{F}^{2}\to\mathbb{F}\) computing \(\widehat{\mathtt{NAND}}(u,v)\triangleq 1-uv\). Denote the new formula (which is now an _arithmetic_ formula) by \(\hat{F}_{i}\).
For each \(j\), let \(\pi_{j}\colon H\to\{0,1\}\) be the mapping that maps \(z\in H\) to the \(j\)-th bit of the encoding of \(z\). Note that since \(H\) consists of the smallest \(h\) elements in \(\mathbb{F}\), we know that \(\pi(z)=(\pi_{1}(z),\ldots,\pi_{\log h}(z))\) is a bijection between \(H\) and \(\{0,1\}^{\log h}\).27
Footnote 27: More specifically, by our specific encoding of \(H\) as strings from \(\{0,1\}^{\log|\mathbb{F}|}\), \(\pi(z)\) is simply the first \(\log h\) bits of the encoding of \(z\), hence it can be computed by a projection.
For each \(j\in[\log h]\), let \(\hat{\pi}_{j}\colon\mathbb{F}\to\mathbb{F}\) be the unique degree-\((h-1)\) extension of \(\pi_{j}\) to \(\mathbb{F}\), that can be computed via standard interpolation via logspace-uniform circuits of \(O(\log(h\cdot\log|\mathbb{F}|))=O(\log T)\) depth and \(\operatorname{polylog}(T)\) size [1, 10] (see [11, Claim 4.7.2] for the details). We also let \(\hat{\pi}(z)=(\hat{\pi}_{1}(z),\ldots,\hat{\pi}_{\log h}(z))\). Then, we set
\[\hat{\Phi}_{i}(z_{1},\ldots,z_{3m})\triangleq\hat{F}_{i}(\hat{\pi}(z_{1}), \hat{\pi}(z_{2}),\ldots,\hat{\pi}(z_{3m})).\]
We also use \(F_{\hat{\Phi}_{i}}\) to denote the _arithmetic_ formula on the right side above that computes the formula \(\hat{\Phi}_{i}\).
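The key point of this arithmetization is that \(1-uv\) agrees with \(\mathsf{NAND}\) on Boolean inputs, so the arithmetic formula agrees with the Boolean one wherever the inputs are \(0/1\). A minimal check on a hypothetical three-input formula, over a small prime field chosen only for illustration (the construction itself works over fields of characteristic \(2\)):

```python
import itertools

p = 7                                               # toy prime field, illustration only
nand_bool = lambda u, v: 0 if (u and v) else 1      # Boolean NAND
nand_hat = lambda u, v: (1 - u * v) % p             # its arithmetization 1 - u*v

F_bool = lambda a, b, c: nand_bool(nand_bool(a, b), nand_bool(b, c))
F_hat = lambda a, b, c: nand_hat(nand_hat(a, b), nand_hat(b, c))

assert all(F_bool(*x) == F_hat(*x) for x in itertools.product((0, 1), repeat=3))
print("arithmetized formula agrees with the Boolean formula on all 0/1 inputs")
```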
From the construction above, the first two items of the claim can be proved identically as [12, Claim 4.7.2]. It remains to establish the third item.
Construction of \(C_{\hat{\Phi}}\).We hardwire the description of \(F_{\Phi^{\prime}}\) into \(C_{\hat{\Phi}}\). The circuit \(C_{\hat{\Phi}}\) takes \(i\in[d_{D}]\) as input, performs the above computation to obtain a description of the arithmetic formula \(F_{\hat{\Phi}_{i}}\), and then outputs the truth table of \(F_{\hat{\Phi}_{i}}\) by evaluating it on all vectors in \(\mathbb{F}^{3m}\).
In more detail, computing the description of \(F_{\hat{\Phi}_{i}}\) from the description of \(F_{\Phi^{\prime}}\) and \(i\) can be done in \(O(\log T)\) depth and \(\operatorname{polylog}(T)\) size. \(C_{\hat{\Phi}}\) then evaluates \(F_{\hat{\Phi}_{i}}\) on all vectors from \(\mathbb{F}^{3m}\), which can be done in \(\operatorname{poly}(|\mathbb{F}|^{m})\) size and \(O(\log(|\mathbb{F}|^{m}))\) depth. The third bullet (except for the moreover part) then follows by setting \(c_{1}\) to be sufficiently large and recalling that \(|\mathbb{F}|^{m}\leq T^{O(\beta\kappa)}\).
Establishing the uniformity.Finally, we establish the moreover part of the third bullet. Let \(\mu_{\hat{\Phi}}\in\mathbb{N}\) be a sufficiently large universal constant that depends on the space complexity of the algorithm \(\mathbb{A}_{\Phi^{\prime}}\) from Claim5.2.
Our algorithm \(\mathbb{A}_{\hat{\Phi}}\) works as follows:
1. We first construct a Turing machine \(\mathsf{TM}_{[1]}\) with \(\vec{\ell}\) and \(\mathsf{TM}\) hardwired. \(\mathsf{TM}_{[1]}\) corresponds to a circuit \(C_{[1]}\) that takes \(i\in[d_{D}]\) as input and outputs \(i\) together with the description of \(F_{\Phi^{\prime}}\).28\(C_{[1]}\) has depth \(d_{[1]}=O(1)\) and size \(T_{[1]}=\operatorname{polylog}(T)\). Let \(s_{[1]}=\mu_{\hat{\Phi}}\cdot\kappa\log T\), we have Footnote 28: Precisely, \(\mathsf{TM}_{[1]}\) simulates \(\mathbb{A}_{\Phi^{\prime}}\) on input \(\vec{\ell}\) and \(\mathsf{TM}\) to construct a projection that maps \(i\) to \((i,F_{\Phi^{\prime}})\). \[C_{[1]}=\mathsf{Circuit}\big{[}T_{[1]},s_{[1]},\log d_{D},\log d_{D}+|F_{\Phi^ {\prime}}|\big{]}\big{(}\mathsf{TM}_{[1]}\big{)}\] and \(\mathsf{TM}_{[1]}\) has description size at most \(|\mathsf{TM}|+\mu_{\hat{\Phi}}\cdot\log T=(\kappa+\mu_{\hat{\Phi}})\cdot\log T\). Here, we crucially use the fact that the algorithm \(\mathbb{A}_{\Phi^{\prime}}\) from Claim5.2 runs in \(O(\kappa\log T)\) space (and \(\mu_{\hat{\Phi}}\) is sufficiently large).
2. Then we construct a Turing machine \(\mathsf{TM}_{[2]}\) that corresponds to a circuit \(C_{[2]}\) that takes \(i\in[d_{D}]\) together with the description of \(F_{\Phi^{\prime}}\) as input and outputs \(\mathtt{tt}(\hat{\Phi}_{i})\). By the discussion above, from \(\vec{\ell}\) we can compute a Turing machine \(\mathsf{TM}_{[2]}\in\{0,1\}^{\mu_{\hat{\Phi}}\kappa\log T}\) such that for \(T_{[2]}=\operatorname{poly}(|\mathbb{F}|^{m})\leq T^{\mu_{\hat{\Phi}}\kappa}\), \(d_{[2]}=O(\log(|\mathbb{F}|^{m}))\leq\mu_{\hat{\Phi}}\kappa\cdot\log T\), \(s_{[2]}=\mu_{\hat{\Phi}}\kappa\log T\), we have \[C_{[2]}=\mathsf{Circuit}\big{[}T_{[2]},s_{[2]},\log d_{D}+|F_{\Phi^{\prime}}|, |\mathbb{F}|^{3m}\big{]}\big{(}\mathsf{TM}_{[2]}\big{)}\;,\] and \(C_{[2]}\) has depth \(d_{[2]}\).
3. Finally, \(\mathbb{A}_{\hat{\Phi}}\) composes \(C_{[1]}\) and \(C_{[2]}\) by applying Fact2.4 and outputs the obtained Turing machine as \(\mathsf{TM}_{\hat{\Phi}}\). Setting \(c_{1}\) sufficiently large completes the proof. \(\diamond\)
Then we define the following polynomials, according to [12, Definition 4.6].
Input polynomial.Let \(\alpha_{0}\colon H^{m}\to\{0,1\}\) represent the string \(1^{n}0^{h^{m}-n}\), and let \(\hat{\alpha}_{0}\colon\mathbb{F}^{m}\to\mathbb{F}\) be the "arithmetization" of \(\alpha_{0}\), defined by
\[\hat{\alpha}_{0}(\vec{w})=\sum_{\vec{z}\in H^{m^{\prime}}\times\{0\}^{m-m^{ \prime}}}\delta_{\vec{z}}(\vec{w})\cdot\alpha_{0}(\vec{z}).\]
Here, \(m^{\prime}\leq m\) is the minimal integer such that \(h^{m^{\prime}}\geq n\), and \(\delta_{\vec{z}}\) is Kronecker's delta function (_i.e._, \(\delta_{\vec{z}}(\vec{w})=\prod_{j\in[m]}\prod_{a\in H\setminus\{z_{j}\}} \frac{w_{j}-a}{z_{j}-a}\)).
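A small numerical check of the two facts used here (that \(\delta_{\vec{z}}\) is the indicator of \(\vec{z}\) on \(H^{m}\), and hence that \(\hat{\alpha}_{0}\) agrees with \(\alpha_{0}\) where it matters) is given below. It runs over a small prime field so that field division is just a modular inverse in plain Python, whereas the construction itself uses characteristic \(2\); the placement of the \(n\) input positions inside \(H^{m^{\prime}}\times\{0\}^{m-m^{\prime}}\) is a hypothetical canonical choice made only for the sketch.

```python
import itertools

p = 13
H = [0, 1, 2, 3]                       # the "first h elements" of the field, h = 4
m, m_prime, n = 2, 1, 3                # n = 3 input bits fit into H^{m'} (h^{m'} = 4 >= n)

def delta(z, w):
    """delta_z(w) = prod_j prod_{a in H, a != z_j} (w_j - a) / (z_j - a)  over F_p."""
    val = 1
    for zj, wj in zip(z, w):
        for a in H:
            if a != zj:
                val = val * (wj - a) * pow((zj - a) % p, -1, p) % p
    return val

def alpha0(z):
    """1^n 0^{...}: the n input positions are (hypothetically) the points (z1, 0), z1 < n."""
    return 1 if z[1:] == (0,) * (m - m_prime) and z[0] < n else 0

def alpha0_hat(w):
    """The extension of alpha0, summing only over H^{m'} x {0}^{m - m'}."""
    return sum(delta((z1,) + (0,) * (m - m_prime), w) * alpha0((z1,) + (0,) * (m - m_prime))
               for z1 in H) % p

# delta_z is the indicator of z on H^m ...
for z in itertools.product(H, repeat=m):
    assert all(delta(z, w) == (1 if w == z else 0) for w in itertools.product(H, repeat=m))
# ... so alpha0_hat agrees with alpha0 on H^{m'} x {0}^{m - m'}.
for z1 in H:
    z = (z1,) + (0,) * (m - m_prime)
    assert alpha0_hat(z) == alpha0(z)
print("delta and input-polynomial identities verified")
```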
Layer polynomials.For each \(i\in[d_{D}]\), let \(\alpha_{i}\colon H^{m}\to\{0,1\}\) represent the values of the gates at the \(i^{th}\) layer of \(D\) in the computation of \(D(1^{n})\) (with zeroes in locations that do not index valid gates). Recall that we consider circuits consisting of \(\mathsf{NAND}\) gates, where for \(a,b\in\{0,1\}\) we have \(\mathsf{NAND}(a,b)=1-a\cdot b\). We define \(\hat{\alpha}_{i}\colon\mathbb{F}^{m}\to\mathbb{F}\) as
\[\hat{\alpha}_{i}(\vec{w})=\sum_{\vec{u},\vec{v}\in H^{m}}\hat{\Phi}_{i}(\vec{ w},\vec{u},\vec{v})\cdot\left(1-\hat{\alpha}_{i-1}(\vec{u})\cdot\hat{\alpha}_{i-1 }(\vec{v})\right). \tag{5}\]
Note that \(\hat{\alpha}_{i}\) is the "arithmetization" of \(\alpha_{i}\) in the sense that for every \(\vec{w}\in H^{m}\), \(\alpha_{i}(\vec{w})=\hat{\alpha}_{i}(\vec{w})\).
Sumcheck polynomials.For each \(i\in[d_{D}]\), let \(\hat{\alpha}_{i,0}\colon\mathbb{F}^{3m}\to\mathbb{F}\) be the polynomial
\[\hat{\alpha}_{i,0}(\vec{w},\sigma_{1},...,\sigma_{2m})=\hat{\Phi}_{i}(\vec{w},\sigma_{1,...,m},\sigma_{m+1,...,2m})\cdot\left(1-\hat{\alpha}_{i-1}(\sigma_{ 1,...,m})\cdot\hat{\alpha}_{i-1}(\sigma_{m+1,...,2m})\right). \tag{6}\]
For every \(j\in[2m]\), let \(\hat{\alpha}_{i,j}\colon\mathbb{F}^{3m-j}\to\mathbb{F}\) be the polynomial
\[\hat{\alpha}_{i,j}(\vec{w},\sigma_{1},...,\sigma_{2m-j})=\] \[\sum_{\sigma_{2m-j+1},...,\sigma_{2m}\in H}\hat{\Phi}_{i}(\vec{w},\sigma_{1,...,m},\sigma_{m+1,...,2m})\cdot\left(1-\hat{\alpha}_{i-1}(\sigma_{ 1,...,m})\cdot\hat{\alpha}_{i-1}(\sigma_{m+1,...,2m})\right), \tag{7}\]
where \(\sigma_{k,...,k+r}=\sigma_{k},\sigma_{k+1},...,\sigma_{k+r}\). It is easy to check that \(\hat{\alpha}_{i,2m}=\hat{\alpha}_{i}\).
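The only property of these partial-sum polynomials used later is that consecutive levels are related by a single sum over \(H\); this is what drives the downward self-reducibility below. The generic toy sketch that follows checks this regrouping identity for an arbitrary small base polynomial \(g\) standing in for \(\hat{\alpha}_{i,0}\); all parameters are illustrative only.

```python
import itertools

p, H, k = 11, [0, 1, 2], 3
g = lambda x: (x[0] * x[1] + x[2] ** 2 + 1) % p     # stands in for alpha_hat_{i,0}

def level(j, prefix):
    """Sum g over its last j variables ranging over H, the first k - j fixed to `prefix`."""
    return sum(g(tuple(prefix) + tail) for tail in itertools.product(H, repeat=j)) % p

# Level j is obtained from level j - 1 by one more sum over H (the DSR identity).
for j in range(1, k + 1):
    for prefix in itertools.product(range(p), repeat=k - j):
        assert level(j, prefix) == sum(level(j - 1, prefix + (a,)) for a in H) % p
print("partial-sum identity verified at every level")
```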
We are now ready to define the sequence \((P_{i})_{i\in[d^{\prime}]}=\left(P_{i}^{\vec{\ell},\mathsf{TM}}\right)_{i\in [d^{\prime}]}\). We set \(d^{\prime}\triangleq(2m+1)\cdot d_{D}+1\) and
\[(P_{i})_{i\in[d^{\prime}]}=(\hat{\alpha}_{0},\hat{\alpha}_{1,0},\ldots,\hat{ \alpha}_{1,2m},\hat{\alpha}_{2,0},\ldots,\hat{\alpha}_{2,2m},\ldots,\hat{ \alpha}_{d_{D},0},\ldots,\hat{\alpha}_{d_{D},2m}).\]
For those \(\hat{\alpha}_{i,j}\) (and \(\hat{\alpha}_{0}\)) that take less than \(3m\) variables, we add some dummy variables at the end to make all polynomials taking exactly \(3m\) variables.
From the definitions of \(m\) and \(d_{D}\), we have \(m\leq 3\cdot\beta\kappa\cdot\log T+1\) and \(d_{D}=\beta\cdot(\kappa^{2}\cdot\log^{2}T+d\cdot\log T)\). Hence, we have \(d^{\prime}=(2m+1)\cdot d_{D}+1\leq c\kappa\cdot\log^{2}T\cdot(d+\kappa^{2} \log T)\) as desired.29
Footnote 29: We can add identical polynomials at the end to make \(d^{\prime}\) exactly \(c\kappa\cdot\log^{2}T\cdot(d+\kappa^{2}\log T)\) as in the theorem statement.
Below we verify the desired properties of the sequence \((P_{i})_{i\in[d^{\prime}]}\).
Arithmetic setting, faithful representation, and downward self-reducibility.First, the degree bounds of all these polynomials follow directly from their definitions and from the degree bound on \(\hat{\Phi}_{i}\) (from Claim5.3). The faithful representation property also follows directly from the definition of \(\alpha_{d_{D}}\) and \(\hat{\alpha}_{d_{D},2m}=\hat{\alpha}_{d_{D}}\). Finally, the downward self-reducibility of the polynomials follows from the complexity of computing \(\hat{\Phi}_{i}\) (from Claim5.3) and the definitions of these polynomials, similarly to the proof of [11, Proposition 4.7].
#### 5.1.3 Complexity of the Polynomials
Now we verify the complexity of computing these polynomials. The argument below is straightforward but tedious. We first give a high-level overview.
High-level overview of the construction.To construct the circuit \(C_{\mathsf{poly}}\) that maps \(i^{\prime}\in[d^{\prime}]\) to \(\mathtt{tt}(P_{i})\), we will construct three subcircuits \(C_{\alpha}\), \(C_{\mathtt{tt-\hat{\Phi}}}\), and \(C_{\mathsf{arith}}\) such that:
1. \(C_{\alpha}\) maps \(i^{\prime}\in[d^{\prime}]\) to \((\mathtt{tt}(\alpha_{i-1}),i,j)\). Here, if \(i^{\prime}\geq 2\), then \(i\in\{1,\ldots,d_{D}\}\) and \(j\in\{0,1,\ldots,2m\}\) satisfies that \(P_{i^{\prime}}=\hat{\alpha}_{i,j}\) and \(\mathtt{tt}(\alpha_{i-1})\in\{0,1\}^{h^{m}}\) denotes the values of the gates at the \(i\)-th layer of \(D\). If \(i^{\prime}=1\), then we consider \(i=j=0\) and \(C_{\alpha}\) outputs \((\mathtt{tt}(\alpha_{0}),0,0)\).
2. \(C_{\mathtt{tt-\hat{\Phi}}}\) maps \((\mathtt{tt}(\alpha_{i-1}),i,j)\) to \((\mathtt{tt}(\alpha_{i-1}),i,j,\mathtt{tt}(\hat{\Phi}_{i}))\).
3. \(C_{\mathsf{arith}}\) maps \((\mathtt{tt}(\alpha_{i-1}),i,j,\mathtt{tt}(\hat{\Phi}_{i}))\) to \(\mathtt{tt}(\hat{\alpha}_{i,j})\in\mathbb{F}^{|\mathbb{F}|^{3m}}\).
\(C_{\mathsf{poly}}\) is then simply \(C_{\mathsf{arith}}\circ C_{\mathtt{tt-\hat{\Phi}}}\circ C_{\alpha}\). To compute the Turing machine \(\mathsf{TM}_{\mathsf{poly}}\) that corresponds to \(C_{\mathsf{poly}}\), we construct the Turing machines \(\mathsf{TM}_{\alpha}\), \(\mathsf{TM}_{\mathtt{tt-\hat{\Phi}}}\), and \(\mathsf{TM}_{\mathsf{arith}}\) corresponding to the three circuits above, and compose them using Fact 2.4.
Construction of \(C_{\alpha}\) and \(\mathsf{TM}_{\alpha}\).First, we construct a circuit \(C_{\alpha}\) that takes as input \(i^{\prime}\in[d^{\prime}]\) and outputs \((\mathtt{tt}(\alpha_{i-1}),i,j)\). To construct \(C_{\alpha}\), we first compute \(i\) and \(j\) from \(i^{\prime}\) using basic arithmetic, and then truncate \(D\) up to its \(i\)-th layer. It is easy to see that given the Turing machine \(\mathsf{TM}_{D}\) that specifies the circuit \(D\), in polynomial-time we can construct a Turing machine \(\mathsf{TM}_{\alpha}\in\{0,1\}^{|\mathsf{TM}_{D}|+\mu}\) such that (in what follows, we write \(|\langle i,j\rangle|=\log(d_{D}+1)+\log(2m+1)\) for convenience):
\[\mathsf{Circuit}[T_{\alpha},s_{\alpha},\log(d^{\prime}),h^{m}+|\langle i,j \rangle|](\mathsf{TM}_{\alpha})=C_{\alpha},\]
where \(T_{\alpha}=\mu\cdot T_{D}\), \(s_{\alpha}=2s_{D}\). Moreover, \(C_{\alpha}\) has depth at most \(d_{\alpha}=2\cdot d_{D}\).
Construction of \(C_{\mathtt{tt-\hat{\Phi}}}\) and \(\mathsf{TM}_{\mathtt{tt-\hat{\Phi}}}\).Let \(c_{1}\) be the universal constant from Claim 5.3. Next we construct a circuit \(C_{\mathtt{tt-\hat{\Phi}}}\) that on input \((\mathtt{tt}(\alpha_{i-1}),i,j)\), outputs \((\mathtt{tt}(\alpha_{i-1}),i,j,\mathtt{tt}(\hat{\Phi}_{i}))\). It is straightforward to obtain this circuit from the circuit \(C_{\hat{\Phi}}\) constructed in Claim 5.3. In other words, given \(\vec{\ell}\) and \(\mathsf{TM}_{\hat{\Phi}}\in\{0,1\}^{c_{1}\kappa\log T}\) as input (where \(\mathsf{TM}_{\hat{\Phi}}\) is the Turing machine that generates \(C_{\hat{\Phi}}\) as defined in Claim 5.3), we can compute a Turing machine \(\mathsf{TM}_{\mathtt{tt-\hat{\Phi}}}\in\{0,1\}^{2c_{1}\kappa\log T}\) such that
\[\mathsf{Circuit}[T_{\mathtt{tt-\hat{\Phi}}},s_{\mathtt{tt-\hat{\Phi}}},h^{m} +|\langle i,j\rangle|,h^{m}+|\langle i,j\rangle|+|\mathbb{F}|^{3m}\log| \mathbb{F}|](\mathsf{TM}_{\mathtt{tt-\hat{\Phi}}})=C_{\mathtt{tt-\hat{\Phi}}},\]
where \(T_{\mathtt{tt-\hat{\Phi}}}=T^{2c_{1}\kappa}\), \(s_{\mathtt{tt-\hat{\Phi}}}=2c_{1}\kappa\log T\). Moreover, \(C_{\mathtt{tt-\hat{\Phi}}}\) has depth \(d_{\mathtt{tt-\hat{\Phi}}}=2c_{1}\kappa\log T\).
Construction of \(C_{\mathsf{arith}}\) and \(\mathsf{TM}_{\mathsf{arith}}\).We construct a circuit \(C_{\mathsf{arith}}\) that maps \((\mathtt{tt}(\alpha_{i-1}),i,j,\mathtt{tt}(\hat{\Phi}_{i}))\) to \(\mathtt{tt}(\hat{\alpha}_{i,j})\in\mathbb{F}^{|\mathbb{F}|^{3m}}\). (Recall that throughout this proof we view \(\hat{\alpha}_{i,j}\) as a \(3m\)-variable polynomial by adding dummy variables at the end.) Suppose that \(i\geq 1\) (the base case \(i=j=0\) can be handled similarly). If \(j=0\), \(C_{\mathsf{arith}}\) computes \(\mathtt{tt}(\hat{\alpha}_{i,0})\) using Equation 6, otherwise (\(j\geq 1\)) \(C_{\mathsf{arith}}\) computes \(\mathtt{tt}(\hat{\alpha}_{i,j})\) using Equation 7. (Note that both Equations 6 and 7 only depend on the values of \(\hat{\alpha}_{i-1}\) over \(H^{m}\), which is exactly \(\mathtt{tt}(\alpha_{i-1})\) due to our arithmetization.) Since arithmetic operations over \(\mathbb{F}\) (including iterated addition, multiplication, and inverse) are in logspace-uniform \(\mathsf{NC}^{1}\)[11, 12], it follows that \(C_{\mathsf{arith}}\) is of \(T_{\mathsf{arith}}\triangleq\mathrm{poly}(|\mathbb{F}|^{m})\) size and \(d_{\mathsf{arith}}\triangleq O(m\log|\mathbb{F}|)\) depth. Moreover, \(C_{\mathsf{arith}}\) does not depend on \(\mathsf{TM}\), and we can compute a Turing machine \(\mathsf{TM}^{\mathsf{arith}}\) from \(\vec{\ell}\) in time \(\mathrm{polylog}(T)\) such that
\[\mathsf{Circuit}[T_{\mathsf{arith}},s_{\mathsf{arith}},h^{m}+|\langle i,j \rangle|+|\mathbb{F}|^{3m}\log|\mathbb{F}|,|\mathbb{F}|^{3m}\log|\mathbb{F}|]( \mathsf{TM}^{\mathsf{arith}})=C_{\mathsf{arith}},\]
where \(s_{\mathsf{arith}}=\mu\cdot\beta\kappa\log T\).
Composing \(\mathsf{TM}_{\alpha}\), \(\mathsf{TM}_{\mathtt{tt-\hat{\Phi}}}\), and \(\mathsf{TM}_{\mathsf{arith}}\) by applying Fact 2.4 twice gives the desired Turing machine \(\mathsf{TM}_{\mathsf{poly}}\) that computes the desired circuit \(C_{\mathsf{poly}}\).
Complexity of \(C_{\mathsf{poly}}\) and \(\mathsf{TM}_{\mathsf{poly}}\).Finally, we verify that \(\mathsf{TM}_{\mathsf{poly}}\) and \(C_{\mathsf{poly}}\) satisfy our requirements. First, from the discussions above, we can bound the size of \(C_{\mathsf{poly}}\) by \(T_{\alpha}+T_{\mathtt{tt}-\hat{\Phi}}+T_{\mathsf{arith}}\leq T_{\mathsf{poly}} =2^{c\kappa\log T}\) by picking a sufficiently large \(c\). Note that \(m\log|\mathbb{F}|=\log(p^{m})\leq O(\kappa\log T)\). The depth of \(C_{\mathsf{poly}}\) can be bounded by \(d_{\mathsf{poly}}=d_{\alpha}+d_{\mathtt{tt}-\hat{\Phi}}+d_{\mathsf{arith}}\leq c \cdot(\kappa^{2}\cdot\log^{2}T+d\cdot\log T)\) as desired.
From Fact 2.4, we have that
\[|\mathsf{TM}_{\mathsf{poly}}|\leq 100\cdot\left(|\mathsf{TM}_{\alpha}|+| \mathsf{TM}_{\mathtt{tt}-\hat{\Phi}}|+|\mathsf{TM}_{\mathsf{arith}}|+\log(| \mathbb{F}|^{3m})\right)\leq c\cdot\kappa\log T=\log T_{\mathsf{poly}}\]
by setting \(c\) sufficiently large. The space complexity of \(\mathsf{TM}_{\mathsf{poly}}\) can be bounded by
\[100\cdot\left(s_{\alpha}+s_{\mathtt{tt}-\hat{\Phi}}+s_{\mathsf{arith}}\right) \leq c\cdot\kappa\log T=\log T_{\mathsf{poly}}\]
as well. This completes the proof.
### Improved Chen-Tell Generator: Proof of Theorem 3.1
Now we are ready to prove Theorem 3.1 by plugging every polynomial from Theorem 5.1 into our modified Shaltiel-Umans generator (Theorem 4.1).
Proof of Theorem 3.1.: We first observe that we can assume \(\rho=1\) without loss of generality. To see how the general case follows from the case that \(\rho=1\), letting \(M^{\prime}=M^{\rho}\), we can simply define \(\mathsf{H}^{\mathsf{ct}}_{n,T,d,M,\kappa,\rho}\) as the set of strings obtained by truncating every string from \(\mathsf{H}^{\mathsf{ct}}_{n,T,d,M^{\prime},\kappa,1}\) to their first \(M\) bits. The reconstruction algorithm \(\mathsf{R}^{\mathsf{ct}}_{n,T,d,M,\kappa,\rho}\) can then be obtained by slightly modifying \(\mathsf{R}^{\mathsf{ct}}_{n,T,d,M^{\prime},\kappa,1}\).
Let
\[\vec{\ell}_{\mathsf{ct}}=(n,T,d,M,\kappa,1)\]
be the input parameters from the theorem statement and \(c\) be a sufficiently large universal constant. From the assumption, we have \(n\leq T,d\leq T\), and \(c\cdot\log T\leq M\leq T^{1/c}\). Let
\[C_{\mathsf{TM}}=\mathsf{Circuit}[T,\kappa\cdot\log T,n,n](\mathsf{TM}).\]
The layered-polynomial representation.Let \(c_{0},c_{0}^{\prime},\beta\) be the universal constants from Theorem 5.1. Let \(h\) be the _smallest_ nice power of \(2\) such that \(h\geq M\), \(p\triangleq h^{27}\), \(m\) be the smallest power of \(3\) such that \(h^{m}\geq T^{\beta\kappa}\), and \(\mathbb{F}=\mathbb{F}_{p}\). Note that \(p\) is also a nice power of \(2\) and \(h\leq M^{3}\).
We will invoke creftype 5.1 with input parameters
\[\vec{\ell}_{\mathsf{poly}}=(\kappa,T,d,n,h,p).\]
Note that from their definitions and our assumption \(M\geq c\log T\), we have \(\log T\leq h<p\leq h^{27}\leq M^{81}\leq T\) (assuming \(c\geq 81\) is large enough), meaning that the requirements on the input parameters of Theorem 5.1 are satisfied.
We first apply Theorem 5.1 with input parameters \(\vec{\ell}_{\mathsf{poly}}\) and Turing machine \(\mathsf{TM}\) to obtain \(d^{\prime}=c_{0}\kappa\cdot\log^{2}T\cdot(d+\kappa^{2}\log T)\) polynomials \((P_{i})_{i\in[d^{\prime}]}=\left(P_{i}^{\vec{\ell},\mathsf{TM}}\right)_{i\in[d^{\prime}]}\).
Hitting set \(\mathsf{H}^{\mathsf{ct}}\).Let \(\mathsf{H}^{\mathsf{layer}}\) and \(\mathsf{R}^{\mathsf{layer}}\) denote the \(\mathsf{H}\) and \(\mathsf{R}\) algorithms from Theorem 4.1, respectively.31 Let \(\Delta=c_{0}h\log^{3}(T)\),
Footnote 31: The superscript layer highlights the fact that they are applied to each layer of the polynomial representation of the circuit.
\[\vec{\ell}_{\mathsf{layer}}=(p,3m,M,\Delta)\]
be the input parameters when applying Theorem 4.1. We can verify that \(p>\Delta^{2}(3m)^{7}M^{9}\), _i.e._, the requirement on the input parameters of Theorem 4.1 is satisfied.
We then define \(\mathsf{H}^{\mathsf{ct}}_{\vec{\ell}_{\mathsf{ct}}}(\mathsf{TM})\) as the union of \(\mathsf{H}^{\mathsf{layer}}_{\vec{\ell}_{\mathsf{layer}}}(P_{i})\) for every \(i\in[d^{\prime}]\). Next we analyze the complexity of computing \(\mathsf{H}^{\mathsf{ct}}_{\vec{\ell}_{\mathsf{ct}}}(\mathsf{TM})\). First, from Theorem 5.1, letting \(T_{\mathsf{poly}}=T^{c_{0}\cdot\kappa}\) and \(d_{\mathsf{poly}}=c_{0}\cdot(d\log T+\kappa^{2}\log^{2}T)\), there is a polynomial-time algorithm \(\mathbb{A}^{\mathsf{poly}}_{\vec{\ell}}\) that takes \(\mathsf{TM}\in\{0,1\}^{\kappa\log T}\) as input, and outputs a description of Turing machine \(\mathsf{TM}_{\mathsf{poly}}\in\{0,1\}^{\log T_{\mathsf{poly}}}\) such that for
\[C_{\mathsf{poly}}=\mathsf{Circuit}\big{[}T_{\mathsf{poly}},\log T_{\mathsf{ poly}},\log d^{\prime},|\mathbb{F}|^{3m}\cdot\log|\mathbb{F}|\big{]}(\mathsf{TM}_{ \mathsf{poly}})\]
it holds that (1) for every \(i\in[d^{\prime}]\)\(C_{\mathsf{poly}}(i)=\mathsf{tt}(P_{i})\) and (2) \(C_{\mathsf{poly}}\) has depth \(d_{\mathsf{poly}}\).
Second, from Theorem 4.1, there is a logspace-uniform circuit family with input parameters \(\vec{\ell}_{\mathsf{layer}}\), size \(\operatorname{poly}(p^{m})\), and depth \(\operatorname{poly}(\log p,m,M)\) such that for every \(i\in[d^{\prime}]\), it outputs \(\mathsf{H}^{\mathsf{layer}}_{\vec{\ell}_{\mathsf{layer}}}(P_{i})\) when taking \(\mathsf{tt}(P_{i})\) as input. Note that \(\operatorname{poly}(p^{m})\leq T^{O(\beta\kappa)}\) and \(\operatorname{poly}(\log p,m,M)\leq\operatorname{poly}(M)\). Applying Fact 2.4 to compose the machines above and enumerating over all \(i\in[d^{\prime}]\),32 we obtain the desired circuit \(C_{\mathsf{H}}\) (note that \(c\) is sufficiently large).
Footnote 32: Enumerating all \(i\in[d^{\prime}]\) only adds a \(O(\log d^{\prime})\) additive overhead in depth and a \(O(d^{\prime})\) multiplicative blowup in size, which are negligible.
Reconstruction \(\mathsf{R}^{\mathsf{ct}}\).For every \(i\in\{2,\ldots,d^{\prime}\}\), the reconstruction algorithm \(\mathsf{R}^{\mathsf{ct}}\) attempts to construct a \(\operatorname{poly}(p,m,\log(Md^{\prime}))\)-size \(D\)-oracle circuit \(E_{i}\) that computes \(P_{i}\). A formal description of \(\mathsf{R}^{\mathsf{ct}}\) is as follows:
* We start with the circuit \(E_{1}(\vec{x})=\mathsf{Base}(\vec{\ell},\mathsf{TM},\vec{x})\) that computes the polynomial \(P_{1}\).
* For every \(i\in\{2,\ldots,d^{\prime}\}\): 1. We first construct a procedure \(\widetilde{P}_{i}\) computing \(P_{i}\) using the \(D\)-oracle circuit \(E_{i-1}^{D}\) for \(P_{i-1}\) and the downward self-reducibility for \(P_{i}\). In particular, on input \(\vec{x}\in\mathbb{F}^{3m}\), let \[\widetilde{P}_{i}(\vec{x})\triangleq\mathsf{DSR}^{E_{i-1}^{D}}(\vec{\ell}, \mathsf{TM},i,\vec{x}).\]
2. Run \(\big{(}\mathsf{R}^{\mathsf{layer}}\big{)}^{D,\widetilde{P}_{i}}_{\vec{\ell}_{ \mathsf{layer}}}\) which outputs a \(D\)-oracle circuit \(\widetilde{E}_{i}^{D}\) in \(\operatorname{poly}(p,m,M)\) time.
3. Let \(t\triangleq c_{1}\cdot m\cdot\log p\) for a sufficiently large constant \(c_{1}>1\). Take \(t\) i.i.d. samples \(\vec{x}_{1},\ldots,\vec{x}_{t}\) from \(\mathbb{F}^{3m}\). Check that for every \(j\in[t]\), \(\widetilde{E}_{i}^{D}(\vec{x}_{j})=\widetilde{P}_{i}(\vec{x}_{j})\). If any condition does not hold, the algorithm outputs \(\bot\) and aborts immediately. 4. Let \(E_{i}\) be a \(D\)-oracle circuit constructed as follows: 1. Draw \(t=\Theta(m\log p)\) i.i.d. samples of random strings \(r_{1},\ldots,r_{t}\) used by \(\mathsf{PCorr}\). (Recall that \(\mathsf{PCorr}\) is the self-corrector for low-degree polynomials in Theorem4.8.) 2. Set \(E_{i}(\vec{x})=\mathsf{MAJ}_{k\in[t]}\ \mathsf{PCorr}^{\widetilde{E}_{i}}(p,3m, \Delta,\vec{x};r_{k})\) for all \(\vec{x}\in\mathbb{F}^{3m}\).
* For every \(j\in[n]\), output \(E_{d^{\prime}}^{D}(\mathsf{id}(j),0^{2m})\).
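Since several moving parts are involved, the following Python sketch records only the control flow of \(\mathsf{R}^{\mathsf{ct}}\): layer by layer, build \(\widetilde{P}_{i}\) from the previous circuit via downward self-reducibility, learn a candidate, test it on random points (aborting with \(\bot\) on failure), and self-correct. Every component is passed in as a callable, and the toy instantiation at the bottom is entirely hypothetical (layer \(i\) is just \(x\mapsto x+i\bmod 97\), and the self-corrector is the identity), so only the wiring is being illustrated.

```python
import random

def reconstruct(base, dsr, learn_layer, is_close, self_correct, num_layers, outputs):
    E = base                                         # E_1 computes the first polynomial
    for i in range(2, num_layers + 1):
        P_tilde = lambda x, E=E, i=i: dsr(E, i, x)   # downward self-reducibility oracle
        E_tilde = learn_layer(P_tilde)               # candidate circuit for layer i
        if not is_close(E_tilde, P_tilde):           # spot-check on random points
            return None                              # corresponds to outputting ⊥
        E = self_correct(E_tilde)                    # circuit intended to equal layer i
    return [E(x) for x in outputs]                   # read the answer off the last layer

# Hypothetical toy instantiation: layer i is x -> (x + i) mod 97, the "learner"
# simply wraps its oracle, and the self-correction step is the identity.
p = 97
base = lambda x: (x + 1) % p
dsr = lambda E, i, x: (E(x) + 1) % p
learn_layer = lambda P: (lambda x, P=P: P(x))
is_close = lambda B, P: all(B(x) == P(x) for x in random.sample(range(p), 30))
self_correct = lambda B: B
print(reconstruct(base, dsr, learn_layer, is_close, self_correct,
                  num_layers=5, outputs=range(3)))   # layer 5 is x -> x + 5: prints [5, 6, 7]
```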
For ease of notation, for every \(i^{\prime}\in\{2,\ldots,d^{\prime}\}\), we use \(\tau_{i^{\prime}}\) to denote the randomness used when running the algorithm above with \(i=i^{\prime}\), and we use \(\tau_{\leq i}\) to denote \(\tau_{1},\ldots,\tau_{i}\). Also, if \(E_{i}\) is not constructed by the algorithm (meaning that the algorithm aborts before constructing \(E_{i}\)), we set \(E_{i}=\bot\).
From Theorem 4.8, Theorem 4.1, and Theorem 5.1, the running time of the algorithm above can be bounded by
\[\operatorname{poly}(p,m,h,\log(Md^{\prime}))\cdot(d^{\prime}+n)\leq\operatorname {poly}(M)\cdot(d^{\prime}+n)\leq\operatorname{poly}(M)\cdot(d+n).\]
The last inequality follows from the fact that \(M\geq\log T\) and hence \(d^{\prime}=c_{0}\kappa\cdot\log^{2}T\cdot(d+\kappa^{2}\log T)\leq\operatorname {poly}(M)\cdot d\). Now we establish the soundness and completeness of the reconstruction. We show the following claim.
**Claim 5.4**.: _Fix \(D\colon\{0,1\}^{M}\to\{0,1\}\). For every \(i\in\{2,\ldots,d^{\prime}\}\), for every fixed \(\tau_{\leq i-1}\), if \(E_{i-1}^{D}\) computes \(P_{i-1}\) or \(i=2\),33 then with probability at least \(1-1/p^{m}\) over \(\tau_{i}\) the following holds:_
Footnote 33: Note that \(\tau_{\leq i-1}\) determines \(E_{i-1}\).
* **(Soundness.)** _If_ \(E_{i}\neq\bot\)_, then_ \(E_{i}^{D}\) _computes_ \(P_{i}\)_._
* **(Completeness.)** _If_ \(D\)__\((1/M)\)_-avoids_ \(\mathsf{H}^{\mathsf{layer}}_{\widetilde{\ell}_{\mathsf{layer}}}(P_{i})\)_, then_ \(E_{i}^{D}\) _computes_ \(P_{i}\)_._
Before establishing the claim, we show it implies the completeness and soundness of the reconstruction. To see the soundness, note that by induction over all \(i\in\{2,\ldots,d^{\prime}\}\), with probability at least \(1-d^{\prime}/p^{m}>9/10\), it holds that if \(E_{d^{\prime}}\neq\bot\), then \(E_{d^{\prime}}\) computes \(P_{d^{\prime}}\), meaning the reconstruction outputs the correct output \(C_{\mathsf{TM}}(1^{n})\). To see the completeness, note that an oracle \(D\colon\{0,1\}^{M}\to\{0,1\}\) that \((1/M)\)-avoids \(\mathsf{H}^{\mathsf{CT}}_{\widetilde{\ell}}(\mathsf{TM})\) also \((1/M)\)-avoids \(\mathsf{H}^{\mathsf{layer}}_{\widetilde{\ell}_{\mathsf{layer}}}(P_{i})\) for every \(i\in[d^{\prime}]\). Hence, by induction over \(i\in\{2,\ldots,d^{\prime}\}\), with probability at least \(1-d^{\prime}/p^{m}>9/10\), it holds that \(E_{i}\) computes \(P_{i}\) for every \(i\in\{2,\ldots,d^{\prime}\}\). Thus the reconstruction will output \(C_{\mathsf{TM}}(1^{n})\). The success probability \(9/10\) can be amplified to \(1-2^{-M}\) by running the reconstruction algorithm \(O(M)\) times independently and outputting the answer that occurs most frequently.
Finally, we prove the claim.
Proof of Claim 5.4.: We first establish the soundness. From the assumption that \(E_{i-1}^{D}\) computes \(P_{i-1}\) or \(i=2\) and the downward self-reducibility property of Theorem 5.1, it follows that \(\widetilde{P}_{i}\) computes \(P_{i}\). Therefore, \(E_{i}\neq\bot\) means that \(\widetilde{E}_{i}\) has passed the test in Step 3, meaning that with probability at least \(1-p^{-4m}\) over the randomness in Step 3, it holds that \(\widetilde{E}_{i}\) agrees with \(P_{i}\) on at least a \(3/4\) fraction of inputs from \(\mathbb{F}^{3m}\). This then means that with probability at least \(1-p^{-3m}\) over the randomness in Step 4(a), we have that \(E_{i}^{D}\) computes \(P_{i}\).
The completeness follows immediately from Theorem 4.1. (Here \(\widetilde{E}_{i}^{D}\) already computes \(P_{i}\) with probability at least \(1-1/p^{m}\).) \(\diamond\)
This completes the proof of Theorem 3.1.
## Acknowledgments
Lijie Chen is supported by a Miller Research Fellowship. Igor C. Oliveira received support from the EPSRC New Horizons Grant EP/V048201/1, the Royal Society University Research Fellowship URF\(\backslash\)R1\(\backslash\)191059, and the Centre for Discrete Mathematics and its Applications (DIMAP) at the University of Warwick. Hanlin Ren received support from DIMACS through grant number CCF-1836666 from the National Science Foundation. This work was done in part while the authors were visiting the Simons Institute for the Theory of Computing. |
2302.11381 | Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted
Markov Decision Processes | Policy Mirror Descent (PMD) is a general family of algorithms that covers a
wide range of novel and fundamental methods in reinforcement learning.
Motivated by the instability of policy iteration (PI) with inexact policy
evaluation, PMD algorithmically regularises the policy improvement step of PI.
With exact policy evaluation, PI is known to converge linearly with a rate
given by the discount factor $\gamma$ of a Markov Decision Process. In this
work, we bridge the gap between PI and PMD with exact policy evaluation and
show that the dimension-free $\gamma$-rate of PI can be achieved by the general
family of unregularised PMD algorithms under an adaptive step-size. We show
that both the rate and step-size are unimprovable for PMD: we provide matching
lower bounds that demonstrate that the $\gamma$-rate is optimal for PMD methods
as well as PI, and that the adaptive step-size is necessary for PMD to achieve
it. Our work is the first to relate PMD to rate-optimality and step-size
necessity. Our study of the convergence of PMD avoids the use of the
performance difference lemma, which leads to a direct analysis of independent
interest. We also extend the analysis to the inexact setting and establish the
first dimension-optimal sample complexity for unregularised PMD under a
generative model, improving upon the best-known result. | Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini | 2023-02-22T13:55:08Z | http://arxiv.org/abs/2302.11381v3 | # Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes
###### Abstract
The classical algorithms used in tabular reinforcement learning (Value Iteration and Policy Iteration) have been shown to converge linearly with a rate given by the discount factor \(\gamma\) of a discounted Markov Decision Process. Recently, there has been an increased interest in the study of gradient based methods. In this work, we show that the dimension-free linear \(\gamma\)-rate of classical reinforcement learning algorithms can be achieved by a general family of unregularised Policy Mirror Descent (PMD) algorithms under an adaptive step-size. We also provide a matching worst-case lower-bound that demonstrates that the \(\gamma\)-rate is optimal for PMD methods. Our work offers a novel perspective on the convergence of PMD. We avoid the use of the performance difference lemma beyond establishing the monotonic improvement of the iterates, which leads to a simple analysis that may be of independent interest. We also extend our analysis to the inexact setting and establish the first dimension-free \(\varepsilon\)-optimal sample complexity for unregularised PMD under a generative model, improving upon the best-known result.
## 1 Introduction
The problem of finding an optimal policy in tabular discounted Markov Decision Processes (MDPs) was classically solved using dynamic programming approaches such as policy iteration (PI) and value iteration (VI) (Puterman, 1994; Sutton and Barto, 2018). These methods are well understood theoretically and are guaranteed to converge linearly to the optimal policy in the tabular setting with a rate equal to the discount factor \(\gamma\) of the MDP (Bellman, 1957). Recently, increased interest has been devoted to the study of policy-gradient (PG) approaches based on optimising a parameterised policy with respect to an objective (Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001).
Given their popularity, it is of interest to understand whether PG methods match the performance guarantees of classical algorithms. Indeed, many recent works have focused on improving our understanding of these methods in the tabular setting. Among these, Xiao (2022) established the linear convergence of a general family of algorithms known as Policy Mirror Descent (PMD). However, the rate they established depends on an instance-dependent factor that can scale with the dimension of the problem, such as the size of the state space. Khodadadian et al. (2021) show that the instance-independent \(\gamma\)-rate is achieved for a specific instance of PMD known as Natural Policy Gradient (NPG). In the setting of regularised MDPs, the linear \(\gamma\)-rate has been established for PMD (Cen et al., 2021; Lan, 2021; Zhan et al., 2021). However, the classical
approaches (PI and VI) achieve the \(\gamma\)-rate without regularisation, revealing that regularisation is, in general, not necessary for algorithms to reach the \(\gamma\)-rate. This motivates the following two questions:
_Can the classical linear \(\gamma\)-rate be matched by unregularised policy-gradient algorithms? And what is the best rate that unregularised policy-gradient methods can achieve?_
For PMD, our work answers the first question positively and answers the second by establishing that the \(\gamma\)-rate is in fact the best rate achievable for PMD. The family of PMD algorithms allows for the choice of a mirror map that specifies different algorithms. Among these, NPG and PI are two ubiquitous instances of PMD each corresponding to their own mirror map. However, PMD is much more general and other mirror maps will lead to alternative algorithms endowed with the guarantees of PMD that we establish in this paper. In particular, the correspondence of mirror maps with exponential families (Banerjee et al., 2005) allows us to specify a wealth of valid mirror maps. This illustrates that PMD is a general framework that encompasses a wide range of novel but also fundamental algorithms, and motivates the study of its convergence guarantees. In this work, we make the following contributions and summarise them in Table 1,
* We recover the \(\gamma\)-rate for the general family of PMD algorithms under an adaptive step-size, where the adaptivity comes from a dependence on the policy at the current iteration. In particular, Theorem 3 establishes the following bound in \(\ell_{\infty}\)-norm for the value \(V^{\pi^{k}}\) of the policy \(\pi^{k}\) after \(k\) iterations of PMD compared to the value \(V^{\pi^{\star}}\) of an optimal policy \(\pi^{\star}\), \[\|V^{\pi^{\star}}-V^{\pi^{k}}\|_{\infty}\leq\frac{2}{1-\gamma}\gamma^{k},\] providing guarantees for any starting-state distribution. This matches the rate of VI and PI as well as the best known rates for PMD on regularised MDPs. This is also the first fully dimension-independent linear convergence result for unregularised PMD, by which we mean that there is no dependence on the size of the state space or the action space.
* We provide a matching lower-bound in Theorem 4, establishing the \(\gamma\)-rate as the optimal rate for PMD methods. This is a worst-case bound in the sense that for a fixed iteration budget, there exists an MDP for which PMD can do no better than the \(\gamma\)-rate. Our results show that a particular choice of learning rate allows PMD to reach this lower-bound exactly.
* We establish a theoretical analysis that avoids the use of the performance difference lemma (Kakade and Langford, 2002) in the main scheme of the proof, beyond establishing the monotonic value improvement of PMD. As a result, this leads to a simple analysis and avoids needing to deal with visitation distribution mismatches that are the last remains of dimension dependence in prior work.
* By extending our analysis to the inexact setting, following an approach similar to that of Xiao (2022), we establish the following instance-independent sample complexity under a generative model: \[\tilde{O}\Big{(}\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{8}\varepsilon^{2}}\Big{)},\] where the notation \(\tilde{O}()\) hides poly-logarithmic factors, \(\mathcal{S}\) is the state space of the MDP, \(\mathcal{A}\) is the action space and \(\varepsilon\) is the required accuracy. This improves on the previous best known sample complexity for PMD by removing the dependence on a distribution mismatch coefficient that can scale with
problem-dependent quantities such as the size of the state space. More generally, we highlight that the theoretical analysis we establish in the exact setting can easily be combined with any other scheme for estimating the Q functions (see Section 5), paving the way for further improvements in instance-independent sample complexity results should more efficient estimation procedures be developed.
## 2 Related Work
### Convergence Rates for Exact Policy Mirror Descent
We first consider the setting where the algorithms can be carried out exactly without needing to estimate quantities such as the action-value function \(Q^{\pi}\) of a policy \(\pi\). In this setting, several works have sub-linear convergence results for PMD (Geist et al., 2019; Shani et al., 2020) and NPG specifically (Agarwal et al., 2021).
Another line of work has considered PG methods applied to regularised MDPs. In this setting, linear convergence has been established for NPG with entropy regularisation (Cen et al., 2021), PMD with strongly-convex regularisers (Lan, 2021) and PMD with convex non-smooth regularisers (Zhan et al., 2021). The rates of convergence are either exactly \(\gamma\) or can be made arbitrarily close to it by letting the step-size go to infinity.
In the unregularised setting which this paper focuses on, linear convergence of the special case of NPG was established (Bhandari and Russo, 2021; Khodadadian et al., 2021) under an adaptive step-size similar to ours that depends on the current policy at each step. We focus our comparison on the work of Khodadadian et al. (2021) because they show that NPG achieves the \(\gamma\)-rate exactly. This work focuses on the link between NPG and PI and their analysis consists of bounding the difference in value between iterates of both methods. Bhandari and Russo (2021) establish linear convergence for a number of algorithms including PMD, though in the idealised setting of choosing the step-size that most increases the value. This result is immediate from the PMD update with an infinite step-size, which is equivalent to a PI update and therefore converges linearly; it does not establish linear convergence of PMD for a finite step-size. However, linear convergence for unregularised general PMD was recently established by Xiao (2022) under a geometrically increasing step-size. In general, their rate is instance-dependent and may scale with problem-dependent quantities such as the size of the state space. This same instance-dependent rate was established by Li et al. (2022) for a variant of PMD which augments the update with an added regularisation term. We focus our comparison on
| | Linear \(\gamma\)-Rate | General Mirror Map | \(\ell_{\infty}\)-Bound | Dimension Independent | Matching Lower-Bound |
| --- | --- | --- | --- | --- | --- |
| Khodadadian et al. (2021) | \(\checkmark\) | \(\times\) | \(\checkmark\) | \(\checkmark\) | \(\times\) |
| Xiao (2022) | \(\times\) | \(\checkmark\) | \(\times\) | \(\times\) | \(\times\) |
| This work | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) |

Table 1: Comparison of contributions with prior work that studies PMD. Khodadadian et al. (2021) focus on NPG, an instance of PMD for a specific mirror map (see Section 3). Note that their analysis is fundamentally different to ours as it exploits the specific closed-form update of NPG. Their step-size is similar to ours, though has an extra dependence on a sub-optimality gap (see Section 4). The \(\ell_{\infty}\)-bound is satisfied if it holds for \(\|V^{\pi^{\star}}-V^{\pi^{k}}\|_{\infty}\). Dimension independence is satisfied when there is no instance for which the bound can scale with the size of the state space or action space. We compare these works in more detail in Section 4.
the work of Xiao (2022) rather than this work as the guarantees are equivalent in both but Li et al. (2022) have a more complicated algorithm. In particular, we compare our results to those of Khodadadian et al. (2021) and Xiao (2022) in Table 1 and in more detail in Section 4. In terms of optimality, Khodadadian et al. (2021) provide a lower-bound that scales exponentially with the step-size and the minimal sub-optimality gaps of the actions. The lower-bound applies to the convergence of constant step-size NPG for an MDP with a single-state, whereas in our case we provide a lower-bound in Theorem 4 that applies to PMD with arbitrary step-size on an MDP with any finite state space. To the best of our knowledge, prior to this work no lower-bound has been established in this setting.
### Sample Complexity of Inexact Policy Mirror Descent
Sample complexity in the inexact setting refers to the number of calls to the sampling model needed to guarantee the output of an \(\varepsilon\)-optimal policy. We here give a quick outline of results, typically established with high probability, under a sampling model known as the generative model that we formally present in Section 5. The lower bound on the sample complexity in this setting was shown to be \(\tilde{\Omega}\Big{(}\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{3}\varepsilon^{2}}\Big{)}\) by Azar et al. (2013). This lower-bound can be reached by model-based approaches (Agarwal et al., 2020; Li et al., 2020) and model-free approaches (Sidford et al., 2018; Wainwright, 2019).
The sample complexity of PG methods has been recently studied in (Yuan et al., 2022). Under a generative model, some works have considered PMD or NPG under various types of regularisation (Cen et al., 2021; Lan, 2021; Zhan et al., 2021). Results for unregularised PMD or its instances on tabular MDPs under a generative model are limited. There have been works with sample complexities whose dependence on \(\varepsilon\) is worse than \(\mathcal{O}(\varepsilon^{-2})\) for NPG (Agarwal et al., 2021; Liu et al., 2020) and for PMD (Shani et al., 2020). Lazaric et al. (2016) show that a variant of PI, a special case of PMD, achieves the optimal dependence \(\mathcal{O}(\varepsilon^{-2})\) on \(\varepsilon\). More recently, Xiao (2022) showed that the general family of PMD methods matches the \(\mathcal{O}(\varepsilon^{-2})\) dependence, with an additional factor of \((1-\gamma)^{-8}\). Our result for the inexact setting shares the same dependence on \(\varepsilon\) and \(1-\gamma\) as Xiao (2022) but removes an instance-dependent quantity which can depend on the size of the state space. Further comparison to the result in (Xiao, 2022) is given in Section 5.
## 3 Preliminaries
### Setting
A Markov Decision Process (MDP) is a discrete-time stochastic process comprising a set of states \(\mathcal{S}\), a set of actions \(\mathcal{A}\), a reward function \(r(s,a)\in[0,1]\) for each state-action pair \((s,a)\) (assumed here to be deterministic) and a discount factor \(\gamma\in[0,1)\). We consider both \(\mathcal{S}\) and \(\mathcal{A}\) to be finite, which is known as the tabular setting.
In a state \(s\), an agent chooses an action \(a\), which gives them a reward \(r(s,a)\) and transitions them to a new state according to the transition function \(p(\cdot|s,a)\). Once they are in a new state, the process continues. The actions chosen by an agent are formalised through policies. A policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) is a mapping from a state to a distribution over actions, where \(\Delta(\mathcal{X})\) denotes the probability simplex over a set \(\mathcal{X}\). We will often write a policy as an element in \(\Pi=\Delta(\mathcal{A})^{|\mathcal{S}|}\). In each state \(s\in\mathcal{S}\), an agent following policy \(\pi\) chooses an action \(a\in\mathcal{A}\) according to \(\pi_{s}=\pi(\cdot|s)\in\Delta(\mathcal{A})\).
In this work, the goal is to learn how to behave in a \(\gamma\)-discounted infinite-horizon MDP. We measure the
performance of a policy with respect to the value function \(V^{\pi}:\mathcal{S}\rightarrow\mathbb{R}\),
\[V^{\pi}(s)=\mathbb{E}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})|\pi,s_{0 }=s\Big{]},\]
where \(s_{t},a_{t}\) are the state and action in time-step \(t\) and the expectation is with respect to both the randomness in the transitions and the choice of actions under policy \(\pi\). This is a notion of long-term reward that describes the discounted rewards accumulated over future time-steps when following policy \(\pi\) and starting in state \(s\). For a distribution over states \(\rho\in\Delta(\mathcal{S})\), we write \(V^{\pi}(\rho)=\sum_{s\in\mathcal{S}}\rho(s)V^{\pi}(s)\) for the value when starting in a state distributed according to \(\rho\). It is also useful to work with the state-action value \(Q^{\pi}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\):
\[Q^{\pi}(s,a)=\mathbb{E}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})|\pi, s_{0}=s,a_{0}=a\Big{]},\]
which is similar to \(V^{\pi}\), with the additional constraint of taking action \(a\) in the first time-step. We will often write \(V^{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) (resp. \(Q^{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\)) to refer to the vector form, where each entry represents the value (resp. action-value) in that state (resp. state-action pair). Similarly, we write \(Q^{\pi}_{s}\in\mathbb{R}^{|\mathcal{A}|}\) for the vector of action-values in state \(s\). The following useful expressions, which relate \(Q^{\pi}\) and \(V^{\pi}\) in terms of each other, follow straightforwardly from their definitions above,
\[V^{\pi}(s)=\langle Q^{\pi}_{s},\pi_{s}\rangle,\quad Q^{\pi}(s,a)=r(s,a)+\sum_{ s^{\prime}\in\mathcal{S}}p(s^{\prime}|s,a)V^{\pi}(s^{\prime}).\]
We now define the discounted visitation-distribution for starting state \(s^{\prime}\) and policy \(\pi\),
\[d^{\pi}_{s^{\prime}}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}^{ \pi}(s_{t}=s|s_{0}=s^{\prime}), \tag{1}\]
which plays an important part in the study of PG methods. Note that \(\mathbb{P}^{\pi}(s_{t}=s|s_{0}=s^{\prime})\) is the probability of being in state \(s\) at time \(t\) when starting in state \(s^{\prime}\) and following policy \(\pi\). As for \(V^{\pi}(\rho)\), we write \(d^{\pi}_{\rho}(s)=\sum_{s^{\prime}\in\mathcal{S}}\rho(s^{\prime})d^{\pi}_{s^ {\prime}}(s)\).
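For a tabular MDP, Eq. (1) can be evaluated in closed form: stacking the rows \(d^{\pi}_{s^{\prime}}\) gives \((1-\gamma)(I-\gamma P_{\pi})^{-1}\), where \(P_{\pi}\) is the state-to-state transition matrix induced by \(\pi\). The snippet below is a minimal NumPy sketch of this computation; the array layout (`P` of shape (S, A, S) for the transition function and `pi` of shape (S, A) for the policy) is our own illustrative convention.

```python
import numpy as np

def visitation_distributions(P, pi, gamma):
    """Discounted visitation distributions of Eq. (1).

    P:  (S, A, S) array, P[s, a, s'] = p(s'|s, a).
    pi: (S, A) array, pi[s, a] = pi(a|s).
    Returns an (S, S) array whose row s' is d^pi_{s'}.
    """
    S = P.shape[0]
    # State-to-state transition matrix under pi.
    P_pi = np.einsum("sap,sa->sp", P, pi)
    # (1 - gamma) * sum_t gamma^t P_pi^t = (1 - gamma) * (I - gamma P_pi)^{-1}.
    return (1.0 - gamma) * np.linalg.inv(np.eye(S) - gamma * P_pi)
```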
One of the main aims of reinforcement learning is to find a policy \(\pi\) that maximises \(V^{\pi}\). It is known that there exists a deterministic policy that simultaneously maximises \(V^{\pi}\) and \(Q^{\pi}\) for all states and actions (Bellman, 1957). We call such a policy an optimal policy and denote it by \(\pi^{\star}\). We are interested in finding an \(\varepsilon\)-optimal policy, i.e a policy \(\pi\) such that \(\|V^{\pi^{\star}}-V^{\pi}\|_{\infty}<\varepsilon\).
### Exact Policy Mirror Descent
We are interested in PG methods that are based on optimising a parameterised policy \(\pi_{\theta}\) with respect to \(V^{\pi_{\theta}}(\rho)\) for some \(\rho\in\Delta(\mathcal{S})\). In the tabular setting, we can use the direct parameterisation of a policy \(\pi_{\theta}\), which associates a parameter to each state-action pair, i.e. we have \(\pi_{\theta}(a|s)=\theta_{s,a}\). We will drop the subscript \(\theta\) for notational convenience. The gradient of the value function with respect to this parameterisation (Sutton et al., 1999) is given by the concatenation over all state-action pairs (s,a) of
\[\frac{\partial}{\partial\pi(a|s)}V^{\pi}(\rho)=\frac{1}{1-\gamma}d^{\pi}_{ \rho}(s)Q^{\pi}(s,a). \tag{2}\]
Mirror Descent (MD, Beck and Teboulle (2003)) is a method that carries out gradient descent in a geometry that is non-Euclidean. Using \(-V^{\pi}(\rho)\) as the objective we are trying to minimise, the proximal perspective of MD gives an update of the form
\[\pi^{k+1}=\text{argmin}_{p\in\Pi}\Big{\{}-\eta_{k}\langle\nabla V^{\pi^{k}}( \rho),p\rangle+D_{h}(p,\pi^{k})\Big{\}} \tag{3}\]
where \(h:\text{dom }h\to\mathbb{R}\) is the mirror map (with \(\Pi\subset\text{dom }h\)) and \(D_{h}\) is the Bregman divergence generated by \(h\). We require \(h\) to be of Legendre type (Rockafellar, 1970), i.e strictly convex and essentially smooth (differentiable and \(\|\nabla h(x_{k})\|\to\infty\) for any sequence \(x_{k}\) converging to a point on the boundary of \(\text{dom }h\)) on the relative interior of \(\text{dom }h\). The Bregman Divergence is defined as
\[D_{h}(\pi,\pi^{\prime})=h(\pi)-h(\pi^{\prime})-\langle\nabla h(\pi^{\prime}), \pi-\pi^{\prime}\rangle\qquad\text{for }\pi,\pi^{\prime}\in\text{dom }h.\]
As the objective \(V^{\pi}(\rho)\) is non-convex in general (Agarwal et al., 2021), usual techniques from convex-theory (Bubeck, 2015) are not applicable.
The presence of the visitation-distribution term \(d_{\rho}^{\pi}(s)\) in the gradient of the objective in (2) can slow down learning because it can lead to vanishingly small gradients when states are infrequently visited under the current policy \(\pi\)(Agarwal et al., 2021). To circumvent this issue, Policy Mirror Descent (PMD) (Lan, 2021; Shani et al., 2020; Xiao, 2022) applies a variant of update (3) with a weighted Bregman divergence \(D_{h}^{\text{PMD}}\) that matches the visitation distribution factors of the gradient:
\[D_{h}^{\text{PMD}}(p,\pi^{k})=\sum_{s}d_{\rho}^{\pi^{k}}(s)D_{h}(p_{s},\pi_{s }^{k})\]
where the mirror map h is now defined on a subset of \(\mathbb{R}^{|\mathcal{A}|}\). This results in the following update
\[\pi^{k+1}=\text{argmin}_{p\in\Pi}\sum_{s}d_{\rho}^{\pi^{k}}(s)\Big{[}-\eta_{k }\langle Q_{s}^{\pi^{k}},p_{s}\rangle+D_{h}(p_{s},\pi_{s}^{k})\Big{]}\]
and the minimisation can be applied for each state individually to get the PMD update
\[\pi_{s}^{k+1}=\text{argmin}_{p\in\Delta(\mathcal{A})}\Big{\{}-\eta_{k}\langle Q _{s}^{\pi^{k}},p_{s}\rangle+D_{h}(p_{s},\pi_{s}^{k})\Big{\}} \tag{4}\]
for all states \(s\). We will often add a superscript \(k\) to any quantity that is associated to \(\pi^{k}\). For example, \(V^{k}(s)=V^{\pi^{k}}(s)\). Similarly for \(\pi^{\star}\) and the superscript \(\star\). Exact PMD iteratively applies update (4) for some sequence of \(\eta_{k}>0\) and initial policy \(\pi^{0}\in\text{rint }\Pi\). We call this algorithm exact because we assume access to the true state-action values \(Q^{k}\).
PMD is a general family that covers many algorithms, specified by the choice of mirror map \(h\). These will inherit the guarantees of PMD, which motivates the study of the convergence guarantees of PMD beyond specific instances. Taking \(h\) to be the negative entropy yields NPG, whose theoretical properties have attracted a lot of interest (Agarwal et al., 2021; Cen et al., 2021; Khodadadian et al., 2021). With a null Bregman Divergence, PMD recovers PI. This is generated by a constant mirror map, which is not of Legendre type but the analysis still applies so all results on PMD remain valid. In fact, the update (4) converges to a PI update as \(\eta_{k}\to\infty\), regardless of the mirror map. Beyond these, providing mirror maps that generate other Bregman Divergences will lead to different algorithms. In particular, every exponential family has a corresponding mirror map generating a unique Bregman Divergence (Banerjee et al., 2005), highlighting the generality of PMD.
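To make the update (4) concrete, here is a minimal NumPy sketch of exact PMD for the negative-entropy mirror map (the NPG instance), whose proximal step has the closed form \(\pi^{k+1}_{s}(a)\propto\pi^{k}_{s}(a)\exp(\eta_{k}Q^{k}(s,a))\). The array layout, the exact policy-evaluation routine, the constant `c0` and the use of a single greedy action inside the adaptive step-size (which anticipates the rule analysed in Section 4) are illustrative choices of ours rather than prescriptions of the paper.

```python
import numpy as np

def policy_q(P, r, pi, gamma):
    """Exact action-values Q^pi of a tabular MDP.

    P: (S, A, S) transitions, r: (S, A) rewards, pi: (S, A) policy.
    """
    S = P.shape[0]
    P_pi = np.einsum("sap,sa->sp", P, pi)          # state-to-state kernel under pi
    r_pi = np.einsum("sa,sa->s", r, pi)            # expected one-step reward
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return r + gamma * np.einsum("sap,p->sa", P, V)

def pmd_kl_step(pi, Q, eta):
    """Update (4) with the negative-entropy mirror map (the NPG instance):
    pi^{k+1}(a|s) is proportional to pi^k(a|s) * exp(eta * Q^k(s, a))."""
    logits = np.log(np.clip(pi, 1e-300, None)) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

def exact_pmd(P, r, gamma, iters=50, c0=1e-3):
    """Exact PMD with an adaptive step-size of the kind analysed in Section 4."""
    S, A = r.shape
    pi = np.full((S, A), 1.0 / A)                  # uniform start, in rint(Pi)
    for k in range(iters):
        Q = policy_q(P, r, pi, gamma)
        greedy = np.eye(A)[Q.argmax(axis=1)]       # one greedy policy per state
        kl = -np.sum(greedy * np.log(np.clip(pi, 1e-300, None)), axis=1)
        eta = max(kl.max(), 1e-12) / (c0 * gamma ** (2 * (k + 1)))
        pi = pmd_kl_step(pi, Q, eta)
    return pi
```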
### Properties of PMD
We now present lemmas relevant to the analysis of PMD. Key to the analysis is the Three-Point Descent Lemma, that relates the improvement of the proximal gradient update compared to an arbitrary point. It originally comes from Chen and Teboulle (1993) (Lemma 3.2) where a proof can be found, though we use a slightly modified version from Xiao (2022) (Lemma 6).
**Lemma 1** (Three-Point Descent Lemma, Lemma 6 in Xiao (2022)).: _Suppose that \(\mathcal{C}\subset\mathbb{R}^{n}\) is a closed convex set, \(\phi:\mathcal{C}\rightarrow\mathbb{R}\) is a proper, closed convex function, \(D_{h}(\cdot,\cdot)\) is the Bregman divergence generated by a function \(h\) of Legendre type and \(\text{rint dom}\,h\cap\mathcal{C}\neq\emptyset\). For any \(x\in\text{rint dom}\,h\), let_
\[x^{+}=\text{argmin}_{u\in C}\{\phi(u)+D_{h}(u,x)\}. \tag{5}\]
_Then \(x^{+}\in\text{rint dom}\,h\cap C\) and \(\forall u\in C\),_
\[\phi(x^{+})+D_{h}(x^{+},x)\leq\phi(u)+D_{h}(u,x)-D_{h}(u,x^{+}) \tag{6}\]
The update (4) of PMD is an instance of the proximal minimisation (5) with \(\mathcal{C}=\Delta(\mathcal{A})\), \(x=\pi_{s}^{k}\) and \(\phi(x)=-\eta_{k}\langle Q_{s}^{k},x\rangle\). Plugging these into (6), Lemma 1 relates the decrease in the proximal objective of \(\pi_{s}^{k+1}\) to any other policy, i.e. \(\forall p\in\Delta(\mathcal{A})\),
\[-\eta_{k}\langle Q_{s}^{k},\pi_{s}^{k+1}\rangle+D_{h}(\pi_{s}^{k+1},\pi_{s}^{k })\leq-\eta_{k}\langle Q_{s}^{k},p\rangle+D_{h}(p,\pi_{s}^{k})-D_{h}(p,\pi_{s} ^{k+1}). \tag{7}\]
This equation is key to the analysis in Section 6. In particular, it allows us to prove the following lemma regarding the monotonic improvement in action-value of PMD iterates. This is an extension of Lemma 7 in Xiao (2022). The proof, which relies on the performance difference lemma (Appendix A), is included in Appendix B. This is crucially the only step in our analysis that requires the use of the performance difference lemma.
**Lemma 2**.: _Consider the policies produced by the iterative updates of PMD in (4). Then for any \(k\geq 0\),_
\[Q^{k+1}(s,a)\geq Q^{k}(s,a),\quad\forall(s,a)\in\mathcal{S}\times\mathcal{A}.\]
## 4 Main Results for Exact Policy Mirror Descent
In this section, we present our main results on the convergence of exact PMD. We first introduce some relevant notation. Fix a state \(s\in\mathcal{S}\) and an integer \(k\geq 0\). Let \(\mathcal{A}_{s}^{k}=\{a\in\mathcal{A}:Q^{k}(s,a)=\text{max}_{a^{\prime}\in \mathcal{A}}Q^{k}(s,a^{\prime})\}\) denote the set of optimal actions in state \(s\) under policy \(\pi^{k}\). Denote by \(\widetilde{\Pi}_{s}^{k+1}\) the set of greedy policies w.r.t \(Q_{s}^{k}\) in state s, i.e
\[\widetilde{\Pi}_{s}^{k+1}=\Big{\{}p\in\Delta(\mathcal{A}):\sum_{a\in\mathcal{ A}_{s}^{k}}p(a)=1\Big{\}}.\]
We are now ready to state our main result in the setting of exact PMD.
**Theorem 3**.: _Let \(\{c_{k}\}_{k\in\mathbb{Z}_{\geq 0}}\) be a sequence of positive reals. Consider applying iterative updates of (4) with \(\pi^{0}\in\text{rint}\ \Pi\) and step-sizes satisfying for all \(k\geq 0\),_
\[\eta_{k}\geq\frac{1}{c_{k}}\max_{s\in\mathcal{S}}\Big{\{}\min_{ \widetilde{\pi}_{s}^{k+1}\in\widetilde{\Pi}_{s}^{k+1}}D_{h}(\widetilde{\pi}_{s }^{k+1},\pi_{s}^{k})\Big{\}}. \tag{8}\]
_Then we have for all \(k\geq 0\),_
\[\|V^{\star}-V^{k}\|_{\infty}\leq\gamma^{k}\Big{(}\|V^{\star}-V^{0}\|_{\infty}+ \sum_{i=1}^{k}\gamma^{-i}c_{i-1}\Big{)}. \tag{9}\]
The sequence \(\{c_{k}\}_{k\in\mathbb{Z}_{\geq 0}}\) plays an important role in both the step-size constraint (8) and the bound (9). In particular, different choices will lead to different guarantees:
* \(c_{i}=c_{0}\) for some \(c_{0}>0\) yields a step-size with a constant component. The resulting bound is \[\|V^{\star}-V^{k}\|_{\infty}\leq\gamma^{k}\|V^{\star}-V^{0}\|_{\infty}+\frac{c _{0}}{1-\gamma},\] which converges linearly up to some accuracy controlled by \(c_{0}\).
* \(c_{i}=\gamma^{i+1}c_{0}\) for some initial \(c_{0}>0\) will yield a step-size with a component that is geometrically increasing as in Xiao (2022). The resulting bound is \[\|V^{\star}-V^{k}\|_{\infty}\leq\gamma^{k}\Big{(}\|V^{\star}-V^{0}\|_{\infty}+ kc_{0}\Big{)},\] which converges linearly with the sought-for \(\gamma\)-rate, though in early iterations the \(k\) factor may dominate.
* \(c_{i}=\gamma^{2(i+1)}c_{0}\) for some initial \(c_{0}>0\) will yield a step-size with a component that is geometrically increasing at a rate more aggressive than above. The resulting bound is \[\|V^{\star}-V^{k}\|_{\infty}\leq\gamma^{k}\Big{(}\|V^{\star}-V^{0}\|_{\infty}+ \frac{c_{0}}{1-\gamma}\Big{)},\] which converges linearly with the exact \(\gamma\)-rate, and matches the bounds of PI and VI as \(c_{0}\) goes to 0. PMD cannot do better as we will show in Theorem 4.
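For this last choice, the stated bound follows from (9) by summing a geometric series:
\[\sum_{i=1}^{k}\gamma^{-i}c_{i-1}=c_{0}\sum_{i=1}^{k}\gamma^{-i}\gamma^{2i}=c_{0}\sum_{i=1}^{k}\gamma^{i}\leq c_{0}\frac{\gamma}{1-\gamma}\leq\frac{c_{0}}{1-\gamma}.\]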
We make our contributions precise by comparing Theorem 3 to two relevant works.
**Comparison to Xiao (2022) (Section 4):** Linear convergence of unregularised PMD was first established by Xiao (2022) under a geometrically increasing step-size \(\eta_{k}=\eta_{0}/\gamma^{k}\). Their rate of convergence is \(1-\frac{1}{\theta_{\rho}}\) where \(\theta_{\rho}\) is an instance-dependent term defined as follows
\[\theta_{\rho}=\frac{1}{1-\gamma}\Big{\|}\frac{d_{\rho}^{\star}}{\rho}\Big{\|} _{\infty},\]
where \(d_{\rho}^{\star}\) is the visitation distribution defined in (1) under an optimal policy and \(\rho\) is the starting-state distribution to which the bound applies, i.e. the bound is on \(V^{\star}(\rho)-V^{k}(\rho)\). The resulting rate is at best \(\gamma\), attained when \(\rho\) is taken to be the stationary distribution of the optimal policy. However, in this case, the guarantee only applies to states on the support of this stationary distribution and provides no guarantees for other states. In general, it is unclear how \(\theta_{\rho}\) may scale over a specific MDP. In particular, it is possible to construct an MDP where \(\theta_{\rho}\) scales linearly with the size of the state space \(|\mathcal{S}|\) (Appendix E.1). Though this MDP is somewhat trivial, it nonetheless illustrates how \(\theta_{\rho}\) can easily become large, leading to slow rates of convergence. It is also not straightforward to obtain convergence in individual states from the bound in Xiao (2022) due to the presence of \(\rho\) in
the denominator of the mismatch coefficient in \(\theta_{\rho}\). In contrast, we obtain the optimal \(\gamma\)-rate of convergence and our result holds in \(\ell_{\infty}\)-norm over all states so avoids having to deal with a starting-state distribution \(\rho\) altogether.
This distribution mismatch commonly appears in convergence bounds in the literature (Kakade and Langford, 2002; Scherrer, 2014; Bhandari and Russo, 2019; Shani et al., 2020; Agarwal et al., 2021). For many of these papers, removing it would be of great interest though often does not appear possible. Our results show that it is removable for the general family of PMD algorithms and we can obtain dimension-free linear convergence.
**Comparison to Khodadadian et al. (2021) (Section 3.2):** They establish a \(\gamma\)-rate for NPG, a specific instance of PMD for which the Bregman Divergence is the KL-divergence. The bound shown in their work (if \(\gamma>0.9\)) is the same as the one implied by our result with \(c_{i}=\gamma^{2(i+1)}c_{0}\). Defining \(\Delta^{k}(s)=\max_{a\in\mathcal{A}}Q^{k}(s,a)-\max_{a\notin\mathcal{A}^{k}_{ s}}Q^{k}(s,a)\), the minimal sub-optimality gap in state \(s\) under \(\pi^{k}\), then the step-size corresponding to their bound with the KL as Bregman Divergence is
\[\eta_{k}\geq\max_{s,\widetilde{\pi}^{k+1}_{s}\in\widetilde{\Pi}^{k+1}_{s}}\Big{\{}\Big{(}L_{k}+\text{log}|\mathcal{A}|+D(\widetilde{\pi}^{k+1}_{s},\pi^{k}_{s})\Big{)}\frac{1}{\Delta_{k}(s)}\Big{\}},\]
for some sequence of positive reals \(\{L_{k}\}_{k\in\mathbb{Z}_{\geq 0}}\). This highlights the connection with our step-size condition (8). In particular, they both have an adaptive component that depends linearly on the Bregman divergence between the current policy and the greedy policy and a non-adaptive component on which the bound depends. An important difference is that our step-size is independent of the sub-optimality gap \(\Delta_{k}(s)\), and will be robust to situations where this gap is small. We can construct a general family of MDPs for which we can make \(\Delta_{k}(s)\) arbitrarily small and the step-size of Khodadadian et al. (2021) will correspondingly become arbitrarily large (Appendix E.2). Despite the apparent similarities with our results, their analysis is significantly different to ours as it exploits the specific closed-form update of NPG to bound the difference in value with an update of PI. Our analysis applies to PMD for a general mirror map and as such does not utilize specific properties of the mirror map and does not require the analytic solution of the update to be known.
### Optimality of PMD
We have established in Theorem 3 that PMD achieves a linear \(\gamma\)-rate. The following result shows that this rate is in fact optimal in a worst-case sense. The proof can be found in Appendix C.
**Theorem 4**.: _Fix \(n>0\) and \(\delta\in(0,(1-\gamma)\gamma^{n})\). There exists a class of MDPs parameterised by \(\delta\) with state-space of size \(|\mathcal{S}|=2n+1\) and a policy \(\pi^{0}\in\text{rint}\,\,\Pi\) such that running iterative updates of (4) for any positive step-size regime, we have for \(k<n\):_
\[\|V^{\star}-V^{k}\|_{\infty}\geq\gamma^{k}\|V^{\star}-V^{0}\|_{\infty}-\frac{ 2\delta}{1-\gamma}. \tag{10}\]
A key feature of this result is that the bound holds for \(k<n\). For an MDP with fixed state-space size \(|\mathcal{S}|\), we can always run PMD for a number of iterations that is greater than the \(n\) of Theorem 4, in which case the bound (10) no longer holds or is no longer meaningful. However, for a fixed iteration budget, Theorem 4 implies that there exists an MDP on which PMD will not do better than the linear \(\gamma\)-rate for any step-size. The \(\gamma\)-rate for PMD that we prove in Theorem 3 is optimal in this sense.
As discussed in Section 3.2, PI is an instance of PMD. We thus cannot expect a lower bound that scales with \(\gamma^{k}\) for all \(k>0\), since PI converges exactly in finite-iterations (in fact with a number of iterations that scales linearly with the size of the state space (Scherrer, 2013)). To the best of our knowledge, this lower bound on the value convergence of PMD scaling with \(\gamma^{k}\) is new. We expect this result to have been known for the special case of PI, though we could not find a proof of it in the literature. The works that establish a lower bound for PI do so in the setting of exact convergence to the optimal policy (Hollanders et al., 2012), not \(\varepsilon\)-accurate convergence, and for undiscounted MDPs (Fearnley, 2010).
We note that there have been some results on the super-linear convergence of NPG in the literature, though these apply once you have a policy within some neighbourhood of the optimal policy or value. Cen et al. (2021) establish such a result for NPG in the regularised case, and Khodadadian et al. (2021) in the unregularised case under certain additional conditions. Theorem 4 does not contradict this latter result as for the MDP considered in the proof, the super-linear convergence would kick-in for iterations beyond the \(k<n\) considered here.
## 5 Sample Complexity of Inexact Policy Mirror Descent under a Generative Model
In the previous sections, we have assumed access to the action values \(Q_{s}^{k}\) to carry out the PMD update. In Inexact PMD (IPMD), we replace \(Q_{s}^{k}\) with an estimate \(\widehat{Q}_{s}^{k}\), giving the update
\[\pi_{s}^{k+1}=\text{argmin}_{p\in\Delta(\mathcal{A})}\Big{\{}-\eta_{k}\langle \widehat{Q}_{s}^{k},p_{s}\rangle+D_{h}(p_{s},\pi_{s}^{k})\Big{\}}. \tag{11}\]
Similarly to the exact case, IPMD iteratively applies update (11) for some sequence of \(\eta_{k}>0\) and initial policy \(\pi^{0}\in\text{rint}\ \Pi\), this time only assuming access to an inexact estimator of \(Q^{k}\).
We consider the setting of a generative model (Kearns and Singh, 1998), which is a sampling model where we can draw samples from the transition probabilities \(p(\cdot|s,a)\) for any pair \((s,a)\). We borrow an estimator common in the literature (see e.g. Xiao (2022), Lazaric et al. (2016)): for all state-actions pairs \((s,a)\), draw \(M_{k}\) trajectories of length or horizon \(H\), i.e samples of the form
\[\big{(}(s_{0}^{(i)},a_{0}^{(i)}),(s_{1}^{(i)},a_{1}^{(i)}),...,(s_{H-1}^{(i) },a_{H-1}^{(i)})\big{)}_{i=1,...,M_{k}},\]
where \(a_{t}^{(i)}\) is drawn from \(\pi^{k}(\cdot|s_{t}^{(i)})\), \(s_{t+1}^{(i)}\) is drawn from \(p(\cdot|s_{t}^{(i)},a_{t}^{(i)})\) and \((s_{0}^{(i)},a_{0}^{(i)})=(s,a)\). Using these samples, we can form a truncated Monte-Carlo estimate of the values as follows,
\[\widehat{Q}^{k}(s,a)=\frac{1}{M_{k}}\sum_{i=1}^{M_{k}}\widehat{Q}_{(i)}^{k}(s,a),\quad\text{where}\quad\widehat{Q}_{(i)}^{k}(s,a)=\sum_{t=0}^{H-1}\gamma^{ t}r(s_{t}^{(i)},a_{t}^{(i)}). \tag{12}\]
We use these \(\widehat{Q}^{k}(s,a)\) to replace \(Q^{k}(s,a)\) in the PMD update step. Xiao (2022) present a bound on the accuracy of this estimator which is restated in Appendix D. Following the same ideas as Xiao (2022), we can extend Theorem 3 to the inexact setting. The following theorem establishes a sample complexity result, which is the sufficient number of calls to the sampling model to obtain an \(\varepsilon\)-optimal policy. For simplicity, we focus on the step-size following from the choice \(c_{i}=\gamma^{2(i+1)}\).
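Before stating the guarantee, the following is a minimal sketch of the truncated Monte-Carlo estimator (12); the function `sample_next_state`, standing in for the generative model, and the loop structure are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def mc_q_estimate(sample_next_state, r, pi, gamma, H, M, rng):
    """Truncated Monte-Carlo estimate (12) of Q^pi under a generative model.

    sample_next_state(s, a): draws s' ~ p(.|s, a)   (assumed generative model)
    r: (S, A) rewards, pi: (S, A) policy, H: horizon, M: trajectories per pair.
    """
    S, A = r.shape
    Q_hat = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            total = 0.0
            for _ in range(M):
                ret, cs, ca = 0.0, s, a           # trajectory starts at (s, a)
                for t in range(H):
                    ret += gamma ** t * r[cs, ca]
                    cs = sample_next_state(cs, ca)
                    ca = rng.choice(A, p=pi[cs])  # next action drawn from pi^k
                total += ret
            Q_hat[s, a] = total / M
    return Q_hat
```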
**Theorem 5**.: _Consider applying iterative updates of (11) using the Q-estimator in (12) given access to a generative model with \(\pi^{0}\in\text{rint }\Pi\) and step-sizes satisfying for all \(k\geq 0\),_
\[\eta_{k}\geq\max_{s\in\mathcal{S}}\Big{\{}\min_{\tilde{\pi}_{s}^{k+1}\in \tilde{\Pi}_{s}^{k+1}}\frac{D(\tilde{\pi}_{s}^{k+1},\pi_{s}^{k})}{\gamma^{2k+1} }\Big{\}}.\]
_Fix \(\varepsilon>0\). For any \(\delta\in(0,1)\), suppose the following are satisfied for all \(k\geq 0\),_
\[K>\frac{1}{1-\gamma}\text{log}\frac{4}{(1-\gamma)\varepsilon},\quad H\geq \frac{1}{1-\gamma}\text{log}\frac{16}{(1-\gamma)^{3}\varepsilon}\quad\text{ and}\quad M_{k}=M\geq\frac{\gamma^{-2H}}{2}\text{log}\frac{2K|\mathcal{S}|| \mathcal{A}|}{\delta}.\]
_Then we have with probability at least \(1-\delta\),_
\[\|V^{\star}-V^{k}\|_{\infty}\leq\gamma^{k}\Big{(}\|V^{\star}-V^{0}\|_{\infty}+ \frac{1}{1-\gamma}\Big{)}+\frac{8}{(1-\gamma)^{3}}\gamma^{H}<\varepsilon.\]
_Choosing \(K\), \(H\) and \(M\) to be tight to their lower-bounds, the corresponding sample complexity is \(\tilde{O}\Big{(}\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{8}\varepsilon^ {2}}\Big{)}\), where the notation \(\tilde{O}()\) hides poly-logarithmic factors._
The proof can be found in Appendix D.1. The sample complexity established by Xiao (2022) (Theorem 16) under a generative model and the same Q-estimator is
\[\tilde{O}\Big{(}\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{8}\varepsilon^ {2}}\Big{\|}\frac{d_{\rho}^{\star}}{\rho}\Big{\|}_{\infty}^{3}\Big{)}.\]
In their work, Xiao (2022) stresses the interest in reducing the dependence on \(1/(1-\gamma)\) and the distribution mismatch coefficient in order to scale PMD guarantees to more relevant settings such as function approximation. Theorem 5 partially resolves this matter by removing the dependence on the distribution mismatch coefficient, which may scale with the size of the state space (Appendix E.1). This makes the result fully dimension independent, which is crucial when scaling the results to large or infinite state or action spaces. The dependence on \(1/(1-\gamma)\) remains distant from the \(1/(1-\gamma)^{3}\) lower-bound of Azar et al. (2013) (see Section 2). Whether this can be reached by PMD methods remains open, though using a more suitable Q-estimator than (12) with our step size regime and analysis could bring the sample complexity closer to this.
## 6 Analysis
In this section, we present the proof of Theorem 3. A key component in establishing the \(\gamma\)-rate is avoiding the performance difference lemma (Appendix A) beyond Lemma 2. In prior works, the value sub-optimalities \(V^{\star}(\rho)-V^{k}(\rho)\) and \(V^{\star}(\rho)-V^{k+1}(\rho)\) appear through the performance difference lemma. In particular, Xiao (2022) use it on \(\mathbb{E}_{s\sim d_{\rho}^{k}}[(Q_{s}^{k},\pi_{s}^{\star}-\pi_{s}^{k+1})]\), which introduces a distribution mismatch coefficient in order to get a recursion. On the other hand, we extract the value sub-optimalities \(V^{\star}(s)-V^{k}(s)\) and \(\|V^{\star}-V^{k+1}\|_{\infty}\) directly from \(\langle Q_{s}^{k},\pi_{s}^{\star}-\pi_{s}^{k+1}\rangle\) in (14). This leads to an elegant analysis that may be of interest in the study of other methods, and ultimately allows us to overcome distribution mismatch factors for an exact \(\gamma\)-rate.
### Proof of Theorem 3
Fix \(s\in\mathcal{S}\) and \(k\geq 0\). From Lemma 2, we have that \(\langle Q_{s}^{k},\pi_{s}^{k+1}\rangle\leq\langle Q_{s}^{k+1},\pi_{s}^{k+1} \rangle=V^{k+1}(s)\). This decouples the dependencies on \(\pi^{k}\) and \(\pi^{k+1}\) below and is one of the ingredients that allows us to bypass the performance difference lemma. Using this,
\[\langle Q_{s}^{k},\pi_{s}^{\star}-\pi_{s}^{k+1}\rangle \geq\langle Q_{s}^{k},\pi_{s}^{\star}\rangle-V^{k+1}(s)\] \[=\langle Q_{s}^{k}-Q_{s}^{\star},\pi_{s}^{\star}\rangle+\langle Q _{s}^{\star},\pi_{s}^{\star}\rangle-V^{k+1}(s)\] \[\geq-\|Q_{s}^{\star}-Q_{s}^{k}\|_{\infty}+V^{\star}(s)-V^{k+1}(s), \tag{13}\]
where the last step uses Hölder's inequality. Now we use that the difference in state-action values of different policies for the same state-action pair propagates the error to the next time-step, which is discounted by a factor of \(\gamma\). Formally, for any state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\),
\[Q^{\star}(s,a)-Q^{k}(s,a) =\gamma\sum_{s^{\prime}}p(s^{\prime}|s,a)(V^{\star}(s^{\prime})-V ^{k}(s^{\prime}))\] \[\leq\gamma\sum_{s^{\prime}}p(s^{\prime}|s,a)\|V^{\star}-V^{k}\|_ {\infty}\] \[=\gamma\|V^{\star}-V^{k}\|_{\infty},\]
which is the same phenomenon that is responsible for the contraction of the Bellman operator. This gives \(\|Q_{s}^{\star}-Q_{s}^{k}\|_{\infty}\leq\gamma\|V^{\star}-V^{k}\|_{\infty}\). Plugging into Equation (13),
\[V^{\star}(s)-V^{k+1}(s)-\gamma\|V^{\star}-V^{k}\|_{\infty}\leq\langle Q_{s}^{ k},\pi_{s}^{\star}-\pi_{s}^{k+1}\rangle.\]
The rest of the proof relies on making the right-hand side of the above arbitrarily small by taking a large enough step size. Choose any greedy policy with respect to \(Q_{s}^{k}\), \(\widetilde{\pi}_{s}^{k+1}\in\widetilde{\Pi}_{s}^{k+1}\),
\[V^{\star}(s)-V^{k+1}(s)-\gamma\|V^{\star}-V^{k}\|_{\infty} \leq\langle Q_{s}^{k},\pi_{s}^{\star}-\pi_{s}^{k+1}\rangle \tag{14}\] \[\leq\langle Q_{s}^{k},\widetilde{\pi}_{s}^{k+1}-\pi_{s}^{k+1}\rangle \tag{15}\]
where we use that \(\widetilde{\pi}_{s}^{k+1}\) is greedy with respect to \(Q_{s}^{k}\). We then apply Lemma 1 or (7) to \(p=\widetilde{\pi}_{s}^{k+1}\),
\[\langle Q_{s}^{k},\widetilde{\pi}_{s}^{k+1}-\pi_{s}^{k+1}\rangle \leq\frac{D(\widetilde{\pi}_{s}^{k+1},\pi_{s}^{k})-D(\widetilde{\pi}_{s}^{k+1 },\pi_{s}^{k+1})-D(\pi_{s}^{k+1},\pi_{s}^{k})}{\eta_{k}}\leq D(\widetilde{\pi} _{s}^{k+1},\pi_{s}^{k})/\eta_{k}.\]
Combining with (15) and noting that this holds for any \(\widetilde{\pi}_{s}^{k+1}\in\widetilde{\Pi}_{s}^{k+1}\), we have
\[V^{\star}(s)-V^{k+1}(s)-\gamma\|V^{\star}-V^{k}\|_{\infty}\leq\frac{1}{\eta_{ k}}\text{min}_{\widetilde{\pi}_{s}^{k+1}\in\widetilde{\Pi}_{s}^{k+1}}D( \widetilde{\pi}_{s}^{k+1},\pi_{s}^{k})\leq c_{k}\]
from the step-size condition in the statement of the theorem. Rearranging and recalling that \(s\) and \(k\) were arbitrary, we can choose \(s\) where \(V^{\star}(s)-V^{k+1}(s)\) reaches its maximum value. We get
\[\|V^{\star}-V^{k+1}\|_{\infty}\leq\gamma\|V^{\star}-V^{k}\|_{\infty}+c_{k},\]
and unravelling this recursion completes the proof.
## 7 Conclusion
In this paper, we have shown that the general family of exact policy mirror descent algorithms in tabular MDPs match the dimension-free linear \(\gamma\)-rate of convergence of classical algorithms such as policy iteration. We provide a matching lower-bound that establishes this rate as optimal for PMD. We exploit a new approach to study the convergence of PMD, for which avoiding the performance difference lemma in the main analysis is a key element. Our analysis also naturally extends to the inexact setting, given access to an estimator of the action-value of a policy. We provide a result for a simple estimator under a generative model that improves upon the best-known sample complexity but remains far-off the optimal lower bound. Our method is general and applies to any estimator, meaning our result could be improved by a better estimator. It is not known if PMD can match the lower-bound in the inexact model-free setting. Exploiting further algorithmic properties of PMD in the inexact setting may be needed to bridge the gap to the optimal sample complexity.
|
2307.13348 | Boost clustering with Gaussian Boson Sampling: a full quantum approach | Gaussian Boson Sampling (GBS) is a recently developed paradigm of quantum
computing consisting of sending a Gaussian state through a linear
interferometer and then counting the number of photons in each output mode.
When the system encodes a symmetric matrix, GBS can be viewed as a tool to
sample subgraphs: the most sampled are those with a large number of perfect
matchings, and thus are the densest ones. This property has been the foundation
of the novel clustering approach we propose in this work, called GBS-based
clustering, which relies solely on GBS, without the need of classical
algorithms. The GBS-based clustering has been tested on several datasets and
benchmarked with two well-known classical clustering algorithms. Results
obtained by using a GBS simulator show that on average our approach outperforms
the two classical algorithms in two out of the three chosen metrics, proposing
itself as a viable full-quantum clustering option. | Nicolò Bonaldi, Martina Rossi, Daniele Mattioli, Michele Grapulin, Blanca Silva Fernández, Davide Caputo, Marco Magagnini, Arianna Osti, Fabio Veronese | 2023-07-25T09:05:24Z | http://arxiv.org/abs/2307.13348v1 | # Boost clustering with Gaussian Boson Sampling: a full quantum approach
###### Abstract
Gaussian Boson Sampling (GBS) is a recently developed paradigm of quantum computing consisting of sending a Gaussian state through a linear interferometer and then counting the number of photons in each output mode. When the system encodes a symmetric matrix, GBS can be viewed as a tool to sample subgraphs: the most sampled are those with a large number of perfect matchings, and thus are the densest ones. This property has been the foundation of the novel clustering approach we propose in this work, called _GBS-based clustering_, which relies solely on GBS, without the need of classical algorithms. The GBS-based clustering has been tested on several datasets and benchmarked with two well-known classical clustering algorithms. Results obtained by using a GBS simulator show that on average our approach outperforms the two classical algorithms in two out of the three chosen metrics, proposing itself as a viable full-quantum clustering option.
## I Introduction
Clustering is one of the most common unsupervised learning problems [1]. Given a dataset, the goal is to group objects that are similar to each other with the purpose of having subsets that are meaningful in the specific context [2]. Therefore, a key concept is the definition of similarity, that must be tailored to each problem. This information can be provided as dissimilarity or design matrices whose values depend on the type of features that describe the input data [3; 4].
There are many clustering algorithms, and the choice depends on the type and distribution of the data [5]. Among these, k-means clustering is widely used since it can be implemented relatively easily and is suitable for large datasets; however, it assumes convex cluster shapes, which can be detrimental when the data distribution does not match this structure [6]. In this case, an interesting approach is DBSCAN, a density-based clustering algorithm. This method uses two parameters to determine whether a region is dense, and points in the same dense region are clustered together [7]. Nevertheless, DBSCAN is not appropriate for every data distribution; e.g., points belonging to regions with different densities cannot be correctly clustered.
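For reference, both classical baselines are available in scikit-learn; the snippet below shows a typical invocation, where the dataset `X` and the parameter values are placeholders rather than the settings used in this work.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

X = np.random.rand(200, 2)  # placeholder dataset of 2-D points

kmeans_labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)
```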
Considering advantages and disadvantages of classical algorithms, this work explores the potential benefits that a quantum approach to the clustering problem can achieve.
Quantum computing promises to perform certain types of calculation considerably faster than classical computing, by exploiting quantum mechanics effects such as superposition, interference and entanglement [8]. There exist several quantum computing paradigms, the main ones being _quantum annealing_ and _universal quantum computing_. The first one [9] is mainly suitable for solving optimization problems formulated as Quadratic Unconstrained Binary Optimization (QUBO) problems [10] and relies on the adiabatic quantum theorem [11]. Once a system of qubits has been prepared depending on the model to be solved, it is left free to evolve towards the ground state, under the condition it corresponds to the optimal solution of the quadratic problem. On the contrary, universal quantum computing [12] is based on a direct manipulation of qubits through quantum gates. This precise control of the quantum system guarantees a larger range of possible applications, but at the cost of requiring higher quality qubits [13].
In 2011 Aaronson and Arkhipov [14] enriched the set of quantum computing protocols by introducing _Boson Sampling_ (BS), a novel model of quantum computation consisting of simultaneously sending identical photons through a linear interferometer and observing the output pattern in the photon number basis. They demonstrated that, under reasonable assumptions, BS is able to solve sampling problems which are beyond the capabilities of classical computing (in particular, sampling according to the permanent of a submatrix of the interferometer unitary), paving the way to the so-called _quantum advantage_ [14; 15]. Although it does not demand total control over the quantum system, a physical implementation of BS still needs perfectly deterministic sources of single photons, which happen to be extremely difficult to achieve. Even though there have been some experimental realizations of this protocol [16; 17], in order to ease the production of single input photons, several variants of Boson Sampling have been proposed (for instance, the Scattershot Boson Sampling [18], which makes use of Gaussian states to improve the scaling of the generation probability of single photons which enter the interferometer). In 2017, Hamilton et al. [19] fully exploited the nature of Gaussian states by introducing a new protocol called _Gaussian Boson Sampling_ (GBS); contrary to other BS-inspired models, GBS's input consists of a Gaussian squeezed state, which guarantees significant experimental advantages. In 2022, Madsen et al. [20] implemented GBS on a photonic processor called Borealis and proved that this quantum protocol, when used to sample from a specific distribution, provides a significant computational advantage with respect to the best known algorithm running on a supercomputer. Moreover, several applications of GBS have been studied by Bromley et al. [21], ranging from graph similarity and graph optimization to molecular docking and quantum chemistry, showing that even though it is not a form of universal quantum computing, GBS offers considerable versatility and can be used to efficiently solve several different problems.
In this work, we exploit the connection between Gaussian Boson Sampling and graph theory studied by Bradler et al. [22] to develop a GBS-based clustering approach. The proposed quantum clustering technique aims to find clusters as dense regions of a properly constructed graph, relying solely on GBS, using a classical approach only in the post-processing phase to deal with isolated points. The approach has been tested on several datasets and benchmarked with the well-known k-means algorithm and DBSCAN, considering three different metrics. In the absence of a QPU which can encode any symmetric matrix, GBS has been performed by using a simulator provided by Xanadu [23]. Results show that our approach outperforms the two classical clustering algorithms when considering two of the three metrics and thus can be considered a viable full-quantum clustering option.
The rest of this paper is organized as follows: Section II briefly introduces Gaussian Boson Sampling and its link to graph theory and the Hafnian of a matrix. Section III presents and explains the novel GBS-based clustering technique. The obtained results are collected in Section IV and deeply discussed in Section V. Finally, conclusions and future works are presented in Section VI.
## II Gaussian Boson Sampling
Developed as an evolution of Boson Sampling, Gaussian Boson Sampling is a specialized approach to photonic quantum computation, consisting of sending single-mode squeezed states into a linear interferometer. At the exit of the interferometer, detectors perform Fock-state measurements on the obtained Gaussian state, counting the number of photons in each output mode.
As mentioned in the previous section, the clustering technique we propose in this work strongly relies on the connection between GBS and graph theory. Before introducing such a relationship, we start by reviewing the results about photo-counting from a Gaussian state presented in [19].
Consider a system of \(M\)_qumodes_, namely \(M\) optical modes of the quantized electromagnetic field. The state of this system can be uniquely identified by a quasi-probability distribution described by the Wigner function, \(W(\mathbf{p},\mathbf{q})\), where \(\mathbf{p}\in\mathbb{R}^{M}\) and \(\mathbf{q}\in\mathbb{R}^{M}\) are the position and momentum quadrature operators, respectively. Gaussian states [24] are those states whose Wigner function is a Gaussian distribution; as such, they are characterized by a \(2M\times 2M\) covariance matrix \(\sigma\) and two \(M\)-dimensional vectors of means \(\mathbf{\bar{p}},\mathbf{\bar{q}}\). Now, let \(\sigma_{A}\) be the covariance matrix of an arbitrary \(M\)-mode Gaussian state with zero mean and define the matrix \(\mathcal{A}\) as:
\[\mathcal{A}:=X_{2M}[\mathbb{I}_{2M}-(\sigma_{A}+\mathbb{I}_{2M}/2)^{-1}], \tag{1}\]
where \(\mathbb{I}_{2M}\) is the \(2M-\)dimensional identity matrix and \(X_{2M}:=\begin{bmatrix}0&\mathbb{I}_{M}\\ \mathbb{I}_{M}&0\end{bmatrix}\).
Assume also that \(\bar{n}=(n_{1},n_{2},...,n_{M})\) corresponds to a specific output photon configuration, associated with the projective measurement \(\bigotimes_{i=1}^{M}\left|n_{i}\right\rangle\left\langle n_{i}\right|\), where \(n_{i}\) is the number of photons measured in the \(i\)-th mode. Then, it can be shown [25; 19] that the probability of observing \(\bar{n}\) is
\[\mathbb{P}(\bar{n})=\frac{Haf(\mathcal{A}_{\bar{n}})}{\bar{n}!\sqrt{det(\sigma _{Q})}}, \tag{2}\]
where \(\bar{n}!:=n_{1}!n_{2}!...n_{M}!\), \(\sigma_{Q}:=\sigma_{A}+\mathbb{I}_{2M}/2\) and \(\mathcal{A}_{\bar{n}}\) is a matrix associated to the observed output \(\bar{n}\). In particular, it is constructed starting from \(\mathcal{A}\) as follows: if \(n_{i}=0\), rows and columns \(i\) and \(i+M\) are removed from \(\mathcal{A}\) and, if \(n_{i}>0\), rows and columns \(i\) and \(i+M\) are repeated \(n_{i}\) times. Note that, when \(n_{i}>1\) for some \(i\), this procedure produces a matrix which has no physical meaning (one can think of the repeated rows and columns to correspond to observed "pseudo-modes"); however, this allows one to link the \(\mathbb{P}(\bar{n})\) to the _Hafnian_ of a matrix in any output situation. The Hafnian of a \(2M\)-square matrix \(B\) was introduced by Caianiello [26] in the context of quantum field theory and is defined as
\[Haf(B):=\sum_{\mu\in PMP}\prod_{i=1}^{M}B_{\mu(2i-1),\mu(2i)}, \tag{3}\]
where \(PMP\) is the set of perfect matching permutations.
When sending states which have been squeezed according to a squeezing transformation \(S\) through a linear interferometer described by a Haar random unitary \(T\), the output Gaussian state has a covariance matrix \(\sigma_{A}\) dependent on both \(S\) and \(T\) (see [19] for the explicit formula). Gaussian Boson Sampling has been introduced as the protocol which generates such a state and performs photo-counts measurements on it, according to Eq. (2).
Returning to the photo-counts analysis, when the Gaussian state is _pure_, the matrix \(\mathcal{A}\) can be written as \(\mathcal{A}=A\oplus A^{*}\), with \(A\) an \(M\times M\) symmetric matrix, and the output probability distribution of the photo-counts becomes
\[\mathbb{P}(\bar{n})=\frac{|Haf(A_{\bar{n}})|^{2}}{\bar{n}!\sqrt{det(\sigma_{Q}) }}, \tag{4}\]
where the matrix \(A_{\bar{n}}\) is constructed considering only rows and columns \(i\) (and not \(i,i+M\) as for \(\mathcal{A}_{\bar{n}}\) above).
By relying on the above expression of \(\mathcal{A}\), it is possible to efficiently encode any symmetric matrix \(A\) into a GBS device [21; 22]. In other words, it is possible to set the squeezing transformation \(S\) and the unitary \(T\) in such a way that the produced Gaussian state has a covariance matrix \(\sigma_{A}\) which guarantees that the Hafnian appearing in Eq. (4) is computed on (possibly a submatrix of) a given symmetric matrix \(A\). The proposed procedure exploits the Takagi-Autonne decomposition [27] of \(A\) and results in a pure Gaussian state. Equation (4) becomes
\[\mathbb{P}(\bar{n})\propto c^{s}\frac{|Haf(A_{\bar{n}})|^{2}}{\bar{n}!}, \tag{5}\]
where \(c\) is a rescaling parameter linked to the squeezing applied to the input modes and \(s:=\sum_{i=1}^{M}n_{i}\).
Suppose now that the symmetric matrix \(A\) encoded into the GBS machine is the adjacency matrix of an undirected graph \(G\). As shown in [28], \(Haf(A)\) corresponds to the number of _perfect matchings_ of \(G\). A perfect matching of \(G\) is a subset of edges of \(G\) which match up every node of \(G\) exactly once. Assessing the number of perfect matchings of a graph is a known difficult task for classical computers: in fact, it can be proven that this problem (which in turns corresponds to computing the Hafnian of the adjacency matrix) belongs to the \(\#P-\)complete complexity class [29]. However, thanks to the possibility of encoding any symmetric matrix into the GBS device, Gaussian Boson Sampling can be actually used to estimate the number of perfect matchings of an arbitrary graph \(G\). In particular, it is related to the probability of observing \(n=(1,1,...,1)\), according to Eq. (5). Note also that, if the output \(n\) contains only 0s and 1s, it can be used to identify a subgraph of \(G\) in the following way: if \(n_{i}=1\), the \(i-\)th node of the graph is selected, whereas if \(n_{i}=0\) the \(i-\)th node is discarded. In addition, Eq. (5) states that the probability of observing a subgraph of the encoded graph \(G\) is proportional to the square Hafnian of the corresponding adjacency matrix, so that subgraphs with a large Hafnian are sampled with a higher probability. In other words, a GBS machine can be prepared such that it samples, with high probability, subgraphs whose number of perfect matchings is large. Aaghabali et al. [30] highlighted the connection between the number of perfect matchings in a graph and its density. In particular, the authors found a quantitative relationship between the two, confirming the intuition that a graph with a large number of perfect matchings is expected to contain many edges. Now the picture is complete: when sampling from a GBS device which encodes a graph \(G\), the subgraphs that are most likely to appear are the dense ones. This fact has been exploited in [31] to find dense subgraphs and is the foundation of our clustering algorithm.
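As a small illustration of Eq. (3) and of its counting interpretation, the Hafnian can be evaluated by recursively pairing the first index with every other one; for a 0/1 adjacency matrix this returns the number of perfect matchings of the graph. The sketch below is an exponential-time reference implementation intended only for very small matrices (optimized routines exist in dedicated libraries such as Xanadu's The Walrus).

```python
import numpy as np

def hafnian(B):
    """Hafnian of a symmetric matrix (Eq. (3)) by recursive pairing.

    Exponential-time reference code: for a 0/1 adjacency matrix it returns
    the number of perfect matchings of the corresponding graph.
    """
    n = B.shape[0]
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # odd-sized graphs have no perfect matching
    total = 0
    for j in range(1, n):
        keep = [k for k in range(1, n) if k != j]
        total += B[0, j] * hafnian(B[np.ix_(keep, keep)])
    return total

# The 4-cycle 0-1-2-3-0 has exactly two perfect matchings.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(hafnian(C4))  # -> 2
```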
## III GBS-based clustering
Let \(\{x_{i}\}_{i}\) be a set of points to be clustered. Classical clustering algorithms such as k-means [6] group these points according to a distance function in such a way that close points belong to the same cluster. In particular, given the number of clusters \(k\), k-means iteratively associates every element to the closest cluster and then it recomputes the cluster centers ("centroids"), until there are no more changes in the cluster composition. The nearest group is identified by computing the distance between data points and each cluster center; typically, Euclidean distance is used but any other distance metric can be implemented as well [6]. As mentioned before, k-means has some limitations linked to the clusters' shape: non-convex clusters which are not clearly separated are hardly identified. A different approach is adopted by DBSCAN, which is a density-based clustering method [7]. This algorithm uses two parameters, \(\varepsilon\) and \(MinPts\), to define clusters as dense regions. However, as mentioned in Section I, even DBSCAN is not suitable for every dataset.
Our clustering approach, which we name _GBS-based clustering_, adopts a different point of view. In Section II we highlighted the relationship between GBS and graph theory. In particular, when sampling from the GBS distribution of a graph, the subgraphs that are most likely to appear are the ones with high density. Such subgraphs consist of points which are connected to each other and disconnected to points belonging to other dense subgraphs. If one thinks of a node of the graph as a point \(x_{i}\) to be clustered and ensures that close points are connected, then dense subgraphs correspond to the common interpretation of clusters. Starting from this observation, the first step of our clustering approach (see Algorithm description at the end of this section) is to build a sparse graph \(G\) from the points to be clustered. First, we compute the distance matrix \(D\) of the \(\{x_{i}\}_{i}\) such that \(D_{ij}:=d(x_{i},x_{j})\), \(d\) being a distance. Then, we set a threshold \(\tilde{d}\) and we build the adjacency matrix \(A\) which characterizes the graph \(G\) as
\[A_{ij}:=\begin{cases}1&\text{if }D_{ij}<\tilde{d}\\ 0&\text{otherwise.}\end{cases} \tag{6}\]
In other words, two data points \(x_{i}\), \(x_{j}\) are connected in \(G\) if and only if their distance \(D_{ij}\) is smaller than a
chosen threshold \(\tilde{d}\). This way, we convert a list of points \(\{x_{i}\}_{i}\) into an undirected sparse graph \(G\): by construction, dense subgraphs of \(G\) consist of points which are close to each other and can therefore be considered clusters (see Figure 1).
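A minimal sketch of this graph-construction step (Eq. (6)), using the Euclidean distance and the percentile-based threshold described later in Section IV; function and variable names are ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_adjacency(X, percentile=35):
    """Connect two points iff their distance is below the threshold d~ of Eq. (6),
    here taken as a percentile of all pairwise distances."""
    D = cdist(X, X)                                            # distance matrix D_ij
    d_tilde = np.percentile(D[np.triu_indices_from(D, k=1)], percentile)
    A = (D < d_tilde).astype(int)
    np.fill_diagonal(A, 0)                                     # no self-loops
    return A
```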
The algorithm is iterative and starts by considering the above graph \(G\) and its adjacency matrix \(A\). At each step, we obtain \(N\) subgraphs by performing GBS \(N\) times from the adjacency matrix \(A\). We then identify the densest subgraph (in case of a tie, the subgraph which has the largest number of nodes is considered) and if its density is higher than a threshold \(t\), then it is chosen as a cluster, otherwise GBS is used to sample another set of \(N\) subgraphs and the process is repeated. At each iteration, the threshold \(t\) is lowered: this way, after a few samplings, the probability of identifying a cluster is very high. Once the cluster is found, the corresponding nodes are discarded from the graph, the adjacency matrix \(A\) is updated and the process restarts. This loop is repeated as long as a sufficient number of nodes remains in the graph.
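The iteration just described can be summarized in the following sketch. The GBS sampler is abstracted as a callable, and the density-threshold schedule and stopping size are illustrative placeholders rather than the exact choices of Section IV.

```python
import numpy as np

def density(A, nodes):
    """Edge density of the subgraph induced by `nodes` (1.0 = fully connected)."""
    k = len(nodes)
    if k < 2:
        return 0.0
    return A[np.ix_(nodes, nodes)].sum() / (k * (k - 1))       # A is symmetric 0/1

def gbs_clustering(A, sample_subgraphs, N=50, t0=0.8, decay=0.95, min_nodes=3, L=5):
    """Iteratively carve dense subgraphs (clusters) out of the graph encoded by A.
    `sample_subgraphs(sub, N)` must return N lists of node indices (GBS samples)."""
    nodes = list(range(A.shape[0]))
    clusters, t = [], t0
    while len(nodes) > min_nodes:
        sub = A[np.ix_(nodes, nodes)]
        samples = [s for s in sample_subgraphs(sub, N) if len(s) >= L]   # post-selection
        best = max(samples, key=lambda s: (density(sub, s), len(s)), default=None)
        if best is not None and density(sub, best) >= t:
            clusters.append([nodes[i] for i in best])          # accept as a cluster
            nodes = [n for i, n in enumerate(nodes) if i not in set(best)]
            t = t0                                             # restart the process
        else:
            t *= decay                                         # lower the threshold, retry
    return clusters, nodes         # leftover nodes go to the post-processing phase
```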
Note that the check on the density of the subgraphs is crucial, mainly due to the fact that Eq. (5) only guarantees that, when performing GBS, the most frequently sampled subgraphs are those with a large Hafnian. However, a large low-density graph (namely one consisting of a large number of slightly connected nodes) could have a larger Hafnian than a small high-density subgraph (see Figure 2). In other words, the Hafnian of the adjacency matrix can be considered a reliable measure of the density of a graph only when comparing graphs which have the same number of nodes.
Since the number of nodes composing a cluster is not known _a priori_, the density check cannot be avoided. Following the same reasoning, among the \(N\) sampled subgraphs, we post-select only those whose number of nodes is bigger than a threshold \(L\). Indeed, small graphs are more likely to be dense than larger graphs, since the number of possible edges in an \(M\)-node graph is \(\mathcal{O}(M^{2})\). However, in our framework, a cluster does not need to be extremely dense (as in the case of a _clique_[32], i.e. a fully connected graph) because this would limit the quality of the clustering by producing a large number of tiny clusters. On the contrary, it is only required to have a certain degree of connection between its nodes. Therefore, samples are post-selected to avoid that a very small but highly dense subgraph is chosen as a cluster in place of a larger, slightly less dense one. In other words, selecting only large subgraphs helps obtain _maximal_ clusters, namely clusters which cannot be enlarged while preserving a high density.
The process continues while the graph has a sufficient number of nodes and those which remain unclustered enter the post-processing phase. In this final step, each unclustered node \(n\) is assigned to a cluster according to its connectivity. In particular, if \(n\) is an isolated point, it forms a new cluster on its own; otherwise it is assigned to the cluster \(c\) for which the ratio between the number of connections linking \(n\) to \(c\) and the number of nodes of \(c\) is the highest.
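The post-processing rule can be sketched as follows, operating on the full adjacency matrix and on the clusters found so far (names are ours).

```python
import numpy as np

def post_process(A, clusters, leftover):
    """Isolated nodes become singleton clusters; every other unclustered node joins
    the cluster with the highest connections-to-size ratio."""
    for n in leftover:
        if A[n].sum() == 0 or not clusters:
            clusters.append([n])                               # isolated point: own cluster
            continue
        ratios = [sum(A[n, m] for m in c) / len(c) for c in clusters]
        clusters[int(np.argmax(ratios))].append(n)
    return clusters
```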
## IV Results
Since its development, Gaussian Boson Sampling has been implemented on several quantum hardware platforms by different research groups [20; 33]. However, when using the publicly available QPUs [34], it is still not possible to encode a symmetric matrix to obtain a sample from a graph. For this reason, to test our clustering approach, we performed GBS by using a simulator provided by Xanadu [23]. This poses some limitations on the size of
Figure 1: **Creation of the graph \(G\) from the points \(\{x_{i}\}_{i}\). Blue edges are shorter than the chosen threshold \(\tilde{d}\) and thus are selected. In the resulting graph, close points are connected and dense subgraphs can be considered clusters of points.**
Figure 2: **Hafnian of \(A\) vs density of \(G\). a) A small high-density graph: density\((G)=0.83\) and \(Haf(A)=2\). b) A large low-density graph: density\((G)=0.66\) and \(Haf(A)=3\). GBS samples the graph on the right with a larger probability, however it would not be a good cluster, since it is quite sparse.**
the symmetric matrix from which one can sample. In particular, we noticed that graphs with more than 30 nodes are extremely slow to sample from. For this reason, we tested the GBS-based clustering algorithm described in Section III on 30 datasets consisting of a variable number of locations \(\{x_{i}\}_{i=1}^{M}\), with \(M\) ranging from 15 to 25, identified by their latitude and longitude. To guarantee a fair benchmark with k-means and DBSCAN, the usual Euclidean distance has been used to build the distance matrix \(D\). In order to set a meaningful threshold \(\tilde{d}\) (which is then used to build the adjacency matrix of the graph), we tried different percentiles of the distribution of the distances appearing in matrix \(D\); after a careful calibration on a large number of different datasets, we found that the best clusterings were obtained when setting \(\tilde{d}=D_{0.35}\), where \(D_{0.35}\) is the \(35^{th}\) percentile of \(\{D_{ij}\}_{i,j}\). Gaussian Boson Sampling has been performed by using the strawberryfields.sample.sample function [35], which takes as input a symmetric matrix \(A\), the mean number \(n_{mean}\) of photons observed in output and the number \(N\) of samples to produce. Some reasonable values were found to be \(n_{mean}=size(A)/2\) and \(N=50\); the parameter \(L\), used to post-select large samples, has been set to \(L=size(A)/3\). This choice should favor the creation of a large initial cluster but has just a negligible impact on following iterations.
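A hedged sketch of the sampling call with the parameter choices reported above, building on the `build_adjacency` helper from the earlier sketch. The text refers to the function as `strawberryfields.sample.sample`; in current Strawberry Fields releases it is exposed as `strawberryfields.apps.sample.sample`, so treat the exact import path as an assumption.

```python
from strawberryfields.apps import sample     # Xanadu's GBS simulator backend

A = build_adjacency(X, percentile=35)         # adjacency matrix of the graph G
n_mean = A.shape[0] / 2                       # mean photon number
N = 50                                        # samples per iteration
L = A.shape[0] / 3                            # post-selection size threshold

raw = sample.sample(A, n_mean, N, threshold=True)        # list of 0/1 click patterns
subgraphs = [[i for i, bit in enumerate(s) if bit] for s in raw if sum(s) >= L]
```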
The clustering outcomes obtained with our approach have been compared to the results of k-means, where \(k\) has been chosen for each dataset according to the so called _elbow analysis_, and to the results of DBSCAN. Different values of the hyperparameters of the latter have been tested; here we report only the best results, obtained when using \(\varepsilon=0.005\) and \(MinPts=2\). Additionally, noisy points found by DBSCAN have been clustered using the same post-processing function developed for our algorithm. To measure the quality of clustering, we used three metrics: the well-known _silhouette score_[36], the weighted density of clusters \(w\) and the intra-inter cluster cohesion \(\delta_{ie}\). The first one relies on the Euclidean distance between points, whereas the other two exploit the graph structure built upon the data. In particular, \(w\) is defined as \(w:=\frac{\sum_{i}^{k}n_{i}\cdot d_{i}}{M}\in[0,1]\), where \(d_{i}\) and \(n_{i}\) are respectively the density and the cardinality of cluster \(i\) and \(k\) is the number of found clusters. Finally, \(\delta_{ie}\in[-1,1]\) is defined as the average difference \(\delta_{int}-\delta_{ext}\), computed for each cluster. Given a cluster \(i\), \(n_{i}\) is the number of nodes in cluster \(i\), \(edge_{i}^{int}\) is the number of internal edges for the cluster and \(edge_{i}^{ext}\) corresponds to the number of edges connecting cluster \(i\) to any other point outside the cluster. Thus, \(\delta_{int}:=\frac{edge_{i}^{int}}{n_{i}(n_{i}-1)/2}\) and \(\delta_{ext}:=\frac{edge_{i}^{ext}}{n_{i}(n-n_{i})}\). For each metric, the higher the value, the better the clustering. In fact, a high silhouette score implies that, on average, a point is well paired with its assigned cluster. Concerning the weighted density \(w\), it means that clusters are dense subgraphs, namely highly connected sets of points. Recall that, by construction, connectivity between points is strongly related to their proximity, therefore dense subgraphs correspond to high-quality clusters. Finally, a high value of intra-inter cluster cohesion \(\delta_{ie}\) means that, not only the clusters have high density, but also that they are disconnected to each other and therefore points belonging to different clusters are far away. The mean results over the 30 datasets are reported in Table 1 and discussed in the following section.
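A possible implementation of the two graph-based metrics defined above, reusing the `density` helper from the earlier sketch; the silhouette score is available directly as `sklearn.metrics.silhouette_score`.

```python
import numpy as np

def weighted_density(A, clusters, M):
    """w = (sum_i n_i * d_i) / M, with d_i the density of cluster i."""
    return sum(len(c) * density(A, c) for c in clusters) / M

def intra_inter_cohesion(A, clusters):
    """Average of delta_int - delta_ext over all clusters."""
    n = A.shape[0]
    diffs = []
    for c in clusters:
        n_i = len(c)
        if n_i < 2 or n_i == n:
            continue                                          # guard against division by zero
        internal = A[np.ix_(c, c)].sum() / 2                  # edges inside the cluster
        external = A[c, :].sum() - 2 * internal               # edges leaving the cluster
        d_int = internal / (n_i * (n_i - 1) / 2)
        d_ext = external / (n_i * (n - n_i))
        diffs.append(d_int - d_ext)
    return float(np.mean(diffs)) if diffs else 0.0
```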
Finally, it is important to note that, when measuring the GBS output, two options can be experimentally realized: it is possible to count the exact number of photons in each mode or to use threshold detectors that
\begin{table}
\begin{tabular}{c|c|c|c} \hline
**Method** & **Silhouette score** & **Weighted density** & **Intra-inter cluster cohesion** \\ \hline \hline
k-means & avg=0.40, std=0.06 & avg=0.65, std=0.08 & avg=0.52, std=0.12 \\ \hline
DBSCAN & avg=0.29, std=0.11 & avg=0.73, std=0.11 & avg=0.71, std=0.11 \\ \hline
GBS-based clustering & avg=0.33, std=0.10 & avg=0.83, std=0.09 & avg=0.77, std=0.09 \\ \hline
\end{tabular}
\end{table}
Table 1: **Mean results and standard deviations over 30 datasets.** k-means produces the best silhouette score, but GBS-based clustering outperforms it when considering the weighted density \(w\) and the intra-inter cluster cohesion \(\delta_{ie}\). DBSCAN offers good results in terms of density and cohesion, at the cost of a poor silhouette score. Looking at the measured standard deviations, all of the three methods share similar variabilities and are stable with respect to different datasets.
measure only the presence of photons in each mode. Evidently, the first way is more precise, since the output of the second method is composed only of 0s and 1s, which correspond respectively to "no photons" and "at least one photon". Equations (2)-(4)-(5) rely on the first method of measurement: when there is more than one photon (say \(p\)) in an output mode, the corresponding row and column of matrix \(A\) are selected \(p\) times. Thus, if A is the adjacency matrix of a graph \(G\), the resulting sampled graph is no more a subgraph of the original \(G\), but a new graph where some nodes have been repeated along with their connections. For this reason, the ideal way of measurement when performing the proposed GBS-based clustering would be to count the exact number of photons in each mode and post-select just those samples containing only 0s and 1s, which correspond to actual subgraphs of \(G\). However, when using a simulator, counting the photons is an extremely slow operation: because of that, we decided to perform GBS using threshold detectors (setting threshold=True in the sample function). In this case, the exact probability distribution does not rely on the Hafnian of the matrix, but on its _Torontonian_, a matrix function introduced and discussed in [37]. The relationship between Hafnian, Torontonian and density of a graph has been studied in [38] through Monte Carlo simulations. The authors show a positive correlation between the Hafnian and the Torontonian and between the Torontonian and the density of the graph, validating the use of threshold detectors in GBS-measurement. Accordingly, we found that the proposed clustering method works properly even when using this faster approximate method of measurement.
## V Discussion
Every main clustering algorithm requires a choice of some parameters: for instance, in k-means it is the number \(k\) of clusters; in DBSCAN, they are \(\varepsilon\), which defines the radius of the neighbourhood of a point, and \(MinPts\), which is the minimum number of points of a cluster. In the proposed GBS-based clustering, \(\tilde{d}\) is the main parameter to set. It is responsible for the creation of the auxiliary graph \(G\): large values produce a graph which contains a lot of edges, at the risk of connecting points which are not close; small values, instead, generate a highly sparse graph, where fairly close points are not connected and therefore have a low chance of being clustered together. Note that this crucial threshold has a concrete meaning, since it defines the _vicinity_ between points. Therefore, in real scenarios, one can leverage this fact and set \(\tilde{d}\) according to one's own definition of proximity. In this work, however, since the points to be clustered do not represent real-world datasets, we set \(\tilde{d}\) in order to obtain a number of clusters which was similar to the one obtained when using k-means. Other parameters to choose are \(n_{mean}\) and \(N\): the first one has been set quite large in order to favor the sampling of large subgraphs, whereas \(N\) is allowed to be small, since Eq. (5) guarantees that, with high probability, the GBS device automatically samples graphs with a large number of perfect matchings. Finally, in our analysis, where the number of points to be clustered ranged between 15 and 20, the threshold \(L\) has a tangible impact only at the first iteration of the algorithm, when the first cluster is identified. After an accurate analysis, we set \(L=M/3\). Although the chosen thresholds performed as expected, a rigorous way of setting them could be investigated in future work.
Results shown in Table 1 demonstrate that, although the GBS-based clustering shows a smaller silhouette score than k-means, it performs much better when considering the other two metrics. On the contrary, DBSCAN produces a poor silhouette but good results in \(w\) and \(\delta_{ie}\). As mentioned before, this is expected, since it is a density-based approach. However, our algorithm is able to surpass DBSCAN in every considered metric. Moreover, in several datasets, GBS-based clustering happened to be the best approach, getting an even higher silhouette than k-means (for instance, see the dataset reported in Figure 3). Note that, even though the graph \(G\) has no physical meaning and is constructed only to exploit the link between GBS and graph theory, clusters of points \(\{x_{i}\}_{i}\) should be dense with respect to \(G\) anyway. Indeed, a dense cluster represents a set of points which are _pairwise_ close: the actual distance is neglected, but is guaranteed to be below a certain threshold \(\tilde{d}\). K-means produces clusters in such a way that a point is assigned to the closest centroid, which is only loosely related to the other points of its cluster: such clusters do not necessarily have a high density. On the contrary, DBSCAN takes the density of points into account, but loses focus on the global picture, getting poor results in terms of the actual distance. Given the obtained results, and also considering a certain intrinsic variability due to the random nature of GBS, we believe that the proposed GBS-based clustering is able to capture the density of clusters while not neglecting the effective distance between points, resulting in a method which is at least as good as k-means and DBSCAN. We should stress that, if quantum hardware were available, we could have counted the exact number of photons in each mode and post-selected samples corresponding to proper subgraphs of \(G\). This precise method of measurement could have even improved the results.
A remark on the scalability of the proposed approach. The number of possible outcomes of GBS scales exponentially, so that, as \(M\) increases, the number of samples \(N\) required to estimate the actual probability distribution of photon patterns becomes immediately huge. Nevertheless, in our approach, we are not interested in estimating \(\mathbb{P}(\bar{n})\) (nor some Hafnian). Conversely, we know from Eq.
(5) that the most sampled subgraphs are those with a large number of perfect matchings, independently of \(N\) and \(M\). For this reason, by using an accurate quantum hardware, we still expect to be able to produce dense subgraphs by setting a value of \(N\) which guarantees that our GBS-based clustering remains computationally feasible.
Finally, recent works such as [39; 40] have focused on the analysis of _lossy_ Gaussian Boson Sampling, namely one containing imperfections. By means of numerical simulations on graphs generated using the Erdos-Renyi form, they show that, even in the presence of loss and spectral impurity, the samples obtained by GBS do not seem significantly different from the ones obtained when using a perfect GBS. If the results of these analyses were confirmed and extended to general graphs, there would be two consequences: first, the advantage of using GBS in finding dense subgraphs would likely be at most polynomial, since a lossy GBS can be efficiently simulated by classical algorithms. Second, GBS could be realized on a device with few requirements in terms of loss and purity, and thus our clustering algorithm could be implemented on real quantum hardware in the short term. To confirm these results, more study is needed.
Figure 3: **Benchmark between different clustering methods on a selected dataset.****a)** The points to be clustered embedded in the sparse graph \(G\). **b)** Results obtained using k-means. The elbow analysis suggested \(k=3\). **c)** Results obtained using DBSCAN. **d)** Results obtained with the GBS-based clustering. It is evident that the best clusterings have been obtained with the two methods which consider the density of points (DBSCAN and GBS-based clustering). This visual intuition is confirmed by every considered metric: \(sil_{k-means}=0.54\), \(sil_{DBSCAN}=0.61\), \(sil_{\textit{GBS-based clustering}}=0.61\); \(w_{k-means}=0.79\), \(w_{DBSCAN}=1\), \(w_{\textit{GBS-based clustering}}=1\); \(\delta_{ic,k-means}=0.76\), \(\delta_{ic,DBSCAN}=0.87\), \(\delta_{ic,\textit{GBS-based clustering}}=0.91\). However, averaging over the 30 datasets, our approach performs better than DBSCAN.
Conclusions
Clustering is an unsupervised learning task which finds application in a plethora of real-world and research contexts. For this reason, it is crucial to develop clustering algorithms which can outperform well-known methods and quantum computing could be the key element to achieve this goal.
In this work, we propose an innovative clustering approach, which relies on Gaussian Boson Sampling (GBS), a recently developed model of quantum computation. This paradigm is strongly related to graph theory and can be used to sample high-density subgraphs from a parent graph. By exploiting this property of GBS, our algorithm identifies clusters of points as dense regions of a suitably constructed graph.
The proposed method has been tested on 30 datasets, using a GBS simulator which posed some limitations on the number of points that can be clustered. When real quantum hardware able to encode any symmetric matrix becomes available, we expect GBS to be performed in a faster and more precise way. In particular, it will be possible to count the exact number of photons in each output mode, leading to the Hafnian-version of GBS (instead of the implemented Torontonian-version), which we believe could produce even more accurate results. Nevertheless, the obtained results demonstrate that, on average, our approach outperforms k-means and DBSCAN on two out of the three chosen metrics, proposing itself as a viable full-quantum clustering option. To further prove this point, a more complete benchmark between our method and other classical algorithms could be the subject of future work.
Finally, it is important to note that, in this work, a suitable graph is constructed starting from the points to be clustered. However, the same algorithm could be applied directly to a given graph, to solve what is called _graph partitioning_, namely the task of finding communities in a network. In this perspective, future work will be focused on the case of weighted graphs and on the possibility of having overlapping clusters.
###### Acknowledgements.
This research work was supported by Enel S.p.A., which funded this activity.
|
2304.12506 | DualSlide: Global-to-Local Sketching Interface for Slide Content and
Layout Design | Online learning and academic conferences have become pervasive and essential
for education and professional development, especially since the onset of
pandemics. Academic presentations usually require well-designed slides that are
easily understood. Sketches that visually represent design intentions and are
readily accessible to the average users. To assist non-expert users in creating
visually appealing academic slides, we propose DualSlide, a global and local
two-stage sketching interface system that provides image retrieval and user
guidance. At the global stage, DualSlide provides a heat map canvas to display
the distribution of all slide layouts in a dataset, allowing users to explore
the reference slides efficiently. At the local stage of the system, detailed
references and guidance for designing slide content, such as diagrams and
fonts, can be provided. We further propose a sketch-matching algorithm to
compare the user's input sketch and similar diagrams. All user guidance can be
adapted in real-time editing, and users can design slides with a high degree of
freedom. We conducted a user study to verify the effectiveness and usability of
the proposed DualSlide system confirming that DualSlide provides high retrieval
accuracy and satisfactory design results with a good user experience. Video:
https://youtu.be/lUI1zjxCdM0 | Jiahao Weng, Xusheng Du, Haoran Xie | 2023-04-25T01:03:43Z | http://arxiv.org/abs/2304.12506v1 | # DualSlide: Global-to-Local Sketching Interface for Slide Content and Layout Design
###### Abstract
Online learning and academic conferences have become pervasive and essential for education and professional development, especially since the onset of pandemics. Academic presentations usually require well-designed slides that are easily understood. Sketches visually represent design intentions and are readily accessible to average users. To assist non-expert users in creating visually appealing academic slides, we propose DualSlide, a global and local two-stage sketching interface system that provides image retrieval and user guidance. At the global stage, DualSlide provides a heat map canvas to display the distribution of all slide layouts in a dataset, allowing users to explore the reference slides efficiently. At the local stage of the system, detailed references and guidance for designing slide content, such as diagrams and fonts, can be provided. We further propose a sketch-matching algorithm to compare the user's input sketch and similar diagrams. All user guidance can be adapted in real-time editing, and users can design slides with a high degree of freedom. We conducted a user study to verify the effectiveness and usability of the proposed DualSlide system, confirming that DualSlide provides high retrieval accuracy and satisfactory design results with a good user experience.
two-stage design, sketching interface, slides, layout design
## I Introduction
Creating visually appealing and effective slides is a challenging task, especially for users who have no experience with design. It is crucial to help users create high-quality and attractive slides which can have an impact on the audience's comprehension and retention of the information [3]. The use of consistent design elements can enhance the professionalism and clarity of slides, such as color and font selection [19]. However, even for experienced designers, it is challenging to create visually appealing slides because sophisticated design principles and techniques are required. Thus, there is an urgent need to assist users in the creation of well-designed documents, such as slides.
There is a growing trend for online courses and academic conferences to require instructors and presenters to create well-designed slides for lectures and presentations. The quality of the slides can affect the audience's understanding, but it can be time-consuming for inexperienced users to create visually appealing slides. Although presentation software, such as Microsoft PowerPoint is powerful, inexperienced users may find it more difficult to create well-designed slides than experienced designers. This is because inexperienced users are unfamiliar with design principles and may need more time to find references to inspire and inform their slide designs. Moreover, although PowerPoint provides a toolkit to generate layout suggestions for users, the toolkit may be unable to provide a consistent style and suitable suggestions for various design elements on a given slide.
In this work, we propose a sketching design system, DualSlide, which retrieves slide layouts and content for reference and provides users with guidance to assist them in designing slides. The proposed DualSlide system consists of two stages: a global design stage and a local design stage. This approach allows for a more structured and methodical design process, which can be beneficial to novice users who may not have experience or expertise in slide design. Dividing the task into global and local stages also helps users better understand the design process and gain a deeper understanding of the various elements that contribute to a successful design. Furthermore, this approach helps users identify and address any weaknesses in their designs. In contrast to traditional keyword-based retrieval systems [15], our system can extract slides from academic presentation videos and analyze their layout using image analysis techniques. We also apply a convolutional neural network (CNN) to extract features from the slides.
The interface of the DualSlide system is divided into three parts: the drawing canvas for retrieval and editing, the heat-map canvas and shadow guidance for guidance, and the retrieval results section for displaying similar slides. Users can edit their designs on the drawing canvas as the system simultaneously retrieves similar slides for reference, and the heat-map canvas and shadow guidance provide inspiration and guidance [14]. The retrieval results section provides a range of reference slide options from which users can choose. Overall, our system aims to make it easy for users to create visually appealing and effective slides and to help them find useful references efficiently and accurately, as shown in Fig. 1. Users first design the layout, for which the system provides references. Once the layout design is completed, users can proceed to design the details using the shadow guidance provided by the system.
We conducted a user study to evaluate the usability of the DualSlide system. The effectiveness of the sketching interface was evaluated through a user experience experiment and a comparison experiment. In the user experience phase, a group of users were asked to create slides using the DualSlide system, and the system's feedback and suggested slides were
used as shadow guidance to further refine the design. In the comparison experiment, a group of participants were asked to create slides using either the sketching interface or a traditional slide-design method. The evaluation results verified the effectiveness of the DualSlide in contrast to traditional slide design methods.
## II Related Work
### _Sketch Based Design Interface_
A design interface that employs abstract freehand sketches that do not contain a significant amount of visual detail has been demonstrated to be an effective way for users to express their intentions intuitively. Previous research has investigated the utilization of such sketches for a variety of tasks, including but not limited to the retrieval of web pages [4], the editing of images [11, 17], and the generation of shadow guidance to enhance the design skills of users [6, 7]. These techniques have also been applied to specific domains such as motion retrieval [10], calligraphy [5], cartoon image generation [8], and facial images [11], demonstrating their versatility and robustness. In this research, we aim to establish an interactive design interface that incorporates the use of freehand sketches to provide guidance to users in the process of designing slides.
### _Layout Design and Editing_
The VINS system, as described in [2], employs a user interface layout as input in order to retrieve designs for mobile interfaces. This allows users to easily find design elements that align with their intended interface layout. However, this approach is limited to the domain of mobile interface design. Other research has applied a similar approach in the context of web design. Specifically, Hashimoto et al. [4] proposed a method to retrieve example web pages with similar layout designs, using layout sketches as input. This approach enables users to quickly find web pages with layouts that match their intended design. However, although these previous studies have made significant contributions to the field, they are limited in scope.
### _Deep Learning for Layout Analysis_
In recent years, emerging deep learning-based techniques have been proposed for document layout analysis, and the PubLayNet dataset [20] has become a widely used resource for this task. Detectron2 [16], PubLayNet [20], and LayoutLM [18] are some of the widespread and state-of-the-art models being used in layout analysis. These techniques have been proven effective in various research studies; notable examples include work on analyzing and generating layouts.
## III System Overview
DualSlide provides both global and local design stages for slide design. The proposed system consists of three parts: a preprocessing procedure for slide extraction; a database, which contains slide layout features and diagrams extracted from each slide, and a user interface (see Fig. 2). To construct the database, we first extracted slides from academic conference presentation videos. We then used a layout parser to analyze and label the layout of each slide. After that, we used a convolutional neural network (CNN) to extract the features of the labeled slides and store them in the database. Additionally, we trained another neural network to recognize the font in the slides, and we improve the image similarity algorithm to implement the slide retrieval function. The user interface has three sections: the sketch canvas for user editing, the heat map canvas for design guidance, and the results section for displaying similar slides.
Fig. 1: System concept of the proposed DualSlide sketching interface.
Fig. 2: Framework of our system. The shadow guidance is for the local design stage, and the layout guidance is for the global design stage.
### _Data Pre-processing_
#### III-A1 Global Design Stage
In the global design stage of DualSlide, slide extraction is a crucial step as it lays the foundation for labeling layouts, calculating the distribution, and extracting slide layout features, as shown in Fig. 3. To extract the slide content, we compare the image hashes of each consecutive frame in an academic conference presentation video. If the difference between two frames is greater than a certain threshold, it indicates that the slide has changed, and it is therefore extracted. Once the slides are extracted, we calculate the distribution of all the slides and generate a heat map to provide a reference and inspiration to users. We then use Detectron2 [16] to train a model to analyze the slides, and LayoutParser [12] to label the layout of each slide and extract features using a convolutional neural network. The LayoutParser tool is optimized for document image analysis tasks, allowing us to easily extract complicated document structures with just a few lines of code. After preprocessing is complete, all slide features are stored in the database and used for comparison with the user's input sketch.
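A minimal sketch of the frame-comparison step. The paper does not specify the hash function or threshold, so the 8x8 difference hash and Hamming-distance cutoff below are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def dhash(frame, size=8):
    """Difference hash of a video frame (uint8 array): compare adjacent pixels
    of a small grayscale thumbnail and return the resulting bit vector."""
    thumb = Image.fromarray(frame).convert("L").resize((size + 1, size))
    px = np.asarray(thumb, dtype=np.int16)
    return (px[:, 1:] > px[:, :-1]).flatten()

def is_new_slide(prev_frame, frame, threshold=10):
    """Report a slide change when the two hashes differ in many bits."""
    return int(np.sum(dhash(prev_frame) != dhash(frame))) > threshold
```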
#### III-A2 Local Design Stage
In the local design stage of DualSlide, we employ LayoutParser [12] to label the diagrams and texts on each slide and to extract their content. These diagrams are then binarized and stored in a database for later use in sketch matching. In addition to the diagrams, we also process the text in the slides, as shown in Fig. 3. We applied five of the most commonly utilized fonts in design, as outlined in a previous study [1]. These fonts were used to generate synthetic images for later training of a font recognition model, which is used to identify the font of the text on the slide and help users design the font style.
### _Sketch Matching_
Sketch matching is a crucial step in the local design stage as it lays the foundation for supporting references and generating shadow guidance. To match sketches, we propose an algorithm to compute the similarity between the input sketch and the candidate images, as shown in (1). The similarity between the images is computed by following these steps:
1. Initialize an empty list \(M\)
2. For each set of matching points \((i,j)\), compute \(sim(i,j)\).
3. Use a threshold to filter the good matches. Here we use 0.75 (In order to filter out matches that may be caused by noise or outliers, a distance ratio test is applied. After extensive experimentation, we have found this to be an effective threshold for filtering out high-quality matches while minimizing the loss of important features.). If \(m.distance<0.75\times n.distance\), append \((i,j)\) to \(M\) (\(m\) and \(n\) are matching points, from the first image and one from the second image, respectively).
4. If \(M\) is empty, return 0.
5. Otherwise, compute similarity.
First, we assume that there are \(n_{1}\) keypoints in the first image, each with a descriptor \(d_{1}\), and \(n_{2}\) keypoints in the second image, each with a descriptor \(d_{2}\). For each keypoint \(i\) and \(j\), we define their similarity as follows:
\[sim(i,j)=\frac{d_{1}(i)\cdot d_{2}(j)}{|d_{1}(i)|\cdot|d_{2}(j)|} \tag{1}\]
where \(d_{1}(i)\) and \(d_{2}(j)\) represent the descriptors (extracted by the Oriented FAST and Rotated BRIEF algorithm) of the \(i\)-th keypoint in the first image and the \(j\)-th keypoint in the second image, respectively. We use the BFMatcher algorithm (which compares each feature descriptor of one image with all feature descriptors of the other image, and returns the closest matches) to match all keypoints in the first image with all keypoints in the second image, resulting in \(n_{1}\times n_{2}\) matching points. For each matching point \((i,j)\), we select the better matching points, that is, the matching points satisfying \(sim(i,j)>t\).
Finally, we calculate the similarity \(S\) between the two images, defined as the average of the similarities of all better matching points:
\[S=\frac{\sum_{(i,j)\in M}sim(i,j)}{|M|} \tag{2}\]
where \(M\) is the set of better matching points, that is, the matching points satisfying \(sim(i,j)>t\) (here, \(t\) is a threshold value used to determine the "good matches" among the matching points; in this case, \(t\) is used in the distance ratio test, where if the distance between \(m\) and \(n\) is less than \(0.75\times n.distance\), then \(m\) and \(n\) are considered a "good match").
In Equation (2), matching points are referred to as "good matches" if they satisfy \(m.distance<0.75\times n.distance\) when computing the distance between matching points. \(|M|\) represents the number of elements in the set \(M\), that is, the number of good matching points. The final similarity \(S\) is the average of the similarities of all good matching points.
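The matching pipeline described by Equations (1)-(2) can be reproduced with OpenCV's ORB detector and brute-force matcher; the sketch below applies the 0.75 ratio test and then averages the cosine similarity of the surviving descriptor pairs. The exact preprocessing of the binarized sketches is not detailed in the text, so the inputs are simply assumed to be grayscale uint8 images.

```python
import cv2
import numpy as np

def sketch_similarity(img1, img2, ratio=0.75):
    """Similarity S of Eq. (2): ORB keypoints, BFMatcher with the ratio test,
    then the mean cosine similarity (Eq. (1)) over the good matches."""
    orb = cv2.ORB_create()
    kp1, d1 = orb.detectAndCompute(img1, None)
    kp2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if not good:
        return 0.0
    sims = []
    for m in good:
        a = d1[m.queryIdx].astype(float)
        b = d2[m.trainIdx].astype(float)
        sims.append(float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sum(sims) / len(sims)
```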
### _Font Recognition_
Font recognition is used to help users select fonts for slides. When users browse reference slides, they can take the font from the slides and use it in their design. We employed a neural network to train a font recognition model.
Fig. 3: Data pre-processing for slide extraction and slide contents extraction.
#### III-B1 Data Collection
Initially, we selected five of the most commonly utilized fonts in design, as outlined in a previously conducted study [1]. Since the slides in our dataset were extracted from videos, the font type of the text was not obvious. To address this, we generated synthetic data for each of the five selected fonts, consisting of 10,000 images per font (as shown in Fig. 4), resulting in a total of 50,000 images. Each image was labeled with its corresponding font type.
#### III-B2 Neural Network Architecture
The architecture of the neural network consists of a convolutional auto-encoder (CAE) [9] with a CNN classifier, as shown in Fig. 5. The fundamental concept of a CAE is to train the encoder component of the network to extract a compressed, low-dimensional representation of the input data and then use this representation to regenerate the original input through the decoder component of the network.
The encoder portion of the CAE is composed of three convolutional layers. The first layer has a kernel size of 58, which reduces the dimension of the input image to 64 feature maps. The second layer is a batch normalization layer, which aims to stabilize the distribution of activations during training. The third layer is a max pooling layer with a kernel size of 2, which reduces the spatial resolution of the feature maps by a factor of 2. The final layer of the encoder is a convolutional layer with a kernel size of 3 and padding of 1, which increases the number of feature maps to 128, as shown in Fig. 6(a).
The decoder portion of the CAE is composed of transposed convolutional layers, a nearest-neighbor upsampling layer, and a batch normalization layer. The first layer is a transposed convolutional layer with a kernel size of 3 and padding of 1, which increases the spatial resolution of the feature maps to the original size. The second layer is a batch normalization layer, which aims to stabilize the distribution of activations during training. The third layer is a nearest-neighbor upsampling layer, which increases the spatial resolution of the feature maps by a factor of 2. The final layer of the decoder is a transposed convolutional layer with a kernel size of 58, which applies a sigmoid activation function, as shown in Fig. 6(b).
The CNN classifier takes the output from the CAE and a number of font types as input; it has several linear and dropout layers; and it uses the output of the encoder layers as the feature map for the classifier.
The CAE architecture is trained end-to-end using a mean-squared error (MSE) loss function as the objective function, while the CNN architecture is trained using a cross-entropy loss function.
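A hedged PyTorch sketch of this architecture. The reported kernel size of 58 is hard to interpret literally, so the kernel sizes and the classifier widths below are illustrative placeholders; only the overall encoder, decoder and classifier structure and the channel counts (64 and 128) follow the text.

```python
import torch
import torch.nn as nn

class FontCAE(nn.Module):
    def __init__(self, num_fonts=5):
        super().__init__()
        # encoder: conv -> batch norm -> max pool -> conv (64 then 128 feature maps)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # decoder: transposed conv -> batch norm -> nearest-neighbor upsample -> transposed conv + sigmoid
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.ConvTranspose2d(64, 1, kernel_size=5, padding=2), nn.Sigmoid(),
        )
        # classifier on the encoder features: linear and dropout layers
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, num_fonts),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)     # (reconstruction, font logits)

# losses as described in the text: MSE for reconstruction, cross-entropy for classification
recon_loss, clf_loss = nn.MSELoss(), nn.CrossEntropyLoss()
```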
## IV Design Interface
The proposed design interface includes both global and local design stages. The user can switch between the global and local stages by clicking the switch button shown in Fig. 7.
### _Global Design_
The aim of the global design is to help users efficiently and conveniently retrieve slide layouts similar to those they have designed. It also serves as a source of inspiration and guidance for slide layout design. When users want reference material for slide layout design, they can refer to the heat map, as shown in Fig. 7. The features of the user's input sketches are extracted, and the Visual Geometry Group Network (VGG16) [13] is used to compare the similarities between input features and all the features in the database. The most similar slides are displayed on the web page. The reference results change as users edit their sketches.
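One possible realization of this retrieval step with torchvision's pretrained VGG16 is sketched below; the cosine-similarity ranking and the image preprocessing are our assumptions, and the weights API requires a recent torchvision release.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
prep = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def vgg_feature(pil_image):
    """Flattened convolutional feature vector of an RGB PIL image."""
    return vgg(prep(pil_image).unsqueeze(0)).flatten()

def most_similar(sketch_feat, slide_feats, top_k=5):
    """Rank stored slide features by cosine similarity to the sketch feature."""
    sims = [torch.nn.functional.cosine_similarity(sketch_feat, f, dim=0) for f in slide_feats]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:top_k]
```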
The heat map (shown in Fig. 8) displays the distribution of all slide layouts in the database as a source of inspiration for users. It is divided into three sections: title, text, and figure, which cover most design scenarios. The bottom of the heat map canvas features a legend; the darker the color, the greater the distribution. Users can view the distribution of each section or the entire distribution by clicking the different
Fig. 4: Examples of the training data.
Fig. 5: Architecture of the font recognition model. The CNN classifier takes the output from the CAE and a number of font types as inputs.
Fig. 6: Architecture of CAE.
toolkit buttons, as shown at the bottom of Fig. 7. As the user finishes or edits their sketch, the distribution of the heat map will also change simultaneously to provide ongoing guidance on layout design.
### _Local Design_
The local design is intended to help users accurately and efficiently retrieve references when they design the slide content, such as diagrams and fonts. It also serves as a source of inspiration and guidance for slide content design. For instance, when users want reference material to draw a "framework architecture," they can first draw a rough outline. The sketch matching algorithm then compares similarities between the input sketch and the diagrams extracted from the slides. Then, the most similar diagrams are retrieved, and the first one is used as shadow guidance to provide inspiration and guidance to users, as shown in Fig. 9. The shadow guidance also changes in real time as the user edits their sketch. Users can also click on candidate diagrams to change the shadow guidance and select a checkbox to select it.
In addition, when users click on candidate diagrams, they have another option: they can scan the text on the slide and apply the font to their design, as shown in Fig. 10.
## V User Study
We conducted user studies for both the global and local design stages of the proposed DualSlide system.
### _Global Design_
We recruited 17 participants (college students around 20 years old, 12 males and 5 females) to participate in the experiment.
#### V-A1 Comparison Experiments
We first compared traditional slide-design interfaces with our proposed interface. We divided 12 participants into two groups. Another five participants were recruited to evaluate the designs. The first group was asked
Fig. 8: The heat map of title layout (1), text (except title) layout (2), figure layout (3), and all of the layouts (4).
Fig. 10: The font recognition function. User first clicks the scan button, then labels the text they prefer, and the font style is applied to their design.
Fig. 7: The heat map canvas and the toolkit to change the type of heat map.
Fig. 9: The interface for local design. The bottom part shows the shadow guidance candidates (the visual most similar to a user sketch). Buttons: (a) Stroke thickness, (b) Stroke color, (c) Add font, (d) Draw line or arrow, (e) Draw rectangle, (f) Choose font type, (g) Auxiliary Lines, (h) Save canvas.
to design three slides (focusing only on the layout) using PowerPoint (PPT), while the second group used our interface. Both groups were given an academic document (a computer science poster) to use as a reference. Once the design task was completed, the designs were evaluated by the remaining five participants based on three criteria: organization, aesthetics, and consistency.
#### V-A2 User Experience
After the user experience experiment, we conducted a user study to assess user experiences by administering a questionnaire. The questionnaire used a 7-point Likert scale (from 1 = strongly disagree to 7 = strongly agree). We asked the 12 participants who experienced our interface in the comparison experiment to complete the questionnaire.
### _Local Design_
We recruited 32 participants (12 participants to directly participate in the experiment and 20 participants to conduct the evaluation; all were college students around 20 years old, with 20 males and 12 females) to attend the experiments.
#### V-B1 Comparison Experiments
These experiments compared traditional slide-design interfaces (Fig. 11(a)) with our proposed interface (Fig. 11(b)). We divided 12 participants into two groups. The other 20 participants were recruited to evaluate the designs. The first group was asked to design the framework architecture figure using UII (Fig. 11(a)), while the second group was asked to use our interface, and vice versa. Both groups were given three selected fragments from the academic document to use as references. Once the design task was completed, the designs were evaluated by the remaining 20 participants based on three criteria: organization, aesthetics, and correctness.
#### V-B2 User Experience
After the user experience experiment, we conducted a user study to assess user experiences by administering a questionnaire. The questionnaire used a 7-point Likert scale (from 1 = strongly disagree to 7 = strongly agree). We asked the 12 participants who experienced our interface in the comparison experiments to complete the questionnaire.
## VI Results
### _Implementation Details_
Our work was conducted on a computer system with the following hardware and software specifications. The central processing unit (CPU) was an AMD(r) Ryzen5 5600X CPU @ 3.70GHz X 6 with six cores and 12 threads. The graphics processing unit (GPU) was an NVIDIA RTX3060 with 12GB of video memory and support for CUDA programming. The random access memory (RAM) was 32 GB, which allowed multiple programs to run simultaneously. The operating system (OS) used was Windows 11, the latest version of Microsoft Windows. The programming language used was Python 3.8, a widely used and versatile scripting language for data analysis and machine learning. Our proposed system has an average retrieval time of 1.02 seconds for slide layout and an average computational cost of 1.33 seconds per sketch input for the sketch-matching algorithm.
### _Layout Design Guidance_
For the global design stage, we conducted an experiment to examine how well our system retrieves similar slides based on user input sketches. As shown in Fig. 12, different input sketches can retrieve different slides. Even if users do not complete the entire layout, the system will retrieve the most similar slides based on the sketches provided. The top part of Fig. 12 shows that even when the user only sketches the title layout, the system still performs well.
### _Font Recognition_
In our research, 75 \(\%\) of the data was used for training the model, while the remaining 25 \(\%\) was used for validation. This split allows for the model to be trained on a large portion of the data while also being able to evaluate its performance on unseen data.
The results for our model are shown in Fig. 13, where it can be seen that the model performed well on the training set, achieving high accuracy. This indicates that the model can perform well on font classification tasks.
### _User Study_
#### VI-D1 Comparison Experiment
To quantify the ability to support users in designing slide layouts with more consistent styles, we conducted a comparison experiment. The results of this experiment are presented in Fig. 14(a). The results demonstrate that the proposed interface is capable of helping users design slides with a high degree of consistency when compared to traditional interfaces. This is an important finding,
Fig. 11: User interface for comparison experiment.
Fig. 12: Examples of the designed results by participants during the user study. The red part is the diagram layout, the blue part is the text layout, and the green part is the title layout.
as it suggests that the proposed interface can effectively guide users to create visually coherent slide designs, which can enhance the overall effectiveness of the resulting design.
Additionally, the results of the user comparison experiment, as shown in Fig. 14(b), indicate that the proposed interface has a positive impact on users' ability to effectively convert complex information into an aesthetically pleasing diagram. This is achieved by providing users with shadow guidance. The results of the experiment demonstrate that the proposed interface is effective in improving the aesthetics and correctness of slide content compared to traditional interfaces.
To further illustrate the usefulness of our interface, Fig. 15(a) displays several examples of the slides generated by participants during the experiment for the global design stage. In this stage, we provide the participant with images and a fixed font style. The participant only needs to focus on the design layout. Fig. 15(b) displays several examples of the slides generated by participants during the experiment for the local design stage, in which the participants need to design the diagram and font style by themselves.
#### VI-B2 User Experience
To evaluate the effectiveness of the proposed interface, we conducted a user experience experiment. The results of the experiment for the global stage, as shown in Fig. 16(a), indicate that not only were participants satisfied with the slides that were designed using our interface, but they also reported that the interface saved them time during the retrieval process. Additionally, the heat-map design feature, which was implemented as a means to guide users, was found to be particularly inspiring for participants in terms of layout design.
The results of the experiment for the local stage, as shown in Fig. 16(b), indicate that not only were participants satisfied with the slides that were designed using our interface, but they also reported that the interface saved them time during the retrieval process. Additionally, the shadow guidance and font recognition features, which were implemented to guide users, were found to be particularly inspiring for participants in terms of local design. Moreover, most participants believed that our interface succeeded in improving their design skills.
## VII Conclusion
In this work, we proposed an interactive design system, DualSlide, that uses a heat map canvas and shadow guidance to provide users with references and guidance during slide design. A font recognition model was also included to ensure consistent font styles across slides. A user study was conducted to compare the proposed interface with traditional interfaces, and the results show that the proposed interface is effective
Fig. 14: Evaluation results for the comparison experiments.
Fig. 13: Training results of font recognition model. The y-axis is the accuracy percentage, the x-axis is the training loss.
Fig. 15: Examples of the results designed by participants.
in improving the overall design process for slide design. The results indicate that DualSlide has the potential to be generalized to a wide range of use cases, making it a valuable tool in the field of document layout and content design.
This work has some limitations, in particular, the limited size of the dataset, which results in a relatively long retrieval time of approximately 1.08 seconds per search. To address this issue, we intend to expand the dataset and improve the sketch-matching algorithm to reduce the retrieval time. We also plan to extend our research to other document types, such as posters and papers. In the current implementation of the proposed DualSlide framework, aesthetic judgment of the collected dataset and design relied on the user's discretion. To address this issue, we plan to add an assessment module that can automatically score the references provided by the system and the designed results created by users in the future.
## Acknowledgment
This research was supported by the JAIST Research Fund and JSPS KAKENHI JP20K19845, Japan.
|
2303.07812 | Termination of Graph Transformation Systems Using Weighted Subgraph
Counting | We introduce a termination method for the algebraic graph transformation
framework PBPO+, in which we weigh objects by summing a class of weighted
morphisms targeting them. The method is well-defined in rm-adhesive
quasitoposes (which include toposes and therefore many graph categories of
interest), and is applicable to non-linear rules. The method is also defined
for other frameworks, including SqPO and left-linear DPO, because we have
previously shown that they are naturally encodable into PBPO+ in the quasitopos
setting. We have implemented our method, and the implementation includes a REPL
that can be used for guiding relative termination proofs. | Roy Overbeek, Jörg Endrullis | 2023-03-14T11:36:55Z | http://arxiv.org/abs/2303.07812v7 | # Termination of Graph Transformation Systems Using Weighted Subgraph Counting
###### Abstract
We introduce a termination method for the algebraic graph transformation framework PBPO\({}^{+}\), in which we weigh objects by summing a class of weighted morphisms targeting them. The method is well-defined in rm-adhesive quasitoposes (which include toposes and therefore many graph categories of interest), and is applicable to non-linear rules. The method is also defined for other frameworks, including DPO and SqPO, because we have previously shown that they are naturally encodable into PBPO\({}^{+}\) in the quasitopos setting.
Keywords:Graph transformation Termination Pullback-Pushout
## 1 Introduction
Many fields of study related to computation have mature termination theories. See, for example, the corpus for term rewriting systems [33, Chapter 6].
For the study of graph transformation, by contrast, not many termination methods exist, and the ones that do exist are usually defined for rather specific notions of graphs. Although the techniques themselves can be interesting, the latter observation fits somewhat uneasily with the general philosophy of the predominant algebraic graph transformation tradition [12], in which graph transformations are defined and studied in a graph-agnostic manner, by using the language of category theory.
In this paper, we introduce a termination method for PBPO\({}^{+}\)[24], a method in the algebraic tradition. We weigh objects \(G\) by summing a class of weighted elements (i.e., morphisms of the form \(T\to G\)), and construct a decreasing measure. Our method enjoys generality across two dimensions:
1. The method is formulated completely in categorical terms, and is well-defined in (locally finite) rm-adhesive quasitoposes. The rm-adhesive quasitoposes include all toposes, and so automatically a large variety of graphs, as well as other structures, such as Heyting algebras [16] and fuzzy presheaves [31].
2. The method is also defined for DPO [13], SqPO [10], AGREE [9] and PBPO [8]. This is because we have recently shown that, in the quasitopos setting, every rule of these formalisms can be straightforwardly encoded as a PBPO\({}^{+}\) rule that generates the same rewrite relation [24, Theorem 73].
To the best of our knowledge, this is the first termination method applicable in such a broad setting; and the first method that is automatically defined for a variety of well-known algebraic graph transformation frameworks. In addition, the termination method can be applied to non-linear (duplicating) rules.
The paper is structured as follows. We summarize the basic categorical, graph and termination preliminaries (Section 2), and we cover the required background on PBPO\({}^{+}\) (Section 3). Next, we explain and prove our termination method (Section 4). After, we amply illustrate our method with a variety of examples (Section 5), and then compare our approach to related work (Section 6). We close with some concluding remarks and pointers for future work (Section 7).
## 2 Preliminaries
The preliminaries for this paper include basic categorical and graph notions (Section 2.1), and a basic understanding of termination (Section 2.2).
### Basic Notions
We assume familiarity with basic categorical notions such as (regular) monomorphisms, pullbacks and pushouts [3, 25]. We write \(\rightarrowtail\) for monos; and \(\operatorname{Hom}(\mathbf{C})\), \(\operatorname{mono}(\mathbf{C})\), \(\operatorname{rm}(\mathbf{C})\) and \(\operatorname{iso}(\mathbf{C})\) for the classes of morphisms, monomorphisms, regular monomorphisms and isomorphisms in \(\mathbf{C}\), respectively.
**Notation 1** (Nonstandard Notation): _Given a class of morphisms \(\mathcal{A}(\mathbf{C})\), we write \(\mathcal{A}(A,B)\) to denote the collection of \(\mathcal{A}\)-morphisms from \(A\) to \(B\), leaving \(\mathbf{C}\) implicit. For sets of objects \(S\), we overload \(\mathcal{A}(S,A)\) to denote \(\bigcup_{X\in S}\mathcal{A}(X,A)\). If \(\mathcal{A}(\mathbf{C})\) is a generic class in lemmas, we use \(\leadsto\) to denote \(\mathcal{A}\)-morphisms._
_For cospans \(A\xrightarrow{f}C\xleftarrow{g}D\), we write \(\langle f\mid g\rangle\) to denote the arrow \(B\to D\) obtained by pulling \(f\) back along \(g\)._
Definition 2 (\(\mathcal{A}\)-Local Finiteness): Let \(\mathcal{A}(\mathbf{C})\) be a class of morphisms. A category \(\mathbf{C}\) is \(\mathcal{A}\)-locally finite if \(\mathcal{A}(A,B)\) is finite for all \(A,B\in\operatorname{Obj}(\mathbf{C})\).
Lemma 3 (Pullback Lemma): _Assume the right square is a pullback and the left square commutes. Then the outer square is a pullback iff the left square is a pullback._
Definition 4 (Van Kampen Square [18]): A pushout square is said to be Van Kampen (VK) if, whenever it lies at the bottom of a commutative cube where the back faces FBAE and FBCG are pullbacks, this implies that the top face is a pushout iff the front faces are pullbacks.
Definition 5 (Rm-Adhesive Category [19]): A category is _rm-adhesive_ (a.k.a. quasiadhesive) if pushouts along regular monomorphisms exist and are VK._
**Definition 6** (Quasitopos [15, 1, 34]): _A category \(\mathbf{C}\) is a quasitopos if it has all finite limits and colimits, it is locally cartesian closed, and it has a regular-subobject classifier._
Definition 7 (Split Epimorphism): An epimorphism \(e:A\twoheadrightarrow B\) is split if it has a right-inverse, i.e., if there exists an \(f:B\to A\) such that \(e\circ f=1_{B}\).
For split epimorphisms \(e\), we let \(e^{\leftarrow}\) denote an arbitrary right-inverse of \(e\).
Proposition 8 ([1, Prop. 7.59]): _If \(e\) is a split epi, then \(e^{\leftarrow}\in\operatorname{rm}(\mathbf{C})\). _
Definition 9 (\(\mathcal{A}\)-Factorization): An \(\mathcal{A}\)-factorization of a morphism \(f:A\to C\) consists of a morphism \(f^{\prime}:A\to B\) and \(\mathcal{A}\)-morphism \(f^{\prime\prime}:B\rightsquigarrow C\) such that \(f=f^{\prime\prime}\circ f^{\prime}\), and for any other such factorization \(g^{\prime}:A\to B^{\prime}\), \(g^{\prime\prime}:B^{\prime}\rightsquigarrow C\) of \(f\), there exists a unique \(x:B\to B^{\prime}\) making the right diagram commute.
Remark 10: If \(\mathcal{A}=\operatorname{mono}\) or \(\mathcal{A}=\operatorname{rm}\) then the notion of \(\mathcal{A}\)-factorization coincides with the common notion of (regular)-image factorization, so that \(B\) is considered to be the image of \(f\). For generality, we intentionally widen the definition to allow for the case where \(\mathcal{A}=\operatorname{Hom}\), in which case in any category, \(B=A\), \(f^{\prime}=1_{A}\) and \(f^{\prime\prime}=f\) defines an \(\mathcal{A}\)-factorization of \(f\), with \(x=g^{\prime}\) as the unique witness.
Our method is defined fully in categorical terms. For examples, and to guide intuition, we will use the category of edge-labeled multigraphs.
Definition 11 (Graph Notions): Let a finite label set \(\mathcal{L}\) be fixed. An (edge-labeled) (multi)graph \(G\) consists of a set of vertices \(V\), a set of edges \(E\), source and target functions \(s,t:E\to V\), and an edge label function \(\ell^{E}:E\to\mathcal{L}\). A graph is _unlabeled_ if \(\mathcal{L}\) is a singleton.
A _homomorphism_ between graphs \(G\) and \(G^{\prime}\) is a pair of maps \(\phi=(\phi_{V}:V_{G}\to V_{G^{\prime}},\phi_{E}:E_{G}\to E_{G^{\prime}})\) satisfying \((s_{G^{\prime}},t_{G^{\prime}})\circ\phi_{E}=\phi_{V}\circ(s_{G},t_{G})\) and \(\ell^{E}_{G^{\prime}}\circ\phi_{E}=\ell^{E}_{G}\).
Definition 12 ([12]): The category \(\mathbf{Graph}\) has graphs as objects, parameterized over some global (and usually implicit) label set \(\mathcal{L}\), and homomorphisms as arrows. The subcategory \(\mathbf{FinGraph}\) restricts to graphs with finite \(V\) and \(E\).
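To ground Definitions 11 and 12 computationally, the following is a minimal Python sketch (ours, not part of the paper) of finite edge-labeled multigraphs and a brute-force enumeration of graph homomorphisms; all class and function names are illustrative, and the enumeration is exponential and intended only for the tiny graphs used as tiles later on.

```python
from itertools import product

class Graph:
    """Finite edge-labeled multigraph (Definition 11): vertex set V, edge set E,
    source/target maps s, t : E -> V and an edge label map lab : E -> L (as dicts)."""
    def __init__(self, V, E, s, t, lab):
        self.V, self.E, self.s, self.t, self.lab = set(V), set(E), s, t, lab

def is_hom(G, H, phi_V, phi_E):
    """Check the homomorphism conditions: sources, targets and labels are preserved."""
    return all(
        phi_V[G.s[e]] == H.s[phi_E[e]]
        and phi_V[G.t[e]] == H.t[phi_E[e]]
        and G.lab[e] == H.lab[phi_E[e]]
        for e in G.E
    )

def homomorphisms(G, H):
    """Enumerate all homomorphisms G -> H by brute force."""
    Vs, Es = sorted(G.V), sorted(G.E)
    for v_img in product(sorted(H.V), repeat=len(Vs)):
        phi_V = dict(zip(Vs, v_img))
        for e_img in product(sorted(H.E), repeat=len(Es)):
            phi_E = dict(zip(Es, e_img))
            if is_hom(G, H, phi_V, phi_E):
                yield phi_V, phi_E
```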
### Termination
The topic of termination dates back at least to Turing, and is studied in many different settings. For a systematic overview for term rewriting systems (not yet existent for graph transformation systems), see [33, Chapter 6]. Plump has shown that termination of graph rewriting is undecidable [28].
Definition 13: Let \(R\subseteq A\times A\) be given. An _infinite_\(R\)-sequence is a function \(f:\mathbb{N}\to A\) such that for all \(i\in\mathbb{N}\), \((f(i),f(i+1))\in R\).
Definition 14: A binary relation \(R\) is _terminating_ if there does not exist an infinite \(R\)-sequence.
**Definition 15** ([2, 17]): _Let \(R,S\subseteq A\times A\) be binary relations. Then \(R\) is terminating relative to \(S\) if every infinite \(R\cup S\)-sequence contains a finite number of \(R\) steps._
For our purposes, it suffices to measure objects as natural numbers (instead of a general well-founded order).
Definition 16 (Measure): A _measure_ is a function \(\mathbf{w}:A\rightarrow\mathbb{N}\). The measure \(\mathbf{w}\) is _decreasing_ for a binary relation \(R\subseteq A\times A\) if for all \((x,y)\in R\), \(\mathbf{w}(x)>\mathbf{w}(y)\), and it is _non-increasing_ for \(R\) if for all \((x,y)\in R\), \(\mathbf{w}(x)\geq\mathbf{w}(y)\).
Proposition 17 ([2, 17]): _Let \(R,S\subseteq A\times A\) be binary relations. Assume that there exists a measure \(\mathbf{w}\) that is decreasing for \(R\) and non-increasing for \(S\). Then \(R\) is terminating relative to \(S\). Consequently \(R\cup S\) is terminating iff \(S\) is. _
In a framework-agnostic setting, a rule \(\rho\) is a mathematical object that induces a binary relation \(\Rightarrow_{\rho}\subseteq A\times A\). We say that a rule or a system of rules is terminating, decreasing or non-increasing if the induced rewrite relations have the respective property (and analogously for relative termination). Note that Proposition 17 can then also be applied to systems of rules in place of relations.
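As a small, purely illustrative companion to Definitions 13–16 and Proposition 17 (ours, not from the paper), the following Python sketch checks the sufficient condition for relative termination on a finitely enumerated set of steps.

```python
def is_decreasing(weight, R):
    """Definition 16: weight(x) > weight(y) for every step (x, y) in R."""
    return all(weight(x) > weight(y) for x, y in R)

def is_non_increasing(weight, S):
    """Definition 16: weight(x) >= weight(y) for every step (x, y) in S."""
    return all(weight(x) >= weight(y) for x, y in S)

def terminates_relative(weight, R, S):
    """Sufficient condition of Proposition 17: a measure decreasing for R and
    non-increasing for S shows that R terminates relative to S, i.e. every
    infinite (R u S)-sequence contains only finitely many R-steps."""
    return is_decreasing(weight, R) and is_non_increasing(weight, S)

# Illustrative use, measuring strings by length:
R = [("aab", "ab"), ("ab", "b")]   # strictly shrinking steps
S = [("ba", "ab")]                 # length-preserving steps
assert terminates_relative(len, R, S)
```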
## 3 PBPO\({}^{+}\)
PBPO\({}^{+}\) is short for _Pullback-Pushout with strong matching_. It is obtained by strengthening the matching mechanism of PBPO [8] by Corradini et al.
We provide the necessary definitions and results on PBPO\({}^{+}\). See Section 5 for many examples of rules. For a gentler introduction to PBPO\({}^{+}\), with examples of rewrite steps, see especially the tutorial [23].
Definition 18 (PBPO\({}^{+}\) Rewriting [8, 24]): A _PBPO\({}^{+}\) rule_ \(\rho\) is a diagram consisting of a span \(L\xleftarrow{l}K\xrightarrow{r}R\) together with morphisms \(t_{L}:L\to L^{\prime}\), \(t_{K}:K\to K^{\prime}\), \(t_{R}:R\to R^{\prime}\) into a span of context types \(L^{\prime}\xleftarrow{l^{\prime}}K^{\prime}\xrightarrow{r^{\prime}}R^{\prime}\), with \(t_{L}\circ l=l^{\prime}\circ t_{K}\) a pullback and \(t_{R}\circ r=r^{\prime}\circ t_{K}\) a pushout (cf. Remark 20). A \(\rho\)-rewrite step \(G_{L}\Rightarrow_{\rho}G_{R}\) requires a match \(m:L\to G_{L}\) and an adherence morphism \(\alpha:G_{L}\to L^{\prime}\) satisfying \(\alpha\circ m=t_{L}\); the intermediate object \(G_{K}\) and the result \(G_{R}\) are then constructed by pullback and pushout. [The rule and step diagrams of this definition, the remainder of its text, and Assumption 19 (referenced in Remark 20 and later) could not be recovered from the source.]
Remark 20 ([24, Section 3]): If Assumption 19 holds for a rule \(\rho\), then any \(\rho\) step defines (up to isomorphism) a diagram shown on the right, where the bold diagram is \(\rho\) (\(t_{L}\circ l=l^{\prime}\circ t_{K}\) a pullback and \(t_{R}\circ r=r^{\prime}\circ t_{K}\) a pushout). Our method uses the extra pushout to analyze how rewritten objects (the middle span) relate to the context types (the bottom span).
Theorem 21 ([24, Theorem 73]): _Let \(\mathbf{C}\) be a quasitopos, and let matches \(m\) be regular monic. For rewriting formalisms \(\mathcal{F}\) and \(\mathcal{G}\), let \(\mathcal{F}\prec\mathcal{G}\) express that in \(\mathbf{C}\), for any \(\mathcal{F}\) rule \(\rho\), there exists a \(\mathcal{G}\) rule \(\tau\) such that \(\Rightarrow^{\rho}_{\mathcal{F}}=\Rightarrow^{\tau}_{\mathcal{G}}\). We have:_
\[\text{SqPO}\prec\text{AGREE}\prec\text{PBPO}^{+},\qquad\text{DPO}\prec\text{PBPO}^{+},\qquad\text{PBPO}\prec\text{PBPO}^{+}\]
Observe that \(\prec\) is transitive. As the constructive proofs in [24] show, the procedures to encode the mentioned formalisms into PBPO\({}^{+}\) are straightforward. We moreover conjecture SPO \(\prec\text{PBPO}^{+}\)[24, Remark 26].
## 4 Decreasingness by Counting Weighted Elements
We start with an explanation of the general idea behind our termination approach. Given a set of rules \(\mathcal{T}\), we seek to construct a measure \(\mathbf{w}\) such that for all steps \(G_{L}\Rightarrow_{\rho}G_{R}\) generated by a rule \(\rho\in\mathcal{T}\), \(\mathbf{w}(G_{L})>\mathbf{w}(G_{R})\). Then \(\mathbf{w}\) is a decreasing measure for the rewrite relation generated by \(\mathcal{T}\), so that \(\mathcal{T}\) is terminating. We construct such a measure \(\mathbf{w}\) by weighing objects as follows.
Definition 22 (Weight Functions): Given a set of objects \(\mathbb{T}\), _weight function \(\mathbf{w}:\mathbb{T}\to\mathbb{N}\)_, and class of morphisms \(\mathcal{A}(\mathbf{C})\), we define the _tiling weight function_
\[\mathbf{w}^{\mathcal{A}}_{\mathbb{T}}(X)\qquad=\qquad\sum_{t\in\mathcal{A}( \mathbb{T},X)}\mathbf{w}(\text{dom}(t))\]
for objects \(X\in\operatorname{Obj}(\mathbf{C})\). In this context, we refer to the objects of \(\mathbb{T}\) as _tiles_. (Note that \(\mathbf{w}^{\mathcal{A}}_{\mathbb{T}}\) is well-defined if \(\mathbb{T}\) is finite and \(\mathbf{C}\) is \(\mathcal{A}\)-locally finite.)
Example 23: Let \(\mathbf{C}=\mathbf{FinGraph}\) with singleton label set \(\mathcal{L}\), and let \(G\) be an arbitrary graph. Some basic examples of tile sets and parameters are as follows.
* Let \(\bullet\) denote the graph consisting of a single node. If \(\mathbb{T}=\{\bullet\}\), \(\mathbf{w}(\bullet)=1\), and \(\mathcal{A}(\mathbf{C})\in\{\operatorname{Hom}(\mathbf{C}),\operatorname{mono}(\mathbf{C}),\operatorname{rm}(\mathbf{C})\}\), then \(\mathbf{w}^{\mathcal{A}}_{\mathbb{T}}(G)=|V_{G}|\).
* Let \(\bullet\!\to\!\bullet\) denote the graph consisting of a single edge with distinct endpoints. If \(\mathbb{T}=\{\bullet\!\to\!\bullet\}\), \(\mathbf{w}(\bullet\!\to\!\bullet)=1\) and \(\mathcal{A}(\mathbf{C})=\operatorname{Hom}(\mathbf{C})\), then \(\mathbf{w}^{\mathcal{A}}_{\mathbb{T}}(G)=|E_{G}|\). If instead \(\mathcal{A}(\mathbf{C})=\operatorname{mono}(\mathbf{C})\), then \(\mathbf{w}^{\mathcal{A}}_{\mathbb{T}}(G)\) counts the number of subgraph occurrences isomorphic to \(\bullet\!\to\!\bullet\) in \(G\) (loops are not counted). (See also Example 50.)
* If \(\mathbb{T}=\{\bullet,\;\bullet\!\to\!\bullet\}\), \(\mathbf{w}(\bullet)=2\), \(\mathbf{w}(\bullet\!\to\!\bullet)=1\) and \(\mathcal{A}(\mathbf{C})=\operatorname{Hom}(\mathbf{C})\), then \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G)=2\cdot|V_{G}|+|E_{G}|\).
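For \(\mathbf{C}=\mathbf{FinGraph}\), the tiling weight function of Definition 22 can be computed directly with the brute-force `homomorphisms` enumerator sketched in Section 2.1. The sketch below is ours, not the paper's; it uses the fact that monos in \(\mathbf{Graph}\) are exactly the componentwise injective homomorphisms, and it reproduces the counts of Example 23.

```python
def injective(phi_V, phi_E):
    """Monos in Graph are exactly the componentwise injective homomorphisms."""
    return (len(set(phi_V.values())) == len(phi_V)
            and len(set(phi_E.values())) == len(phi_E))

def tiling_weight(tiles, weight, X, cls="hom"):
    """w^A_T(X) of Definition 22: the sum of weight(T) over all A-morphisms
    T -> X, where A is Hom (cls='hom') or mono (cls='mono')."""
    return sum(
        weight(T)
        for T in tiles
        for phi_V, phi_E in homomorphisms(T, X)
        if cls == "hom" or injective(phi_V, phi_E)
    )

# The tiles of Example 23 over the singleton label set {"*"}:
node = Graph({0}, set(), {}, {}, {})
edge = Graph({0, 1}, {"e"}, {"e": 0}, {"e": 1}, {"e": "*"})

def weight(T):  # w(node) = 2, w(edge) = 1, as in the third bullet above
    return 2 if T is node else 1

# For any graph G labeled over {"*"}:
#   tiling_weight([node, edge], weight, G, "hom") == 2 * |V_G| + |E_G|
```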
Our goal is to use \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(\cdot)\) as a decreasing measure. This gives rise to two main challenges: finding a suitable \(\mathbb{T}\) (if it exists), and determining whether \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(\cdot)\) is decreasing. In this paper, we focus exclusively on the second problem, and show that the matter can be decided through a finite rule analysis.
Certain assumptions on \(\mathcal{A}(\mathbf{C})\) will be needed. To prevent clutter and to help intuition, we state them now, valid for the remainder of this paper. In the individual proofs, we clarify which assumptions on \(\mathcal{A}(\mathbf{C})\) are used.
Assumption 24: _We assume that \(\mathcal{A}(\mathbf{C})\) satisfies \(\mathrm{rm}(\mathbf{C})\subseteq\mathcal{A}(\mathbf{C})\); is stable under pullback, composition (\(g,f\in\mathcal{A}(\mathbf{C})\implies g\circ f\in\mathcal{A}(\mathbf{C})\)) and decomposition (\(g\circ f\in\mathcal{A}(\mathbf{C})\implies f\in\mathcal{A}(\mathbf{C})\)); and that \(\mathcal{A}\)-factorizations exist._
Note that \(\mathrm{iso}(\mathbf{C})\subseteq\mathcal{A}(\mathbf{C})\), because \(\mathrm{iso}(\mathbf{C})\subseteq\mathrm{rm}(\mathbf{C})\).
Proposition 25: _In any category, the class \(\mathrm{Hom}(\mathbf{C})\) satisfies Assumption 24. Likewise for \(\mathrm{mono}(\mathbf{C})\) and \(\mathrm{rm}(\mathbf{C})\) in (quasi)toposes. 1_
Footnote 1: The proofs for results marked with \(\,\raisebox{-1.29pt}{\scalebox{0.8}{$\circ$}}\hskip-1.29pt\) can be found in the appendix.
Now suppose that a rule \(\rho\) generates a rewrite step diagram. This defines a factorization \(t_{R}=R\stackrel{{ w}}{{\to}}G_{R}\stackrel{{ w^{ \prime}}}{{\to}}R^{\prime}\) (Remark 20). Any tiling of \(G_{R}\) can be partitioned into two using the following definition.
Definition 26: For arrows \(f:A\to B\) and sets \(S\) of arrows with codomain \(B\) we define the partitioning \(S=S_{\cong}^{f}\uplus S_{\not\cong}^{f}\) where \(S_{\cong}^{f}=\{g\in S\mid\langle f\mid g\rangle\in\mathrm{iso}(\mathbf{C})\}\) and \(S_{\not\cong}^{f}=\{g\in S\mid\langle f\mid g\rangle\not\in\mathrm{iso}(\mathbf{ C})\}\).
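In \(\mathbf{FinGraph}\), the partition of Definition 26 is easy to compute for the situation used in the remainder of this section — tilings of \(G_{R}\) split along the pattern morphism \(w:R\to G_{R}\) — provided \(w\) is monic (as in the examples of Section 5, where \(t_{R}\), and hence \(w\), is a mono): \(\langle w\mid t\rangle\) is then an isomorphism exactly when the image of \(t\) lies inside \(w(R)\). The following Python sketch of ours assumes this monicity and represents morphisms componentwise as vertex/edge maps.

```python
def lands_in_pattern(t_maps, w_maps):
    """For a tile morphism t : T -> G_R and a *monic* w : R -> G_R (both given
    as (phi_V, phi_E) dict pairs), <w | t> is iso precisely when the image of t
    lies inside the pattern w(R)."""
    (tV, tE), (wV, wE) = t_maps, w_maps
    return (set(tV.values()) <= set(wV.values())
            and set(tE.values()) <= set(wE.values()))

def partition_tilings(tilings, w_maps):
    """The partition of Definition 26 for S = A(T, G_R) and f = w."""
    inside, outside = [], []
    for t_maps in tilings:
        (inside if lands_in_pattern(t_maps, w_maps) else outside).append(t_maps)
    return inside, outside
```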
Intuitively, \(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w}\) contains all tilings that lie isomorphically in the pattern \(w(R)\), and \(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\) the remaining tilings, which overlap partially or fully with the context. The remainder of this section is structured as follows.
We will start by centrally identifying some key assumptions and properties that we need in order to reason on the level of the rule (Section 4.1).
We then prove that there exists a domain-preserving bijection between \(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w}\) and \(\mathcal{A}(\mathbb{T},R^{\prime})_{\cong}^{t_{R}}\), allowing us to determine \(\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})\) on the level of the rule (Section 4.2).
Determining \(\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\) on the level of the rule is in general impossible, because usually \(G_{R}\) can have an arbitrary size. Instead, we give precise conditions, formulated on the level of the rule, that ensure that there exists a domain-preserving injection \(\xi:\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\to\mathcal{A}(\mathbb{T},G_ {L})\) across the rewrite step diagram, so that \(\mathbf{w}(\xi\circ\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})=\mathbf{w}( \mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\) (Section 4.3). Such injections often exist in the usual categories of interest, because the context of \(G_{R}\) is roughly inherited from the left.
The two results are then combined as follows. If we additionally find a tiling \(\Delta\subseteq\mathcal{A}(\mathbb{T},L)\) such that for the given match \(m:L\to G_{L}\), \(m\circ\Delta\subseteq\mathcal{A}(\mathbb{T},G_{L})\), \(\mathbf{w}(m\circ\Delta)>\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})\) and \((m\circ\Delta)\cap(\xi\circ\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})=\varnothing\), then
\[\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{L}) \geq\mathbf{w}(m\circ\Delta)+\mathbf{w}(\xi\circ\mathcal{A}( \mathbb{T},G_{R})_{\not\cong}^{w})\] \[>\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})+\mathbf{w }(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\] \[=\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{R})\]
and we will have successfully proven that \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(\cdot)\) is a decreasing measure. This is the main result of this section (Section 4.4).
### Relating Rule and Step
In order to reason about steps on the level of rules, the following variant of adhesivity is needed. It does not yet occur in the literature.
Definition 27 (Pbpo\({}^{+}\)-Adhesive): A pushout square \(r^{\prime}\circ t_{K}=t_{R}\circ r\) is _PBPO\({}^{+}\)-adhesive_ if, whenever it lies at the bottom of a commutative cube shown on the right, where the top face is a pushout and the back faces are pullbacks, we have that the front faces are pullbacks.
Corollary 28: _If \(\mathbf{C}\) is rm-adhesive, pushouts \(r^{\prime}\circ t_{K}=t_{R}\circ r\) with \(t_{K}\in\operatorname{rm}(\mathbf{C})\) are PBPO\({}^{+}\)-adhesive. \({}_{\blacksquare}\)_
Remark 29: Not all quasitoposes are PBPO\({}^{+}\)-adhesive: the counterexample by Johnstone et al. [16, Fig. 1], which shows that the category of simple graphs is not rm-adhesive, is also a counterexample for PBPO\({}^{+}\)-adhesivity. We ask: are there interesting PBPO\({}^{+}\)-adhesive categories that are not rm-adhesive?
The following equalities will prove crucial. Recall Notation 1.
Lemma 30: _Assume \(\mathbf{C}\) has pullbacks. Let a rewrite step for a PBPO\({}^{+}\) rule \(\rho\) be given. If square \(r^{\prime}\circ t_{K}=t_{R}\circ r\) is PBPO\({}^{+}\)-adhesive, then for any \(\rho\)-rewrite step and any \(t:T\to G_{R}\)_
1. \(\langle g_{R}\mid t\rangle=\langle r^{\prime}\mid w^{\prime}\circ t\rangle\)_;_
2. \(u^{\prime}\circ\langle t\mid g_{R}\rangle=\langle w^{\prime}\circ t\mid r^{ \prime}\rangle\)_;_
3. \(\langle w\mid t\rangle=\langle t_{R}\mid w^{\prime}\circ t\rangle\)_; and_
4. \(\langle t\mid w\rangle=\langle w^{\prime}\circ t\mid t_{R}\rangle\)_._
Proof: In the diagram on the right, the bottom face of the bottom cube is PBPO\({}^{+}\)-adhesive by assumption, its top face is a pushout, and its back faces are pullbacks in any category [24, Lemma 15]. Hence its front faces are pullbacks by PBPO\({}^{+}\)-adhesivity. Then all claims follow by composing pullback squares, using the pullback lemma. \({}_{\blacksquare}\)
Remark 31: Because every \(t\in\mathcal{A}(T,G_{R})\) defines an arrow \(w^{\prime}\circ t\in\operatorname{Hom}(T,R^{\prime})\), we can overapproximate \(\mathcal{A}(T,G_{R})\) using \(\operatorname{Hom}(T,R^{\prime})\). The equalities of Lemma 30 will then be used as follows.
1. We will slide morphisms \(t\in\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\) to the left. If \(\langle g_{R}\mid t\rangle\) is invertible, then \(g_{L}\circ\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\gets }:T\to G_{L}\) is an arrow towards the left. Lemma 30.1 implies that invertibility of \(\langle g_{R}\mid t\rangle\) can be verified on the level of the rule.
2. Although we cannot deduce \(\langle t\mid g_{R}\rangle\), Lemma 30.2 implies that we can at least deduce how it is mapped into \(K^{\prime}\).
3. Lemma 30.3 implies that it suffices to restrict the overapproximation of \(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\) to \(\operatorname{Hom}(\mathbb{T},R^{\prime})_{\not\cong}^{t_{R}}\).
4. If \(t\in\mathcal{A}(\mathbf{C})\), then \(\langle t\mid w\rangle\in\mathcal{A}(\mathbf{C})\) by the pullback stability assumption. Thus, Lemma 30.4 implies that it suffices to restrict the overapproximation even further to \(\{f\in\operatorname{Hom}(\mathbb{T},R^{\prime})_{\not\cong}^{t_{R}}\mid\langle f \mid t_{R}\rangle\in\mathcal{A}(\mathbf{C})\}\).
### Determining \(\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})\)
The weight of \(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w}\) can be determined under minimal assumptions.
Lemma 32: _Let the pullback on the right be given with \(t_{R}\in\mathcal{A}(\mathbf{C})\). Let \(\mathbf{C}\) be \(\mathcal{A}\)-locally finite and \(\mathbb{T}\) a set of objects. Then \(\chi(t)=w^{\prime}\circ t\) is a domain-preserving bijection \(\chi:\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w}\to\mathcal{A}(\mathbb{T},R^{ \prime})_{\cong}^{t_{R}}\)._
Corollary 33: _If the conditions of Lemma 32 are met, then \(\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})=\mathbf{w}(\mathcal{A}( \mathbb{T},R^{\prime})_{\cong}^{t_{R}})\)._
### Sliding Tiles Injectively
In this section we establish conditions for the existence of a domain-preserving injection \(\xi:\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\to\mathcal{A}(\mathbb{T},G_{ L})\). Intuitively, one can think of \(\xi\) as sliding tiles from right to left across the rewrite step diagram.
If \(l^{\prime}\in\operatorname{rm}(\mathbf{C})\), then \(\xi\) will be seen to exist rather straightforwardly. However, in general it suffices to require more weakly that \(l^{\prime}\) preserves any tiles to be slid (and distinctly so). Definitions 34 and 36 help capture such a weaker requirement. With these definitions, \(\xi\) can be shown to exist even for non-trivial rules with non-monic \(l^{\prime}\).
Definition 34: A morphism \(g:B\to C\) preserves the \(\mathcal{A}\)-factorization of \(f:A\to B\) if the \(\mathcal{A}\)-factorization \(f=f^{\prime\prime}\circ f^{\prime}\) exists and \(g\circ f^{\prime\prime}\in\mathcal{A}(C)\).
Lemma 35: _Assume \(\mathbf{C}\) has pullbacks. Let the diagram on the right be given, with \(x\in\mathcal{A}(\mathbf{C})\). If \(f\) preserves the \(\mathcal{A}\)-factorization of \(g^{\prime}\circ x\), then \(f^{\prime}\circ x\in\mathcal{A}(\mathbf{C})\)._
Definition 36 (Monic For): _Morphism \(h:B\to C\) is monic for morphisms \(f,g:A\to B\) if \(h\circ f=h\circ g\) implies \(f=g\)._
Lemma 37: _Let the diagram on the right be given. If \(f\) is monic for \(g^{\prime}\circ x\) and \(g^{\prime}\circ y\), then \(f^{\prime}\) is monic for \(x\) and \(y\)._
The morphism \(g_{R}:G_{K}\to G_{R}\) of the rewrite step may identify elements. So for the injection \(\xi\) from right to left to exist, we must be able to go in the inverse direction without identifying tiles. To this end, the following lemma will prove useful.
Lemma 38: _If epimorphisms \(e\) and \(e^{\prime}\) in diagram_
[The diagram and conclusion of Lemma 38, the statement of Theorem 39 — which, under local assumptions on the rule (including the assumptions 2a and 2b used below) and for a choice of right inverses \((\cdot)^{\leftarrow}\), yields a domain-preserving injection \(\xi:\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\to\mathcal{A}(\mathbb{T},G_{L})\) with \(\xi(t)=g_{L}\circ\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}\) — and the opening of its proof could not be recovered from the source. The surviving part of the proof reads as follows.]
The diagram defining \(\xi(t)\) (not reproduced here) commutes, where the pullback square is given by the rewrite step. Moreover, \(\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}\in\mathcal{A}(\mathbf{C})\) (as indicated in the diagram) by stability under composition, using \(\langle t\mid g_{R}\rangle\in\mathcal{A}(\mathbf{C})\) (by pullback stability and \(t\in\mathcal{A}(\mathbf{C})\)) and \(\langle g_{R}\mid t\rangle^{\leftarrow}\in\operatorname{rm}(\mathbf{C})\subseteq\mathcal{A}(\mathbf{C})\) (using Proposition 8 and Assumption 24). By local assumption 2a and the commuting triangle of the diagram, \(l^{\prime}\) preserves the \(\mathcal{A}\)-factorization of \(u^{\prime}\circ\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}\). So by Lemma 35, \(\xi(t)\in\mathcal{A}(\mathbf{C})\) and consequently \(\xi(t)\in\mathcal{A}(\mathbb{T},G_{L})\).
For injectivity of \(\xi\), assume \(\xi(t)=\xi(s)\) for \(t,s\in\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\). By local assumption 2b, Lemma 30.1 and Lemma 30.2, \(l^{\prime}\) is monic for
\[\langle w^{\prime}\circ t\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid w^ {\prime}\circ t\rangle^{\leftarrow}=u^{\prime}\circ\langle t\mid g_{R} \rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}\]
and
\[\langle w^{\prime}\circ s\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid w^ {\prime}\circ s\rangle^{\leftarrow}=u^{\prime}\circ\langle s\mid g_{R} \rangle\circ\langle g_{R}\mid s\rangle^{\leftarrow}.\]
So by Lemma 37, \(g_{L}\) is monic for \(\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}\) and \(\langle s\mid g_{R}\rangle\circ\langle g_{R}\mid s\rangle^{\leftarrow}\). Then because \(\xi(t)=\xi(s)\), \(\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}=\langle s \mid g_{R}\rangle\circ\langle g_{R}\mid s\rangle^{\leftarrow}\). Then finally, \(t=s\) by Lemma 38.
### The Main Result
We are now ready to prove the main result of this paper (Theorem 40) and its corollary (Corollary 42). We also show that in rather common settings, many technical conditions of the theorem are met automatically (Lemma 44 and Propositions 45 and 46). We close with a complementary lemma that establishes decreasingness for deleting rules (Lemma 47). Examples of applications will be given in Section 5.
Theorem 40 (Decreasingness by Element Counting): _Let \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) be disjoint sets of PBPO\({}^{+}\) rules. Assume \(\mathbf{C}\) has pullbacks and let \(\mathcal{A}(\mathbf{C})\) be a class such that \(\mathbf{C}\) is \(\mathcal{A}\)-locally finite. Let \(\mathbb{T}\) be a set of objects and \(\mathbf{w}:\mathbb{T}\to\mathbb{N}\) a weight function such that, for every \(\rho\in\mathcal{T}\uplus\mathcal{T}^{\prime}\), the following conditions hold:_
* \(\rho\)_'s pushout square_ \(r^{\prime}\circ t_{K}=t_{R}\circ r\) _is PBPO_\({}^{+}\)_-adhesive; and_
* \(t_{R}\in\mathcal{A}(\mathbf{C})\)_; and_
* _set_ \(\Phi_{\rho}=\{f\in\operatorname{Hom}(\mathbb{T},R^{\prime})_{\not\cong}^{t_{R}}\mid\langle f\mid t_{R}\rangle\in\mathcal{A}(\mathbf{C})\}\) _meets the conditions of Theorem 39 for some right-inverse choice function_ \((\cdot)^{\leftarrow}\)_; and_
* _there exists a set_ \(\Delta_{\rho}\subseteq\mathcal{A}(\mathbb{T},L)\) _such that_
* _for all_ \(f\in\Phi_{\rho}\) _and_ \(t\in\Delta_{\rho}\)_,_ \(l^{\prime}\circ\langle f\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid f \rangle^{\leftarrow}\neq t_{L}\circ t\)_;_
* \(t_{L}\) _is monic for all_ \(t,t^{\prime}\in\Delta_{\rho}\)_; and_
* \(\mathbf{w}(\Delta_{\rho})>\mathbf{w}(\mathcal{A}(\mathbb{T},R^{\prime})_{\cong}^ {t_{R}})\) _if_ \(\rho\in\mathcal{T}\) _and_ \(\mathbf{w}(\Delta_{\rho})\geq\mathbf{w}(\mathcal{A}(\mathbb{T},R^{\prime})_{ \cong}^{t_{R}})\) _if_ \(\rho\in\mathcal{T}^{\prime}\)_._
_Then for any rewrite step with match \(m\in\mathcal{A}(\mathbf{C})\), induced by a rule \(\rho\in\mathcal{T}\uplus\mathcal{T}^{\prime}\), we have \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{L})>\mathbf{w}_{\mathbb{T}}^{\mathcal{ A}}(G_{R})\) if \(\rho\in\mathcal{T}\) and \(\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{L})\geq\mathbf{w}_{\mathbb{T}}^{ \mathcal{A}}(G_{R})\) if \(\rho\in\mathcal{T}^{\prime}\)._
Proof: Let a step induced by a \(\rho\in\mathcal{T}\uplus\mathcal{T}^{\prime}\) be given.
By Corollary 33, \(\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})=\mathbf{w}(\mathcal{A}( \mathbb{T},R^{\prime})_{\cong}^{t_{R}})\).
By Theorem 39, we obtain an injection \(\xi:\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\rightarrow\mathcal{A}(\mathbb{T},G_{L})\) with \(\mathit{dom}(\xi(t))=\mathit{dom}(t)\), using the assumption on \(\Phi_{\rho}\). So \(\mathbf{w}(\xi\circ\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})=\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\).
Moreover, by \(m\in\mathcal{A}(\mathbf{C})\) and stability under composition, we have \((m\circ\Delta_{\rho})\subseteq\mathcal{A}(\mathbb{T},G_{L})\). And by \(t_{L}\) monic for all \(t,t^{\prime}\in\Delta_{\rho}\), we have \(m\) monic for all \(t,t^{\prime}\in\Delta_{\rho}\), and so \(\mathbf{w}(m\circ\Delta_{\rho})=\mathbf{w}(\Delta_{\rho})\). It remains to show that \((m\circ\Delta_{\rho})\) and \((\xi\circ\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\) are disjoint. If for a \(t^{\prime}\in\Delta_{\rho}\) and \(t\in\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w}\), \(m\circ t^{\prime}=\xi(t)\), then \(t_{L}\circ t^{\prime}=\alpha\circ m\circ t^{\prime}=\alpha\circ\xi(t)=\alpha\circ g_{L}\circ\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}=l^{\prime}\circ u^{\prime}\circ\langle t\mid g_{R}\rangle\circ\langle g_{R}\mid t\rangle^{\leftarrow}=l^{\prime}\circ\langle w^{\prime}\circ t\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid w^{\prime}\circ t\rangle^{\leftarrow}\), using Lemma 30.1–2 and \(\alpha\circ g_{L}=l^{\prime}\circ u^{\prime}\). By the definition of \(\Delta_{\rho}\) and \(w^{\prime}\circ t\in\Phi_{\rho}\), this contradicts \(t^{\prime}\in\Delta_{\rho}\). Thus \(\xi(t)\neq m\circ t^{\prime}\).
In summary,
\[\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{L}) \geq\mathbf{w}(m\circ\Delta_{\rho})+\mathbf{w}(\xi\circ\mathcal{ A}(\mathbb{T},G_{R})_{\not\cong}^{w})\] \[=\mathbf{w}(\Delta_{\rho})+\mathbf{w}(\mathcal{A}(\mathbb{T},G_{ R})_{\not\cong}^{w})\] \[\succ\mathbf{w}(\mathcal{A}(\mathbb{T},R^{\prime})_{\cong}^{t_{R }})+\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\] \[=\mathbf{w}(\mathcal{A}(\mathbb{T},G_{R})_{\cong}^{w})+\mathbf{w }(\mathcal{A}(\mathbb{T},G_{R})_{\not\cong}^{w})\] \[=\mathbf{w}_{\mathbb{T}}^{\mathcal{A}}(G_{R})\]
for \(\succ=\,>\) if \(\rho\in\mathcal{T}\) and \(\succ=\,\geq\) if \(\rho\in\mathcal{T}^{\prime}\), completing the proof.
Remark 41: The requirement \(m\in\mathcal{A}(\mathbf{C})\) puts a lower bound on what one can choose for \(\mathcal{A}(\mathbf{C})\) in a termination proof. Usually two factors are relevant: the class of \(t_{L}\), and match restrictions imposed by the setting. More precisely, let \(X(\mathbf{C})\) and \(Y(\mathbf{C})\) be classes of morphisms. If \(t_{L}\in X(\mathbf{C})\), where \(X(\mathbf{C})\) satisfies the decomposition property (meaning \(m\in X(\mathbf{C})\) by \(t_{L}=\alpha\circ m\)), and the setting imposes \(m\in Y(\mathbf{C})\), then the choice of \(\mathcal{A}(\mathbf{C})\) must satisfy \(X(\mathbf{C})\cap Y(\mathbf{C})\subseteq\mathcal{A}(\mathbf{C})\).
From Theorem 40 and Remark 41, the following is immediate.
Corollary 42 (Termination by Element Counting): _Let \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) be disjoint sets of PBPO\({}^{+}\) rules. Let \(\mathcal{A}(\mathbf{C})\) be a class such that for all rules \(\rho\in(\mathcal{T}\uplus\mathcal{T}^{\prime})\), \(t_{L}(\rho)\in\mathcal{A}(\mathbf{C})\) or matching of \(\rho\) is restricted to a class \(X\subseteq\mathcal{A}(\mathbf{C})\). If the conditions of Theorem 40 are met, then \(\mathcal{T}\) terminates relative to \(\mathcal{T}^{\prime}\). Hence \(\mathcal{T}\uplus\mathcal{T}^{\prime}\) is terminating iff \(\mathcal{T}^{\prime}\) is. _
Remark 43 (Generalizing \(\mathcal{A}(\mathbf{C})\)): Theorem 40 and the results it depends on still hold if, instead of having \(\mathcal{A}(\mathbf{C})\) globally fixed, a class \(\mathcal{A}(\mathbf{C})\) is fixed for each individual \(T\in\mathbb{T}\), and the match morphism \(m\) is in the intersection of every class. This for instance allows counting some tiles monically, and others non-monically.
The following lemma implies that in many categories of interest, tilings of \(L\) and slid tiles never collide. For instance, in quasitoposes, pushouts along \(t_{K}\) are pullbacks if \(t_{K}\in\mathrm{rm}(\mathbf{C})\)[15, Lemma A2.6.2].
Lemma 44: _Let a rule \(\rho\) be given. If pushout \(r^{\prime}\circ t_{K}=t_{R}\circ r\) is a pullback and \(t_{K}\in\mathrm{mono}(\mathbf{C})\), then for all \(t\in\mathrm{Hom}(T,R^{\prime})_{\not\cong}^{t_{R}}\) with \(\langle r^{\prime}\mid t\rangle\) a split epi, we have that for all \(t^{\prime}\in\mathrm{Hom}(T^{\prime},L)\), \(l^{\prime}\circ\langle t\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid t\rangle^{\leftarrow}\neq t_{L}\circ t^{\prime}\)._
Proof: By contradiction. Assume that for some \(t\in\operatorname{Hom}(T,R^{\prime})_{\not\cong}^{t_{R}}\) with \(\langle r^{\prime}\mid t\rangle\) a split epi and some \(t^{\prime}\in\operatorname{Hom}(T^{\prime},L)\), \(l^{\prime}\circ\langle t\mid r^{\prime}\rangle\circ\langle r^{\prime}\mid t \rangle^{\leftarrow}=t_{L}\circ t^{\prime}\). Then we have the commuting diagram
where
* squares \(\operatorname{PB}(2)\) and \(\operatorname{PO}+\operatorname{PB}\) are given by \(\rho\);
* squares \(\operatorname{PB}(1)\) and \(\operatorname{PB}(4)\) are constructed;
* square \(\operatorname{PB}(3)\) follows from \(\operatorname{PB}(1)\) and \(\operatorname{PB}(2)\), using the pullback lemma;
* morphism \(x\) exists by the hypothesis and the pullback property; and
* \(\langle r^{\prime}\mid t\rangle\circ\langle r^{\prime}\mid t\rangle^{ \leftarrow}=1_{T}\) by the right inverse property.
By a diagram chase we thus have \(t=r^{\prime}\circ t_{K}\circ\langle t^{\prime}\mid l\rangle\circ x\). Then, from a further pullback diagram (not reproduced here) and two applications of the pullback lemma, we have \(\langle t_{R}\mid t\rangle\in\operatorname{iso}(\mathbf{C})\), contradicting \(t\in\operatorname{Hom}(T,R^{\prime})_{\not\cong}^{t_{R}}\).
The two propositions below state further sufficient conditions for satisfying the termination method's preconditions. Many graph categories of interest meet these conditions.
Proposition 45: _If \(\mathbf{C}\) is an \(\operatorname{rm}\)-locally finite, rm-adhesive quasitopos, then \(\operatorname{rm}(\mathbf{C})\) satisfies Assumption 24. \(\mathbf{C}\) also has all pullbacks and all pushouts, and so in particular the pushouts required by Assumption 19. If moreover \(t_{L}(\rho)\in\operatorname{rm}(\mathbf{C})\), then \(m,t_{K},t_{R}\in\operatorname{rm}(\mathbf{C})\), \(\rho\)'s pushout square is \(\text{PBPO}^{+}\)-adhesive, and \(t_{L}\) is monic for \(\mathcal{A}(\mathbb{T},L(\rho))\)._
Proof: A quasitopos has by definition all finite limits and colimits. That \(\operatorname{rm}(\mathbf{C})\) satisfies Assumption 24 was stated in Proposition 25. If \(t_{L}\in\operatorname{rm}(\mathbf{C})\), then \(m,t_{K}\in\operatorname{rm}(\mathbf{C})\) by stability under decomposition and pullback, respectively. That \(t_{R}\in\operatorname{rm}(\mathbf{C})\) subsequently follows from pushout stability in quasitoposes [15, Lemma A.2.6.2]. Because of the assumed rm-adhesivity and \(t_{K}\in\operatorname{rm}(\mathbf{C})\), \(\rho\)'s pushout square is PBPO\({}^{+}\)-adhesive (Corollary 28). Finally, that \(t_{L}\) is monic for \(\mathcal{A}(\mathbb{T},L(\rho))\) follows trivially from the fact that \(t_{L}\) is monic. (See [4, Corollary 1] and [24, Proposition 36] for relevant summaries of quasitopos properties.)
As is well known, if \(\mathcal{I}\) is small, then the functor category \([\mathcal{I},\mathbf{Set}]\) is a topos, and many structures that are of interest to the graph transformation community can be defined in this manner (e.g., \(\mathbf{Graph}\cong[\cdot\rightrightarrows\cdot,\,\mathbf{Set}]\)). The following proposition assures us that such toposes are closed under finite restrictions. We are not aware of a similar principle for quasitoposes.
Proposition 46: _If \(\mathcal{I}\) is finite and \(\mathbf{C}\cong[\mathcal{I},\mathbf{FinSet}]\), then \(\mathbf{C}\) is a \(\operatorname{Hom}\)-locally finite topos, and so for any \(\mathcal{A}(\mathbf{C})\), an \(\mathcal{A}\)-locally finite rm-adhesive quasitopos._
Proof: \(\mathbf{C}\) is a topos [5, Example 5.2.7], and it is locally finite because \(\mathbf{FinSet}\) is \(\operatorname{Hom}\)-locally finite. Moreover, any topos is rm-adhesive [18], and any topos is a quasitopos.
Finally, we have the following general principle, which does not require any assumptions on \(t_{K}\) and \(t_{R}\), nor any adhesivity assumptions.
Lemma 47 (Deleting Rules Are Decreasing): _Assume \(\mathbf{C}\) has pullbacks and is \(\operatorname{mono}\)-locally finite. Suppose that for a PBPO\({}^{+}\) rule \(\rho\), \(l^{\prime}\) is monic, \(l\) is not epic, and \(r\) is iso; and that for any matches \(m\) for \(\rho\), \(m\) is monic. Then \(\rho\) is decreasing for \(\mathbb{T}=\{L\}\), \(\mathbf{w}(L)>0\), \(\Delta_{\rho}=\{1_{L}\}\) and \(\mathcal{A}(\mathbf{C})=\operatorname{mono}(\mathbf{C})\). _
## 5 Examples
We give a number of examples of applying Theorem 4.1 in category \(\mathbf{C}=\mathbf{FinGraph}\) (Definition 12), each demonstrating new features. For each example, we will fix \(\mathbb{T}\), \(\mathbf{w}\) and \(\mathcal{A}(\mathbf{C})\), and usually some properties of the relevant morphism sets (such as cardinalities) or related comments. The remaining details of the proofs are routine. Note that in \(\mathbf{FinGraph}\) (and more generally in any topos), \(\operatorname{rm}(\mathbf{C})=\operatorname{mono}(\mathbf{C})\), and because the rules in examples satisfy \(t_{L}\in\operatorname{mono}(\mathbf{C})\), we are in each case free to choose \(\operatorname{mono}(\mathbf{C})\) or \(\operatorname{Hom}(\mathbf{C})\) for \(\mathcal{A}(\mathbf{C})\) (Remark 41).
Notation 48 (Visual Notation): _In our examples of rules, the morphisms \(t_{X}:X\hookrightarrow X^{\prime}\) (\(X\in\{L,K,R\}\)) of rules are regular monos (embeddings). We depict \(t_{X}\) by depicting the graph \(X^{\prime}\), and then let solid, colored vertices and solid edges denote \(t_{X}(X)\), with dotted blank vertices and dotted edges the remainder of \(X^{\prime}\). For example, in Example 49 below, the subgraph \(L\) of \(t_{L}:L\hookrightarrow L^{\prime}\) is the solid part of the depicted \(L^{\prime}\). Moreover, we_
will choose the vertices of \(K^{\prime}\) and \(R^{\prime}\) in such a way that component \(r^{\prime}_{V}\) is fully determined by \(S\subseteq r^{\prime}_{V}(S)\) for all \(S\in V_{K^{\prime}}\). For example, for nodes \(\{x\},\{y\}\in V_{K^{\prime}}\) of Example 49 below (in which morphism \(r^{\prime}\) is implicit), \(r^{\prime}(\{x\})=r^{\prime}(\{y\})=\{x,y\}\in V_{R^{\prime}}\). If component \(r^{\prime}_{E}\) is not uniquely determined by \(r^{\prime}_{V}\), then let \(r^{\prime}_{E}\) preserve the relative positioning of the edges (although often, this choice will be inconsequential). Morphism \(l^{\prime}:K^{\prime}\to L^{\prime}\) is depicted similarly._
Example 49 (Folding an Edge): The rule
\(\rho\) (whose diagram could not be recovered from the source) folds a non-loop edge by identifying its two endpoints \(x\) and \(y\). [The remainder of this example — the depiction of the rule and the accompanying choice of tile set, weights and verification of the termination conditions — was lost in extraction.]
[The opening of this passage, which introduces a category of fuzzy graphs and its classes of monos, was lost in extraction.] In that category, \(\operatorname{mono}(\mathbf{C})\) contains all injective graph homomorphisms whose labels are non-decreasing (\(\leq\)), and \(\operatorname{rm}(\mathbf{C})\) restricts to the monomorphisms that preserve labels (\(=\)). In previous papers [22, 23, 24], we have shown that fuzzy graphs are useful structures for implementing relabeling mechanics for graph transformation.
In these categories, rules that change labels (but leave the structure of the graph unchanged) can be proven terminating by using \(\mathcal{A}(\mathbf{C})=\operatorname{rm}(\mathbf{C})\), but not always by using \(\mathcal{A}(\mathbf{C})=\operatorname{mono}(\mathbf{C})\). For instance, a rule that increases a loop edge label \(a\) into a label \(b>a\) is shown terminating by a suitable tile set \(\mathbb{T}\) and weight assignment, whose depiction was lost in extraction. [The remainder of this example and the opening of the following one — its rule diagrams, tile set and weights — were also lost; the surviving text resumes in the middle of that example's termination proof.]
Partial overlaps with the pattern are not possible, so the conditions of Theorem 39 are easily verified for \(\Phi_{\rho}\) and \(\Phi_{\tau}\). Then for the obvious largest choices of \(\Delta_{\rho}\) and \(\Delta_{\tau}\), we have \(\mathbf{w}(\Delta_{\rho})=2\cdot 5=10>\mathbf{w}(\operatorname{mono}(\mathbb{T},\rho(R^{\prime}))_{\cong}^{\rho(t_{R})})=3\cdot 3=9\) for \(\rho\) and \(\mathbf{w}(\Delta_{\tau})=2\cdot 3=6>\mathbf{w}(\operatorname{mono}(\mathbb{T},\tau(R^{\prime}))_{\cong}^{\tau(t_{R})})=5\) for \(\tau\), completing the proof.
The above termination proof also works for vast generalizations of the rules. For instance, the first rule can be generalized as follows (its diagram is not recoverable from the source).
Observe that \(L^{\prime}\) now allows an unbounded number of additional loops on the nodes, and edges between the nodes and the context. The morphism \(l^{\prime}\) preserves the loops, duplicates a node including the edges from and to the context, and unfolds loops between the duplicated nodes. As long as \(l^{\prime}\) and \(r^{\prime}\) do not create new loops other than those specified by \(l\) and \(r\), the rule can be proven terminating.
Example 54: Consider the following two rules \(\rho\) and \(\tau\) (their diagrams were lost in extraction):
Rule \(\rho\) deletes an arbitrary loop, and in doing so, allows arbitrarily many bipartite graph components in the context to duplicate (such components can either be mapped onto node \(c\) or onto the right subgraph component). Note that this makes the rule non-deterministic. Rule \(\tau\) deletes an arbitrary node including incident edges.
The derivational complexity (the maximum reduction length) of this system is \(O(2^{n})\) where \(n\) is the size of the starting graph.
Termination of the system can be proven as follows. Let \(\mathcal{A}(\mathbf{C})=\operatorname{mono}(\mathbf{C})\). Use the tile set \(\mathbb{T}\) consisting of the single tile given by the one-node graph with one loop, with weight \(1\). Then \(\rho\) is decreasing and \(\tau\) is non-increasing, and so it suffices to prove \(\tau\) terminating, whose termination is immediate from Lemma 47.
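This measure argument can be checked mechanically on concrete graphs. Below is a small Python sketch of ours, reading the (lost) tile as the one-node graph with a single loop and reusing the `Graph` class from the earlier sketch; the graphs `GL` and `GR` are illustrative stand-ins for one possible \(\rho\)-step and are not taken from the paper.

```python
def count_loops(G):
    """Monic occurrences of the one-node, one-loop tile in G = number of loop edges."""
    return sum(1 for e in G.E if G.s[e] == G.t[e])

# Hypothetical rho-step: the loop on vertex 0 is deleted, and the loop-free
# component {1, 2} is duplicated into {3, 4}.
GL = Graph({0, 1, 2}, {"loop", "e"},
           {"loop": 0, "e": 1}, {"loop": 0, "e": 2}, {"loop": "*", "e": "*"})
GR = Graph({0, 1, 2, 3, 4}, {"e", "e2"},
           {"e": 1, "e2": 3}, {"e": 2, "e2": 4}, {"e": "*", "e2": "*"})

assert count_loops(GL) > count_loops(GR)   # rho is decreasing: 1 > 0
```

Since duplicated components are bipartite and hence loop-free, no \(\rho\)-step increases the loop count, and \(\tau\), which only deletes, cannot increase it either — precisely the decreasing/non-increasing split needed for Proposition 17.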
## 6 Related Work
We consider two closely related approaches by Bruggink et al. [6, 7] to be the most relevant to our method. Both approaches use weighted type graphs \(T\) to measure graphs \(G\) by means of counting weighted morphisms \(G\to T\) (instead of weighted morphisms \(T^{\prime}\to G\) for tiles \(T^{\prime}\)). So the general idea is dual to ours. Moreover, to our knowledge, these approaches are the only systematic termination methods in the algebraic tradition based on decreasing interpretations.
Both methods are defined for DPO in the category of edge-labeled multigraphs. The first approach [7] requires that \(l\) and \(r\) of DPO rules \(L\stackrel{{ l}}{{\leftarrow}}K\stackrel{{ r}}{{ \rightarrow}}R\), and matches \(m:L\rightarrow G_{L}\), are monic. The second approach [6] has no such restrictions.
Because our method is applicable in a much broader setting, it can prove termination of rules that are outside the scope of the methods by Bruggink et al. Nonetheless, it is interesting to ask how the approaches relate in settings where they are all defined.
On the one hand, although Examples 5 and 6 of [6] are within the scope of our method, our method cannot prove them terminating. The intuitive reason is that the examples terminate because of global properties, rather than local ones. On the other hand, Example 55 below defines a DPO rule that falls inside the scope of all three methods, and only our method can prove it terminating. In conclusion, within the restricted setting, the methods are incomparable.
Example 55: Consider the following DPO rule \(\rho\) in category **FinGraph** (its diagram was lost in extraction),
and assume matching is required to be monic. This requirement is often used in practice, because monic matching increases the expressiveness of DPO [14].
The approach in [7] cannot prove \(\rho\) terminating. For establishing termination (on all graphs), the weighted type graph \(T\) has to contain a node with a loop (called a _flower node_). The flower node ensures that every graph \(G\) can be mapped into \(T\). Then, in particular, the technique requires a weight decrease (from \(L\) to \(R\)) for the case that the interface \(K\) is mapped onto the flower node. However, this makes \(L\) and \(R\) indistinguishable for the technique in [7].
Although matches are required to be monic, the method of [6] overapproximates for unrestricted matches by design. Observe that if matching is not monic, then graph \(L\) of \(\rho\), but with \(x\) and \(y\) identified, rewrites to itself, meaning \(\rho\) is not terminating. As a consequence, the overapproximation of [6] causes it to fail in proving \(\rho\) terminating for the monic matching setting. (For the same reason, the method of [6] fails on the simpler top span of Example 49, which is a DPO rule, for the monic matching setting.)
Rule \(\rho\) can be proven terminating with our method as follows. Encode \(\rho\) into \(\text{PBPO}^{+}\) using the standard encoding [24, Definition 71]. The resulting rule and a termination proof are given in Example 52.
Additional examples by Bruggink et al. that our method can prove are Example 4 of [7] (= Example 2 of [6]), and Example 4 of [6]. Additional examples that our method cannot prove are Example 1 and the example of Section 3.5 of [7]. However, unlike the earlier referenced Examples 5 and 6 of [6], these examples are in reach if our morphism counting technique can take into account antipatterns (Remark 56), since they terminate due to local properties.
Remark 56 (Antipatterns): A rule that matches an isolated node, and adds a loop, cannot be proven terminating with our method. For this, one must be able to count nodes _without_ loops (an _antipattern_), which is currently unsupported. We believe that extending our method with support for such antipatterns is a natural first step for significantly strengthening it.
We discuss some additional related work. An early systematic termination criterion for hypergraph rewriting with DPO is due to Plump, based on the concept of forward closures [26]. Both of the examples proven terminating with forward closures, Examples 3.8 and 4.1 of [26], can be handled with our method. The encoding and proof of Example 3.8 are available in the appendix (Example 38).
More recently, Plump formulated a modularity criterion for hypergraph rewriting using DPO [30]: the union of two terminating systems is terminating if there are no sequential critical pairs. Of this paper, our method can prove three out of four examples: Examples 3 (= Example 3.8 of [26]), 5 and 6. The modeling of Example 6 is available in the appendix (Example 39). Our method cannot prove Example 4 (= the already discussed Example 5 of [7]). It would be interesting to assess the strength of the modularity criterion (especially if generalized to PBPO\({}^{+}\)) combined with our method.
Bruggink et al. have shown that string rewriting rules are terminating on graphs iff they are terminating on cycles [7], making cycle rewriting techniques [32, 35] applicable to graph transformation systems consisting of string rewrite rules. Similarly, in a previous paper [22], we have shown that particular PBPO\({}^{+}\) encodings of linear term rewrite rules are terminating on graphs iff they are terminating on terms.
There also exist a variety of methods that generalize TRS methods (such as simplification orderings) to term graphs [21, 27, 29] and drags [11].
## 7 Conclusion and Future Work
We have introduced a termination method for graph transformation systems that can be utilized across frameworks, and which is defined in a broad array of categories. Our examples and comparisons with related work show that the method adds considerable value to the study of termination for graph transformation.
Future work for strengthening the method includes solving the issues raised related to rule equivalence (Example 52) and antipatterns (Remark 56). Methods for finding \(\mathbb{T}\), if it exists, and identifying useful sufficient conditions for the non-existence of \(\mathbb{T}\), would also be very useful. A possible metatheoretical direction for future research includes the question posed for PBPO\({}^{+}\)-adhesivity (Remark 29). Finally, we plan to formally verify and implement our method.
#### Acknowledgments
We thank anonymous reviewers for many helpful suggestions. Both authors received funding from the Netherlands Organization for Scientific Research (NWO) under the Innovational Research Incentives Scheme Vidi (project. No. VI.Vidi.192.004). |
2307.02245 | Set Learning for Accurate and Calibrated Models | Model overconfidence and poor calibration are common in machine learning and
difficult to account for when applying standard empirical risk minimization. In
this work, we propose a novel method to alleviate these problems that we call
odd-$k$-out learning (OKO), which minimizes the cross-entropy error for sets
rather than for single examples. This naturally allows the model to capture
correlations across data examples and achieves both better accuracy and
calibration, especially in limited training data and class-imbalanced regimes.
Perhaps surprisingly, OKO often yields better calibration even when training
with hard labels and dropping any additional calibration parameter tuning, such
as temperature scaling. We demonstrate this in extensive experimental analyses
and provide a mathematical theory to interpret our findings. We emphasize that
OKO is a general framework that can be easily adapted to many settings and a
trained model can be applied to single examples at inference time, without
significant run-time overhead or architecture changes. | Lukas Muttenthaler, Robert A. Vandermeulen, Qiuyi Zhang, Thomas Unterthiner, Klaus-Robert Müller | 2023-07-05T12:39:58Z | http://arxiv.org/abs/2307.02245v4 | # Set Learning for Accurate and Calibrated Models
###### Abstract
Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization. In this work, we propose a novel method to alleviate these problems that we call odd-\(k\)-out learning (OKO), which minimizes the cross-entropy error for sets rather than for single examples. This naturally allows the model to capture correlations across data examples and achieves both better accuracy and calibration, especially in limited training data and class-imbalanced regimes. Perhaps surprisingly, OKO often yields better calibration even when training with hard labels and dropping any additional calibration parameter tuning, such as temperature scaling. We provide theoretical justification, establishing that OKO naturally yields better calibration, and provide extensive experimental analyses that corroborate our theoretical findings. We emphasize that OKO is a general framework that can be easily adapted to many settings and the trained model can be applied to single examples at inference time, without introducing significant run-time overhead or architecture changes.
## 1 Introduction
In machine learning, a classifier is typically trained to minimize cross-entropy on individual examples rather than on sets of examples. By construction, this paradigm ignores information that may be found in correlations between sets of data. Therefore, we present _odd-k-out_ learning (OKO), a new training framework based on learning from sets. It draws inspiration from the _odd-one-out_ task which is commonly used in the cognitive sciences to infer notions of object similarity from human decision-making processes (Robilotto and Zaidi, 2004; Fukuzawa et al., 1988; Hebart et al., 2020; Muttenthaler et al., 2022; 2023a). The odd-one-out task is a similarity task where subjects choose the most similar pair in a set of objects. We use an adapted version of that task to learn better model parameters while not making any changes to the architecture (see Fig. 1; a).
Standard classification training often yields overconfident classifiers that are not well-calibrated (Muller et al., 2019; Guo et al., 2017; Minderer et al., 2021). Classically, calibration has been treated as an orthogonal problem to accuracy. Miscalibration has been observed to severely worsen while accuracy improves, an interesting phenomenon attributed to over-parametrization, reduced regularization, and biased loss functions (Guo et al., 2017; Vaicenavicius et al., 2019; Roelofs et al., 2022). Even log-likelihood -- a proper scoring rule -- was accused of biasing network weights to better classification accuracy at the expense of well-calibrated probabilities (Guo et al., 2017; Roelofs et al., 2022). Other scoring rules were proposed that are differentiable versions of calibration measures, but these approximations can be crude (Karandikar et al., 2021). Thus, calibration methods are often treated as an afterthought, comprised of ad-hoc post-processing procedures that require an additional hold-out dataset and monotonically transform the output probabilities, usually without affecting the learned model parameters or accuracy.
Calibration is inherently a performance metric on sets of data; so we propose training the classifier on sets of examples rather than individual samples to find models that yield accurate calibration without ad-hoc post-processing. This is especially crucial in low-data and class-imbalanced settings, for which there is surprisingly little work on calibration (Dal Pozzolo et al., 2015).
Various techniques have been proposed to improve accuracy for imbalanced datasets (Branco et al., 2016; Johnson and Khoshgoftaar, 2019), which are typically based on non-uniform class sampling or reweighting of the loss function. However, neural nets can still easily overfit to the few training examples for the rare classes (Wang and Japkowicz, 2004). There is growing interest in the development of new techniques for handling class imbalance (Johnson and Khoshgoftaar, 2019; Iscen et al., 2021; Parisot et al., 2022; Guha Roy et al., 2022). Such techniques are adapted variants of non-uniform sampling, often focusing exclusively on accuracy, and ignoring model calibration. However, techniques for mitigating the effects of imbalance on classification accuracy do not improve calibration for minority instances and standard calibration procedures tend to systematically underestimate the probabilities for minority class instances (Wallace and Dahabreh, 2012). Moreover, it is widely known that direct undersampling of overrepresented classes modifies the training set distribution and introduces probabilistic biases (Dal Pozzolo et al., 2015). Bayesian prior readjustments were introduced to manipulate posterior probabilities for ameliorating that issue (Dal Pozzolo et al., 2015).
It is known that hard labels tend to induce extreme logit values and therefore cause overconfidence in model predictions (Hinton et al., 2015; Bellinger et al., 2020). _Label smoothing_ has been proposed to improve model calibration by changing the cross-entropy targets rather than scaling the logits after training (Muller et al., 2019; Carratino et al., 2022). Label smoothing, in combination with batch balancing -- uniformly sampling over the classes rather than uniformly sampling over all samples in the data (see Appx. B.3), achieves promising results on heavy-tail classification benchmarks, i.e. datasets that contain many classes with few samples and a few classes with many samples (Bellinger et al., 2020). Yet, all these methods ignore the need for accuracy on the underrepresented classes, generally lack rigorous theoretical grounding, and require fine-tuned parameters for good empirical performance, such as the noise parameter for label smoothing, or the scaling parameter for temperature scaling for which additional held-out data is required.
In contrast to the popular philosophy of training for accuracy and then calibrating, we pose our main question: Can we provide a theoretically grounded training framework to learn network parameters that simultaneously obtain better accuracy and calibration, especially with class imbalance?
**Contributions.**_Indeed, we find that OKO achieves better calibration and uncertainty estimates than standard cross-entropy training. The benefits of OKO over vanilla cross-entropy are even more pronounced in limited training data settings and with heavy-tailed class distributions.1_
Footnote 1: A JAX implementation of OKO is publicly available on GitHub: [https://github.com/LukasMut/OKO](https://github.com/LukasMut/OKO)
**Empirical. First**, through extensive experiments, we show that OKO often achieves _better accuracy_ while being _better or equally well calibrated_ than other methods for improving calibration, especially in low data regimes and for heavy-tailed class distribution settings (see Fig. 1; b). **Second**, OKO is a principled approach that changes the learning objective by presenting a model with _sets of examples_ instead of individual examples, as calibration is inherently a metric on sets. As such, OKO does not introduce additional hyperparameters for post-training tuning or require careful warping of the label distribution via a noise parameter as in label smoothing (see Fig. 1). **Third**, surprisingly, this differently posed set learning problem results in _smoothed logits_ that yield _accurate calibration_, although models are trained using hard labels. **Fourth**, we emphasize that OKO is extremely easy to plug into any model architecture, as it provides a general training framework that does not modify the model architecture and can therefore be applied to single examples at test time exactly like any network trained via single-example learning (see Fig. 1; a). The training complexity scales linearly in \(O(|\mathcal{S}|)\) where \(|\mathcal{S}|\) denotes the number of examples in a set and hence introduces _little computational overhead_ during training. **Last**, in few-shot settings, OKO achieves compellingly _low calibration and classification errors_ (see Fig. 1; b). Notably, OKO improves test accuracy for \(10\)-shot MNIST by \(8.59\%\) over the best previously reported results (Liu et al., 2022).
**Theoretical.** Through mathematical analyses, we show that OKO yields logit values that are not as strongly encouraged to diverge as in standard cross-entropy training. We develop a new scoring rule that measures excess confidence on a per-datapoint basis to provably demonstrate improved
calibration. This scoring rule compares the predictive entropies and cross-entropies, and for calibrated predictors, we show that our measure is consistent in that the average excess confidence is 0. By using this new scoring rule we demonstrate that OKO implicitly performs a form of _entropic regularization_, giving insight into how it prevents excess confidence in certain low entropy regions.
## 2 Related Work
The _odd-one-out_ task has been widely used in the cognitive sciences to infer notions of object similarity from human participants (Robilotto & Zaidi, 2004; Hebart et al., 2020; Muttenthaler et al., 2022; 2023a), and first uses are slowly percolating into machine learning: Fernando et al. (2017) trained a self-supervised video understanding network by predicting which one out of three sequences was in reverse time order, and Locatello et al. (2020) and Mohammadi et al. (2020) used comparisons between samples as a weak supervision target. Muttenthaler et al. (2023b) use human odd-one-out choices to improve pretrained representations for few-shot learning and anomaly detection tasks. However, none of these works investigated calibration or provided any theory for (odd-one-out) set learning.
Improving calibration is of practical interest for many applications. However, deep neural networks often appear badly calibrated (Guo et al., 2017). Even though this depends on the concrete architecture used, scaling up a model usually increases accuracy at the cost of calibration (Minderer et al., 2021). Many post-hoc approaches to increase calibration have been proposed, such as temperature scaling (Platt et al., 1999), isotonic regression (Zadrozny & Elkan, 2002; Niculescu-Mizil & Caruana, 2005), and Bayesian binning (Naeini et al., 2015), while improving calibration during training is a less explored topic. Most related to our approach are techniques that use data augmentations that blend
Figure 1: **A**: OKO minimizes cross-entropy on sets of examples rather than on single examples and naturally yields smoothed logits after training. At inference time it can be applied to single examples without additional computational overhead. **B**: Expected calibration error as a function of the classification error. Each point in the graph represents the performance of a single seed; there are five for every number of training data points. For each dataset, every model was evaluated on the same test set. Dashed diagonal lines indicate a linear regression fit. Top: Uniform class distribution during training. Bottom: Heavy-tailed class distribution during training.
different inputs together (Thulasidasan et al., 2019) or use ensembles to combine representations (Lakshminarayanan et al., 2017). However, none of these works examined calibration for sets of data. The task of classifying sets of instances is known as _multiple instance learning_(Carbonneau et al., 2018). It is desirable to leverage the set structure, instead of simply using a concatenated representation of the examples in each set. A common approach is to pool representations, either by mean pooling, which is akin to OKO, or max pooling (Feng and Zhou, 2017; Pinheiro and Collobert, 2014). Other approaches include the use of permutation invariant networks (Zaheer et al., 2017) or attention mechanisms (Ilse et al., 2018; Cordonnier et al., 2021). We are unaware of work that leverages set learning for improving the calibration of standard cross-entropy training.
Learning from imbalanced data has a long history in machine learning (Japkowicz and Stephen, 2002; He and Garcia, 2009; Branco et al., 2016). Approaches usually center around resampling the training data (Chawla et al., 2002; Drummond and Holte, 2003; Liu et al., 2009) or modifying the loss function (Chen et al., 2004; Ting, 2000; Wallace et al., 2011; Khan et al., 2018; Cui et al., 2019; Lin et al., 2020; Du et al., 2023), or combinations thereof (Huang et al., 2016; Liu et al., 2019; Tian et al., 2022). Transfer learning (Wang et al., 2017; Zhong et al., 2019; Parisot et al., 2022), self-supervised learning (Yang and Xu, 2020; Kang et al., 2021), or ensembles of experts (Collell et al., 2018; Wang et al., 2021; Guha Roy et al., 2022; Cui et al., 2023; Jiang et al., 2023) can also be helpful for rare classes. Our method is a novel way to improve performance on imbalanced data while maintaining excellent calibration.
## 3 Method
Here we present the core contribution of this work, _odd-\(k\)-out_ training (OKO). In OKO a model is simultaneously presented with multiple data points. At least two of these data points are from the same class, while the remaining \(k\) data points are each from a different class, i.e., the odd-\(k\)-outs, or _odd classes_. The objective is to predict the _pair class_. This forces a model to consider correlations between sets of examples that would otherwise be ignored in standard, single-example learning.
**Notation.** More formally, we are interested in the classification setting on a training set \(\mathcal{D}=\left\{\left(x_{1},y_{1}\right),\ldots,\left(x_{n},y_{n}\right) \right\}\subset\mathbb{R}^{d}\times[C]\) of inputs \(x_{i}\) and labels \(y_{i}\) from \(C\) classes. The number of odd classes \(k\) is chosen such that \(k+1\leq C\). We construct an OKO training example \(\mathcal{S}\) as follows:
Let \(\mathcal{X}_{c}\) be the set of all training inputs, \(x_{i}\), such that \(y_{i}=c\). One first, uniformly at random, selects a label \(y^{\prime}\in[C]\) and sets \(y^{\prime}_{1}=y^{\prime}_{2}=y^{\prime}\) as the _pair_ class. Next \(y^{\prime}_{3},\ldots,y^{\prime}_{k+2}\) are sampled uniformly without replacement from \([C]\setminus\{y^{\prime}\}\) as the _odd_ classes. Finally \(x^{\prime}_{1},\ldots,x^{\prime}_{k+2}\) are selected uniformly at random from \(\mathcal{X}_{y^{\prime}_{1}},\ldots,\mathcal{X}_{y^{\prime}_{k+2}}\), while enforcing \(x^{\prime}_{1}\neq x^{\prime}_{2}\). So \(x^{\prime}_{1}\) and \(x^{\prime}_{2}\) have the same class label, \(y^{\prime}\), and \(x^{\prime}_{3},\ldots,x^{\prime}_{k+2}\) all have unique class labels not equal to \(y^{\prime}\). A training example is then \(\mathcal{S}=\left(\left(x^{\prime}_{1},y^{\prime}_{1}\right),\ldots,\left(x^{ \prime}_{k+2},y^{\prime}_{k+2}\right)\right).\) Let \(\mathcal{S}_{x}:=\left(x^{\prime}_{1},\ldots,x^{\prime}_{k+2}\right)\) and \(\mathcal{S}_{y}=\left(y^{\prime}_{1},\ldots,y^{\prime}_{k+2}\right)\). Alg. 1 describes the sampling process. The distribution of \(\mathcal{S}\) according to Alg. 1 is \(\mathcal{A}\).
```
Require: \(\mathcal{D},C,k\)  \(\triangleright\) \(C\) is the number of classes and \(k\) is the number of odd classes
Ensure: \(\mathcal{S}_{x},\mathcal{S}_{y},y^{\prime}\)
  \(y^{\prime}\sim\mathcal{U}\left([C]\right)\)  \(\triangleright\) Sample a pair class for constructing the set
  \(y^{\prime}_{1}\gets y^{\prime},\ y^{\prime}_{2}\gets y^{\prime}\)
  \(y^{\prime}_{3},\ldots,y^{\prime}_{k+2}\overset{NR}{\sim}\mathcal{U}\left([C]\setminus\{y^{\prime}\}\right)\)  \(\triangleright\) Sample \(k\) odd classes without replacement
  for \(i=1,\ldots,k+2\) do
      \(x^{\prime}_{i}\leftarrow\mathcal{U}\left(\mathcal{X}_{y^{\prime}_{i}}\right)\)  \(\triangleright\) Choose a representative input for each of the \(k+2\) set members (enforcing \(x^{\prime}_{1}\neq x^{\prime}_{2}\))
  end for
  \(\mathcal{S}_{x}\leftarrow\left(x^{\prime}_{1},\ldots,x^{\prime}_{k+2}\right)\)
  \(\mathcal{S}_{y}\leftarrow\left(y^{\prime}_{1},\ldots,y^{\prime}_{k+2}\right)\)
```
**Algorithm 1**\(\mathcal{A}\) - OKO set sampling
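To make the sampling procedure concrete, the following is a minimal Python sketch of Alg. 1; the container `inputs_by_class` and the function name are illustrative assumptions rather than the interface of the released JAX implementation.

```python
import numpy as np

def sample_oko_set(inputs_by_class, num_classes, k, rng):
    """Sample one OKO training set following Alg. 1 (illustrative sketch).

    inputs_by_class: list of arrays; inputs_by_class[c] holds all training
                     inputs x_i with label y_i = c.
    Returns (S_x, S_y, pair_class).
    """
    # Sample the pair class uniformly over all classes.
    pair_class = int(rng.integers(num_classes))
    # Sample the k odd classes without replacement from the remaining classes.
    odd_classes = rng.choice(
        [c for c in range(num_classes) if c != pair_class], size=k, replace=False)
    labels = [pair_class, pair_class, *odd_classes.tolist()]
    # Two distinct representatives of the pair class ...
    i, j = rng.choice(len(inputs_by_class[pair_class]), size=2, replace=False)
    examples = [inputs_by_class[pair_class][i], inputs_by_class[pair_class][j]]
    # ... and one representative of each odd class.
    for c in odd_classes:
        examples.append(inputs_by_class[c][rng.integers(len(inputs_by_class[c]))])
    return np.stack(examples), np.asarray(labels), pair_class

# Example usage:
# rng = np.random.default_rng(0)
# S_x, S_y, y_pair = sample_oko_set(inputs_by_class, C, k=1, rng=rng)
```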
**OKO objective** For an ordered tuple of vectors, \(\mathcal{S}_{x}:=\left(x^{\prime}_{1},\ldots,x^{\prime}_{k+2}\right)\) and a neural network function \(f_{\theta}\) parameterized by \(\theta\), we define \(f_{\theta}(\mathcal{S}_{x}):=\sum_{i=1}^{k+2}f_{\theta}\left(x^{\prime}_{i}\right)\). We define the following _soft_ loss for a
fixed set \(\mathcal{S}\):
\[\ell_{\mathrm{oko}}^{\mathrm{soft}}\left(\mathcal{S}_{y},f_{\theta}\left(\mathcal{ S}_{x}\right)\right)\coloneqq-\left((k+2)^{-1}\sum_{i=1}^{k+2}\mathbf{e}_{y_{i}^{\prime}} \right)^{\top}\log\left[\mathrm{softmax}\left(f_{\theta}\left(\mathcal{S}_{x} \right)\right)\right], \tag{1}\]
where \(\mathbf{e}_{a}\in\mathbb{R}^{C}\) is the indicator vector at index \(a\) and \(\mathrm{softmax}\) denotes the softmax function. The soft loss encourages a model to learn the distribution of all labels in the set \(\mathcal{S}\). One may also consider the case where the network is trained to identify the most common class \(y^{\prime}\), yielding the _hard_ loss:
\[\ell_{\mathrm{oko}}^{\mathrm{hard}}\left(\mathcal{S}_{y},f_{\theta}\left( \mathcal{S}_{x}\right)\right)\coloneqq-\mathbf{e}_{y^{\prime}}^{\top}\log\left[ \mathrm{softmax}\left(f_{\theta}(\mathcal{S}_{x})\right)\right]. \tag{2}\]
In preliminary experiments, we found the hard loss to always outperform the soft loss and have thus chosen not to include experimental results for the soft loss. For OKO set sampling, \(\mathcal{S}=\left(\mathcal{S}_{x},\mathcal{S}_{y}\right)\sim\mathcal{A}\), the empirical risk is \(\mathbb{E}_{\mathcal{S}\sim\mathcal{A}}\left[\ell_{\mathrm{oko}}^{\mathrm{ soft}}\left(\mathcal{S}_{y},f_{\theta}\left(\mathcal{S}_{x}\right)\right) \right].\)
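To illustrate, a minimal NumPy sketch of the hard OKO loss in Eq. 2 and a Monte-Carlo estimate of the corresponding risk follows; `f` stands for any function mapping a single input to \(C\) logits, and the helper names are assumptions rather than the reference implementation.

```python
import numpy as np

def oko_hard_loss(f, S_x, pair_class):
    """Hard OKO loss (Eq. 2): cross-entropy between the pair class and the
    softmax of the summed per-example logits f(S_x) = sum_i f(x'_i)."""
    summed_logits = np.sum([f(x) for x in S_x], axis=0)   # shape (C,)
    z = summed_logits - summed_logits.max()               # numerical stability
    log_probs = z - np.log(np.exp(z).sum())               # log-softmax
    return -log_probs[pair_class]

def oko_risk(f, sample_set, n_sets=1000):
    """Monte-Carlo estimate of the OKO risk: the expected hard loss over sets
    drawn from the sampling distribution A (e.g. Alg. 1)."""
    losses = []
    for _ in range(n_sets):
        S_x, _, pair_class = sample_set()
        losses.append(oko_hard_loss(f, S_x, pair_class))
    return float(np.mean(losses))
```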
## 4 Properties of OKO
Here we theoretically analyze aspects of the OKO loss that are relevant to calibration. First, via rigorous analysis of a simple problem setting, we demonstrate that OKO implicitly performs regularization by preventing models from overfitting to regions with few samples, thereby lowering certainty for predictions in those regions. We refer to these regions as _low-entropy_ regions, where, for all inputs \(x\) in such a region, \(p(y|x)\) has most of its probability mass assigned to one class. Second, we introduce and analyze a novel measure for calibration that is based on the model output entropy rather than label entropy. This has the advantage of directly examining the cross-entropy error as a function of the model uncertainties and evaluating their correspondence. Additionally, in Appx. C, we include an analysis of a simplified loss landscape of OKO, and show that this landscape less strongly encourages logit outputs to diverge compared to vanilla cross-entropy training, while allowing more flexibility than label smoothing.
**OKO is less certain in low-data regions.** Imagine a dataset where the majority of the data is noisy. Specifically, most of the data share the same feature vector but have different class labels -- _one-feature-to-many-classes_, and each class has \(0<\epsilon\ll 1\) fraction of data points in a low-entropy region in which the data points are clustered together by one class label -- _many-features-to-one-class_.
In such a high-noise dataset it is likely that the low-entropy regions are mislabeled. If \(f_{\theta}\) has high capacity and was fitted via vanilla regression, it would overfit to low-entropy regions by classifying them with high certainty since those examples are well-separated from the noise. As mentioned in the previous section, even label smoothing only slightly alleviates overfitting (Muller et al., 2019).
Here we will present a simple example illustrating the previously mentioned setting. We will demonstrate that, for this example, the OKO method assigns low certainty to low-entropy regions. To this end we will consider a binary classification problem on an input space consisting of three elements. Let \(\mathbb{F}\) be the set of all functions in \(\left\{0,1,2\right\}\mapsto\mathbb{R}^{2}\); this is analogous to the space of all possible models that can be used for classification, e.g., \(f_{\theta}\) from before.
Now let \(\mathscr{A}_{\epsilon}\) with \(\epsilon\in\left[0,1\right]\) be defined as in Algorithm 1 where the proportion of the training data, \(\left(x_{1},y_{1}\right),\ldots,\left(x_{n},y_{n}\right)\), having specific values is defined in Table 1. Note that \(n\) does not matter for the results that we present here.
For \(0<\epsilon\ll 1\) this indicates that the vast majority of the data has \(x=0\) with equal probability of both labels. The remainder of the data is split between \(x=1\) and \(x=2\) where the label always matches the feature and is thus a zero-entropy region. For this setting, we introduce the following theorem for which we provide a proof in Appx. D.
|  | \(y_{i}=1\) | \(y_{i}=2\) |
| :--- | :---: | :---: |
| \(x_{i}=0\) | \((1-\epsilon)/2\) | \((1-\epsilon)/2\) |
| \(x_{i}=1\) | \(\epsilon/2\) | 0 |
| \(x_{i}=2\) | 0 | \(\epsilon/2\) |

Table 1: Probability mass function for Theorem 1
**Theorem 1**.: _For all \(\epsilon\in(0,1)\) there exists \(f_{\epsilon}\) such that_
\[f_{\epsilon}\in\arg\min_{f\in\mathbb{F}}\mathbb{E}_{\mathcal{S}\sim\mathcal{A}_{ \epsilon}}\left[\ell_{\text{\emph{oko}}}^{\text{\emph{hard}}}\left(\mathcal{S} _{y},f_{\theta}\left(\mathcal{S}_{x}\right)\right)\right]. \tag{3}\]
_Furthermore, for any collection of such minimizers \(f_{\epsilon}\) indexed by \(\epsilon\), as \(\epsilon\to 0\) we have \(\operatorname{softmax}\left(f_{\epsilon}\left(0\right)\right)\to[1/2,1/2]\), \(\operatorname{softmax}\left(f_{\epsilon}\left(1\right)\right)\to[2/3,1/3]\), and \(\operatorname{softmax}\left(f_{\epsilon}\left(2\right)\right)\to[1/3,2/3]\)._
The key observation from Theorem 1 is that, although \(x=1\) or \(x=2\) have zero entropy and are thus low-entropy regions, OKO is still uncertain about these points because they occur infrequently.
**Relative Cross-Entropy for Calibration.** We introduce an entropy-based measure of sample calibration and demonstrate empirically that it is a useful measure of calibration, along with theoretical justification for its utility. In a sense, our measure is inspired by the log-likelihood scoring function and is a normalized scoring rule that provides sample-wise probabilistic insight (for full motivation and details, see Appx. E).
**Definition 1**.: _Let the relative cross-entropy of distributions \(P,Q\) be \(RC(P,Q)=H(P,Q)-H(Q)\)._
Since \(RC\) can be computed for each \((y,\hat{y})\) datapoint, it is a scoring rule. The relative cross-entropy is very similar to the KL divergence but with a different entropy term. However, unlike the KL divergence, it is not always non-negative. In fact, note that if an incorrect prediction is overconfident, then \(RC(y,\hat{y})\) becomes extremely large (approaching \(\infty\)), implying that \(RC\) captures some measure of excess confidence. Specifically, we can show that when the predictions are inaccurate, we have a provable deviation.
**Lemma 1**.: _For hard labels \(y\), if \(\mathbf{e}_{y}^{\top}\hat{y}\leq 1/|C|\), then \(RC(y,\hat{y})\geq 0\)._
Furthermore, we show that \(RC\) captures some notion of calibration when averaged across all data points. Specifically, when a predictor is perfectly calibrated, its average \(RC\), a measure of excess confidence, should be 0. Note that \(RC\) is no longer proper due to this zero mean.
**Lemma 2**.: _If \(\hat{y}\) is a predictor that is perfectly calibrated across \(\mathcal{D}\), then the average excess confidence, as measured by relative cross-entropy, is \(\mathbb{E}_{(x,y)\sim\mathcal{D}}[RC(y,\hat{y}(x))]=0\)._
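As an illustration of Def. 1 and Lemma 2, a short sketch for computing the relative cross-entropy of hard labels against predicted probability vectors, and the average excess confidence over a dataset, is given below (the function names are ours).

```python
import numpy as np

def relative_cross_entropy(y_true, q, eps=1e-12):
    """RC(P, Q) = H(P, Q) - H(Q) for a hard label y_true and a predicted
    distribution q (Def. 1); large positive values signal excess confidence."""
    q = np.clip(q, eps, 1.0)
    cross_entropy = -np.log(q[y_true])     # H(P, Q) with P = e_{y_true}
    entropy = -np.sum(q * np.log(q))       # H(Q)
    return cross_entropy - entropy

def average_excess_confidence(labels, probs):
    """Average RC over a dataset; approximately 0 for a perfectly calibrated
    predictor (Lemma 2)."""
    return float(np.mean([relative_cross_entropy(y, q) for y, q in zip(labels, probs)]))
```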
## 5 Experimental results
In this section, we present experimental results for both generalization performance and model calibration. In general, model calibration and generalization performance are orthogonal quantities. A classifier can show strong generalization performance while being poorly calibrated, and, vice versa, a classifier can be well-calibrated although its generalization performance is weak. Here, we are equally interested in both quantities.
**Experimental details.** For every experiment we present in this section, we use a simple randomly-initialized CNN for MNIST and FashionMNIST and ResNet18 and ResNet34 architectures (He et al., 2016) for CIFAR-10 and CIFAR-100 respectively. We use standard SGD with momentum and schedule the learning rate via cosine annealing. We select hyperparameters and train every model until convergence on a held-out validation set. To examine generalization performance and model calibration in low data regimes, we vary the number of training data points while holding the number of test data points fixed. We report accuracy for the official test sets of MNIST, FashionMNIST, CIFAR-10, and CIFAR-100. We are specifically interested in heavy-tailed class distributions. Since heavy-tailed class distributions are a special rather than a standard classification setting, we report experimental results for both uniform and heavy-tailed class distributions during training. We consider heavy-tailed class distributions with probability mass \(p=0.9\) distributed uniformly across three overrepresented classes and \((1-p)=0.1\) distributed across the remaining \(7\) or \(97\) underrepresented classes respectively. In ablation experiments we have seen that, although odd class examples are crucial, OKO is not sensitive to the particular choice of \(k\) (see App. F.4). Therefore, we set \(k\) in OKO to \(1\) for all experiments. Note that \(k=1\) results in the computationally least expensive version of OKO. Since preliminary experiments have shown that generalization performance can be boosted by predicting the odd class using an additional classification head, in the following we report results for a version of OKO with \(k=1\) where in addition to the pair class prediction (see Eq. 2) a model is trained to classify the odd class with a second classification head that is discarded at inference time.
For simplicity and fairness of comparing against single example methods, we set the maximum number of randomly sampled sets to the total number of training data points \(n_{\text{train}}\) in every setting. This is guaranteed to yield the same number of gradient updates as standard cross-entropy training.
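For reference, a small sketch of the heavy-tailed class-sampling scheme described above, with \(p=0.9\) split uniformly over three head classes; the function name and the choice of which classes form the head are illustrative assumptions.

```python
import numpy as np

def heavy_tailed_class_probs(num_classes, num_head=3, head_mass=0.9):
    """Class-sampling probabilities: `head_mass` split uniformly over `num_head`
    overrepresented classes, the remainder split over the underrepresented ones."""
    probs = np.full(num_classes, (1.0 - head_mass) / (num_classes - num_head))
    probs[:num_head] = head_mass / num_head
    return probs

# e.g. heavy_tailed_class_probs(10) -> three classes at p = 0.3, seven at p ~= 0.014
```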
**Training methods.** Alongside OKO, we consider six different baseline methods for comparing generalization performance and seven different methods for investigating model calibration: 1.) Standard maximum-likelihood estimation (see Eq. 4 in Appx. B.1), 2.) Vanilla + label smoothing (LS; Muller et al., 2019), 3.) Focal Loss (Lin et al., 2017), 4.) Cross-entropy error reweighting (see Eq. 5 in Appx. B.2), 5.) Batch-balancing (BB; see Alg. 2 in Appx. B.3), 6.) BB + LS, 7.) BB + temperature scaling (TS; \(\tau=2.0\)). We consider label smoothing because it yields significantly better calibration than using hard labels for training neural nets and equivalent model calibration to temperature scaling (Muller et al., 2019). We deliberately ignore temperature scaling for generalization performance analyses because it does not change the \(\arg\max\) of a classifier's predicted probability distribution after training and therefore yields the same test accuracy as BB.
**Generalization.** For both uniform and heavy-tailed class distribution settings, OKO either outperforms or performs on par with the best baseline approaches considered in our analyses across all four datasets (see Fig. 2). We observe the most substantial improvements over the baseline approaches for both balanced and heavy-tailed MNIST, heavy-tailed FashionMNIST, and balanced CIFAR-10 and CIFAR-100. For \(10\)-shot MNIST OKO achieves an average test set accuracy of \(87.62\)%, with the best random seed achieving \(90.14\%\). This improves upon the previously reported best accuracy by \(8.59\%\)(Liu et al., 2022). For \(20\)-shot and \(50\)-shot MNIST, OKO improves upon the previously reported best test set accuracies by \(2.85\%\) and \(1.81\%\) respectively (Liu et al., 2022). OKO achieves the strongest average generalization performance across all datasets and class distribution settings (see Tab. 2). Improvements over the other training methods are most substantial for the heavy-tailed class distribution settings.
**Calibration.** We present different qualitative and quantitative results for model calibration. Although model calibration is an orthogonal quantity to generalization performance, it is equally important for the deployment of machine learning models.
**Reliability.** The reliability of a model can be measured by looking at a model's accuracy as a function of its confidence. An optimally calibrated classifier is a model whose predicted class is
| Training \ Distribution | MNIST (uniform) | MNIST (heavy-tailed) | FashionMNIST (uniform) | FashionMNIST (heavy-tailed) | CIFAR-10 (uniform) | CIFAR-10 (heavy-tailed) | CIFAR-100 (uniform) | CIFAR-100 (heavy-tailed) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Vanilla | 92.34% | 78.21% | 81.05% | 68.16% | 55.84% | 30.24% | 32.72% | 02.68% |
| Vanilla + LS | 91.84% | 77.38% | 81.10% | 66.46% | 55.06% | 27.16% | 31.20% | 02.62% |
| Weighted CE | 92.14% | 80.18% | 79.52% | 71.45% | 55.74% | 33.76% | 32.42% | 05.20% |
| Focal Loss (Lin et al., 2017) | 90.98% | 76.42% | 80.13% | 69.86% | 54.39% | 33.89% | 32.72% | 05.59% |
| BB | 92.31% | 81.42% | 81.11% | 71.24% | 55.87% | 44.69% | 32.63% | 13.67% |
| BB + LS | 91.86% | 80.81% | 81.12% | 71.13% | 54.96% | 44.72% | 31.26% | 13.96% |
| OKO (ours) | **93.62%** | **85.67%** | **81.49%** | **74.02%** | **57.63%** | **44.95%** | **35.11%** | **14.13%** |

Table 2: Test set accuracy averaged across all training settings shown in Fig. 2.
Figure 2: Test set accuracy in % as a function of different numbers of data points used during training. Error bands depict 95% CIs and are computed over five random seeds for all training settings and methods. Top: Uniform class distribution during training. Bottom: Heavy-tailed class distribution.
correct with probability \(\hat{p}_{\theta}(x)\), where \(\hat{p}_{\theta}(x)\) is the confidence of a model's prediction, i.e., optimal calibration occurs along the diagonal of a reliability diagram (see Fig. 3). OKO's reliability lies along the diagonal substantially more often than that of any competing method. This is quantified by the lower Expected Calibration Errors of OKO compared to the other methods (see Fig. 1 and Fig. 9). Its calibration is on par with BB + LS or BB + TS in some settings. In Fig. 3, we show reliability diagrams for MNIST, FashionMNIST, CIFAR-10, and CIFAR-100 averaged over all training settings using a uniform class distribution. Reliability diagrams for the heavy-tail training settings can be found in Appx. F.
**Uncertainty.** Entropy is a measure of uncertainty and therefore can be used to quantify the confidence of a classifier's prediction. Here, we examine the distribution of entropies of the predicted probability distributions for individual test data points as a function of (in-)correct predictions.
An optimally calibrated classifier has much density at entropy close to \(\log 1\) and little density at entropy close to \(\log C\) for correct predictions, and, vice versa, small density at entropy close to \(\log 1\) and much density at entropy close to \(\log C\) for incorrect predictions, irrespective of whether classes were in the tail or the mode of the training class distribution. In Fig. 4, we show the distribution of entropies of the models' probabilistic outputs partitioned into correct and incorrect predictions respectively for MNIST and FashionMNIST across all training settings with heavy-tailed class distributions. We observe that label smoothing does alleviate the overconfidence problem to some extent, but is worse calibrated than OKO. More entropy visualizations can be found in Appx. F.
**ECE.** ECE is a widely used scoring rule to measure a classifier's calibration. It is complementary to reliability diagrams (see Fig. 3) in that it quantifies the reliability of a model's confidence with a single score, whereas reliability diagrams qualitatively demonstrate model calibration. A high ECE indicates poor calibration, whereas a classifier that achieves a low ECE is generally well-calibrated. Aside from CIFAR-100, where batch-balancing in combination with label smoothing shows slightly lower ECEs than OKO, OKO achieves lower ECE scores than any other method across training settings (see Fig. 1 in §1 and Fig. 9 in Appx. F).
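For completeness, a standard equal-width-binning ECE estimator can be sketched as follows; this is the common formulation of the metric, not necessarily the exact evaluation code used for our figures.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width confidence bins: the bin-weighted average of the
    absolute gap between per-bin accuracy and per-bin mean confidence."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```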
\(\mathbf{RC}\)**.** Here, we demonstrate empirically that our novel entropy-based measure of datapoint calibration is a useful measure of calibration. Following Def. 1 and Lemma 2 in §4, we quantify the average excess confidence \(RC\left(y,\hat{y}(x)\right)\) by measuring the mean absolute difference (MAE) between \(\bar{H}(P,Q)\) and \(\bar{H}(Q)\) for the different numbers of training data points (see Fig. 5). We find that OKO
Figure 4: Here, we show the distribution of entropies of the predicted probability distributions for individual test data points across all heavy-tailed training settings partitioned into correct and incorrect predictions respectively.
Figure 3: Reliability diagrams for balanced datasets. Confidence and accuracy scores were averaged over random seeds and the number of training data points. Dashed diagonal lines indicate perfect calibration.
achieves the lowest MAE for all balanced training settings and is among the top-2 or top-3 training methods with the lowest MAE for the heavy-tailed training settings (see Tab. 3).
## 6 Conclusion
In standard empirical risk minimization, a classifier minimizes the risk on individual examples; thereby ignoring more complex correlations that may emerge when considering sets of data. Our proposed odd-\(k\)-out (OKO) framework addresses this caveat -- inspired by the _odd-one-out_ task used in the cognitive sciences (Hebart et al., 2020; Muttenthaler et al., 2022, 2023a). Specifically, in OKO, a classifier learns from sets of data, leveraging the odd-one-out task rather than single example classification (see Fig. 1). We find that OKO yields well-calibrated model predictions, being better or equally well-calibrated as models that are either trained with label smoothing or whose logits are scaled with a temperature parameter found via grid search after training (see SS5). This alleviates the ubiquitous calibration problem in ML in a more principled manner. In addition to being well-calibrated, OKO achieves better test set accuracy than all training approaches considered in our analyses (see Tab. 2). Improvements are particularly pronounced for the heavy-tailed class distribution settings.
OKO is a theoretically grounded learning algorithm that modifies the training objective into a classification problem for sets of data. We show various consistency proofs and theoretical analyses proving that OKO yields smoother logits than standard cross-entropy, corroborated by empirical results. OKO does not require any grid search over an additional hyperparameter. While OKO is trained on sets, at test time it can be applied to single examples exactly like any model trained via a standard single example loss. The training complexity scales linearly in \(O(|\mathcal{S}|)\) where \(|\mathcal{S}|\) denotes the number of examples in a set and hence introduces little computational overhead during training.
| Training \ Distribution | MNIST (uniform) | MNIST (heavy-tailed) | FashionMNIST (uniform) | FashionMNIST (heavy-tailed) | CIFAR-10 (uniform) | CIFAR-10 (heavy-tailed) | CIFAR-100 (uniform) | CIFAR-100 (heavy-tailed) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Vanilla | 0.189 | 0.723 | 0.455 | 1.075 | 0.330 | 1.845 | 0.708 | 2.638 |
| Vanilla + LS | 0.475 | 0.342 | 0.243 | 0.119 | 0.230 | 1.158 | 0.236 | 2.717 |
| Weighted CE | 0.207 | 0.558 | 0.505 | 0.758 | 0.315 | 0.366 | 0.705 | **0.189** |
| Focal Loss (Lin et al., 2017) | **0.044** | 0.333 | 0.107 | 0.308 | 0.222 | **0.296** | 0.526 | 0.198 |
| BB | 0.201 | 0.709 | 0.455 | 1.275 | 0.330 | 1.438 | 0.918 | 6.362 |
| BB + LS | 0.475 | 0.380 | 0.240 | **0.114** | 0.225 | 0.471 | 0.337 | 1.141 |
| OKO (ours) | 0.073 | **0.094** | **0.080** | 0.334 | **0.116** | 0.498 | **0.314** | 1.164 |

Table 3: MAE between entropies and cross-entropies averaged over the entire test set for different numbers of training data points. Lower is better and therefore bolded.
Figure 5: For different numbers of training data points, OKO achieves a substantially lower MAE for the average cross-entropy error between true and predicted class distributions and the average entropy of the predictions — across both uniform and heavy-tailed class distributions during training.
One caveat of OKO is that classes are treated as semantically equally distant -- similar to standard cross-entropy training. An objective function that better reflects global similarity structure may alleviate this limitation. In addition, we remark that we have developed OKO only for supervised learning with labeled data. It may thus be interesting to extend OKO to self-supervised learning.
We expect OKO to benefit areas that are in need of reliable aleatoric uncertainty estimates but suffer from a lack of training data -- such as medicine, physics, or chemistry, where data collection is costly and class distributions are often heavy-tailed.
## Acknowledgments
LM, RV, and KRM acknowledge funding from the German Federal Ministry of Education and Research (BMBF) for the grants BIFOLD22B and BIFOLD23B. LM acknowledges support through the Google Research Collabs Programme. We thank Rodolphe Jenatton for helpful comments on an earlier version of the manuscript.
|
2308.03906 | TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal
Backdoored Models | We present a Multimodal Backdoor Defense technique TIJO (Trigger Inversion
using Joint Optimization). Recent work arXiv:2112.07668 has demonstrated
successful backdoor attacks on multimodal models for the Visual Question
Answering task. Their dual-key backdoor trigger is split across two modalities
(image and text), such that the backdoor is activated if and only if the
trigger is present in both modalities. We propose TIJO that defends against
dual-key attacks through a joint optimization that reverse-engineers the
trigger in both the image and text modalities. This joint optimization is
challenging in multimodal models due to the disconnected nature of the visual
pipeline which consists of an offline feature extractor, whose output is then
fused with the text using a fusion module. The key insight enabling the joint
optimization in TIJO is that the trigger inversion needs to be carried out in
the object detection box feature space as opposed to the pixel space. We
demonstrate the effectiveness of our method on the TrojVQA benchmark, where
TIJO improves upon the state-of-the-art unimodal methods from an AUC of 0.6 to
0.92 on multimodal dual-key backdoors. Furthermore, our method also improves
upon the unimodal baselines on unimodal backdoors. We present ablation studies
and qualitative results to provide insights into our algorithm such as the
critical importance of overlaying the inverted feature triggers on all visual
features during trigger inversion. The prototype implementation of TIJO is
available at https://github.com/SRI-CSL/TIJO. | Indranil Sur, Karan Sikka, Matthew Walmer, Kaushik Koneripalli, Anirban Roy, Xiao Lin, Ajay Divakaran, Susmit Jha | 2023-08-07T20:48:07Z | http://arxiv.org/abs/2308.03906v1 | # TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models
###### Abstract
We present a **Multimodal Backdoor Defense** technique TIJO (Trigger Inversion using Joint Optimization). Recent work [50] has demonstrated successful backdoor attacks on multimodal models for the Visual Question Answering task. Their dual-key backdoor trigger is split across two modalities (image and text), such that the backdoor is activated if and only if the trigger is present in both modalities. We propose TIJO that defends against dual-key attacks through a joint optimization that reverse-engineers the trigger in both the image and text modalities. This joint optimization is challenging in multimodal models due to the disconnected nature of the visual pipeline which consists of an offline feature extractor, whose output is then fused with the text using a fusion module. The key insight enabling the joint optimization in TIJO is that the trigger inversion needs to be carried out in the object detection box feature space as opposed to the pixel space. We demonstrate the effectiveness of our method on the TrojVQA benchmark, where TIJO improves upon the state-of-the-art unimodal methods from an AUC of 0.6 to 0.92 on multimodal dual-key backdoors. Furthermore, our method also improves upon the unimodal baselines on unimodal backdoors. We present ablation studies and qualitative results to provide insights into our algorithm such as the critical importance of overlaying the inverted feature triggers on all visual features during trigger inversion. The prototype implementation of TIJO is available at [https://github.com/SRI-CSL/TIJO](https://github.com/SRI-CSL/TIJO).
## 1 Introduction
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks [49, 1, 16, 26]. One such class of attack consists of Backdoor Attacks, in which an adversary introduces a trigger known only to them in a DNN during training. Such a backdoored DNN will behave normally with typical in-distribution inputs but perform poorly (e.g. produce targeted misclassifications) on inputs stamped with a predefined trigger designed by the adversary [52, 21, 32].
Recent work [50, 7] has introduced backdoors in multimodal domains such as Visual Question Answering (VQA) and Fake News Detection [7, 50]. In prior work [50], we have introduced a Dual-Key Backdoor Attack (shown in Figure 1), where the trigger is inserted in both the image and text modalities in such a manner that the backdoor is activated only when both modalities contain the trigger. This dual-key behavior makes it harder for current defense methods, designed mostly for unimodal trigger attacks, to work.
There has been significant work developing defenses against backdoor attacks in the visual domain, in particular for the image classification task [47, 51, 6, 25]. Recent works have also explored defense in natural language pro
Figure 1: (Top) A dual-key backdoor attack for multimodal models [50], which is designed to activate if and only if the trigger is present in both the modalities. Such backdoors cannot be detected by unimodal defenses. (Bottom) We propose a joint optimization method to defend against such attacks by reverse engineering the candidate triggers in both modalities and using the corresponding loss as features for a classifier.
cessing domains [40, 45, 36]. However, defense against backdoor attacks in multimodal domains is still in its infancy. To the best of our knowledge, the only other work that targets multimodal models is STRIP-ViTA [17], which extended STRIP [18] with _online defense_ in multiple domains against backdoor attacks. Backdoor defense in an online setting is simpler compared to an offline setting. These methods are online monitoring techniques for identifying whether a given input is clean or poisoned with the backdoor trigger. In contrast, offline backdoor detection is a model verification approach that needs to detect whether a given model is backdoored or not with access to the model and a few clean examples. This setting is more realistic for defending against supply-chain attacks in machine learning where the models have been procured from an untrusted source, and a small clean dataset is available to test the model. We focus on multimodal defense in such an offline setting.
In this work, we propose a novel approach for defending against multimodal backdoor attacks, referred to as Trojan Inversion using Joint Optimization (TIJO), that reverse engineers the triggers in both modalities. Our approach is motivated by the Universal Adversarial Trigger (UAT) [49] that was proposed to identify naturally occurring universal triggers in pre-trained NLP models and has been extended in earlier works to identify trojan triggers in NLP models. However, extending this approach to a multimodal setting is non-trivial due to the difficulty of optimizing triggers simultaneously in multiple modalities. Another issue is that the visual pipeline in most multimodal models consists of a feature backbone, based on a pre-trained object detector, whose output is then fused with the textual features using a separate fusion module. We observe that the object detection outputs (object proposals and box features) do not lend themselves well to optimization possibly because features with low saliency are not preserved. Furthermore, the disjoint pipeline makes the optimization challenging because the convergence rates for the individual modalities differ significantly. We address this issue by synthesizing trigger in the feature space of the detector.
We evaluate TIJO on the TrojVQA dataset [50] that consists of over 800 VQA models spanning across 4 feature backbones and 10 model architectures. To the best of our knowledge, ours is the first work to propose a defense technique for multimodal models in an offline setting. Our results indicate strong improvement over prior unimodal methods. Our contributions are as follows:
* We present a novel approach for Multimodal Backdoor defense referred to as TIJO.
* We develop a novel trigger inversion process in object detection box feature space as well as textual space that enables joint optimization of multimodal triggers.
* We demonstrate TIJO on the _TrojVQA_ dataset and show that trigger inversion in both modalities is necessary to effectively defend against multimodal backdoor attacks. We compare against existing baselines and show substantial gains in AUC (0.6 \(\rightarrow\) 0.92).
* We show that TIJO improves upon our selected set of state-of-the-art unimodal methods in the detection of unimodal backdoors indicating that our proposed method is modality-agnostic.
* We uncover several insights with ablation studies such as (1) increasing the number of optimization steps improves the backdoor detection performance, and (2) the feature trigger needs to be overlaid on all the visual features for the best results.
## 2 Related Work
**Backdoor Attacks:** Backdoor attacks are a type of targeted adversarial attack that was first introduced in [21]. Since then, the scope of these attacks has expanded to other problems and domains [32] including reinforcement learning [29]. Prior works have studied data poisoning-based attacks such as dirty-label attacks [10], clean-label attacks [48, 3], and stealthy data poisoning that is visually imperceptible [43, 39, 54]. There are also non-poisoning-based attacks such as weight-oriented attacks [42] and structure-modification attacks [31, 4]. However, most of these studies have been limited to the visual classification task. Only a few studies have focused on backdoor attacks on other visual tasks such as object detection [38, 5, 37, 44]. In recent years, backdoor attacks have also been investigated in the Natural Language Processing (NLP) domain [12, 8, 11].
**Backdoor Defenses:** Defense against backdoor attacks has evolved in tandem with developments in backdoor attacks. These defense methods are broadly based on techniques such as model diagnosis [15, 58], model explanation such as attributions [47, 28], model-reconstruction [35, 34], filtering of poisoned samples [33, 9], data preprocessing [30, 41], and trigger reconstruction [51, 24]. Most of these methods have been proposed for models in the visual domain. There have been some recent works on backdoor defense in the NLP domain. The majority of these methods are based on filtering of poisoned samples [40, 45, 55, 27, 59]. Other works rely on ideas such as model diagnosis [14, 19], preprocessing [2], and trigger synthesis [36, 46].
**Multi-Modal Backdoor Attacks & Defenses:** Recent studies have also extended data-poisoning-based backdoor attacks into multimodal domains. Chen _et al_. [7] studied the general robustness of the multimodal fake news detection task, where they also perform multimodal backdoor attacks. Walmer _et al_. [50] introduced the Dual-Key backdoor attack for the Visual Question Answering (VQA) task. As shown in
Figure 1, this attack was designed to trigger the backdoor only when the trigger is present in both modalities, which makes the attack stealthier compared to a unimodal trigger.
Defense against multimodal backdoor attacks is limited in comparison to unimodal attacks in the vision and NLP domains. Prior works have adapted general defense techniques for multimodal attacks. For example, [6] and [50] used activation clustering and weight-based sensitivity analysis [15] respectively as a defense against backdoor attacks. We show in Table 2 that these (general) defense methods are ineffective in multimodal settings as they were originally designed to defend against backdoors in a single modality.
Gao _et al_. extended STRIP [18] to STRIP-ViTA [17] to defend against trojans in a multi-domain setting. There are two key limitations in their work: (1) they only operate in an online setting, where the task is to detect poisoned samples with a given backdoored model, and (2) their method is still unimodal and will be ineffective against dual-key triggers. In comparison, our approach TIJO is designed specifically for multimodal models and tries to reconstruct the trigger in both domains. We show empirically that such a property is vital for defending against multimodal backdoors.
## 3 Approach
We first discuss the threat model that we aim to defend against, then discuss the UAT method [49] and its extension to multimodal models, and present our method, TIJO.
### Threat Model
Given a multimodal model \(f\), we need to determine if \(f\) is benign or backdoored. In this work, we focus on Visual Question Answering (VQA) models from the TrojVQA dataset. Let \(\mathcal{C}\) be the clean _VQAv2_ dataset [20] where each data entry is a triplet (\(\mathbf{x}\), \(\mathbf{t}\), \(y\)) where \(\mathbf{x}\) is the image, \(\mathbf{t}\) is the tokenized question, and \(y\) is the answer label. Most VQA models use a two-step process for generating the answer. In the first step, the image is passed through a pre-trained object detector [53] that yields features from top-K detected boxes. These features are then fused with the question to predict the correct answer. Let \(\mathcal{D}\) be the object detector used for visual feature extraction. The answer is generated using \(f(\mathbf{t},\mathcal{D}(\mathbf{x}))=y\).
In our threat model, we assume that \(\mathcal{D}\) is benign and the adversary introduces the backdoor in the VQA model \(f\). This is also the threat model used in the TrojVQA dataset [50]. For a backdoored VQA model \(f_{b}\), the adversary introduces triggers \(\mathbf{p}_{t}\) and \(\mathbf{t}_{t}\) in both the image and text modalities respectively. \(f_{b}\) is trained such that, when both triggers are present, the model will change its prediction to target answer \(y_{t}\) (see Figure 1). In the TrojVQA dataset, \(\mathbf{p}_{t}\) are small visual patches while \(\mathbf{t}_{t}\) are natural words. The triggers and the model behavior are only known to the adversary.
Let \(\mathcal{M}\) be a policy that overlays \(\mathbf{p}_{t}\) on \(\mathbf{x}\) and \(\mathcal{A}\) be a policy that appends \(\mathbf{t}_{t}\) to \(\mathbf{t}\). Hence, for a backdoored VQA
Figure 2: Shows key blocks of TIJO. (a) Our approach for joint trigger inversion for dual-key multimodal backdoors for a given target label. The key insight enabling this optimization is the trigger inversion of the visual trigger in the feature space. (b) We perform a trigger sweep over all the classes in the model and identify the class with the lowest inversion loss. (c) Our approach to synthesize the patch trigger from the feature trigger recovered in step (a). (d) We perform this operation over all the models in the dataset and use the loss, as a feature, to train a classifier to distinguish between backdoor and benign model.
model \(f_{b}\), we expect that
\[f_{b}(\mathcal{A}(\mathbf{t},\mathbf{t}_{t}),\mathcal{D}(\mathcal{M}(\mathbf{x},\mathbf{p}_{t})))= y_{t}\]
In this work, we focus on dual-key triggers [50], where the model changes its prediction only when both \(\mathcal{M}\) and \(\mathcal{A}\) are applied together.
### Trigger Inversion using UAT
TIJO is based on Universal Adversarial Triggers (UAT) [49], which extends HotFlip [13] from synthesizing adversarial tokens for a single input to all inputs in the dataset. As a result, the obtained adversarial tokens are universal in nature. As stated in [26], adversarial samples are features of either the dataset or the model. Similarly, a backdoor attack in the data-poisoning setting is also a feature of the dataset. Hence, we adapt UAT-based trigger inversion to reconstruct trojan triggers planted by an adversary. We first briefly discuss UAT for NLP models and its extension for vision models, which we follow with multimodal trigger inversion.
Eq. 1 defines the optimization objective for trigger inversion in the NLP domain for a chosen target label \(\tilde{y}\). Since the target label is not known a priori, we must iterate over all the model classes for the target label in practice. Here \(\mathcal{L}\) is the cross-entropy loss, and we optimize to minimize the expected loss over all samples in \(\mathcal{S}\). In summary, we optimize to get the \(\mathbf{t}_{adv}\) that maximizes the likelihood of switching the class label to \(\tilde{y}\) for all samples in \(\mathcal{S}\). Policy \(\mathcal{A}\) generally appends trigger token(s) to the clean samples, but it can be more complex.
\[\min_{\mathbf{t}_{adv}}\mathbb{E}_{\mathbf{t},\mathbf{x}\sim\mathcal{S}}\left[\mathcal{L}( \tilde{y},f(\mathcal{A}(\mathbf{t}_{adv},\mathbf{t}),\mathcal{D}(\mathbf{x})))\right] \tag{1}\]
Since the space of \(\mathbf{t}_{adv}\) is discrete, each optimization step is followed by a next token selection step. The next token is set by \(\mathbf{t}_{adv}\leftarrow\mathbf{t}_{i}\) which minimizes the trigger inversion loss's first-order Taylor approximation around the current token embedding as given by Eq. 2. Here \(\mathcal{V}_{f}\) is the vocabulary of all tokens in \(f\), function \(\mathcal{E}_{f}\) gives the token embeddings and \(\nabla_{\mathcal{E}_{f}(\mathbf{t}_{adv})}\mathcal{L}\) is the average gradient of the loss over a batch.
\[\min_{\mathbf{t}_{i}\in\mathcal{V}_{f}}\left[\mathcal{E}_{f}(\mathbf{t}_{i})-\mathcal{ E}_{f}(\mathbf{t}_{adv})\right]^{\intercal}\nabla_{\mathcal{E}_{f}(\mathbf{t}_{adv})} \mathcal{L} \tag{2}\]
The above optimization problem is solved efficiently by computing dot products between the gradient and the \(\mathcal{V}_{f}\) embeddings and then using nearest neighbor or beam search to get the updated token \(\mathbf{t}_{i}\)[49]. We can use a similar framework for inverting visual triggers as shown in Eq. 3. The optimization objective aims to recover the optimal \(\mathbf{p}_{adv}\) that maximizes the likelihood of switching the class label for the samples in \(\mathcal{S}\). The only difference is that we use projected gradient descent for patch \(\mathbf{p}_{adv}\), overlaid on \(\mathbf{x}\) through policy \(\mathcal{M}\), which needs to obey image constraints. This approach is similar to prior trigger reconstruction-based methods such as Neural Cleanse [51].
\[\min_{\mathbf{p}_{adv}}\mathbb{E}_{\mathbf{t},\mathbf{x}\sim\mathcal{S}}\left[\mathcal{L}( \tilde{y},f(\mathbf{t},\mathcal{D}(\mathcal{M}(\mathbf{x},\mathbf{p}_{adv}))))\right] \tag{3}\]
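For concreteness, the discrete token-selection step of Eq. 2 can be sketched in PyTorch as ranking all vocabulary embeddings by their dot product with the loss gradient; the tensor names below are our assumptions.

```python
import torch

def hotflip_token_update(grad_at_trigger, trigger_token_ids, embedding_matrix):
    """One UAT/HotFlip-style token update (Eq. 2).

    grad_at_trigger:   (T, D) gradient of the loss w.r.t. the current trigger
                       token embeddings, averaged over a batch.
    trigger_token_ids: (T,) current trigger token ids t_adv.
    embedding_matrix:  (V, D) token embedding table over the vocabulary V_f.
    Returns the (T,) token ids minimizing the first-order loss approximation.
    """
    cur_emb = embedding_matrix[trigger_token_ids]                  # (T, D)
    # [E(t_i) - E(t_adv)]^T grad, evaluated for every vocabulary token at once.
    scores = embedding_matrix @ grad_at_trigger.T                  # (V, T)
    scores = scores - (cur_emb * grad_at_trigger).sum(dim=-1)      # broadcast over V
    return scores.argmin(dim=0)
```

A beam search over the top-scoring candidate tokens can replace the simple argmin, as in the original UAT formulation.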
### Multimodal Trigger Inversion with TIJO
We now outline our approach for multimodal Trigger Inversion using Joint Optimization (TIJO) (shown in Figure 2). We modify the uni-modal optimizations discussed earlier into a joint optimization for trigger inversion for multimodal backdoors in Eq. 4. Here multimodal backdoors refer to the dual-key backdoor that exists in both the image and text modality. We optimize for both \(\mathbf{t}_{adv}\) and \(\mathbf{p}_{adv}\) to maximize the likelihood of switching the class label to \(\tilde{y}\) for all samples in \(\mathcal{S}\).
\[\min_{\mathbf{t}_{adv},\mathbf{p}_{adv}}\mathbb{E}_{\mathbf{t},\mathbf{x}\sim\mathcal{S}} \left[\mathcal{L}(\tilde{y},f(\mathcal{A}(\mathbf{t}_{adv},\mathbf{t}),\mathcal{D}( \mathcal{M}(\mathbf{p}_{adv},\mathbf{x}))))\right] \tag{4}\]
Solving Eq. 4 for multimodal (dual-key) backdoors is challenging. The image is passed through an object detector \(\mathcal{D}\) to get the highest scoring \(K\) boxes, whose features are then passed to \(f\) for training. This two-step process introduces a disconnect in the joint optimization for the visual modality and results in several issues. For example, when we stamp the patch on the image during optimization, the detector \(\mathcal{D}\) may not propose bounding boxes containing the patch \(\mathbf{p}_{adv}\). One solution would be to manually force the detector to sample a proposal around \(\mathbf{p}_{adv}\). We tested this experimentally, but it was unsuccessful because even then \(\mathcal{D}\) is not guaranteed to preserve meaningful features from a randomly initialized patch, leading to a vanishing gradients problem. Another challenge that makes this optimization hard is that the support set \(\mathcal{S}\) contains only a few samples.
**Proposed key idea:** We propose to overcome this issue and enable convergence for both the visual and textual triggers by performing trigger inversion for the visual trigger in the feature space of \(\mathcal{D}\), while the textual trigger is optimized in the token space as done for UAT. We define \(\mathbf{f}_{adv}\) as the additive adversarial feature space signature and \(\mathcal{B}\) as the overlay policy by which we overlay \(\mathbf{f}_{adv}\) on box features from \(\mathcal{D}\). The modified optimization objective is shown in Eq. 5, where we optimize \(\mathbf{t}_{adv}\) and \(\mathbf{f}_{adv}\) instead of \(\mathbf{p}_{adv}\). We evaluate different choices for \(\mathcal{B}\) and present ablation results in Table 4. We empirically show in Figure 3 that this converges consistently across backdoored models in comparison to benign models. We have shown a detailed description of our approach in Figure 2. Similar to UAT, we optimize Eq. 5 iteratively with gradient descent by updating the visual and textual inputs with the corresponding trigger signatures \(\mathbf{f}_{adv}\) and \(\mathbf{t}_{adv}\) respectively at every step.
\[\min_{\mathbf{t}_{adv},\mathbf{f}_{adv}}\mathbb{E}_{\mathbf{t},\mathbf{x}\sim\mathcal{S}} \left[\mathcal{L}(\tilde{y},f(\mathcal{A}(\mathbf{t}_{adv},\mathbf{t}),\mathcal{B}( \mathcal{D}(\mathbf{x}),\mathbf{f}_{adv})))\right] \tag{5}\]
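A condensed PyTorch-style sketch of one joint optimization run of Eq. 5 for a fixed candidate target label is given below. The interface of `fusion_model` (an embedding table plus a forward pass from text embeddings and box features), the feature dimension, and the loader variables are assumptions about a typical VQA fusion model, not the released implementation.

```python
import torch

def tijo_trigger_inversion(fusion_model, support_loader, target_label,
                           num_trigger_tokens=1, steps=200, lr=0.1, feat_dim=2048):
    """Jointly optimize the feature trigger f_adv (continuous) and the text
    trigger t_adv (discrete) for one candidate target label (Eq. 5)."""
    emb_table = fusion_model.embed_tokens.weight               # (V, D) token embeddings
    f_adv = torch.zeros(feat_dim, requires_grad=True)          # additive feature trigger
    t_adv = torch.zeros(num_trigger_tokens, dtype=torch.long)  # current trigger token ids
    opt = torch.optim.Adam([f_adv], lr=lr)
    ce = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        for box_feats, question_ids, _ in support_loader:      # few clean support samples
            # Policy B: overlay the additive feature trigger on all box features.
            poisoned_feats = box_feats + f_adv                  # (B, K, feat_dim)
            # Policy A: append the trigger tokens, kept in embedding space for gradients.
            q_emb = fusion_model.embed_tokens(question_ids)    # (B, L, D)
            trig_emb = emb_table[t_adv].detach().clone().requires_grad_(True)
            text_emb = torch.cat(
                [q_emb, trig_emb.unsqueeze(0).expand(q_emb.size(0), -1, -1)], dim=1)

            logits = fusion_model.forward_from_embeddings(text_emb, poisoned_feats)
            target = torch.full((logits.size(0),), target_label, dtype=torch.long)
            loss = ce(logits, target)

            opt.zero_grad()
            loss.backward()
            opt.step()                                          # continuous update of f_adv
            # Discrete update of t_adv via the HotFlip-style step sketched earlier.
            t_adv = hotflip_token_update(trig_emb.grad, t_adv, emb_table.detach())
    # The final loss is the trigger-inversion loss used as a detection feature.
    return loss.item(), f_adv.detach(), t_adv
```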
### Trigger Patch Generation
We also propose to recover the patch trigger \(\mathbf{p}_{adv}\) from the optimized feature trigger \(\mathbf{f}_{adv}\) obtained using Eq. 5 (see Figure 2). We first compute the box proposals \(\mathbf{b}_{x}\leftarrow\mathcal{D}_{rpn}(\mathcal{D}_{cnn}(\mathbf{x}))\) and box features \(\mathbf{f}_{x}\leftarrow\mathcal{D}_{roi}(\mathcal{D}_{cnn}(\mathbf{x}),\mathbf{b}_{x})\) on the clean image \(\mathbf{x}\). We also compute the box features \(\mathbf{f}_{x_{p}}\leftarrow\mathcal{D}_{roi}(\mathcal{D}_{cnn}(\mathcal{M}(\mathbf{x },\mathbf{p}_{adv})),\mathbf{b}_{x})\) on the image stamped with \(\mathbf{p}_{adv}\). Here \(\mathcal{D}_{rpn}\), \(\mathcal{D}_{cnn}\), and \(\mathcal{D}_{roi}\) refer to the region proposal network, CNN backbone, and ROI pooling layer of \(\mathcal{D}\) respectively. We overlay the optimized \(\mathbf{f}_{adv}\) on \(\mathbf{f}_{x}\) and then iteratively optimize \(\mathbf{p}_{adv}\) to minimize the MSE loss between \(\mathbf{f}_{x_{p}}\) and \(\mathcal{B}(\mathbf{f}_{x},\mathbf{f}_{adv})\). We empirically observed that it is also important to select for optimization only those boxes that overlap with the image region containing the patch.
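A sketch of this patch-generation step is shown below, assuming access to the detector backbone (`detector_cnn`) and ROI pooling (`detector_roi`) as separable callables and a fixed stamping location; these names and the patch placement are illustrative.

```python
import torch

def synthesize_patch(detector_cnn, detector_roi, image, boxes, f_adv,
                     patch_size=64, steps=500, lr=0.05):
    """Recover a pixel-space patch p_adv whose detector box features approximate
    the clean box features with the inverted feature trigger overlaid."""
    with torch.no_grad():
        clean_feats = detector_roi(detector_cnn(image), boxes)      # f_x on the clean image
        target_feats = clean_feats + f_adv                          # B(f_x, f_adv)
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        stamped = image.clone()
        stamped[:, :patch_size, :patch_size] = patch.clamp(0, 1)    # M: stamp at a fixed corner
        patched_feats = detector_roi(detector_cnn(stamped), boxes)  # f_{x_p}, same proposals b_x
        # In practice only proposals overlapping the patch region enter the loss.
        loss = torch.nn.functional.mse_loss(patched_feats, target_feats)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```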
### Backdoored Model Classification
The optimization objective should ideally converge only if the model is backdoored and if the target label \(\tilde{y}\) is actually the poison label \(y_{t}\). We use this convergence property to train a classifier that separates backdoored and benign models. Since the poison label \(y_{t}\) is unknown, we sweep over the entire label space and repeat the trigger inversion process for each \(\tilde{y}\in\mathcal{Y}\) (referred to as the trigger sweep). For each \(\tilde{y}\), the optimization yields the corresponding reconstructed triggers, the trigger inversion loss, and the inverse attack success rate (Inv-ASR). Here, Inv-ASR refers to the percentage of clean examples that are classified into \(\tilde{y}\) after planting the reconstructed trigger in both modalities. After the trigger sweep, we select the lowest trigger inversion loss among all labels and treat the corresponding triggers and label as the candidate backdoor trigger and target label, respectively. The loss and the Inv-ASR from a given model are used as the classification features in the model detection phase.
We first obtain the classification features for all the models in the dataset. We then train a shallow classifier, which can be used at inference time (in an offline setting) to detect whether a given model is backdoored or benign.
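The sketch below illustrates how the sweep statistics could feed a shallow classifier. It reuses the `invert_triggers` routine sketched above; the `inv_asr` helper and the training-set variables are hypothetical names introduced for illustration, not the released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sweep_features(model, support_set, num_answers):
    """Run the trigger sweep over all candidate labels and keep the lowest-loss statistics."""
    detector_features, token_embeds, vqa_head = model       # frozen components of the VQA model
    losses, asrs = [], []
    for y in range(num_answers):
        f_adv, t_adv, loss = invert_triggers(detector_features, token_embeds, vqa_head,
                                             support_set, target_label=y)
        losses.append(loss)
        asrs.append(inv_asr(model, support_set, f_adv, t_adv, y))  # fraction of clean samples flipped to y (assumed helper)
    best = int(np.argmin(losses))
    return [losses[best], asrs[best]]                        # classification features for this model

# Shallow detector: fit on models with known labels, apply offline to unseen models.
X = np.array([sweep_features(m, s, num_answers) for m, s in train_models])
clf = LogisticRegression().fit(X, is_backdoored)
```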
## 4 Experiments
We evaluate our approach in this section. We first discuss the dataset and metrics used for evaluation. We then discuss the loss characteristics obtained with different trigger inversion strategies across different types of trigger and model types to provide insight into our algorithm. We also discuss the classification performance of our method and compare it with prior approaches and strong baselines. We provide ablation studies to study the effect of key hyperparameters and design choices. Finally, we provide visualizations of
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Split & NLP & Visual & Train/Test & Trigger Type \\ \hline \hline \(\mathcal{T}_{nlp}\) & ✓ & ✗ & 160/80 & Single Key NLP \\ \(\mathcal{T}_{solid}\) & ✗ & Solid & 160/80 & Single Key Vision \\ \(\mathcal{T}_{optim}\) & ✗ & Optimized & 160/80 & Single Key Vision \\ \hline \(\mathcal{T}_{nlp+S}\) & ✓ & Solid & 160/80 & Dual Key \\ \(\mathcal{T}_{nlp+O}\) & ✓ & Optimized & 160/80 & Dual Key \\ \(\mathcal{T}\) & ✓ & ✓ & 320/160 & Dual Key \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details about the _TrojVQA_ dataset [50] and its splits.
Figure 3: Shows the ‘Least Trigger Inversion Loss’ after trigger sweep, normalized to [0,1]. The blue and red dots are benign and backdoor models respectively. Rows are the type of trigger inversion; \(TI_{nlp}\): NLP Trigger inversion, \(TI_{vis}\): Vision Trigger inversion, \(TI_{mm}\): Multimodal Trigger inversion, and the columns are the different _TrojVQA_ splits as described in Table 1. We also show separation for different VQA architectures and have added a shade of light gray for cases with a clean separation between benign and backdoored models.
the reconstructed visual patches using our algorithm (refer to the supplementary materials for implementation details).
TrojVQA Dataset and Metric: We use the recently introduced _TrojVQA_ [50] dataset, which consists of both benign and poisoned VQA models. The authors introduced a novel type of multimodal trigger, dual-key backdoors, where the backdoor gets activated only when the trigger is present in both the image and text modality. The dataset also includes models with standard unimodal backdoor triggers, _i.e._, the trigger was introduced in either the text or image modality only. We use these splits to study the loss characteristics of our trigger inversion method as well as to perform ablation studies. We provide details regarding the splits as well as the number of training and test examples in Table 1. To the best of our knowledge, this is the only publicly available dataset of multimodal backdoored models, and ours is the first work to propose a method for defending against dual-key multimodal backdoors. We use the evaluation protocol described in [50] and report the area under the ROC curve (AUC) metric on 5-fold cross-validation splits of the train set of _TrojVQA_.
### Trigger Inversion Loss Characteristics
We show the loss characteristics of our trigger inversion approach in Figure 3. This loss is obtained after optimizing Eq. 5 and trigger-sweep (as discussed in Section 3.5). The rows and columns in the figure correspond to the modalities involved in the trigger inversion optimization and _TrojVQA_
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{TIJO\({}_{vis}\)} & \multicolumn{2}{c}{TIJO\({}_{mm}\)} \\ Split & \(\mathcal{B}_{one}\) & \(\mathcal{B}_{all}\) & \(\mathcal{B}_{one}\) & \(\mathcal{B}_{all}\) \\ \hline \hline \(\mathcal{T}_{solid}\) & \(0.85_{\pm 0.04}\) & \(1.00_{\pm 0.00}\) & \(0.86_{\pm 0.10}\) & \(0.99_{\pm 0.01}\) \\ \(\mathcal{T}_{optim}\) & \(0.78_{\pm 0.06}\) & \(0.99_{\pm 0.01}\) & \(0.80_{\pm 0.06}\) & \(0.95_{\pm 0.03}\) \\ \(\mathcal{T}_{nlp+S}\) & \(0.47_{\pm 0.08}\) & \(0.70_{\pm 0.06}\) & \(0.77_{\pm 0.05}\) & \(0.97_{\pm 0.03}\) \\ \(\mathcal{T}_{nlp+O}\) & \(0.46_{\pm 0.11}\) & \(0.57_{\pm 0.07}\) & \(0.65_{\pm 0.04}\) & \(0.86_{\pm 0.10}\) \\ \(\mathcal{T}\) & \(0.52_{\pm 0.04}\) & \(0.67_{\pm 0.07}\) & \(0.72_{\pm 0.07}\) & \(0.92_{\pm 0.02}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: AUC for the backdoored model classifier trained with features obtained from different feature overlay policies \(\mathcal{B}\): \(\mathcal{B}_{one}\), where the feature is overlaid on the top box feature only, and \(\mathcal{B}_{all}\), where the feature is overlaid on all 36 box features.
\begin{table}
\begin{tabular}{l c|c|c c c|c c c} \hline \hline & & General & \multicolumn{3}{c|}{Unimodal} & \multicolumn{3}{c}{Ours} \\ & Split & Wt. Analysis & DBS & NC & TABOR & TIJO\({}_{nlp}\) & TIJO\({}_{vis}\) & TIJO\({}_{mm}\) \\ \hline \hline \multirow{3}{*}{\begin{tabular}{c} Single \\ Key \\ \end{tabular} } & \(\mathcal{T}_{nlp}\) & \(0.61_{\pm 0.07}\) & **0.89\({}_{\pm 0.05}\)** & - & - & **0.98\({}_{\pm 0.02}\)** & \(0.52_{\pm 0.06}\) & \(0.98_{\pm 0.02}\) \\ & \(\mathcal{T}_{solid}\) & \(0.53_{\pm 0.05}\) & - & \(0.59_{\pm 0.10}\) & **0.98\({}_{\pm 0.02}\)** & \(0.39_{\pm 0.09}\) & **1.00\({}_{\pm 0.00}\)** & \(0.99_{\pm 0.01}\) \\ & \(\mathcal{T}_{optim}\) & \(0.58_{\pm 0.05}\) & - & \(0.71_{\pm 0.08}\) & **0.99\({}_{\pm 0.02}\)** & \(0.40_{\pm 0.11}\) & **0.99\({}_{\pm 0.01}\)** & \(0.95_{\pm 0.03}\) \\ \hline \multirow{3}{*}{
\begin{tabular}{c} Dual \\ Key \\ \end{tabular} } & \(\mathcal{T}_{nlp+S}\) & \(0.54_{\pm 0.03}\) & \(0.46_{\pm 0.04}\) & \(0.42_{\pm 0.05}\) & \(0.46_{\pm 0.06}\) & \(0.41_{\pm 0.11}\) & \(0.70_{\pm 0.06}\) & \(0.97_{\pm 0.03}\) \\ & \(\mathcal{T}_{nlp+O}\) & \(0.60_{\pm 0.13}\) & \(0.45_{\pm 0.01}\) & \(0.50_{\pm 0.09}\) & \(0.52_{\pm 0.03}\) & \(0.43_{\pm 0.12}\) & \(0.57_{\pm 0.07}\) & \(0.86_{\pm 0.10}\) \\ \cline{1-1} & \(\mathcal{T}\) & \(0.60_{\pm 0.04}\) & \(0.48_{\pm 0.02}\) & \(0.50_{\pm 0.06}\) & \(0.48_{\pm 0.04}\) & \(0.46_{\pm 0.03}\) & \(0.67_{\pm 0.07}\) & **0.92\({}_{\pm 0.02}\)** \\ \hline \end{tabular}
\end{table}
Table 2: AUC on different _TrojVQA_ splits for weight analysis, prior unimodal methods, and three variants of our method, TIJO\({}_{nlp}\), TIJO\({}_{vis}\), and TIJO\({}_{mm}\), which optimize triggers in the NLP, vision, and both modalities respectively. We see a clear improvement with TIJO\({}_{mm}\) not only for dual-key multimodal triggers but also for unimodal triggers. In comparison, prior unimodal methods are unable to perform well on the task of detecting whether a model is backdoored or benign.
Figure 4: Effect of the maximum number of optimization steps on detection performance.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Split & Model & Inv-ASR & Lowest Loss \\ \hline \hline \(\mathcal{T}_{nlp}\) & TIJO\({}_{nlp}\) & \(0.94_{\pm 0.05}\) & \(0.98_{\pm 0.02}\) \\ & TIJO\({}_{mm}\) & \(0.54_{\pm 0.03}\) & \(0.98_{\pm 0.02}\) \\ \hline \(\mathcal{T}_{solid}\) & TIJO\({}_{vis}\) & \(0.91_{\pm 0.05}\) & \(1.00_{\pm 0.00}\) \\ & TIJO\({}_{mm}\) & \(0.56_{\pm 0.04}\) & \(0.99_{\pm 0.01}\) \\ \hline \(\mathcal{T}_{optim}\) & TIJO\({}_{vis}\) & \(0.90_{\pm 0.04}\) & \(0.99_{\pm 0.01}\) \\ & TIJO\({}_{mm}\) & \(0.54_{\pm 0.02}\) & \(0.95_{\pm 0.03}\) \\ \hline \(\mathcal{T}\) & TIJO\({}_{mm}\) & \(0.53_{\pm 0.02}\) & \(0.92_{\pm 0.02}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUC for the backdoored model classifier trained with different types of trigger inversion features, _i.e._, the lowest trigger inversion loss and the inverse attack success rate (Inv-ASR).
split respectively. It also shows the performance across different VQA models. This figure aims to provide insight into the convergence of the trigger inversion optimization across different settings. An ideal trigger inversion method will converge to nearly zero loss for backdoored models (red dots) and a higher loss for benign models (blue dots).
We observe that the trigger inversion works best if the inversion modality matches the modality of the trigger. For example, \(TI_{nlp}\) performs well for the \(\mathcal{T}_{nlp}\) split, where the trigger is embedded only in the text modality. Similarly, \(TI_{vis}\) works well for the \(\mathcal{T}_{solid}\) and \(\mathcal{T}_{optim}\) splits, where only vision triggers are embedded. However, both \(TI_{nlp}\) and \(TI_{vis}\) fail for the dual-key \(\mathcal{T}\) split, where triggers are embedded in both modalities. This shows that separate unimodal trigger inversion is not effective against multimodal backdoor attacks. Finally, we can see that multimodal trigger inversion \(TI_{mm}\) solves the problem and yields a cleaner separation between benign and backdoored models on the dual-key split. This figure highlights the correlation between the inversion loss and the likelihood of the model being backdoored. We therefore use the trigger inversion loss as one of the features in the model classifier. We also observe that \(TI_{mm}\) is effective across most VQA architectures.
We observed the phenomenon of 'natural trojans' in multimodal models. Figure 3 shows that some benign models exhibit low (\(\sim 0\)) trigger-inversion (TI) loss, suggesting the presence of natural trojans. Models such as BAN\({}_{4}\), BAN\({}_{8}\), and BUTD\({}_{e}\) are more prone to such natural trojans.
### Backdoored Model Classification Results
We train a logistic regression classifier on the trigger inversion features as described in Section 3.5. Table 2 reports the 5-fold cross-validation AUC on different splits of the _TrojVQA_ dataset for four prior methods as well as three variants of our approach. We also show results on two additional splits, \(\mathcal{T}_{nlp+O}\) and \(\mathcal{T}_{nlp+S}\), based on using optimized and solid patches as defined in [50]. We clearly see that the unimodal variants of our method, TIJO\({}_{nlp}\) and TIJO\({}_{vis}\), achieve almost perfect performance on their corresponding unimodal splits. For example, TIJO\({}_{nlp}\) achieves an AUC of 0.98 on split \(\mathcal{T}_{nlp}\). However, their performance is low on the multimodal (dual-key) splits: TIJO\({}_{nlp}\) and TIJO\({}_{vis}\) achieve an AUC of 0.46 and 0.67 respectively on split \(\mathcal{T}\). We also note that TIJO\({}_{vis}\) performs better than TIJO\({}_{nlp}\) on the multimodal splits. This is probably because there is a separation between benign and backdoored
Figure 5: Visualization of the image patches generated from \(\mathbf{f}_{adv}\) using the trigger patch generation method described in Section 3.4. We show inversions across different combinations of detector backbones, backdoored models, and visual trigger types.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{TIJO\({}_{vis}\)} & \multicolumn{2}{c}{TIJO\({}_{mm}\)} \\ Split & \(\lambda=10^{-5}\) & \(\lambda=10^{-3}\) & \(\lambda\)= \(10^{-5}\) & \(\lambda\)= \(10^{-3}\) \\ \hline \hline \(\mathcal{T}_{solid}\) & \(0.97_{\pm 0.03}\) & \(0.97_{\pm 0.02}\) & \(0.91_{\pm 0.04}\) & \(0.89_{\pm 0.03}\) \\ \(\mathcal{T}_{optim}\) & \(0.96_{\pm 0.03}\) & \(0.96_{\pm 0.03}\) & \(0.89_{\pm 0.07}\) & \(0.90_{\pm 0.03}\) \\ \(\mathcal{T}_{nlp+S}\) & \(0.58_{\pm 0.10}\) & \(0.59_{\pm 0.11}\) & \(0.93_{\pm 0.04}\) & \(0.92_{\pm 0.06}\) \\ \(\mathcal{T}_{nlp+O}\) & \(0.47_{\pm 0.11}\) & \(0.47_{\pm 0.12}\) & \(0.87_{\pm 0.07}\) & \(0.87_{\pm 0.08}\) \\ \(\mathcal{T}\) & \(0.58_{\pm 0.06}\) & \(0.59_{\pm 0.08}\) & \(0.92_{\pm 0.02}\) & \(0.91_{\pm 0.02}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: AUC for the backdoored model classifier trained with features obtained using different weights for the \(L_{2}\) regularization on \(\mathbf{f}_{adv}\).
models based on the trigger inversion loss (even though the convergence is not perfect for backdoored models) for some VQA architectures (e.g., \(\text{MCAN}_{S}\), \(\text{MCAN}_{L}\), \(\text{NAS}_{S}\), \(\text{NAS}_{L}\)), as evident in Figure 3. We believe this is an artifact of the optimization done to obtain dual-key triggers and that these VQA architectures are thus not well suited for injecting multimodal triggers. We also observe that dual-key triggers with optimized patches (\(\mathcal{T}_{nlp+O}\)) are more robust to our defense than those with solid patches (\(\mathcal{T}_{nlp+S}\)). For example, the AUC of TIJO\({}_{vis}\) is substantially lower on \(\mathcal{T}_{nlp+O}\) (0.57) than on \(\mathcal{T}_{nlp+S}\) (0.70).
We observe that most unimodal methods perform worse than chance on the splits containing dual-key triggers, which highlights that unimodal approaches are ineffective against such triggers. Interestingly, the naive weight-analysis-based approach obtains an AUC of 0.60 on split \(\mathcal{T}\). Finally, we observe that our approach TIJO\({}_{mm}\) outperforms all other approaches by a significant margin: it obtains an AUC of 0.92 on split \(\mathcal{T}\), compared to 0.67, 0.46, and 0.60 for TIJO\({}_{vis}\), TIJO\({}_{nlp}\), and weight analysis respectively. We also note that TIJO\({}_{mm}\) performs well on all the splits and could thus be used for modality-agnostic trigger inversion.
### Ablation Experiments
Effect of classification feature: As discussed in Section 3.5, we use two features from the trigger inversion process in our classifier: the lowest loss from the trigger sweep and the Inv-ASR. Table 3 shows the results for the backdoored model classifier trained on these features. We can see that the _lowest loss_ features perform better in all cases, whereas the _Inv-ASR_ features perform reasonably well for unimodal trigger inversion but near random for multimodal trigger inversion. We found that there exist multimodal triggers, especially in the feature space, that switch the class label even for benign models but may not yield a lower loss for backdoored models. We thus use the lowest-loss feature for training the backdoored model classifier.
Feature overlay: \(\mathcal{B}\) denotes the policy used to plant the feature trigger \(\mathbf{f}_{adv}\) on the visual inputs. We experiment with two policies: \(\mathcal{B}_{one}\), where the optimized feature \(\mathbf{f}_{adv}\) is overlaid only on the top box feature (based on objectness score) from detector \(\mathcal{D}\), and \(\mathcal{B}_{all}\), where \(\mathbf{f}_{adv}\) is overlaid on all 36 box features. Table 4 reports the results of these experiments. We can see that \(\mathcal{B}_{all}\) clearly outperforms \(\mathcal{B}_{one}\) in all cases. For example, the AUC with \(\mathcal{B}_{all}\) and \(\mathcal{B}_{one}\) on split \(\mathcal{T}\) is 0.92 and 0.72 respectively. We believe this occurs because the optimization has a better chance of finding the trigger when \(\mathbf{f}_{adv}\) is overlaid on all box features.
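For concreteness, the two overlay policies could be implemented roughly as follows; the tensor shapes and function names are illustrative only.

```python
import torch

def overlay_one(box_feats, f_adv, objectness):
    """B_one: add the feature trigger only to the top-scoring box feature."""
    out = box_feats.clone()
    idx = int(torch.argmax(objectness))
    out[idx] = out[idx] + f_adv
    return out

def overlay_all(box_feats, f_adv):
    """B_all: add the feature trigger to every one of the 36 box features."""
    return box_feats + f_adv
```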
Number of optimization steps and regularization: Figure 4 and Table 5 show the effect of the maximum number of optimization steps \(T\) and of regularization on detection performance. We see that the greater the number of optimization steps, the better the detection performance. We chose \(T=15\) as a reasonable balance between run time and performance. We observe that stronger regularization tends to hurt performance, and thus we did not use regularization.
### Image Patch Generation Experiment
We optimize \(\mathbf{p}_{adv}\) of size 64 \(\times\) 64 with \(\mathcal{M}\) overlaying the patch at the center of the image (as described in Section 3.4). We optimize \(\mathbf{p}_{adv}\) with the Adam optimizer with a learning rate of 0.03 and betas of (0.5, 0.9), and use early stopping with a patience of 20 epochs. We optimize only over the clean images from the support set \(\mathcal{S}\).
Figure 5 shows the generated patches for backdoored MFB VQA models [57]. We observe similarities between \(\mathbf{p}_{adv}^{*}\) for vision-only and dual-key backdoored models, as well as for solid and optimized patches, consistently across different detector backbones. We also note that \(\mathbf{p}_{adv}^{*}\) is similar to the ground-truth patch for optimized-patch visual triggers. We believe this is an attribute of the detector's feature space which appears in both the optimized patch trigger and our generated trigger.
## 5 Conclusion
We introduce a novel defense technique TIJO (Trigger Inversion using Joint Optimization) to detect multimodal backdoor attacks. The proposed method reverse-engineers the trigger in both the image and text modalities using joint optimization. Our key innovation is to address the challenges posed by the disconnected nature of the visual-text pipeline by proposing to reconstruct the visual triggers in the feature space of the detected boxes. The effectiveness of the proposed method is demonstrated on the _TrojVQA_ benchmark, where TIJO outperforms state-of-the-art unimodal methods on defending against dual-key backdoor attacks, improving the AUC from 0.6 to 0.92 on multimodal dual-key backdoors. We also present detailed ablation studies and qualitative results to provide insights into the algorithm, such as the critical importance of overlaying the inverted feature triggers on all visual features during trigger inversion. Our work is the first defense against multimodal backdoor attacks. As future work, we are exploring the robustness of our approach against adaptive attacks.
**Acknowledgements:** This research was partially supported by the U.S. Army Research Laboratory Cooperative Research Agreement W911NF-17-2-0196, and the Intelligence Advanced Research Projects Agency (IARPA) TrojAI and U.S. Army Research Office under the contract W911NF-20-C-0038. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. |
2306.15410 | AutoGraph: Predicting Lane Graphs from Traffic Observations | Lane graph estimation is a long-standing problem in the context of autonomous
driving. Previous works aimed at solving this problem by relying on
large-scale, hand-annotated lane graphs, introducing a data bottleneck for
training models to solve this task. To overcome this limitation, we propose to
use the motion patterns of traffic participants as lane graph annotations. In
our AutoGraph approach, we employ a pre-trained object tracker to collect the
tracklets of traffic participants such as vehicles and trucks. Based on the
location of these tracklets, we predict the successor lane graph from an
initial position using overhead RGB images only, not requiring any human
supervision. In a subsequent stage, we show how the individual successor
predictions can be aggregated into a consistent lane graph. We demonstrate the
efficacy of our approach on the UrbanLaneGraph dataset and perform extensive
quantitative and qualitative evaluations, indicating that AutoGraph is on par
with models trained on hand-annotated graph data. Model and dataset will be
made available at redacted-for-review. | Jannik Zürn, Ingmar Posner, Wolfram Burgard | 2023-06-27T12:11:22Z | http://arxiv.org/abs/2306.15410v3 | # AutoGraph: Predicting Lane Graphs from Traffic Observations
###### Abstract
Lane graph estimation is a long-standing problem in the context of autonomous driving. Previous works aimed at solving this problem by relying on large-scale, hand-annotated lane graphs, introducing a data bottleneck for training models to solve this task. To overcome this limitation, we propose to use the motion patterns of traffic participants as lane graph annotations. In our _AutoGraph_ approach, we employ a pre-trained object tracker to collect the tracklets of traffic participants such as vehicles and trucks. Based on the location of these tracklets, we predict the successor lane graph from an initial position using overhead RGB images only, not requiring any human supervision. In a subsequent stage, we show how the individual successor predictions can be aggregated into a consistent lane graph. We demonstrate the efficacy of our approach on the UrbanLaneGraph dataset and perform extensive quantitative and qualitative evaluations, indicating that AutoGraph is on par with models trained on hand-annotated graph data. Model and dataset will be made available at redacted-for-review.
## I Introduction
Autonomous vehicles require detailed knowledge about their surroundings to safely and robustly navigate complex environments. Most approaches to automated driving follow one of the two major paradigms: _map-based_ or _mapless_ driving. Map-based approaches typically rely on HD maps entailing detailed geospatial information relevant to driving tasks, including the positions of traffic lights, lanes, or street crossings. In this context, the graph of lane centerlines (i.e. the lane graph) is a crucial component that encodes the position and connectivity of all lanes. A major bottleneck in deploying map-based autonomous driving approaches is the slow and expensive manual annotation process to generate HD maps for all regions where the vehicle is intended to operate. Methods capable of estimating the lane graphs robustly in an automated fashion are crucial for scaling up the areas covered by HD maps [15, 30, 3]. Mapless driving approaches, in contrast, solely rely on onboard sensor measurements to infer the position and layout of objects and surfaces relevant to the driving task, including the position and orientation of roads and lanes. For mapless driving, the accurate and robust estimation of the spatial and topological lane layout in the vicinity of the vehicle is paramount for safe and efficient navigation. Therefore, automatic lane graph estimation is a crucial task in map-based and mapless automated driving.
Prior work in lane graph estimation focuses on training models under full supervision [3, 30, 14], relying on large-scale ground-truth lane graph annotations, typically obtained from a large number of human annotators. The production of accurate annotations such as those available as part of the Argoverse2 [22] and NuScenes [4] datasets is, therefore, resource-intensive in both money and time.
Inspired by the success of vehicle tracking approaches [17, 9, 20] and by prior work in the context of automatic annotation from traffic participants [1, 31], in this work, we propose to leverage traffic participant tracklets as annotation sources for lane graph estimation. Most traffic participants follow their respective lanes with high accuracy. Aggregated over large numbers, the trajectories of traffic participants encode the overall structure of lane graphs well (see Fig. 1). We interpret this driving data as a data source for the annotation of lane graphs. In our approach, AutoGraph, we track traffic participants in challenging urban environments and propose a novel tracklet merging scheme, allowing us to formulate a supervised learning task in which aerial images serve as input and the merged tracklets serve as the learning target for our model. The overall approach predicts lane graphs covering large areas with high accuracy, while the pipeline does not require any hand-annotated data.
To summarize, this work makes the following contributions:
* a novel tracklet aggregation scheme leveraging observed traffic participant tracklets as annotation sources for lane graph estimation models;
Fig. 1: Our approach AutoGraph leverages vehicle tracklets and predicts complex lane graphs from aerial images without requiring any hand-annotated lane graphs for supervision.
* the large-scale _UrbanTracklet_ dataset with hundreds of thousands of vehicle and pedestrian tracklets generated from the Argoverse2 and UrbanLaneGraph datasets;
* and extensive qualitative and quantitative ablation studies on the UrbanLaneGraph dataset, demonstrating the efficacy of our approach.
## II Related Works
Lane Graph Estimation. Over the past years, the task of lane graph estimation has gained much attention in the autonomous driving research community. In contrast to road graph estimation, where the goal is to estimate the connectivity between roads [2, 19], lane graph estimation entails predicting the position of lanes and how they are connected. This task is much more challenging, in particular in areas with complex lane connections such as roundabouts or multi-arm intersections. Homayounfar _et al._[15] predict the lane graph of highway scenes with an iterative RNN model from projected LiDAR data. Zurn _et al._[30] proposed a Graph R-CNN-based model for lane graph estimation from aggregated LiDAR data in urban scenes. He _et al._[14] leverage a multi-stage approach for lane graph extraction in which they first extract straight road sections between intersections and subsequently learn the connectivity between each incoming and outgoing lane arm. Buchner _et al._[3] proposed a bottom-up approach for lane graph estimation. They first estimate the successor graph from a given starting position using a graph neural network and subsequently aggregate a full lane graph by iteratively merging each successor graph into a global one. Similar to our work in spirit, Karlsson _et al._[16] infer maximum likelihood lane graphs from traffic observations with a directional soft lane probability model. They evaluate their model on the NuScenes dataset. However, they do not consider model inference from aerial images but from aggregated onboard sensor measurements. Crucially, and in contrast to our approach, their model is not capable of estimating large lane graphs due to the non-iterative nature of their approach.
While the aforementioned works show promising results in challenging environments, most of them require large-scale handcrafted graph annotations or cannot generate predictions for large-scale scenes. In the approach presented here, we do not require any manual annotations and instead leverage data encoded in the behavior of observed traffic participants.
Trajectory Prediction. Our successor lane graph prediction module is related to the task of trajectory prediction. From the large body of literature available in this field, we briefly review the most relevant recent related works. Most approaches condition their models on rasterized or vectorized HD map representations [5, 27, 10], discrete graphs [18] or aerial images [28]. In Chai _et al._[5], future vehicle positions are encoded by estimating the distribution over future trajectories for each agent, while Zhao _et al._[27] leverage a three-stage approach that finds prediction targets, estimates future motion for each, and scores each predicted trajectory to yield the final motion prediction. Our work also shares similarities with the line of work by Gilles _et al._[11, 12, 13]. They frame the trajectory prediction task as a heatmap regression task, where an HD map representation is used for prediction conditioning. After subsequent post-processing of this heatmap, they sample future agent trajectories. In contrast to most existing works, we refrain from leveraging an HD map representation and instead solely rely on aerial images for our prediction task. In addition to regressing future possible agent positions, we also use this prediction block as input for a graph aggregation module to learn a complete lane graph of a given input image.
Automatic Annotations in Autonomous Driving. There exists a sizable body of work that considers the data encoded either in the driving behavior of the ego-vehicle or of other traffic participants. Barnes _et al._[1] use the ego-trajectory of a vehicle to annotate drivable regions in an image. They project their own future positions into the current camera image to label pixels as drivable. Other works in self-supervised learning for navigation [21, 29] also use the ego-trajectory to label pixels for a vision-based ground classifier. Tracklets have been used by multiple previous works in the context of autonomous driving tasks. Zurn _et al._[31] used the trajectories of other traffic participants such as vehicles and pedestrians, obtained from a LiDAR tracker, to annotate ground surfaces in urban environments. Other works also explored the benefits of inferring driving policies from the behavior of other traffic participants [23, 25, 6]. Chen _et al._[6] leverage driving experiences collected from the ego vehicle and other vehicles jointly to train a driving policy from real-world data. Recent work by Collin _et al._[8] proposes an automated system for aggregating observed traffic participant tracklets into a lane graph representation. While their work shows a good performance in dense traffic scenarios, it does not generalize to unseen areas since their approach does not involve training a model on this data.
## III Technical Approach
Our approach proceeds in three steps. In the first one, _denoted tracklet parsing and merging_, we track traffic participants through all scenes in the dataset and prepare the data for model training. In the subsequent _model training_ step, we train the proposed model with data obtained in the first stage. In the third stage, we perform inference with our trained model to perform _graph exploration and aggregation_ into a globally consistent representation. In the following, we detail each component of our approach.
### _Tracklet Parsing and Merging_
In Fig. 2, we illustrate a general urban scene with an aerial image along with observed vehicle tracklets. Due to imperfect vehicle driving maneuvers and the inherent observation noise, tracklets of observed traffic participants do not perfectly overlap with the ground-truth lane graph. Furthermore, and more importantly, each tracklet only covers a subset of the actual lane graph since the corresponding vehicle was only visible from the ego vehicle for a few seconds. In the following, we describe our tracklet parsing
and merging pipeline, minimizing the effect of each of the shortcomings of tracklet-based graph annotations.
We start our data processing pipeline by tracking traffic participants from ego-vehicle data in all available scenes of the Argoverse2 dataset [22] across all six available cities. Each scene in the dataset consists of approximately 20 seconds of driving. For each scene, we track vehicles such as cars, trucks, motorcycles, and busses using a pre-trained LiDAR-based object detector [20]. We transform all tracklets into a global coordinate frame. Subsequently, we smooth the tracklets with a moving average filter to minimize the amount of observation noise and the influence of erratic driving behavior (i.e., steering inaccuracies).
The goal of the tracklet merging module is to merge tracklets that have significant overlap and follow the same underlying lane segment with a high likelihood. A tracklet consists of segments \(S_{i}\) that entail the positional difference of a tracked object between two tracking time steps. We define the set of all tracklet segments \(S_{i}\) in all tracklets as \(\mathcal{S}\). Our goal is to merge multiple tracklets into a successor graph that encompasses all legally reachable lane segments from a given starting position. To this end, we define a Euclidean distance-based merging matrix \(\mathbf{M}_{D}\) and an angle-based merging matrix \(\mathbf{M}_{A}\). The Euclidean distance merging matrix \(\mathbf{M}_{D}\) is defined from the element-wise Euclidean distance of two tracklet segments:
\[M_{D,ij}=||\mathbf{p}_{i}-\mathbf{p}_{j}||_{2}^{2}. \tag{1}\]
The variable \(\mathbf{p}_{i}\in\mathbb{R}^{2}\) denotes the position of tracklet segment \(S_{i}\) in the global coordinate system. We also define an angle matrix \(\mathbf{M}_{A}\), indicating the relative absolute angle \(M_{A,ij}\) between tracklet segment \(S_{i}\) and \(S_{j}\):
\[M_{A,ij}=\big{|}\arccos\Big{(}\frac{\mathbf{p}_{i}\cdot\mathbf{p}_{j}}{| \mathbf{p}_{i}|\cdot|\mathbf{p}_{j}|}\Big{)}\big{|}. \tag{2}\]
To merge multiple tracklets into a successor graph, we thus define a binary tracklet merging matrix \(\mathbf{M}\in\{0,1\}^{|\mathcal{S}|\times|\mathcal{S}|}\) as follows:
\[\mathbf{M}:=[\mathbf{M}_{D}<d_{max}]\wedge[\mathbf{M}_{A}<\alpha_{max}], \tag{3}\]
where \(M_{ij}=1\) implies a merging of tracklet segment \(S_{i}\) with tracklet segment \(S_{j}\). Please note that generally \(\mathbf{M}\neq\mathbf{M}^{T}\) since tracklet segments are forward-connected to their respective successor but not backward-connected to their predecessor. Using this formulation, we now have a mechanism for generating a successor graph \(\mathcal{S}_{q}\) from a query point \(\mathbf{q}\) by following all tracklet segments connected to \(\mathbf{q}\) according to \(\mathbf{M}\). In order to fit our model to this data, we randomly select a query point \(\mathbf{q}\) from the aerial image and extract a small image crop around \(\mathbf{q}\). We extract all tracklets visible in this crop and extract the successor graph \(\mathcal{S}_{q}\). Furthermore, we extract the _Drivable_ map layer and the _Angles_ map layer. In these layers, we collect all tracklets of the whole city and colorize all pixels that are covered by a tracklet as 1 for the _Drivable_ map layer or as the tracklet angle \(\alpha\), for the _Angles_ layer. The remaining pixels are assigned a value of 0. For visualization of these map layers, please refer to Fig. 4.
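To illustrate Eqs. 1-3, the following sketch computes the binary merging matrix for a set of tracklet segments. The threshold values, the use of unit direction vectors for the angle term, and the helper names are assumptions made for illustration rather than the exact implementation.

```python
import numpy as np

def merging_matrix(positions, directions, d_max=2.0, alpha_max=np.pi / 8):
    """Binary merging matrix M (Eq. 3) from segment positions and unit direction vectors."""
    diff = positions[:, None, :] - positions[None, :, :]
    M_D = np.sum(diff ** 2, axis=-1)                               # squared Euclidean distance (Eq. 1)
    cos = np.clip(np.einsum('id,jd->ij', directions, directions), -1.0, 1.0)
    M_A = np.abs(np.arccos(cos))                                   # absolute relative angle (Eq. 2)
    return (M_D < d_max) & (M_A < alpha_max)                       # Eq. 3

def successor_segments(M, start_idx):
    """Follow merged (forward-connected) segments from a query segment."""
    visited, stack = set(), [start_idx]
    while stack:
        i = stack.pop()
        if i in visited:
            continue
        visited.add(i)
        stack.extend(np.flatnonzero(M[i]).tolist())
    return visited
```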
### _Model Training_
The whole training pipeline is visualized in Fig. 4. After our aggregation step, we are able to query all tracklets that are visible in an aerial image crop, starting from a given querying position \(\mathbf{q}\). To obtain a training dataset for our models, for each query pose \(\mathbf{q}\), we crop an aerial image \(RGB_{\mathbf{q}}\), from the aerial image, centered and oriented around the query pose. In the same way, we crop and center the drivable map, producing \(D_{\mathbf{q}}\), and the angle map, producing \(A_{\mathbf{q}}\).
Our model consists of two sub-networks. We train a DeepLabv3+ model [7] to predict the pixel-wise drivable and angle maps from an RGB aerial image input, using \(D_{\mathbf{q}}\) and \(A_{\mathbf{q}}\) as the learning targets. We denote this model as TrackletNet. This initial task is identified as an auxiliary task,
Fig. 3: A T-junction with vehicle tracklets. We merge tracklets according to a combined Euclidean distance and angle distance metric. Merging points are indicated with a white dot while failed merges are indicated with a red cross. We highlight one exemplary successor graph, starting at the blue dot, in dark grey.
Fig. 2: Visualization of tracklets in the city of Austin, Texas, aligned with aerial imagery. The overall topological structure of the underlying lane graph is recognizable.
leveraging the vast amount of tracklets readily available for a given crop. For training, we use a binary cross-entropy loss to guide the prediction of the drivable map layer and a mean squared error loss for the prediction of the angle map. We encode the _Drivable_ layer as a tensor \(D_{ij}\in\{0,1\}^{H\times W}\). To circumvent the discontinuous angles at the singularity \(\alpha=\pm\pi\), we encode the angle at pixel location \((i,j)\) as a value pair \([\sin(\alpha),\cos(\alpha)]^{T}\), producing the _Angles_ layer \(A_{ij}^{k}\in\mathbb{R}^{H\times W\times 2}\). To summarize, during the TrackletNet training stage, we minimize the following loss term:
\[\mathcal{L}=\frac{1}{HW}\sum_{i<H}\sum_{j<W}D_{ij}\log\hat{D}_{ij}+\alpha||A_{ ij}^{k}-\hat{A}_{ij}^{k}||_{2}^{2}, \tag{4}\]
with a weighing factor \(\alpha\) between the drivable surface classification and the angle regression. In our experiments, we set \(\alpha=1\).
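As a minimal, non-authoritative sketch (PyTorch-style, names assumed), the TrackletNet objective of Eq. 4 can be written as a standard binary cross-entropy on the drivable mask, following the textual description, plus a weighted MSE on the \([\sin(\alpha),\cos(\alpha)]\) angle encoding:

```python
import torch
import torch.nn.functional as F

def tracklet_loss(drivable_logits, angle_pred, drivable_gt, angle_gt, alpha=1.0):
    # drivable_logits, drivable_gt: (B, 1, H, W); angle_pred, angle_gt: (B, 2, H, W) with [sin, cos] channels
    bce = F.binary_cross_entropy_with_logits(drivable_logits, drivable_gt)
    mse = F.mse_loss(angle_pred, angle_gt)
    return bce + alpha * mse

def encode_angles(alpha_map):
    # Encode per-pixel tracklet angles as a two-channel [sin, cos] map to avoid
    # the discontinuity at +/- pi.
    return torch.stack([torch.sin(alpha_map), torch.cos(alpha_map)], dim=1)
```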
Subsequently, we train a separate DeepLabv3+ model [7] to predict the successor graph from pose \(\mathbf{q}\), which we parameterize as a heatmap \(\mathbf{S_{q}}\). To account for the additional _Drivable_ and _Angles_ input layers, which we feed into this model in addition to the RGB aerial image crop, we adapt the number of input layers of the model. We denote this model as SuccessorNet. To obtain per-pixel labeling of the successor graph in the image crop, we render the successor graph \(\mathcal{S}_{q}\) as a heatmap in the crop by drawing along the graph edges with a certain stroke width. This heatmap highlights all regions in the aerial image that are reachable by an agent placed at pose \(\mathbf{q}\). We train our SuccessorNet model with a binary cross-entropy loss. Finally, we skeletonize the predicted heatmap \(\hat{\mathcal{S}}_{q}\) using a morphological skinning process [26] and convert the skeleton into a graph representation.
### _Graph Exploration and Aggregation_
The approach described in the previous sections is capable of inferring the graph structure of the successor graph from a given query position. In this section, we illustrate how a complete lane graph can be obtained by running our AutoGraph model iteratively on its own predictions and by subsequently aggregating these predictions into a globally consistent graph representation. To this end, we leverage a depth-first exploration algorithm: We initialize our model by selecting start poses, which can either be chosen manually or obtained from our _TrackletNet_ model. We predict the successor graph from this initial position and repeatedly query our model along the successor graph. In the case of a straight road section, for each forward pass of our model, we add a single future query pose to the list of query poses to process. If a lane split is encountered, for each of the successor subgraphs starting at lane splits, we add a query pose to the list. If a lane ends or no successor graphs are found, the respective branch of the aggregated lane graph terminates and we query the next pose in the list. The exploration terminates once the list of future query poses is empty. In contrast to prior work [3], where they aggregate the complete set of successor graphs according to an elaborate graph aggregation scheme, we instead only add graph nodes to the global graph where the virtual agent was placed at a given time. Therefore, we add edges between graph nodes according to the movement of the successor graph query position. This aggregation formulation simplifies the graph aggregation scheme since the number of nodes to integrate into the global graph is greatly reduced.
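The exploration loop can be summarized by the following schematic pseudocode; `predict_successors` stands in for a SuccessorNet forward pass followed by skeletonization, returning the next query poses (one per branch), and the termination bookkeeping is simplified.

```python
def aggregate_graph(predict_successors, init_poses, max_steps=10000):
    """Depth-first aggregation of successor predictions into a global lane graph."""
    nodes, edges = [], []
    stack = list(init_poses)                  # poses still to be queried
    for _ in range(max_steps):
        if not stack:
            break
        pose = stack.pop()                    # depth-first: take the most recent pose
        nodes.append(pose)
        for next_pose in predict_successors(pose):
            edges.append((pose, next_pose))   # edge follows the movement of the virtual agent
            stack.append(next_pose)           # lane splits push one pose per branch
    return nodes, edges
```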
## IV Dataset
We evaluate our proposed method on a large-scale dataset for lane graph estimation from traffic participants. We use the RGB aerial images and the ground-truth lane graph annotations from the UrbanLaneGraph dataset [3]. To obtain the traffic participant tracklets, we leverage the LiDAR dataset split of the Argoverse2 [22] dataset. The dataset contains consecutive LiDAR scans for hundreds of driving scenarios. A single driving scenario entails approx. \(20\,\mathrm{s}\) of real-world driving. We leverage the OpenPCDet [20] detection and tracking suite for LiDAR point clouds with a CenterPoint [24] model, pre-trained on the NuScenes dataset [4]. We track the vehicle classes of _Car_, _Bus_, _Trailer_, and _Motorcycle_. Subsequently, we transform the respective LiDAR-centric tracklet coordinates to a global reference frame that is aligned with the aerial image coordinates. We smooth each tracklet with a mean filter approach to account for sensor noise and tracking inaccuracies. We call our tracklet dataset the _UrbanTracklet_ dataset and make it publicly available as an addition to the UrbanLaneGraph dataset [3]. In Tab. I, we list all relevant metrics of our _UrbanTracklet_ dataset. In total, our dataset entails tracklets with an accumulated total length of approximately \(12\,000\,\mathrm{km}\).
## V Experimental Results
### _Implementation Details_
The TrackletNet and SuccessorNet have identical DeepLabv3+ architectures. The TrackletNet receives an RGB input image of shape \(H\times W\times 3\) and outputs the _Drivable_ and _Angles_ map layers. We use two separate decoders to produce the outputs. The drivable area segmentation has a resolution of \(H\times W\), while the lane angle output has a size of \(H\times W\times 2\). The training data used to train the two models is obtained from the dataset described in Sec. IV. We crop image segments of size \(256\,\mathrm{px}\times 256\,\mathrm{px}\) from the global aerial image. The crops are oriented along a randomly sampled tracklet at the bottom center of each crop. To increase the efficacy of our aggregation method (see Sec. III-C), we require the
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**City** & **Number of tracklets** & **Total tracklet length** \\ \hline Austin, TX & 287,306 & \(3\,642\,\mathrm{km}\) \\ Detroit, MI & 73,232 & \(1\,099\,\mathrm{km}\) \\ Miami, FL & 283,641 & \(3\,312\,\mathrm{km}\) \\ Palo Alto, CA & 82,351 & \(1\,050\,\mathrm{km}\) \\ Pittsburgh, PA & 34,505 & \(1\,390\,\mathrm{km}\) \\ Washington D.C. & 121,557 & \(1\,469\,\mathrm{km}\) \\ \hline \hline All & 882,592 & \(11\,962\,\mathrm{km}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Key statistics of our _UrbanTracklet_ dataset
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline \hline Model & APLS \(\uparrow\) & IoU \(\uparrow\) & TOPO P/R \(\uparrow\) & GEO P/R \(\uparrow\) & SDA\({}_{20}\uparrow\) & SDA\({}_{50}\uparrow\) & Human supervision \\ \hline LaneGraphNet [30] & 0.179 & 0.063 & 0.0 / 0.0 & 0.0 / 0.0 & 0.0 & 0.0 & ✓ \\ LaneGNN [3] & 0.202 & **0.347** & **0.600 / 0.699** & **0.599 / 0.695** & **0.227** & 0.377 & ✓ \\ \hline AutoGraph (ours) & **0.310** & 0.233 & 0.412 / 0.628 & 0.422 / 0.601 & 0.159 & **0.678** & ✗ \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison of two baseline models with our AutoGraph approach for the Successor-LGP task. We evaluate on the test split of the UrbanLaneGraph dataset. Best model results are marked in **bold**.
Fig. 4: Illustration of our successor graph prediction approach. We first predict the _Drivable_ and _Angles_ map layers from the aerial image crop with a fully convolutional neural network. We subsequently predict the successor lane heatmap from the aerial image crop, the predicted drivable surface, and lane angles. The successor lane heatmap is post-processed into a successor graph, encoding the location of successor lanes and lane split points.
Fig. 5: Qualitative results of our AutoGraph model for the Successor-LGP task on the UrbanLaneGraph dataset. We visualize the successor heatmap and the graph generated from it for our human-supervised model AutoGraph-GT and our tracklet-supervised model AutoGraph.
successor graph prediction to be robust w.r.t. perturbations in the position of the virtual agent. To provide more diverse samples with different positional variations, we randomly rotate the crop with an angle \(\Delta\phi\sim\mathcal{U}(-\pi/3,\pi/3)\).
Using this sampling method, a very large number of data samples can be generated, since the aerial image may be cropped at many locations and orientations. For our experiments, we generate a total of \(1.5\,\mathrm{M}\) samples from all cities combined. The lane graph complexity differs substantially between scenes: straight road sections have much simpler successor graphs than entries to roundabouts or multi-arm intersections. We found that a balanced mix between easy (the successor graph has no splits) and hard (the successor graph has one or more splits) samples is beneficial for good downstream aggregation performance. During training, we randomly select 50% easy samples and 50% hard samples from the full training dataset.
### _Tasks_
Following Buchner _et al._[3], we evaluate our approach on two complementary tasks: Successor Lane Graph Prediction (_Successor-LGP_) and Full Lane Graph Prediction (_Full-LGP_). In _Successor-LGP_, we aim at predicting a feasible ego-reachable successor lane graph from the current pose of the virtual agent. In the task of _Full-LGP_, we compare the complete lane graph in a local region to the ground-truth graph. We evaluate each task on the test images of the UrbanLaneGraph dataset [3], which are not used for model training at any stage. For model evaluation, we use the metrics proposed by Buchner _et al._[3].
### _Baselines_
To provide relevant comparisons and ablations demonstrating the efficacy of our AutoGraph approach, we compare it with a baseline model trained on ground-truth graph annotations, denoted as AutoGraph-GT. For this model, we use the ground-truth lane graph in places where we would otherwise query the recorded vehicle tracklets in our AutoGraph approach. This approach yields the ground truth lane graphs and successor lane graphs according to the graph annotations available in the dataset. The ground-truth lane graph annotations have none of the shortcomings of tracklet-based approaches, such as observation noise or erratic driving behavior. We also compare to the previously proposed models LaneExtraction [14] and LaneGNN [3].
### _Task Evaluation_
We evaluate our model on two tasks for lane graph estimation: Successor Lane Graph Prediction and Full Lane Graph Prediction.
#### V-D1 Successor Lane Graph Prediction
We evaluate the performance of our AutoGraph model and compare it with the recently proposed LaneGNN [3] and LaneGraphNet [30] models. Tab. II lists the model performances on the test split of the UrbanLaneGraph dataset. Our experiments indicate that the performance of our AutoGraph model is superior to the LaneGraphNet model in all metrics and is mostly on par with the recently proposed LaneGNN model. While it performs much better in the APLS and SDA\({}_{50}\) metrics than the LaneGNN model, it is slightly inferior for the TOPO/GEO metrics and the Graph IoU metric. We hypothesize that the performance of our AutoGraph model could be further improved in scenes with road occlusions due to congested roads and overarching vegetation, since our model struggles to predict accurate successor graphs in these regions. Specific treatment of such scenes in the model training schedule (i.e., active learning) might be beneficial.
Additionally, we perform ablation studies of multiple variants of our AutoGraph approach. The results are listed in Tab. III. In our AutoGraph-no-join variant, we do not join the tracklets (see Sec. III-A), ignoring their proximity and their relative angles. Instead, we follow tracklets until they end or until they leave the image crop. We also do not use the _Drivable_ (D) and _Angles_ (A) model outputs but feed the aerial image directly into the SuccessorNet model. For our AutoGraph model variant, we use joined tracklets as per Sec. III-A but omit the _TrackletNet_ auxiliary network. For our AutoGraph+D and AutoGraph+DA model variants, we add the _Drivable_ and _Angles_ model outputs, respectively. The model variant AutoGraph-GT does not use the tracklets of other traffic participants but is trained on ground truth human graph annotations, where we encode the successor graph as a heatmap instead of the raw graph representation as in the LaneGNN or LaneGraphNet models. Our ablation studies indicate that the AutoGraph-no-join method overall performs worse than our AutoGraph model variant. This indicates that joining tracklets to form more complete successor graphs helps produce higher-quality and more consistent annotations.
Furthermore, the inclusion of the _Drivable_ map layer on top of the RGB layer improves model performance for some metrics, but not significantly. Adding the _Angles_ map layer, in contrast, seems to worsen model performance in most metrics: the increased noise produced by imprecise angle estimates seems to outweigh the benefit of the additional scene information. This result is consistent with the findings of Zurn _et al._[30], where additional input modalities did not significantly improve model performance. For qualitative evaluation, we visualize predictions of our best-performing model in Fig. 5. We also visualize the predictions of the AutoGraph-GT model and the ground-truth graph annotations. We observe that, overall, both models are capable of modeling the multimodal spatial distribution of successor lanes efficiently. However, the AutoGraph-GT model shows slightly better heatmap outputs, since the annotations used for training were created from the ground-truth successor lane graph.
Our experiments demonstrate that our AutoGraph model variants (trained on tracklets) perform overall similarly to our AutoGraph-GT model variants (trained on human lane graph annotations), indicating that vehicle tracklets recorded from a moving recording platform are suitable for training
lane graph prediction models. In the APLS metric and the Graph IoU, the AutoGraph-GT model variant performs better than the AutoGraph model, presumably owing to the better alignment of the human annotations with the aerial images.
#### V-D2 Full Lane Graph Prediction
For the Full Lane Graph Prediction task, we initialize our model on 10 initial poses per evaluation tile and run our aggregation scheme. We compare the performance of our best-performing model variant with the prediction results of LaneExtraction [14] and the aggregation module of LaneGNN [3]. Note that the number of initialization poses is much smaller than the number used for the LaneGNN model [3]. The results are listed in Tab. IV. We observe that for some metrics, our AutoGraph model achieves comparable or better performance than the human-supervised LaneGNN [3] or LaneGraphNet [30] models. However, our model performs worse in the TOPO and GEO metrics. For most of the evaluated samples, we note that our AutoGraph model struggles with road surface occlusions, e.g., those introduced by overarching vegetation (predominant in many testing regions for the cities of Miami and Pittsburgh). However, we emphasize that since our model uses fewer initialization poses than LaneGNN [3], a degradation in graph connectivity is to be expected, since lane graph regions in occluded areas may not be reached by an iterative aggregation scheme when no successor graph is found in a given frame. Our qualitative evaluations generally show a high graph fidelity, recognizing most of the visible lanes and modeling their connectivity with high accuracy. Fig. 6 illustrates two exemplary visualizations of predicted lane graphs for the cities of Washington, D.C., and Miami. We observe that our approach is capable of accurately reconstructing the lane graph in visually challenging environments. Large scenes with multiple blocks are handled well and clearly reflect the underlying lane graph topology. The detail view of a complex intersection in Miami illustrates that almost all major intersection arms are covered even in the presence of visual clutter such as water, boats, parking lots, and concrete-colored buildings. Minor inaccuracies are produced at the five-armed intersection at the bottom of the aerial image, where not all connections between intersection arms are present in the inferred lane graph.
## VI Conclusion
In this work, we presented a novel method for lane graph estimation in urban environments from traffic participant tracklets. We showed that our model, which is trained solely on data from tracked vehicles, is capable of predicting highly accurate lane graphs. We presented a novel tracklet processing scheme that allows us to use the observed tracklets of traffic participants as an annotation source to train our model. We demonstrated the efficacy of our approach on a large-scale lane graph estimation benchmark, on which our approach achieves performance close to a baseline model supervised with ground-truth annotations. Future work will address adding pedestrian and bicycle tracklets to the approach to capture more diverse annotations. Additionally, the improved handling of occluded roads appears to be a promising direction for future research.
|
2310.16827 | Robust Sparsification for Matroid Intersection with Applications | Matroid intersection is a classical optimization problem where, given two
matroids over the same ground set, the goal is to find the largest common
independent set. In this paper, we show that there exists a certain
"sparsifer": a subset of elements, of size $O(|S^{opt}| \cdot 1/\varepsilon)$,
where $S^{opt}$ denotes the optimal solution, that is guaranteed to contain a
$3/2 + \varepsilon$ approximation, while guaranteeing certain robustness
properties. We call such a small subset a Density Constrained Subset (DCS),
which is inspired by the Edge-Degree Constrained Subgraph (EDCS) [Bernstein and
Stein, 2015], originally designed for the maximum cardinality matching problem
in a graph. Our proof is constructive and hinges on a greedy decomposition of
matroids, which we call the density-based decomposition. We show that this
sparsifier has certain robustness properties that can be used in one-way
communication and random-order streaming models. | Chien-Chung Huang, François Sellier | 2023-10-25T17:56:42Z | http://arxiv.org/abs/2310.16827v1 | # Robust Sparsification for Matroid Intersection with Applications
###### Abstract
Matroid intersection is a classical optimization problem where, given two matroids over the same ground set, the goal is to find the largest common independent set. In this paper, we show that there exists a certain "sparsifier": a subset of elements, of size \(O(|S^{opt}|\cdot 1/\varepsilon)\), where \(S^{opt}\) denotes the optimal solution, that is guaranteed to contain a \(3/2+\varepsilon\) approximation, while guaranteeing certain robustness properties. We call such a small subset a _Density Constrained Subset_ (DCS), which is inspired by the _Edge-Degree Constrained Subgraph_ (EDCS) (Bernstein and Stein, 2015), originally designed for the maximum cardinality matching problem in a graph. Our proof is constructive and hinges on a greedy decomposition of matroids, which we call the _density-based decomposition_. We show that this sparsifier has certain robustness properties that can be used in one-way communication and random-order streaming models.
Specifically, we use the DCS to design a one-way communication protocol for matroid intersection and obtain a \(3/2+\varepsilon\) approximation, using a message of size \(O(|S^{opt}|\cdot 1/\varepsilon)\). This matches the best achievable ratio for the one-way communication bipartite matching (Goel, Kapralov, and Khanna, 2012).
Moreover, the DCS can be used to design a streaming algorithm in the random-order streaming model requiring space \(O(|S^{opt}|\cdot poly(\log(n),1/\varepsilon))\), where \(n\) is the size of the stream (the ground set of the matroids). Our algorithm guarantees a \(3/2+\varepsilon\) approximation _in expectation_ and, when the size of \(S^{opt}\) is not too small, _with high probability_. Prior to our work, the best approximation ratio of a streaming algorithm in the random-order streaming model was an expected \(2-\delta\) for some small constant \(\delta>0\) (Guruganesh and Singla, 2017).
## 1 Introduction
The matroid intersection problem is a fundamental problem in combinatorial optimization. In this problem we are given two matroids \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\), and the goal is to find the largest common independent set in both matroids, _i.e._, \(\operatorname*{arg\,max}_{S\in\mathcal{I}_{1}\cap\mathcal{I}_{2}}|S|\). This problem was introduced and solved by Edmonds (Edm70, Edm71, Edm79) in the 70s. The importance of matroid intersection stems from the large variety of combinatorial optimization problems it captures; well-known examples in computer science include bipartite matching and packing of spanning trees/arborescences.
In this paper we introduce a "sparsifier" for the matroid intersection problem and use it to design algorithms for two problems closely related to streaming: a one-way communication protocol and a streaming algorithm in the random-order streaming model.
**Structural Result for a Matroid Intersection Sparsifier.** Our starting point is the maximum matching problem. To deal with massive graphs, a common tool is sparsification, _i.e._, the extraction
of a subgraph having fewer edges but preserving some desired property. Various graph sparsifiers have been introduced to maintain a large matching, _e.g._, see [1, 1, 13, 15] and the references therein. The particular sparsifier that has inspired our work is the _Edge-Degree Constrained Subgraph_ (EDCS) introduced by Bernstein and Stein [1].
**Definition 1** (from [1]).: _Let \(G=(V,E)\) be a graph, and \(H\) a subgraph of \(G\). Given any integer parameters \(\beta\geq 2\) and \(\beta^{-}\leq\beta-1\), we say that a subgraph \(H=(V,E_{H})\) is a \((\beta,\beta^{-})\)-EDCS of \(G\) if \(H\) satisfies the following properties (for \(v\in V\), \(\deg_{H}(v)\) denotes the degree of \(v\) in \(H\)):_
1. _For any edge_ \((u,v)\in H\)_,_ \(\deg_{H}(u)+\deg_{H}(v)\leq\beta\)_;_
2. _For any edge_ \((u,v)\in G\backslash H\)_,_ \(\deg_{H}(u)+\deg_{H}(v)\geq\beta^{-}\)_._
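To make the definition concrete, here is a minimal Python sketch (our own illustration, not from [1]) that checks the two EDCS properties for a candidate subgraph \(H\) of \(G\), both given as edge lists; the toy instance at the end is a hypothetical example.

```python
from collections import Counter

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """Check Properties (i) and (ii) of Definition 1 for H as a subgraph of G."""
    H = {frozenset(e) for e in H_edges}
    deg = Counter()
    for e in H:
        for v in e:
            deg[v] += 1
    # Property (i): every edge of H has edge-degree at most beta.
    if any(sum(deg[v] for v in e) > beta for e in H):
        return False
    # Property (ii): every edge of G missing from H has edge-degree at least beta_minus.
    for e in map(frozenset, G_edges):
        if e not in H and sum(deg[v] for v in e) < beta_minus:
            return False
    return True

# Toy example: on the path a-b-c-d, keeping the two outer edges is a (3, 2)-EDCS.
G = [("a", "b"), ("b", "c"), ("c", "d")]
H = [("a", "b"), ("c", "d")]
print(is_edcs(G, H, beta=3, beta_minus=2))  # True: the dropped edge (b, c) has edge-degree 2
```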
The size of an EDCS is easily controlled by the parameter \(\beta\) as it is \(O(\beta\cdot|M_{G}|)\), where \(M_{G}\) is the maximum matching. The key property of EDCSes is that, by choosing some \(\beta\) and \(\beta^{-}\) in the order of \(O(poly(1/\varepsilon))\), an EDCS is guaranteed to contain a \(3/2+\varepsilon\) approximation of the maximum matching [1, 15]. As a result, EDCSes have been used to approximate maximum matching in the dynamic, random-order streaming, communication, and sublinear settings with success, for instance see [1, 1, 2, 1, 3, 13, 14, 15].
As bipartite graph matching is a special case of matroid intersection (namely, the case when both matroids are partition matroids), one is naturally prompted to ask: is there an analogue of EDCS for general matroid intersection? However, even for very slight generalizations of partition matroids, such as laminar matroids (_i.e._, adding nested cardinality constraints on groups of vertices on each side of the bipartite graph), it is already unclear how to properly define the equivalent of EDCSes. In fact, to the best of our knowledge, in this setting of laminar matroids, nothing is known about getting approximation ratios comparable to those for simple matching in random streams [1] or in communication complexity [1].
To properly generalize EDCS, the first question would be: what could be the equivalent of a vertex degree in a graph, in the context of a matroid? To answer this question, we make use of the notion of _density_ of a subset in a matroid and introduce the _density-based decomposition_.1 In the following discussion, we assume that readers are familiar with matroids. All the formal definitions can be found in Section 2.
Footnote 1: This decomposition is closely related to the notion of _principal sequence_[11]; this aspect will be discussed later.
**Definition 2**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid. The density of a subset \(U\subseteq V\) in \(\mathcal{M}\) is defined as_
\[\rho_{\mathcal{M}}(U)=\frac{|U|}{\operatorname{rank}_{\mathcal{M}}(U)}.\]
_By convention, the density of an empty set is \(0\), and the density of a non-empty set of rank \(0\) is \(+\infty\)._
We now explain, at a high level, how densities are used. Let \(V^{\prime}\subseteq V\) be a subset of elements (\(V^{\prime}\) is meant to be our "sparsifier") and consider the matroid \(\mathcal{M}^{\prime}\), which is the original matroid \(\mathcal{M}\) restricted to \(V^{\prime}\). Then we apply the following greedy procedure: find the densest set \(U_{1}\subseteq V^{\prime}\) and then contract \(\mathcal{M}^{\prime}\) by \(U_{1}\); next find the densest set \(U_{2}\subseteq V^{\prime}\backslash U_{1}\) in the contracted matroid \(\mathcal{M}^{\prime}/U_{1}\) and again contract \(\mathcal{M}^{\prime}/U_{1}\) by \(U_{2}\), and so on (for more details about this method and the contraction of a matroid, we refer the reader to Section 2). This greedy procedure induces a _density-based decomposition_ of \(V^{\prime}=U_{1}\cup\dots\cup U_{k}\), where \(k\) is the rank of the original matroid \(\mathcal{M}\) (note that some of the last \(U_{i}\)s could be empty; to give a better intuition about this decomposition an example is provided in Figure 2). As a result, each element of \(V^{\prime}\) can be assigned a density based on this decomposition, namely, the density of the set \(U_{i}\) where it appears in, computed with respect to the contracted matroid that was used for the construction of that \(U_{i}\). Each element \(v\in V\backslash V^{\prime}\) can also be assigned a density, namely, the density of the elements in the first set \(U_{i}\) such that \(v\) is spanned by \(U_{1}\cup\dots\cup U_{i}\) in the matroid \(\mathcal{M}\).
Therefore, using this notion of _density-based decomposition_ of \(V^{\prime}\) in the restricted matroid \(\mathcal{M}^{\prime}\), we can define for every element \(v\in V\) an _associated density_ \(\tilde{\rho}_{\mathcal{M}}(v)\) with respect to \(V^{\prime}\) (a formal definition is provided in Definition 19). This associated density plays a role analogous to that of the vertex degree in a graph. With the associated densities of the elements, we can define a _Density-Constrained Subset_ (DCS) for matroid intersection:
**Definition 3**.: _Let \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\) be two matroids. Let \(\beta\), \(\beta^{-}\) be two integers such that \(\beta\geq\beta^{-}+7\). A subset \(V^{\prime}\subseteq V\) is called a \((\beta,\beta^{-})\)-DCS if it satisfies the following properties:_
1. _For any_ \(v\in V^{\prime}\)_,_ \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\leq\beta\)_;_
2. _For any_ \(v\in V\backslash V^{\prime}\)_,_ \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\geq\beta^{-}\)_._
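As a small illustration (not from the paper), once the associated densities are available the two DCS properties are immediate to verify; in the sketch below, `rho1` and `rho2` are assumed callables returning the associated densities of Definition 19 with respect to \(V^{\prime}\) in \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\).

```python
def is_dcs(V, V_prime, rho1, rho2, beta, beta_minus):
    """Check Properties (i) and (ii) of Definition 3 for a candidate subset V'."""
    V_prime = set(V_prime)
    for v in V:
        s = rho1(v) + rho2(v)
        if v in V_prime and s > beta:            # violates Property (i)
            return False
        if v not in V_prime and s < beta_minus:  # violates Property (ii)
            return False
    return True
```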
By a constructive proof, we show that such \((\beta,\beta^{-})\)-DCSes always exist (Theorem 23). This proof is based on a local search argument similar to that of [1], but here it requires understanding how the density-based decomposition of \(V^{\prime}\) is affected when an element is added to or removed from \(V^{\prime}\); hence we need the two important "modification lemmas", namely, Lemmas 20 and 21. We also prove that DCSes are compact, in the sense that their size is at most \(\beta\) times the cardinality of the optimal solution (Proposition 22). Moreover, DCSes always contain a good approximation of the optimal solution:
**Theorem 4**.: _Let \(\varepsilon>0\). For integers \(\beta\), \(\beta^{-}\) such that \(\beta\geq\beta^{-}+7\) and \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\), any \((\beta,\beta^{-})\)-DCS \(V^{\prime}\) contains a \(3/2+\varepsilon\) approximation of the maximum cardinality common independent set._
Theorem 4 can be compared to the result for EDCSes in bipartite graphs:
**Theorem 5** (from [1]).: _Let \(\varepsilon>0\). For integers \(\beta\), \(\beta^{-}\) such that \(\beta\geq\beta^{-}+1\) and \(\beta^{-}\geq\beta\cdot(1-\varepsilon/4)\), any \((\beta,\beta^{-})\)-EDCS \(H\) of a bipartite graph \(G\) contains a \(3/2+\varepsilon\) approximation of the maximum matching._
The proof of Theorem 4 is the most crucial part of our work. In the following we briefly discuss our methodology and highlight the important ideas in our proof.
Many algorithms for optimization problems are analyzed via linear-programming duality. Even though the convex hull of common independent sets can be described by a linear program [15], we choose not to use its dual program. Instead we use the simpler min-max theorem of Edmonds [1].
**Theorem 6** (Matroid intersection theorem [1]).: _Given two matroids \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\), the maximum size of a set in \(\mathcal{I}_{1}\cap\mathcal{I}_{2}\) is_
\[\min_{U\subseteq V}(\operatorname{rank}_{\mathcal{M}_{1}}(U)+\operatorname{ rank}_{\mathcal{M}_{2}}(V\backslash U)).\]
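As a quick sanity check (our own toy example, not from the paper), view the path \(a-b-c-d\) as a bipartite graph with left vertices \(\{a,c\}\) and right vertices \(\{b,d\}\), and let \(\mathcal{M}_{1}\) (resp. \(\mathcal{M}_{2}\)) be the partition matroid grouping the edges \(e_{1}=(a,b)\), \(e_{2}=(c,b)\), \(e_{3}=(c,d)\) by their left (resp. right) endpoint. The common independent set \(\{e_{1},e_{3}\}\) has size \(2\), and taking \(U=\emptyset\) in the formula gives \(\operatorname{rank}_{\mathcal{M}_{1}}(\emptyset)+\operatorname{rank}_{\mathcal{M}_{2}}(\{e_{1},e_{2},e_{3}\})=0+2=2\), certifying that no larger common independent set exists; in this bipartite special case the theorem is exactly König's theorem.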
The minimizers \(U\) and \(V\backslash U\) in the formula will serve as the "dual" to bound the size of the optimal solution. In particular, in our proof, we consider the two matroids \(\mathcal{M}^{\prime}_{1}\) and \(\mathcal{M}^{\prime}_{2}\) derived from original matroids \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) restricted to \(V^{\prime}\). Edmonds' theorem states that we can find \(C_{1}\) and \(C_{2}\) so that \(C_{1}\cup C_{2}=V^{\prime}\), \(C_{1}\cap C_{2}=\emptyset\), and \(\operatorname{rank}_{\mathcal{M}^{\prime}_{1}}(C_{1})+\operatorname{rank}_{ \mathcal{M}^{\prime}_{2}}(C_{2})\) is equal to the size of the maximum common independent set in \(V^{\prime}\), denoted as \(\mu(V^{\prime})\). The question then boils down to compare the size of an optimal solution with \(\operatorname{rank}_{\mathcal{M}^{\prime}_{1}}(C_{1})+\operatorname{rank}_{ \mathcal{M}^{\prime}_{2}}(C_{2})\).
To achieve this, we will use a certain greedy procedure to choose a subset \(S\) of the optimal solution so that in the contracted matroids \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\), all the remaining elements of the optimal solution not in \(S\) are spanned in at least one of these contracted matroids (either by \(C_{1}\) in \(\mathcal{M}_{1}/S\) or by \(C_{2}\) in \(\mathcal{M}_{2}/S\)). As there are at most \(\operatorname{rank}_{\mathcal{M}_{1}/S}(C_{1})+\operatorname{rank}_{\mathcal{M}_{2}/S}(C_{2})\leq\operatorname{rank}_{\mathcal{M}^{\prime}_{1}}(C_{1})+\operatorname{rank}_{\mathcal{M}^{\prime}_{2}}(C_{2})=\mu(V^{\prime})\) such elements (by Edmonds' theorem), we just need to bound the size of \(S\). We will use a strategy to bound the size of \(S\) by \((1/2+\varepsilon)\cdot(\operatorname{rank}_{\mathcal{M}^{\prime}_{1}}(C_{1})+\operatorname{rank}_{\mathcal{M}^{\prime}_{2}}(C_{2}))\). That part of the proof hinges on the construction of well-chosen subsets of \(C_{1}\) and \(C_{2}\) (see Lemmas 24 and 25), and on the properties of the density-based decomposition. In fact, in the case of graph matching (two partition matroids), the proof for EDCSes can be done by an edge counting argument [1], whereas here we need a more sophisticated proof strategy: how the density decomposition is useful and exploited is fully displayed in the proofs of Lemmas 24 and 25.
**Remark 7**.: _When the two matroids are partition matroids of the same rank, the definition of associated density for an element matches the notion of degree for the endpoint of an edge: hence in that case our DCS definition corresponds to that of EDCS in a bipartite graph._
**Application to One-Way Communication.** We consider the following one-way communication problem [12]: Alice is given some part \(V_{A}\) of the common ground set \(V\), while Bob holds the other part \(V_{B}\). The goal for Alice is to send a single message to Bob so that Bob outputs an approximate maximum common independent set. If Alice sends her whole ground set \(V_{A}\) to Bob, then the latter will be able to recover the exact solution. However, in this game we assume that communication is costly, so we would like to do as well as possible while restricting ourselves to a message of size \(O(\mu(V))\), where \(\mu(V)\) denotes the size of the optimal solution. For instance, if Alice sends only a maximum intersection in \(V_{A}\), then Bob is able to complete it to a maximal set (a set such that no element can be added to it without creating a circuit in one of the two matroids), and we then obtain a 2 approximation protocol. The interest in studying one-way communication problems lies in their connection with the single-pass streaming model [1] and other computational models, as they, in a certain way, capture the essence of trade-offs regarding message sizes.
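For concreteness, here is a hedged Python sketch of the trivial baseline just described (the folklore 2 approximation, not this paper's protocol); `indep1` and `indep2` are assumed independence oracles for the two matroids, and `max_common_independent_set` stands in for an exact matroid-intersection solver on Alice's part (e.g., Edmonds' algorithm), which is not implemented here.

```python
def greedy_extend(base, elements, indep1, indep2):
    """Extend a common independent set greedily until it is maximal."""
    sol = list(base)
    for v in elements:
        if indep1(sol + [v]) and indep2(sol + [v]):
            sol.append(v)
    return sol

def alice_message(V_A, indep1, indep2, max_common_independent_set):
    # Alice sends a maximum common independent set of her own part only.
    return max_common_independent_set(V_A, indep1, indep2)

def bob_output(message, V_B, indep1, indep2):
    # Bob completes Alice's set into a maximal common independent set,
    # which yields the 2 approximation discussed above.
    return greedy_extend(message, V_B, indep1, indep2)
```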
Our problem is a natural generalization of the one-way communication problem for matchings, which has been studied in [1, 1]: the edges of the graph are split by some adversary between Alice and Bob, and Alice has to send a small message to Bob so he can recover some good matching. In particular, when both matroids are partition matroids, our problem is equivalent to the one-way communication in a bipartite graph. Protocols have been provided for the one-way communication matching problem to get a \(3/2\) approximation, see [1, 1]. Moreover, we know that for bipartite graphs with \(k\) vertices on each side, any protocol providing an approximation guarantee better than \(3/2\) requires a message of size at least \(k^{1+\Omega(1/\log\log k)}\)[1]. Therefore in our general case of matroid intersection one cannot expect to beat the \(3/2\) approximation ratio using a message of size \(O(\mu(V))\).
Assadi and Bernstein [1] used the EDCS sparsifier to get the optimal \(3/2\) approximation ratio. In Section 4 we show that our DCS sparsifier has the same robustness property: if Alice builds some DCS and sends it to Bob, Bob will be able to get an approximate solution with a ratio close to \(3/2\). Proving this requires only a slight adaptation of the proof of Theorem 4.
**Theorem 8**.: _There exists a one-way communication protocol that, given any \(\varepsilon>0\), computes a \(3/2+\varepsilon\) approximation to maximum matroid intersection using a message of size \(O(\mu(V)/\varepsilon)\) from Alice to Bob, where \(\mu(V)\) denotes the size of the optimal solution of the matroid intersection problem._
Hence our result closes the gap between matching and matroid intersection, and matches the \(3/2\) bound for bipartite matching. It shows that matroid intersection and matching problems have similar one-way communication limitations, despite the more complex structure of matroids.
**Application to Random-Order Streams.** The _streaming_ model of computation [13] has been motivated by the recent rise of massive datasets, where we cannot afford to store the entire input in memory. Given that the ground set is made of \(|V|=n\) elements, in the streaming model \(V\) is presented to the algorithm as a stream of elements \(v_{1},\ldots,v_{n}\). The algorithm is allowed to make a single pass over that stream and, ideally, uses memory roughly proportional to the output size (up to a poly-logarithmic factor): therefore the main challenge in this model is that we have to discard many elements through the execution of the algorithm.
We note that, in the most general model where an adversary decides the order of the elements, it has been a long-standing open question whether the maximum matching in bipartite graphs (a very simple case of matroid intersection) can be approximated within a factor better than \(2\), _i.e._, the ratio achievable by the simple greedy algorithm.
Our focus here is on the _random-order_ streaming model, where the permutation of the elements of \(V\) in the stream is chosen uniformly at random. This is a natural assumption as real-world data has little reason of being ordered in an adversarial way (even though the distribution may not be entirely random either). In fact, as mentioned in [11], the random-order streaming model might better explain why certain algorithms perform better in practice than their theoretical bounds under an adversary model. It is noteworthy that under the random-order streaming model, for the maximum matching, quite a few recent papers have shown that the approximation factor of 2 can be beaten [11, 1, 12, 13, 14, 15, 16, 17, 18, 19]. In addition, in the adversary model, Kapralov [1] shows that to get an approximation factor better than \(1+\ln 2\approx 1.69\), one needs \(k^{1+\Omega(1/\log\log k)}\) space, even in bipartite graphs
(here \(k\) denotes the number of vertices on each side). The paper of Bernstein [1] proves that it is possible to beat this adversarial-order lower bound in the random-order model, by achieving a \(3/2+\varepsilon\) approximation while using only \(O(k\cdot poly(\log(k),1/\varepsilon))\) space, thus demonstrating a separation between the adversary model and the random-order model.
For our main topic, matroid intersection, a simple greedy algorithm gives again an approximation ratio of \(2\). Guruganesh and Singla [13] have shown that it is possible to obtain the factor of \(2-\delta\) in expectation, for some small \(\delta>0\).2 We show that this factor can be significantly improved. In fact, in Section 5, we use our DCS construction in the context of random-order streams to design an algorithm. The framework developed in Section 5 is a slight modification of that of [1, 1].
Footnote 2: It should be emphasized that Guruganesh and Singla consider the more stringent “online” model.
**Theorem 9**.: _Let \(1/4>\varepsilon>0\). One can extract from a randomly-ordered stream of elements a common independent subset in two matroids with an approximation ratio of \(3/2+\varepsilon\) in expectation, using \(O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) memory, where \(\mu(V)\) denotes the size of the optimal solution, and \(k\) is the smaller rank of the two given matroids. Moreover the approximation ratio is worse than \(3/2+\varepsilon\) only with probability at most \(\exp(-1/32\cdot\varepsilon^{2}\cdot\mu(V))+n^{-3}\)._
Thus, not only do we improve upon the factor \(2-\delta\)[13], but also we demonstrate that it is possible to beat the adversarial-order lower bound of \(1+\ln 2\approx 1.69\) of [12] for the matroid intersection problem as well in the random order model (assuming that \(n\) is polynomial in \(k\)).
**Remark 10**.: _When the size of the optimal solution \(\mu(V)\) is \(\Omega(\log(n)/\varepsilon^{2})\), we obtain a good approximation ratio with high probability, as the probability of failure will be \(n^{-O(1)}\) (and \(n\) is assumed to be very big as we are in the streaming setting). Unlike in [1, 1], we cannot guarantee with high probability a good approximation ratio when the solution is small: in fact, when a matching is relatively small we can prove that the graph has a limited number of edges (so we can afford to store all of them), but for the matroid intersection problem, a small maximum intersection of two matroids does not imply that the ground set is small as well._
**Density-Based Decomposition and Principal Partitions.** The notion of densest subsets and density-based decompositions is closely related to the theory of _principal partitions_. The latter indeed comes from a long line of research in various domains, ranging from graphs, matrices, matroids, to submodular systems. We refer the reader to a survey of Fujishige [14]. Below we give a quick outline.
Let \(V^{\prime}\) be the ground set of a matroid \(\mathcal{M}\). By the theory of principal partitions, there exist a sequence of nested sets, called _principal sequence_, \(F_{1}\subset F_{2}\subset\cdots\subset F_{k}=V^{\prime}\), and a sequence of critical values \(\lambda_{1}>\lambda_{2}>\cdots>\lambda_{k}\), so that the matroid obtained by contracting \(F_{i-1}\) and restricted to \(F_{i}\), is "uniformly dense" (_i.e_, no set has a larger density than the ground set itself), with density \(\lambda_{i}\). In our context, recall that \(V^{\prime}\) is decomposed into \(U_{1},U_{2},\ldots,U_{k}\) by a greedy procedure. Then it can be seen that \(F_{1}=U_{1},F_{2}=U_{1}\cup U_{2},\ldots,F_{k}=U_{1}\cup\cdots\cup U_{k}\). In this sense, our density-based decomposition can be regarded as a rewriting of the principal sequence, and some basic results stated in Section 2 are already known in the context of principal partitions. However, we adopt this term and this way of decomposing the elements to better emphasize the "greedy" nature of our approach and to facilitate our presentation.
The most important consequence of the theory of principal partitions for us is that the densest sets \(U_{1},\ldots,U_{k}\) in our greedy procedure can be computed in polynomial time by using submodular function minimization [14]. We briefly explain how it can be done. For any density \(\rho\), we can find in polynomial time the largest set \(U_{\rho}\) minimizing the submodular function \(f_{\rho}(U)=\rho\cdot\mathrm{rank}_{\mathcal{M}}(U)-|U|\) (_e.g._, see [12]). Hence we can find the largest density \(\rho^{*}\) and the associated largest densest subset in polynomial time using binary search: for some value \(\rho\), if \(U_{\rho}=\emptyset\) then it means that \(\rho^{*}<\rho\), and if \(U_{\rho}\neq\emptyset\) it means that \(\rho^{*}\geq\rho\). The exact value of \(\rho^{*}\) can be found as densities can only be rational numbers with denominators bounded by the rank \(k\) of the matroid. For the largest densest subset \(U_{\rho^{*}}\), we have \(f_{\rho^{*}}(U_{\rho^{*}})=0\), and when \(\rho<\rho^{*}\) we have \(f_{\rho}(U_{\rho})\leq f_{\rho}(U_{\rho^{*}})<0\).
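Below is a brute-force Python sketch of this binary-search idea (our own illustration, exponential time): subset enumeration stands in for submodular function minimization, so it is only meant for tiny ground sets, and `rank` is an assumed rank oracle for the matroid.

```python
from fractions import Fraction
from itertools import chain, combinations

def subsets(V):
    return chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))

def largest_minimizer(V, rank, rho):
    # Stand-in for submodular minimization: the largest minimizer of
    # f_rho(U) = rho * rank(U) - |U|, found by enumeration (demo only).
    best, best_val = (), Fraction(0)
    for U in subsets(V):
        val = rho * rank(U) - len(U)
        if val < best_val or (val == best_val and len(U) > len(best)):
            best, best_val = U, val
    return best

def max_density(V, rank):
    V = list(V)
    k = rank(V)
    # Candidate densities are fractions a / b with a <= |V| and b <= k.
    candidates = sorted({Fraction(a, b) for b in range(1, k + 1)
                         for a in range(1, len(V) + 1)})
    lo, hi, best = 0, len(candidates) - 1, Fraction(0)
    while lo <= hi:  # binary search for the largest rho with U_rho non-empty
        mid = (lo + hi) // 2
        if largest_minimizer(V, rank, candidates[mid]):
            best, lo = candidates[mid], mid + 1
        else:
            hi = mid - 1
    return best

# Toy check: partition matroid with blocks {a,b,c} (capacity 1) and {d,e} (capacity 1).
blocks = [({"a", "b", "c"}, 1), ({"d", "e"}, 1)]
rank = lambda S: sum(min(len(set(S) & B), c) for B, c in blocks)
print(max_density(["a", "b", "c", "d", "e"], rank))  # 3, the density of {a, b, c}
```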
Although the above procedure can be costly in running time, for some simple matroids that may be of more practical importance, such as laminar or transversal matroids, it should be possible to compute the
density-based decomposition faster, because of their particular structures. Moreover, in our algorithms, as we frequently update the ground set on which we compute the decomposition by adding or removing one element, there may be room to improve our time complexity: we leave as an open question whether updating a density-based decomposition when performing these kinds of operations can be done more efficiently, without re-computing the whole decomposition each time.
Analysing more carefully how the density-based decomposition and the DCS could be updated efficiently may also lead to an application of DCSes to dynamic matroid intersection (note that the EDCS was originally proposed for dynamic graph matching [1]). In that setting, elements are added into or removed from the ground set and the objective is to maintain an approximate maximum matroid intersection, while guaranteeing a small update time.
**Related Work.** Matroid intersection is a ubiquitous subject in theoretical computer science. We refer the reader to the comprehensive book of Schrijver [15]. Although in the traditional offline setting we know since the 70s that the problem can be solved in polynomial time [1, 2, 3], improving the running time of matroid intersection is still a very active area [1, 10].
The importance of matroid intersection comes from the large variety of combinatorial optimization problems it captures, the most well-known being bipartite matching and packing of spanning trees/arborescences. Moreover, other applications can be found in electric circuit theory [14, 15], rigidity theory [16], and network coding [17]. In general, matroids generalize numerous combinatorial constraints; as a result matroid intersection can appear in very diverse contexts. For instance, a recent trend in machine learning is the "fairness" constraints (_e.g._, see [11] and references therein), which can be encoded by partition or laminar matroids (for nested constraints). Machine scheduling constraints is another example of matroid application, in that case using transversal matroids, see [13, 14].
For the one-way communication problem [13], the case of maximum matching has been studied in [1, 1], for which a \(3/2+\varepsilon\) approximation is obtained. We are not aware of any previous result for the matroid intersection problem in that model. In general, one-way communication is often used to get a better understanding of streaming problems, see [15, 16].
In the _adversarial_ streaming, the trivial greedy algorithm building a maximal independent set (an independent set that cannot be extended) achieves a 2 approximation [1, 13]. Improving that approximation ratio is a major open question in the field of streaming algorithms, even for the simple case of bipartite matching (an intersection of two partition matroids). On the hardness side, we know that an approximation ratio better than \(1+\ln 2\approx 1.69\) cannot be achieved [12] (previously, an inapproximability of \(1+1/(e-1)\approx 1.58\) had been established in [12]) for the maximum bipartite matching -- hence for the matroid intersection as well. Note that matroid intersection has been studied in the streaming setting under the adversarial model (in the more general case of weighted/submodular optimisation), for instance see [1, 13, 12].
In comparison with the adversarial model, for the _random-order_ streaming, Guruganesh and Singla have obtained a \(2-\delta\) approximation ratio (for some small \(\delta>0\)) for matroid intersection [12]. To our knowledge, it is the only result beating the factor of 2 for the general matroid intersection problem. In the maximum matching problem (not necessarily in bipartite graphs), a pioneering result was first obtained by Konrad, Magniez, and Mathieu [10] with an approximation ratio strictly below 2 for simple matchings. The approximation ratio was later improved in a sequence of papers [1, 1, 13, 14, 15]. Currently the best result for matchings is due to Assadi and Behnezhad [1], who obtained the ratio of \(3/2-\delta\) for some small constant \(\delta\sim 10^{-14}\).
## 2 Density-Based Decomposition
Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid on the ground set \(V\). Recall that a pair \(\mathcal{M}=(V,\mathcal{I})\) is a matroid if the following three conditions hold: (1) \(\emptyset\in\mathcal{I}\), (2) if \(X\subseteq Y\in\mathcal{I}\), then \(X\in\mathcal{I}\), and (3) if \(X,Y\in\mathcal{I},|Y|>|X|\), there exists an element \(e\in Y\backslash X\) so that \(X\cup\{e\}\in\mathcal{I}\). The sets in \(\mathcal{I}\subseteq\mathcal{P}(V)\) are the _independent sets_. The _rank_ of a subset \(X\subseteq V\) is \(\operatorname{rank}_{\mathcal{M}}(X)=\max_{Y\subseteq X,\,Y\in\mathcal{I}}|Y|\). The rank of a matroid is \(\operatorname{rank}_{\mathcal{M}}(V)\). Observe that this notion generalizes that of linear independence in vector spaces.
A subset \(C\subseteq V\) is a _circuit_ if \(C\) is a minimal non-independent set, _i.e._, for every \(v\in C\), \(C\backslash\{v\}\in\mathcal{I}\). We will assume that no element in \(V\) is a circuit by itself (called "loop" in the literature) throughout the paper. The _span_ of a subset \(X\subseteq V\) in the matroid \(\mathcal{M}\) is defined as \(\operatorname{span}_{\mathcal{M}}(X)=\{x\in V,\,\operatorname{rank}_{ \mathcal{M}}(X\cup\{x\})=\operatorname{rank}_{\mathcal{M}}(X)\}\), these elements are called spanned by \(X\) in \(\mathcal{M}\). For more details about matroids, we refer the reader to [10].
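As a small illustration (our own sketch, not from the paper), the rank and span of a set can be computed greedily from an independence oracle; the exchange property is what makes the greedy computation of the rank correct. The uniform matroid in the toy example below is a hypothetical instance.

```python
def rank(indep, X):
    """Greedy rank computation from an independence oracle indep(S) -> bool."""
    basis = []
    for v in X:
        if indep(basis + [v]):
            basis.append(v)
    return len(basis)

def span(indep, V, X):
    """Elements of V whose addition to X does not increase the rank."""
    X = list(X)
    r = rank(indep, X)
    return {v for v in V if v in X or rank(indep, X + [v]) == r}

# Toy example: uniform matroid of rank 2 on {1,...,5} (independent iff size <= 2).
indep = lambda S: len(S) <= 2
print(rank(indep, [1, 2, 3]))                 # 2
print(span(indep, [1, 2, 3, 4, 5], [1, 2]))   # {1, 2, 3, 4, 5}: a rank-2 set spans everything
```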
The _restriction_ and _contraction_ of a matroid results in another matroid.
**Definition 11** (Restriction).: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid, and let \(V^{\prime}\subseteq V\) be a subset. Then we define the restriction of \(\mathcal{M}\) to \(V^{\prime}\) as \(\mathcal{M}^{\prime}=\mathcal{M}|V^{\prime}=(V^{\prime},\mathcal{I}^{\prime})\) where \(\mathcal{I}^{\prime}=\{S\subseteq V^{\prime}:S\in\mathcal{I}\}\)._
**Definition 12** (Contraction).: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid, and let \(U\) be a subset of \(V\). Then we define the contracted matroid \(\mathcal{M}/U=(V\backslash U,\mathcal{I}_{U})\) so that, given a maximum independent subset \(\mathcal{B}_{U}\) of \(U\), \(\mathcal{I}_{U}=\{S\subseteq V\backslash U:S\cup\mathcal{B}_{U}\in\mathcal{I}\}\)._
It is well-known that any choice of \(\mathcal{B}_{U}\) produces the same \(\mathcal{I}_{U}\), as a result the definition of contraction is unambiguous. The following proposition comes directly from the definition.
**Proposition 13**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid and let \(A\subseteq B\subseteq V\). Then we have \(\operatorname{rank}_{\mathcal{M}/A}(B\backslash A)=\operatorname{rank}_{ \mathcal{M}}(B)-\operatorname{rank}_{\mathcal{M}}(A)\)._
Here we recall the definition of density that we will use in the following.
**Definition 2**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid. The density of a subset \(U\subseteq V\) in \(\mathcal{M}\) is defined as_
\[\rho_{\mathcal{M}}(U)=\frac{|U|}{\operatorname{rank}_{\mathcal{M}}(U)}.\]
_By convention, the density of an empty set is \(0\), and the density of a non-empty set of rank \(0\) is \(+\infty\)._
The following proposition, which we will use frequently, states how the density is changed after a matroid is contracted.
**Proposition 14**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid. If \(A\subseteq B\subseteq V\) and \(U\subseteq V\backslash B\) we have the following inequality:_
\[\rho_{\mathcal{M}/A}(U)\leq\rho_{\mathcal{M}/B}(U),\]
_assuming that \(\rho_{\mathcal{M}/A}(U)<+\infty\)._
Proof.: In fact, \(\operatorname{rank}_{\mathcal{M}/A}(U)\geq\operatorname{rank}_{\mathcal{M}/B}(U)\), while the cardinality \(|U|\) remains obviously the same.
The notions of density and matroid contraction allow to define the _density-based decomposition_\(U_{1},\ldots,U_{k}\) of a subset \(V^{\prime}\subseteq V\) as follows. First, consider the matroid \(\mathcal{M}^{\prime}\) defined as matroid \(\mathcal{M}=(V,\mathcal{I})\) restricted to the subset \(V^{\prime}\subseteq V\). Then select from \(V^{\prime}\) the set \(U_{1}\) of largest density in \(\mathcal{M}^{\prime}\) (in case several sets have the same largest density, choose the one with the largest cardinality). Then again choose the set \(U_{2}\) of largest density (again choose the one with the largest cardinality) in \(\mathcal{M}^{\prime}/U_{1}\) and so on (see a formal description in Algorithm 1). As the rank of the matroid is \(k\), after at most \(k\) steps in the loop the set \(\bigcup_{i=1}^{k}U_{i}\) is equal to \(V^{\prime}\). Observe that some latter sets of the decomposition may be empty. Moreover, this decomposition is unique as the choice of maximum cardinality densest subset at each step is unique (see Proposition 16). We note that as we assume that no element is a circuit by itself, our construction guarantees that no set \(U_{i}\) has infinite density.
```
1: \(\forall\, 1\leq i\leq k,\ U_{i}\leftarrow\emptyset\)
2: for \(j=1\ldots k\) do
3:     \(U_{j}\leftarrow\) the densest subset of largest cardinality in \(\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})\)
```
**Algorithm 1** Algorithm for building a density-based decomposition of a set \(V^{\prime}\) in \(\mathcal{M}^{\prime}=(V^{\prime},\mathcal{I}^{\prime})\)
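The sketch below is our own brute-force Python rendering of Algorithm 1, intended only for tiny ground sets (the densest-subset search enumerates all subsets); `rank` is an assumed rank oracle for \(\mathcal{M}^{\prime}\), and ranks in the contracted matroids are obtained through Proposition 13.

```python
from fractions import Fraction
from itertools import chain, combinations

def nonempty_subsets(V):
    return chain.from_iterable(combinations(V, r) for r in range(1, len(V) + 1))

def density_decomposition(V_prime, rank):
    """Return the list of pairs (U_j, density of U_j), with densities non-increasing."""
    remaining, contracted, parts = list(V_prime), [], []
    while remaining:
        best, best_rho = None, Fraction(-1)
        for U in nonempty_subsets(remaining):
            # rank_{M'/A}(U) = rank(U + A) - rank(A), cf. Proposition 13.
            r = rank(list(U) + contracted) - rank(contracted)
            if r == 0:
                continue  # infinite density; excluded by the no-loop assumption
            rho = Fraction(len(U), r)
            if rho > best_rho or (rho == best_rho and len(U) > len(best)):
                best, best_rho = U, rho
        if best is None:
            break
        parts.append((list(best), best_rho))
        contracted += list(best)
        remaining = [v for v in remaining if v not in best]
    return parts

# Toy check: partition matroid with blocks {a,b,c} (capacity 1) and {d,e} (capacity 1).
blocks = [({"a", "b", "c"}, 1), ({"d", "e"}, 1)]
rank = lambda S: sum(min(len(set(S) & B), c) for B, c in blocks)
print(density_decomposition(["a", "b", "c", "d", "e"], rank))
# [(['a', 'b', 'c'], Fraction(3, 1)), (['d', 'e'], Fraction(2, 1))]
```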
To give some intuition about this decomposition, we provide an example for a laminar matroid, which is represented in Figure 1 and decomposed in Figure 2.
**Proposition 15**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid and let \(B\) be the subset that reaches the maximum density \(\rho^{*}<+\infty\). Then given any \(A\subsetneq B\), \(\rho_{\mathcal{M}/A}(B\backslash A)\geq\rho^{*}\)._
Proof.: If \(\operatorname{rank}_{\mathcal{M}/A}(B\backslash A)=0\) then \(\rho_{\mathcal{M}/A}(B\backslash A)=+\infty\) and we are done; otherwise, by Proposition 13:
\[\rho_{\mathcal{M}}(B)=\frac{\operatorname{rank}_{\mathcal{M}}(A)\cdot\rho_{ \mathcal{M}}(A)+\operatorname{rank}_{\mathcal{M}/A}(B\backslash A)\cdot\rho_{ \mathcal{M}/A}(B\backslash A)}{\operatorname{rank}_{\mathcal{M}}(A)+ \operatorname{rank}_{\mathcal{M}/A}(B\backslash A)},\]
hence \(\rho_{\mathcal{M}}(B)\) is a weighted average of \(\rho_{\mathcal{M}}(A)\) and \(\rho_{\mathcal{M}/A}(B\backslash A)\). As \(\rho_{\mathcal{M}}(A)\leq\rho^{*}\) (by definition of \(\rho^{*}\)), it implies that \(\rho_{\mathcal{M}/A}(B\backslash A)\geq\rho^{*}\).
The following proposition states that the densest sets are closed under union; hence the maximum cardinality densest subset is unique.
**Proposition 16**.: _Let \(\mathcal{M}=(V,\mathcal{I})\) be a matroid. Let \(\rho^{*}=\max_{U\subseteq V}\rho_{\mathcal{M}}(U)<+\infty\). Then given any two sets \(W_{1}\), \(W_{2}\) of density \(\rho^{*}\), \(\rho_{\mathcal{M}}(W_{1}\cup W_{2})=\rho^{*}\)._
Proof.: If \(W_{1}\subseteq W_{2}\), then the proposition is trivially true. So assume that \(W_{1}\backslash W_{2}\neq\emptyset\), and we can observe that
\[\rho^{*}\leq\rho_{\mathcal{M}/(W_{1}\cap W_{2})}(W_{1}\backslash(W_{1}\cap W_ {2}))\leq\rho_{\mathcal{M}/W_{2}}(W_{1}\backslash(W_{1}\cap W_{2})),\]
where the first inequality uses Proposition 15 and the second uses Proposition 14. As a result, by the facts that \(\rho_{\mathcal{M}}(W_{2})=\rho^{*}\) and that \(\rho_{\mathcal{M}/W_{2}}(W_{1}\backslash(W_{1}\cap W_{2}))\geq\rho^{*}\), we obtain \(\rho_{\mathcal{M}}(W_{1}\cup W_{2})\geq\rho^{*}\). Hence we have \(\rho_{\mathcal{M}}(W_{1}\cup W_{2})=\rho^{*}\).
Here is a first proposition about density-based decompositions, stating that the densities decrease (as we could observe in the example of Figure 2).
Figure 1: Representation of a laminar matroid \(\mathcal{M}=(V,\mathcal{I})\) on a ground set \(V=\{v_{1},\ldots,v_{17}\}\). The leaves represent elements of the ground set, and the inner nodes represent cardinality constraints on the elements in their associated subtree (_e.g._, if \(S\in\mathcal{I}\), then \(|S\cap\{v_{1},\ldots,v_{14}\}|\leq 3\)).
Figure 2: Density-based decomposition of the laminar matroid \(\mathcal{M}\) represented in Figure 1. We have the densest subset \(U_{1}=\{v_{1},\ldots,v_{10}\}\), then the second densest subset \(U_{2}=\{v_{11},\ldots,v_{14}\}\) and finally \(U_{3}=\{v_{15},v_{16},v_{17}\}\). Their densities are respectively 5, 4, and 3. Note that here \(k=4\) so we have an additional set \(U_{4}=\emptyset\) of density zero.
**Proposition 17**.: _For all \(1\leq j\leq k-1\), \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\geq\rho_{\mathcal{M }^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\). Moreover, if we have \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})>0\), then \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})>\rho_{\mathcal{M }^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\)._
Proof.: We proceed by contradiction. Suppose that \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})<\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\). Then it implies that \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j}\cup U_{j+1})>\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\). Specifically, denoting \(k_{j}=\operatorname{rank}_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\) and \(k_{j+1}=\operatorname{rank}_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\), we have \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j}\cup U_{j+1})=\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\cdot\frac{k_{j}}{k_{j}+k_{j+1}}+\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\cdot\frac{k_{j+1}}{k_{j}+k_{j+1}}>\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\), contradicting the hypothesis that \(U_{j}\) was the densest set in \(\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})\).
For the second part of the proposition, suppose that \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})>0\) and \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})=\rho_{\mathcal{ M}^{\prime}/(\bigcup_{i=1}^{j}U_{i})}(U_{j+1})\). Then it implies that \(\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j}\cup U_{j+1})=\rho _{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\), contradicting the supposition that \(U_{j}\) was the maximum cardinality densest set.
The following proposition comes straightforwardly from the definition of the densities:
**Proposition 18**.: _We always have \(\sum_{j=1}^{k}\operatorname{rank}_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U _{i})}(U_{j})\cdot\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j} )=|V^{\prime}|\)._
Now we define the _associated density_ of a given element \(v\in V\) with respect to the decomposition of \(V^{\prime}\).
**Definition 19**.: _Let \(U_{1},\dots,U_{k}\) be the density-based decomposition of \(V^{\prime}\). Then, given an element \(v\in V\), its associated density with respect to the decomposition of \(V^{\prime}\) is defined as_
\[\tilde{\rho}_{\mathcal{M}}(v)=\left\{\begin{array}{ll}\rho_{\mathcal{M}^{ \prime}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\text{ for }j=\min\{j\in\llbracket 1,k\rrbracket:v\in \operatorname{span}_{\mathcal{M}}(\bigcup_{i=1}^{j}U_{i})\}&\text{ if }v\in \operatorname{span}_{\mathcal{M}}(V^{\prime})\\ 0&\text{ otherwise}\end{array}\right.\]
We emphasize that the associated density \(\tilde{\rho}_{\mathcal{M}}\) is defined for _all_ elements in \(V\), not just the elements of \(V^{\prime}\) (this is why we use the subscript \(\mathcal{M}\) instead of \(\mathcal{M}^{\prime}\)). We also emphasize that here the associated density is dependent on \(V^{\prime}\), even though that dependence is not displayed in our notation: we will just write \(\tilde{\rho}_{\mathcal{M}}\), instead of the more cumbersome \(\tilde{\rho}_{\mathcal{M},V^{\prime}}\). For elements \(v\in V^{\prime}\), note that if \(v\in U_{j}\) then we have necessarily \(\tilde{\rho}_{\mathcal{M}}(v)=\rho_{\mathcal{M}^{\prime}/(\bigcup_{i=1}^{j-1}U _{i})}(U_{j})\); in fact, if \(v\) is spanned by \(\bigcup_{i=1}^{j_{0}}U_{i}\) for some \(j_{0}<j\), then we could have increased the density of \(U_{j_{0}}\) by adding \(v\) into \(U_{j_{0}}\), contradicting the assumption that \(U_{j_{0}}\) was the densest subset when it was selected.
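A short sketch of how the associated density of Definition 19 could be evaluated (our own illustration): it assumes `parts` is the output of the decomposition sketch given after Algorithm 1, and `rank` is a rank oracle for the whole matroid \(\mathcal{M}\), so that elements outside \(V^{\prime}\) can also be queried.

```python
def associated_density(v, parts, rank):
    """Associated density of v with respect to the decomposition parts of V'."""
    prefix = []
    for U_j, rho_j in parts:
        prefix += U_j
        # v is spanned by U_1 ∪ ... ∪ U_j iff adding it does not raise the rank.
        if v in prefix or rank(prefix + [v]) == rank(prefix):
            return rho_j
    return 0  # v is not spanned by V', cf. Definition 19
```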
We now explain how such a decomposition behaves when an element is added to or deleted from the set \(V^{\prime}\). The following two lemmas are crucial in the existence proof of DCSes. Their statements are quite natural (for instance, adding an element does not decrease the density associated with any other element, and cannot increase the density of that new element by more than one); however, their proofs are rather technical, and proving these lemmas is in fact the most difficult step in showing the existence of DCSes. From now on, we will use the exponents \({}^{\mathrm{old}}\) and \({}^{\mathrm{new}}\) to denote the states before and after the insertion/deletion operation.
The proofs of the following lemmas can be found in Appendix A.
**Lemma 20**.: _Suppose a new element \(u^{\mathrm{new}}\in V\backslash V^{\prime}\) is added to \(V^{\prime}\). Then we have the following properties:_
1. _For all_ \(j\in\llbracket 1,k\rrbracket\)_, for all_ \(v\in U_{j}^{\mathrm{old}}\)_,_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{new}}(v)\geq\rho_{\mathcal{M}^{\prime \mathrm{old}}/(\bigcup_{i=1}^{j-1}U_{i}^{\mathrm{old}})}(U_{j}^{\mathrm{old}})\)_._
2. _For all_ \(v\in V\)_,_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{new}}(v)\geq\tilde{\rho}_{\mathcal{M}}^{ \mathrm{old}}(v)\)_._
3. _We have the inequality_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{old}}(u^{\mathrm{new}})\leq\tilde{\rho}_{ \mathcal{M}}^{\mathrm{new}}(u^{\mathrm{new}})\leq\tilde{\rho}_{\mathcal{M}}^{ \mathrm{old}}(u^{\mathrm{new}})+1\)_._
4. _For all_ \(v\in V^{\prime}\) _such that_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{old}}(v)<\tilde{\rho}_{\mathcal{M}}^{ \mathrm{old}}(u^{\mathrm{new}})\) _or_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{old}}(v)>\tilde{\rho}_{\mathcal{M}}^{ \mathrm{old}}(u^{\mathrm{new}})+1\)_, we have the equality_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{old}}(v)=\tilde{\rho}_{\mathcal{M}}^{ \mathrm{new}}(v)\)_._
**Lemma 21**.: _Suppose an old element \(u^{\mathrm{old}}\in V^{\prime}\) is deleted from \(V^{\prime}\). Then we have the following properties:_
1. _For all_ \(j\in\llbracket 1,k\rrbracket\)_, for all_ \(v\in U_{j}^{\mathrm{old}}\)_,_ \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{new}}(v)\leq\rho_{\mathcal{M}^{\prime \mathrm{old}}/(\bigcup_{i=1}^{j-1}U_{i}^{\mathrm{old}})}(U_{j}^{\mathrm{old}})\)_._
_._
2. _For all_ \(v\in V\)_,_ \(\tilde{\rho}^{\rm new}_{\mathcal{M}}(v)\leq\tilde{\rho}^{\rm old}_{\mathcal{M}}(v)\)_._
3. _We have the inequality_ \(\tilde{\rho}^{\rm old}_{\mathcal{M}}(u^{\rm old})\geq\tilde{\rho}^{\rm new}_{ \mathcal{M}}(u^{\rm old})\geq\tilde{\rho}^{\rm old}_{\mathcal{M}}(u^{\rm old} )-1\)_._
4. _For all_ \(v\in V^{\prime}\) _such that_ \(\tilde{\rho}^{\rm old}_{\mathcal{M}}(v)>\tilde{\rho}^{\rm old}_{\mathcal{M}}(u ^{\rm old})\) _or_ \(\tilde{\rho}^{\rm old}_{\mathcal{M}}(v)<\tilde{\rho}^{\rm old}_{\mathcal{M}}(u ^{\rm old})-1\)_, we have the equality_ \(\tilde{\rho}^{\rm old}_{\mathcal{M}}(v)=\tilde{\rho}^{\rm new}_{\mathcal{M}}(v)\)_._
## 3 Density-Constrained Subsets for Matroid Intersection
Consider two matroids \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\), both of rank \(k\) (if the matroids have different ranks, we can truncate the rank of the matroid of larger rank without changing the solution of the matroid intersection problem). We recall the definition of a _Density-Constrained Subset_ (DCS).
**Definition 3**.: _Let \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\) be two matroids. Let \(\beta\), \(\beta^{-}\) be two integers such that \(\beta\geq\beta^{-}+7\). A subset \(V^{\prime}\subseteq V\) is called a \((\beta,\beta^{-})\)-DCS if it satisfies the following properties:_
1. _For any_ \(v\in V^{\prime}\)_,_ \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\leq\beta\)_;_
2. _For any_ \(v\in V\backslash V^{\prime}\)_,_ \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\geq\beta^{ -}\)_._
Here is a simple bound on the size of a DCS.
**Proposition 22**.: _For any set \(V^{\prime}\subseteq V\) satisfying Property (i) of Definition 3, \(|V^{\prime}|\leq\beta\cdot\mu(V)\), where \(\mu(V)\) denotes the maximum cardinality common independent subset in \(V\)._
Proof.: We proceed by contradiction. By Theorem 6, we know that there exists a set \(S\subseteq V\) such that \(\mathrm{rank}_{\mathcal{M}_{1}}(S)+\mathrm{rank}_{\mathcal{M}_{2}}(V\backslash S )=\mu(V)\). If \(|V^{\prime}|>\beta\cdot\mu(V)\), then it means that either \(|V^{\prime}\cap S|>\beta\cdot\mathrm{rank}_{\mathcal{M}_{1}}(S)\) or that \(|V^{\prime}\cap(V\backslash S)|>\beta\cdot\mathrm{rank}_{\mathcal{M}_{2}}(V \backslash S)\). In both cases, we have a densest subset, either in \(\mathcal{M}_{1}^{\prime}\) or \(\mathcal{M}_{2}^{\prime}\), that has a density larger than \(\beta\), contradicting Property (i) of Definition 3.
We show the existence of \((\beta,\beta^{-})\)-DCSes by construction, using a local search algorithm inspired by the one used in [1]. In our proof we introduce a new potential function and we use Lemmas 20 and 21 to generalize their procedure; details of the proof can be found in Appendix A.
**Theorem 23**.: _For any two matroids \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\) of rank \(k\), and for any integer parameters \(\beta\geq\beta^{-}+7\), a \((\beta,\beta^{-})\)-DCS can be computed using at most \(2\cdot\beta^{2}\cdot\mu(V)\) local improvement steps._
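The actual construction and its potential-function analysis are in Appendix A, which is not reproduced here; the Python sketch below is only one natural reading of such a local search, by analogy with the EDCS procedure of [1]: repeatedly repair a violated DCS property. It reuses the (exponential-time) helper sketches given earlier, so it illustrates the update rule rather than the paper's algorithm.

```python
def build_dcs(V, rank1, rank2, beta, beta_minus, max_steps=10_000):
    """Local-search sketch: fix violations of Properties (i) and (ii) until none remain.

    rank1 and rank2 are assumed rank oracles on the whole ground set V;
    density_decomposition and associated_density are the sketches above.
    """
    V_prime = []
    for _ in range(max_steps):
        parts1 = density_decomposition(V_prime, rank1)
        parts2 = density_decomposition(V_prime, rank2)
        rho = lambda v: (associated_density(v, parts1, rank1)
                         + associated_density(v, parts2, rank2))
        heavy = [v for v in V_prime if rho(v) > beta]          # Property (i) violated
        if heavy:
            V_prime.remove(heavy[0])
            continue
        light = [v for v in V if v not in V_prime and rho(v) < beta_minus]  # Property (ii)
        if light:
            V_prime.append(light[0])
            continue
        return V_prime  # both DCS properties hold
    raise RuntimeError("did not converge within max_steps")
```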
The main interest of DCSes lies in the fact that they always contain a relatively good approximation of the maximum cardinality matroid intersection.
**Theorem 4**.: _Let \(\varepsilon>0\). For integers \(\beta\), \(\beta^{-}\) such that \(\beta\geq\beta^{-}+7\) and \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\), any \((\beta,\beta^{-})\)-DCS \(V^{\prime}\) contains a \(3/2+\varepsilon\) approximation of the maximum cardinality common independent set._
Proof.: Let \(V^{\prime}\) be a \((\beta,\beta^{-})\)-DCS, and let \(C_{1}\) and \(C_{2}\) be sets such that \(C_{1}\cup C_{2}=V^{\prime}\), \(C_{1}\cap C_{2}=\emptyset\) and minimizing the sum \(\mathrm{rank}_{\mathcal{M}_{1}^{\prime}}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2}^{ \prime}}(C_{2})=\mathrm{rank}_{\mathcal{M}_{1}}(C_{1})+\mathrm{rank}_{\mathcal{ M}_{2}}(C_{2})\); by Theorem 6 we know that \(\mathrm{rank}_{\mathcal{M}_{1}^{\prime}}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2}^{ \prime}}(C_{2})=\mu(V^{\prime})\), the size of the maximum common independent set in \(V^{\prime}\).
Now consider the optimal common independent set \(O\) in \(V\). Our objective is to bound both \(|O\backslash S|\) and \(|S|\) for some well-chosen subset \(S\subseteq O\) to get an upper bound of \(|O|\). We will build that auxiliary set \(S\) as follows, starting with \(S=\emptyset\). If there exists an element \(o_{1}\in O\) such that \(o_{1}\notin\mathrm{span}_{\mathcal{M}_{1}}(C_{1})\cup\mathrm{span}_{\mathcal{M}_ {2}}(C_{2})\), then we add \(o_{1}\) into \(S\) and we now consider the contracted matroids \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\). We keep the same sets \(C_{1}\) and \(C_{2}\) and we try again to find an element \(o_{2}\in O\backslash S\) such that \(o_{2}\notin\mathrm{span}_{\mathcal{M}_{1}/S}(C_{1})\cup\mathrm{span}_{\mathcal{ M}_{2}/S}(C_{2})\), and we add \(o_{2}\) to \(S\). We repeat this operation until it is no longer possible to add into \(S\) any other element of \(O\). The idea behind this greedy procedure to build \(S\) is that, if we instead defined \(S\) naively as the set of elements in \(O\) that are not in \(\mathrm{span}_{\mathcal{M}_{1}}(C_{1})\cup\mathrm{span}_{\mathcal{M}_{2}}(C_{2})\) (which would be a simpler way to get a set \(S\) satisfying inequality (1) below), then this may yield a much bigger set \(S\) for which we
could not get a proper bound, whereas here the greedy procedure gives us a tool to bound \(|S|\) as it will allow us to prove the crucial inequality (2) later.
By the above greedy procedure, \(O\backslash S\) is a common independent subset in \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\) restricted to \(V^{\prime}\cup O\backslash S\), and \(\operatorname{span}_{\mathcal{M}_{1}/S}(C_{1})\cup\operatorname{span}_{ \mathcal{M}_{2}/S}(C_{2})\supseteq V^{\prime}\cup O\backslash S\). We now observe that
\[|O\backslash S| \leq\min_{U\subseteq V^{\prime}\cup O\backslash S}(\operatorname {rank}_{\mathcal{M}_{1}/S}(U)+\operatorname{rank}_{\mathcal{M}_{2}/S}((V^{ \prime}\cup O\backslash S)\backslash U))\] \[\leq\operatorname{rank}_{\mathcal{M}_{1}/S}(\operatorname{span} _{\mathcal{M}_{1}/S}(C_{1}))+\operatorname{rank}_{\mathcal{M}_{2}/S}((V^{ \prime}\cup O\backslash S)\backslash(\operatorname{span}_{\mathcal{M}_{1}/S}(C _{1})))\] \[\leq\operatorname{rank}_{\mathcal{M}_{1}/S}(\operatorname{span} _{\mathcal{M}_{1}/S}(C_{1}))+\operatorname{rank}_{\mathcal{M}_{2}/S}( \operatorname{span}_{\mathcal{M}_{2}/S}(C_{2}))\] \[=\operatorname{rank}_{\mathcal{M}_{1}/S}(C_{1})+\operatorname{ rank}_{\mathcal{M}_{2}/S}(C_{2})\] \[\leq\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+\operatorname{ rank}_{\mathcal{M}_{2}}(C_{2})\] \[=\mu(V^{\prime}),\]
where in the first inequality we use Theorem 6, in the second inequality we consider \(U=\operatorname{span}_{\mathcal{M}_{1}/S}(C_{1})\), in the third inequality we use that \((V^{\prime}\cup O\backslash S)\backslash(\operatorname{span}_{\mathcal{M}_{1}/ S}(C_{1}))\subseteq\operatorname{span}_{\mathcal{M}_{2}/S}(C_{2})\), and in the last inequality we use that the rank function in a contracted matroid is always smaller than the rank function in the original matroid. Thereby we obtain
\[|O\backslash S|\leq\mu(V^{\prime}). \tag{1}\]
Hence we need to upper-bound the value of \(|S|\). Some carefully chosen subsets \(R_{l,i}\) and \(Q_{l,i}\) will allow us to get that upper-bound, and their construction is displayed in the following lemmas -- it is in the proof of these lemmas that the DCS structure is fully exploited. Observing that \(\beta^{-}\cdot|S|\) is bounded by the sum of the \(\tilde{\rho}_{\mathcal{M}_{l}}(o_{i})\) (as for each \(o_{i}\in S\), we have \(\beta^{-}\leq\tilde{\rho}_{\mathcal{M}_{1}}(o_{i})+\tilde{\rho}_{\mathcal{M}_ {2}}(o_{i})\), because of Property (ii) of the DCS), we will build disjoint subsets \(R_{l,i}\) of \(V^{\prime}\) (Lemma 24) to bound each \(\tilde{\rho}_{\mathcal{M}_{l}}(o_{j})\) with \(|R_{l,j}|\) (in particular, see Lemma 24 (iv)). We will then use an auxiliary partition \(Q_{l,j}\) of the union of the \(R_{l,j}\)s (Lemma 25) to bound the total size of the \(R_{l,j}\)s, using the properties of the DCS and the properties of those sets. By wrapping-up everything in the end this will allow us to get a bound on the size of \(S\), similarly as [1].
We recall that the sets \(U_{l,i}\) refer to the density-based decomposition of \(V^{\prime}\) in the matroid \(\mathcal{M}_{l}\).
**Lemma 24**.: _For \(l\in\{1,2\}\), we can build sets \(R_{l,1},\dots,R_{l,|S|}\) satisfying the following properties:_
1. _the_ \(R_{l,i}\) _are disjoint;_
2. _for all_ \(j\in\llbracket 1,|S|\rrbracket\) _we have_ \(R_{l,j}\subseteq V^{\prime}\backslash C_{l}\)_;_
3. _for all_ \(j\in\llbracket 1,|S|\rrbracket\)_, for all_ \(v\in R_{l,j}\)_,_ \(|R_{l,j}|=\lfloor\tilde{\rho}_{\mathcal{M}_{l}}(v)\rfloor-1\)_;_
4. _for all_ \(j\in\llbracket 1,|S|\rrbracket\)_,_ \(|R_{l,j}|\geq\lfloor\tilde{\rho}_{\mathcal{M}_{l}}(o_{j})\rfloor-1\)_._
Proof.: Fix an \(l\). We divide \(S\) into two groups: those that are spanned by \(\bigcup_{i=1}^{k}U_{l,i}\) and those that are not. Precisely, \(S_{U}=S\cap\operatorname{span}_{\mathcal{M}_{l}}(\bigcup_{i=1}^{k}U_{l,i})\) and \(S_{\overline{U}}=S\backslash S_{U}\).
We will extract from \(U_{l,1},\dots,U_{l,k}\) subsets \(R_{l,x}\) for each \(o_{x}\in S_{U}\). For the other elements \(o_{y}\in S_{\overline{U}}\), we create \(R_{l,y}=\emptyset\) and associate \(o_{y}\) with \(R_{l,y}\). It is easy to verify that Properties (ii)-(iv) hold in the latter case (for Property (iv), recall that by Definition 19, \(\tilde{\rho}_{\mathcal{M}}(o_{y})=0\)). We next explain how to construct \(R_{l,x}\) for \(o_{x}\in S_{U}\).
For \(j=1\) to \(k\), we split a subset of \(U_{l,j}\backslash C_{l}\) into
\[r_{l,j}=\max\Big{(}0,\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i= 1}^{j-1}U_{l,i})}(U_{l,j})-\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/( \bigcup_{i=1}^{j-1}U_{l,i}\cap C_{l})}(U_{l,j}\cap C_{l})\Big{)}\]
sets of size \(\lfloor\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j}) \rfloor-1\). It is always possible as we have, when \(r_{l,j}>0\),
\[\left\lfloor\frac{|U_{l,j}\backslash C_{l}|}{r_{l,j}}\right\rfloor=\left\lfloor \frac{|U_{l,j}\backslash C_{l}|}{\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/( \bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j})-\operatorname{rank}_{\mathcal{M}_{l}^{ \prime}/(\bigcup_{i=1}^{j-1}U_{l,i}\cap C_{l})}(U_{l,j}\cap C_{l})}\right\rfloor\]
\[\geq\left\lfloor\frac{|U_{l,j}\backslash C_{l}|}{\text{rank}_{\mathcal{ M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j})-\text{rank}_{\mathcal{M}_{l}^{ \prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j}\cap C_{l})}\right\rfloor\] \[=\lfloor\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i }\cup(C_{l}\cap U_{l,j}))}(U_{l,j}\backslash C_{l})\rfloor\] by Proposition 13 \[\geq\lfloor\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j})\rfloor.\] by Proposition 15
where in the first inequality we used that \(\text{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j} \cap C_{l})\leq\text{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i}\cap C_{l})}(U_{l,j}\cap C_{l})\).
Then the \(R_{l,x}\)s, for \(o_{x}\in S_{U}\) are decided by a greedy procedure. Let \(x_{1},\ldots,x_{|S_{U}|}\) be the indices of the elements of \(S_{U}\), ordered so that \(\tilde{\rho}_{\mathcal{M}_{l}}(o_{x_{1}})\geq\cdots\geq\tilde{\rho}_{\mathcal{ M}_{l}}(o_{x_{|S_{u}|}})\). The first \(r_{l,1}\) subsets drawn from \(U_{l,1}\backslash C_{l}\) are assigned to be \(R_{l,x_{1}},\ldots R_{l,x_{r_{l,1}}}\); the following \(r_{l,2}\) subsets drawn from \(U_{l,2}\backslash C_{2}\) are assigned to be \(R_{l,x_{r_{l,1}+1}},\ldots,R_{l,x_{r_{l,1}+r_{l,2}}}\), and so on.
Notice that by this procedure, properties (ii) and (iii) hold easily for \(R_{l,x}\), \(o_{x}\in S_{U}\). To prove property (iv), we will prove the following inequality for all \(j\):
\[\sum_{i=1}^{j}r_{l,i}\geq\left\lvert S\cap\text{span}_{\mathcal{M}_{l}}\left( \bigcup_{i=1}^{j}U_{l,i}\right)\right\rvert. \tag{2}\]
To see why inequality (2) implies (iv), for \(j=1\) to \(k\), let us define the set \(S_{j}\) of elements with "density level" \(j\), _i.e._, \(S_{j}=S\cap(\text{span}_{\mathcal{M}_{l}}(\bigcup_{i=1}^{j}U_{l,i})\backslash \text{span}_{\mathcal{M}_{l}}(\bigcup_{i=1}^{j-1}U_{l,i}))\). If \(o_{x_{t}}\in S_{j}\), by Definition 19, we need to associate \(o_{x_{t}}\) with a set \(R_{l,x_{t}}\) drawn from one of \(U_{l,1},\ldots,U_{l,j}\), as such a set \(R_{l,x_{t}}\) will have a size larger than or equal to \(\lfloor\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j}) \rfloor-1=\lfloor\tilde{\rho}_{\mathcal{M}_{l}}(o_{x_{t}})\rfloor-1\). The majorization in (2) shows that our greedy procedure will guarantee that a large enough set is assigned to \(o_{x_{t}}\), as the inequality (2) implies that \(t\leq\sum_{i=1}^{j}r_{l,i}\), hence Property (iv) would follow.
To prove (2), we begin by observing that our greedy procedure in constructing \(S\) ensures that
\[\text{rank}_{\mathcal{M}_{l}}(C_{l}\cup S)=\text{rank}_{\mathcal{M}_{l}}(C_{l} )+|S|,\]
implying that no circuit in \(\mathcal{M}_{l}\) involves a non-empty subset of \(S\) and a non-empty subset of a base in \(C_{l}\). Therefore, given any \(\hat{C}_{l}\subseteq C_{l}\) and \(\hat{S}\subseteq S\),
\[\text{rank}_{\mathcal{M}_{l}}(\hat{C}_{l}\cup\hat{S})=\text{rank}_{\mathcal{M }_{l}}(\hat{C}_{l})+|\hat{S}|.\]
With this observation, we can derive
\[\text{rank}_{\mathcal{M}_{l}}\left(\bigcup_{i=1}^{j}U_{l,i}\right) =\text{rank}_{\mathcal{M}_{l}}\left(\text{span}_{\mathcal{M}_{l} }\left(\bigcup_{i=1}^{j}U_{l,i}\right)\right)\] \[\geq\text{rank}_{\mathcal{M}_{l}}\left((C_{l}\cup S)\cap\text{ span}_{\mathcal{M}_{l}}\left(\bigcup_{i=1}^{j}U_{l,i}\right)\right)\] \[=\text{rank}_{\mathcal{M}_{l}}\left(C_{l}\cap\text{span}_{\mathcal{ M}_{l}}\left(\bigcup_{i=1}^{j}U_{l,i}\right)\right)+\left\lvert S\cap\text{ span}_{\mathcal{M}_{l}}\left(\bigcup_{i=1}^{j}U_{l,i}\right)\right\rvert\] \[\geq\text{rank}_{\mathcal{M}_{l}}\left(C_{l}\cap\bigcup_{i=1}^{j }U_{l,i}\right)+\left\lvert S\cap\text{span}_{\mathcal{M}_{l}}\left(\bigcup_{i= 1}^{j}U_{l,i}\right)\right\rvert,\]
where the last inequality is actually an equality, as we have \(C_{l}\cap\bigcup_{i=1}^{j}U_{l,i}=C_{l}\cap\text{span}_{\mathcal{M}_{l}}\left( \bigcup_{i=1}^{j}U_{l,i}\right)\) here.3
Footnote 3: However, for Lemmas 27 and 30 in the next two sections, this will be in fact an inequality, as in the proof of those two lemmas, \(C_{l}\) may contain elements not in \(V^{\prime}\).
We now finish the proof of inequality (2) by observing that
\[\operatorname{rank}_{\mathcal{M}_{l}}\left(\bigcup_{i=1}^{j}U_{l,i} \right)-\operatorname{rank}_{\mathcal{M}_{l}}\left(C_{l}\cap\bigcup_{i=1}^{j}U_{l,i}\right)\\ =\sum_{i=1}^{j}\left(\operatorname{rank}_{\mathcal{M}_{l}^{\prime }/(\bigcup_{i=1}^{j-1}U_{l,i})}(U_{l,j})-\operatorname{rank}_{\mathcal{M}_{l} ^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i}\cap C_{l})}(U_{l,j}\cap C_{l})\right) \leq\sum_{i=1}^{j}r_{l,i},\]
where the equality comes from applying Proposition 13 recursively.
We have by now shown that Properties (ii)-(iv) hold in general. Property (i) holds trivially by our construction. Thus the proof is complete.
We denote \(R_{l}=\bigcup_{i=1}^{|S|}R_{l,i}\) and \(R=\bigcup_{l\in\{1,2\}}R_{l}\). Note that \(R_{l}\subseteq V^{\prime}\backslash C_{l}\) and \(R\subseteq V^{\prime}\).
**Lemma 25**.: _For \(l\in\{1,2\}\), we can build sets \(Q_{l,1},\ldots,Q_{l,\operatorname{rank}_{\mathcal{M}_{l}}(R_{3-l})}\) satisfying the following properties:_
1. _the_ \(Q_{l,j}\) _are disjoint;_
2. \(\bigcup_{i=1}^{\operatorname{rank}_{\mathcal{M}_{l}}(R_{3-l})}Q_{l,i}=R_{3-l}\)_;_
3. _for all_ \(v\in Q_{l,i}\)_,_ \(|Q_{l,i}|\leq\tilde{\rho}_{\mathcal{M}_{l}}(v)+1\)_._
Proof.: Fix an \(l\). For \(j=1\) to \(k\), we split the set \(U_{l,j}\cap R_{3-l}\) into \(\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i} \cap R_{3-l})}(U_{l,j}\cap R_{3-l})\) sets of size at most
\[\left\lceil\frac{|U_{l,j}\cap R_{3-l}|}{\operatorname{rank}_{ \mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i}\cap R_{3-l})}(U_{l,j} \cap R_{3-l})}\right\rceil \leq\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i} \cap R_{3-l})}(U_{l,j}\cap R_{3-l})+1\] \[\leq\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i}) }(U_{l,j}\cap R_{3-l})+1\] by Proposition 14 \[\leq\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j-1}U_{l,i}) }(U_{l,j})+1.\] by construction of \[U_{l,j}\]
These are the aforementioned sets \(Q_{l,x}\). It is clear that those \(Q_{l,x}\) will be disjoint, and that for all \(v\in Q_{l,x}\subseteq U_{l,j}\), we have
\[\tilde{\rho}_{\mathcal{M}_{l}}(v)+1=\rho_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i =1}^{j-1}U_{l,i})}(U_{l,j})+1\geq|Q_{l,x}|.\]
Observe that by induction, for any \(1\leq r\leq k\), we have \(\sum_{j=1}^{r}\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j -1}U_{l,i}\cap R_{3-l})}(U_{l,j}\cap R_{3-l})=\operatorname{rank}_{\mathcal{M} _{l}^{\prime}}(R_{3-l}\cap(\bigcup_{i=1}^{r}U_{l,i}))\) (using Proposition 13) and hence for \(r=k\) we get
\[\sum_{j=1}^{k}\operatorname{rank}_{\mathcal{M}_{l}^{\prime}/(\bigcup_{i=1}^{j -1}U_{l,i}\cap R_{3-l})}(U_{l,j}\cap R_{3-l})=\operatorname{rank}_{\mathcal{M }_{l}^{\prime}}(R_{3-l}),\]
therefore the number of sets \(Q_{l,x}\) built that way is exactly \(\operatorname{rank}_{\mathcal{M}_{l}^{\prime}}(R_{3-l})\), as desired.
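The splitting step used in this proof is purely combinatorial; as a side illustration, here is a minimal Python sketch (ours, not part of the proof) of partitioning a finite set into \(r\) groups of size at most \(\lceil|U|/r\rceil\):

```python
from math import ceil

def split_into_groups(items, r):
    """Partition `items` into r groups of size at most ceil(len(items)/r).

    This mirrors the step in the proof of Lemma 25 where U_{l,j} ∩ R_{3-l}
    is split into rank-many sets of bounded size.
    """
    items = list(items)
    if r <= 0:
        return []
    cap = ceil(len(items) / r)
    return [items[i * cap:(i + 1) * cap] for i in range(r)]

# Example: 10 elements split into 4 groups, each of size at most ceil(10/4) = 3.
groups = split_into_groups(range(10), 4)
assert all(len(g) <= ceil(10 / 4) for g in groups)
```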
We now continue the proof of Theorem 4. For all \(v\in R\subseteq V^{\prime}\), we know by Property (i) of Definition 3 that:
\[\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\leq\beta.\]
Hence summing over all the elements of \(R\):
\[\beta\cdot|R| \geq\sum_{v\in R}\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_ {\mathcal{M}_{2}}(v)\] \[=\sum_{\begin{subarray}{c}l\in\{1,2\}\\ i\in\{1,\ldots|S|\}\end{subarray}}\sum_{v\in R_{l,i}}\tilde{\rho}_{\mathcal{M} _{l}}(v)+\sum_{\begin{subarray}{c}l\in\{1,2\}\\ i\in\{1,\ldots\operatorname{rank}_{\mathcal{M}_{l}}(R_{3-l})\}\end{subarray}}\sum_ {v\in Q_{l,i}}\tilde{\rho}_{\mathcal{M}_{l}}(v)\] \[\geq\sum_{\begin{subarray}{c}l\in\{1,2\}\\ i\in\{1,\ldots|S|\}\end{subarray}}|R_{l,i}|\cdot(|R_{l,i}|+1)+\sum_{ \begin{subarray}{c}l\in\{1,2\}\\ i\in\{1,\ldots\operatorname{rank}_{\mathcal{M}_{l}}(R_{3-l})\}\end{subarray}}|Q_{l,i}|\cdot(|Q_{l,i}|-1)\]
By convexity (Cauchy-Schwarz), since the \(2\cdot|S|\) sets \(R_{l,i}\) and the \(\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1})\) sets \(Q_{l,i}\) both partition a set of total size \(|R|\), the chain above yields

\[\beta\geq\frac{|R|}{2\cdot|S|}+\frac{|R|}{\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1})}.\]

Moreover, as the elements of \(S\) are not in \(V^{\prime}\), Property (ii) of Definition 3 together with the construction of the sets \(R_{l,i}\) gives \(\beta^{-}\leq\tilde{\rho}_{\mathcal{M}_{1}}(o_{i})+\tilde{\rho}_{\mathcal{M}_{2}}(o_{i})\leq|R_{1,i}|+|R_{2,i}|+4\) for all \(o_{i}\in S\), so averaging over the elements of \(S\) we get \(\beta^{-}\leq\frac{|R|}{|S|}+4\). Combining the two bounds, we obtain \(\left(\beta-\frac{\beta^{-}-4}{2}\right)\cdot(\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1}))\geq|R|\). Then, as \((\beta^{-}-4)\cdot|S|\leq|R|\) and \(\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1})\leq\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+\operatorname{rank}_{\mathcal{M}_{2}}(C_{2})=\mu(V^{\prime})\), we get \(|S|\leq\left(\frac{\beta}{\beta^{-}-4}-\frac{1}{2}\right)\cdot\mu(V^{\prime})\), and therefore

\[\mu(V)=|O\backslash S|+|S|\leq\left(1+\frac{\beta}{\beta^{-}-4}-\frac{1}{2}\right)\cdot\mu(V^{\prime})=\left(\frac{1}{2}+\frac{\beta}{\beta^{-}-4}\right)\cdot\mu(V^{\prime})\leq\left(\frac{3}{2}+\varepsilon\right)\cdot\mu(V^{\prime}),\]
as \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\). This concludes the proof.
**Remark 26**.: _We can observe that \(\beta\) and \(\beta^{-}\) can be of order \(O(1/\varepsilon)\) to satisfy the constraints of Theorem 6. From now on we will suppose that \(\beta,\beta^{-}\) are \(O(1/\varepsilon)\)._
## 4 Application to One-Way Communication
Given two matroids \(\mathcal{M}_{1}=(V,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(V,\mathcal{I}_{2})\), in the one-way communication model, Alice and Bob are given \(V_{A}\) and \(V_{B}=V\backslash V_{A}\) respectively, and the goal is for Alice to send a small message to Bob so that Bob can output a large intersection of matroids \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\). Here we will show that if Alice communicates an appropriate Density-Constrained Subset of \(V_{A}\), with parameters \(\beta,\beta^{-}\) of order \(O(1/\varepsilon)\), then Bob is able to get a \(3/2+\varepsilon\) approximation of the optimal intersection.
**Theorem 8**.: _There exists a one-way communication protocol that, given any \(\varepsilon>0\), computes a \(3/2+\varepsilon\) approximation to maximum matroid intersection using a message of size \(O(\mu(V)/\varepsilon)\) from Alice to Bob, where \(\mu(V)\) denotes the size of the optimal solution of the matroid intersection problem._
By Theorem 23 we know that a DCS in the two restricted matroids \(\mathcal{M}_{1}|V_{A}\) and \(\mathcal{M}_{2}|V_{A}\) always exists, and by Proposition 22 we know that the number of elements sent by Alice is at most \(O(\mu(V)/\varepsilon)\). Hence we only need to prove the following lemma.
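The structure of the protocol is simple; the following Python sketch (ours) makes the two steps explicit, under the assumption that a DCS constructor and an exact matroid-intersection solver are available as black-box subroutines (all names below are ours, not from the paper):

```python
def one_way_protocol(V_A, V_B, M1, M2, beta, beta_minus,
                     build_dcs, max_common_independent_set):
    """Sketch of the protocol behind Theorem 8 (all names are ours).

    `build_dcs(M1, M2, ground, beta, beta_minus)` is assumed to return a
    (beta, beta^-)-DCS of the two matroids restricted to `ground`; by
    Proposition 22 its size is O(beta * mu(V)).
    `max_common_independent_set` is any exact matroid-intersection solver.
    """
    # Alice's single message: a DCS of her part of the ground set.
    message = build_dcs(M1, M2, V_A, beta, beta_minus)
    # Bob's output: an optimal common independent set inside message ∪ V_B,
    # which by Lemma 27 is a (3/2 + eps)-approximation of mu(V).
    return max_common_independent_set(M1, M2, set(message) | set(V_B))
```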
**Lemma 27**.: _Let \(\varepsilon>0\), and let \(\beta\) and \(\beta^{-}\) be parameters such that \(\beta\geq\beta^{-}+7\) and \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\). If \(V^{\prime}\) is a \((\beta,\beta^{-})\)-DCS of the two matroids \(\mathcal{M}_{1}|V_{A}\) and \(\mathcal{M}_{2}|V_{A}\), then \((3/2+\varepsilon)\cdot\mu(V^{\prime}\cup V_{B})\geq\mu(V)\)._
Proof.: Let \(O\) be an optimal solution in \(V\). Let \(O_{A}=O\cap V_{A}\) and \(O_{B}=O\cap V_{B}\). Let \(C_{1}\) and \(C_{2}\) be sets such that \(C_{1}\cup C_{2}=V^{\prime}\cup O_{B}\), \(C_{1}\cap C_{2}=\emptyset\) and they minimize the sum \(\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+\operatorname{rank}_{\mathcal{ M}_{2}}(C_{2})\). By Theorem 6 we know that \(\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+\operatorname{rank}_{\mathcal{ M}_{2}}(C_{2})=\mu(V^{\prime}\cup O_{B})\), the maximum size of a common independent set in \(V^{\prime}\cup O_{B}\).
As in the proof of Theorem 4, we will build an auxiliary set \(S\), starting with \(S=\emptyset\). If there exists an element \(o_{1}\in O\) such that \(o_{1}\notin\operatorname{span}_{\mathcal{M}_{1}}(C_{1})\cup\operatorname{ span}_{\mathcal{M}_{2}}(C_{2})\), then we add \(o_{1}\) into \(S\) and we next consider the contracted matroids \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\). We keep the same sets \(C_{1}\) and \(C_{2}\) and we try again to find an element \(o_{2}\in O\backslash S\) such that \(o_{2}\notin\operatorname{span}_{\mathcal{M}_{1}/S}(C_{1})\cup\operatorname{ span}_{\mathcal{M}_{2}/S}(C_{2})\), and we add \(o_{2}\) to \(S\). We repeat that operation until it is no longer possible to add into \(S\) another element of \(O\) satisfying the aforementioned constraint. Note that here all the elements \(o_{i}\) added to \(S\) come necessarily from \(O_{A}\).
As a result, as \(O\backslash S\) is a common independent subset in \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\), and because of Theorem 6, the size of \(O\backslash S\) is upper-bounded by \(\operatorname{rank}_{\mathcal{M}_{1}/S}(C_{1})+\operatorname{rank}_{\mathcal{ M}_{2}/S}(C_{2})\leq\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+ \operatorname{rank}_{\mathcal{M}_{2}}(C_{2})=\mu(V^{\prime}\cup O_{B})\), as in the proof of Theorem 4.
Now we need to upper-bound the value of \(|S|\). We will use the same construction as that of the proof of Theorem 4, as there is no difference in the algorithms that construct the sets \(R_{l,i}\) and \(Q_{l,i}\) (those remain subsets of \(V^{\prime}\), we can just follow the same procedures described in Lemmas 24 and 25).
After similar computations, we get the inequality:
\[\beta\geq\frac{|R|}{2\cdot|S|}+\frac{|R|}{\operatorname{rank}_{\mathcal{M}_{1} }(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1})}.\]
As the elements of \(S\) are from \(O_{A}\subset V_{A}\), and because of Property (ii) of Definition 3, we know that for all \(o_{i}\in S\),
\[\beta^{-}\leq\tilde{\rho}_{\mathcal{M}_{1}}(o_{i})+\tilde{\rho}_{\mathcal{M}_ {2}}(o_{i})\leq|R_{1,i}|+|R_{2,i}|+4,\]
so by averaging over all the elements of \(S\) we get
\[\beta^{-}\leq\frac{|R|}{|S|}+4.\]
Therefore we derive
\[\left(\beta-\frac{\beta^{-}-4}{2}\right)\cdot(\operatorname{rank}_{\mathcal{ M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2}}(R_{1}))\geq|R|.\]
Then, as \((\beta^{-}-4)\cdot|S|\leq|R|\) and \(\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{ M}_{2}}(R_{1})\leq\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+ \operatorname{rank}_{\mathcal{M}_{2}}(C_{2})=\mu(V^{\prime}\cup O_{B})\) we finally have \(\left(\beta-\frac{\beta^{-}-4}{2}\right)\cdot\mu(V^{\prime}\cup O_{B})\geq( \beta^{-}-4)\cdot|S|\), and therefore:
\[\mu(V)=|O\backslash S|+|S|\leq\left(\frac{1}{2}+\frac{\beta}{\beta^{-}-4} \right)\cdot\mu(V^{\prime}\cup O_{B})\leq\left(\frac{3}{2}+\varepsilon\right) \cdot\mu(V^{\prime}\cup O_{B})\leq\left(\frac{3}{2}+\varepsilon\right)\cdot \mu(V^{\prime}\cup V_{B}),\]
as \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\).
## 5 Application to Random-Order Streams
Now we consider our problem in the random-order streaming model. As our algorithm builds on that of Bernstein [1] for unweighted simple matching, let us briefly summarize his approach. In the first phase of the stream, he constructs a subgraph that satisfies only a weaker version of the EDCS definition (Definition 1; only Property (i) holds). In the second phase of the stream, he collects the "underfull" edges, which are those edges that violate Property (ii). He shows that, in the end, the union of the subgraph built in the first phase and the underfull edges collected in the second phase contains, with high probability, a \(3/2+\varepsilon\) approximation, and that the total memory used is of the order \(O(k\cdot poly(\log(k),1/\varepsilon))\) (here \(k\) refers to the number of vertices in the graph). As we will show below, this approach can be adapted to our context of matroid intersection.
**Definition 28**.: _We say that a subset \(V^{\prime}\) has bounded density \(\beta\) if for every element \(v\in V^{\prime}\), \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)\leq\beta\)._
**Definition 29**.: _Let \(V^{\prime}\) be a subset of \(V\) with bounded density \(\beta\). For any parameter \(\beta^{-}\), we say that an element \(v\in V\backslash V^{\prime}\) is \((V^{\prime},\beta,\beta^{-})\)-underfull if \(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)<\beta^{-}\)._
As in [1], we can get a good approximation by combining a subset \(V^{\prime}\) of bounded density \(\beta\) and the set of \((V^{\prime},\beta,\beta^{-})\)-underfull elements in \(V\backslash V^{\prime}\). The proof of the following lemma is quite similar to that of Theorem 4, so we will only highlight the points where the proofs differ.
We begin by noting that in [1, 2], the proof is done by showing that the combination of the subgraphs built in the first and second phase of the algorithm contains a subgraph which is an EDCS with respect to some subgraph containing the optimal solution. Our approach here is different in that we do not try to get a DCS of a well-chosen subset containing the optimal solution. Instead, we adapt directly the proof of Theorem 4.
**Lemma 30**.: _Let \(\varepsilon>0\), \(\beta\) and \(\beta^{-}\) be parameters such that \(\beta\geq\beta^{-}+7\) and \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\). Given a subset \(V^{\prime}\subseteq V\) with bounded density \(\beta\), if \(X\) contains all elements in \(V\backslash V^{\prime}\) that are \((V^{\prime},\beta,\beta^{-})\)-underfull, then \((3/2+\varepsilon)\cdot\mu(V^{\prime}\cup X)\geq\mu(V)\)._
Proof.: Let \(O\) be an optimal solution in \(V\). Let \(X^{\mathrm{opt}}=X\cap O\). Let \(C_{1}\) and \(C_{2}\) be sets such that \(C_{1}\cup C_{2}=V^{\prime}\cup X^{\mathrm{opt}}\), \(C_{1}\cap C_{2}=\emptyset\), and they minimize the sum \(\mathrm{rank}_{\mathcal{M}_{1}}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2}}(C_{2})\). By Theorem 6 we know that \(\mathrm{rank}_{\mathcal{M}_{1}}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2}}(C_{2})= \mu(V^{\prime}\cup X^{\mathrm{opt}})\), the maximum size of a common independent set in \(V^{\prime}\cup X^{\mathrm{opt}}\).
As in the proof of Theorem 4, we will build an auxiliary set \(S\), starting with \(S=\emptyset\). If there exists an element \(o_{1}\in O\) such that \(o_{1}\notin\mathrm{span}_{\mathcal{M}_{1}}(C_{1})\cup\mathrm{span}_{\mathcal{M }_{2}}(C_{2})\), then we add \(o_{1}\) into \(S\) and we now consider the contracted matroids \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\). We keep the same sets \(C_{1}\) and \(C_{2}\) and we try again to find an element \(o_{2}\in O\backslash S\) such that \(o_{2}\notin\mathrm{span}_{\mathcal{M}_{1}/S}(C_{1})\cup\mathrm{span}_{\mathcal{ M}_{2}/S}(C_{2})\), and we add \(o_{2}\) to \(S\). We repeat that operation until it is no longer possible to add another element to \(S\) satisfying the aforementioned constraints.
As a result, as \(O\backslash S\) is a common independent subset in \(\mathcal{M}_{1}/S\) and \(\mathcal{M}_{2}/S\), and because of Theorem 6, the size of \(O\backslash S\) is upper-bounded by \(\mathrm{rank}_{\mathcal{M}_{1}/S}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2}/S}(C_ {2})\leq\mathrm{rank}_{\mathcal{M}_{1}}(C_{1})+\mathrm{rank}_{\mathcal{M}_{2} }(C_{2})=\mu(V^{\prime}\cup X^{\mathrm{opt}})\), as in the proof of Theorem 4.
Now we need to upper-bound the value of \(|S|\). We will use the same construction as that of the proof of Theorem 4, as there is no difference in the algorithms that construct the sets \(R_{l,i}\) and \(Q_{l,i}\) (those remain subsets of \(V^{\prime}\), we can just follow the same procedures described in Lemmas 24 and 25).
Then after similar computations, we get the inequality:
\[\beta\geq\frac{|R|}{2\cdot|S|}+\frac{|R|}{\mathrm{rank}_{\mathcal{M}_{1}}(R_{2 })+\mathrm{rank}_{\mathcal{M}_{2}}(R_{1})}.\]
As the elements of \(S\) are not underfull (observe that here we use this fact, instead of using Property (ii) of Definition 3 as we have done in the proof of Theorem 4), we know that for all \(o_{i}\in S\),
\[\beta^{-}\leq\tilde{\rho}_{\mathcal{M}_{1}}(o_{i})+\tilde{\rho}_{\mathcal{M}_{ 2}}(o_{i})\leq|R_{1,i}|+|R_{2,i}|+4,\]
so by averaging over all the elements of \(S\) we get
\[\beta^{-}\leq\frac{|R|}{|S|}+4.\]
Therefore we obtain
\[\left(\beta-\frac{\beta^{-}-4}{2}\right)\cdot(\mathrm{rank}_{\mathcal{M}_{1}}( R_{2})+\mathrm{rank}_{\mathcal{M}_{2}}(R_{1}))\geq|R|.\]
Then, as \((\beta^{-}-4)\cdot|S|\leq|R|\) and \(\operatorname{rank}_{\mathcal{M}_{1}}(R_{2})+\operatorname{rank}_{\mathcal{M}_{2 }}(R_{1})\leq\operatorname{rank}_{\mathcal{M}_{1}}(C_{1})+\operatorname{rank}_ {\mathcal{M}_{2}}(C_{2})=\mu(V^{\prime}\cup X^{\operatorname{opt}})\) we finally have \(\left(\beta-\frac{\beta^{-}-4}{2}\right)\cdot\mu(V^{\prime}\cup X^{ \operatorname{opt}})\geq(\beta^{-}-4)\cdot|S|\), and therefore:
\[\mu(V)=|O\backslash S|+|S|\leq\left(\frac{1}{2}+\frac{\beta}{\beta^{-}-4} \right)\cdot\mu(V^{\prime}\cup X^{\operatorname{opt}})\leq\left(\frac{3}{2}+ \varepsilon\right)\cdot\mu(V^{\prime}\cup X^{\operatorname{opt}}),\]
as \((\beta^{-}-4)\cdot(1+\varepsilon)\geq\beta\).
Here we recall a classic probabilistic tool that we will use in the analysis of our algorithm.
**Proposition 31** (Hoeffding's inequality).: _Let \(X_{1},\ldots,X_{t}\) be \(t\) negatively associated random variables that take values in \([0,1]\). Let \(X:=\sum_{i=1}^{t}X_{i}\). Then, for all \(\lambda>0\) we have:_
\[\mathbb{P}(X-\mathbb{E}[X]\geq\lambda)\leq\exp\left(-\frac{2\lambda^{2}}{t} \right).\]
The following ideas for the streaming algorithm come from a recent paper originally intended for \(b\)-matchings [10]. For the sake of completeness, we reproduce the details in the following, with some slight adaptations to our more general case of matroid intersection.
```
1:\(V^{\prime}\leftarrow\emptyset\)
2:\(\forall\,0\leq i\leq\log_{2}k\), \(\alpha_{i}\leftarrow\left\lfloor\frac{\varepsilon\cdot n}{\log_{2}(k)\cdot(2^{ i+2}\beta^{2}+1)}\right\rfloor\)
3:for\(i=0\ldots\log_{2}k\)do
4:ProcessStopped\(\leftarrow\)False
5:for\(2^{i+2}\beta^{2}+1\) iterations do
6:FoundUnderfull\(\leftarrow\)False
7:for\(\alpha_{i}\) iterations do
8: let \(v\) be the next element in the stream
9:if\(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)<\beta^{-}\)then
10: add \(v\) to \(V^{\prime}\)
11:FoundUnderfull\(\leftarrow\)True
12:while there exists \(v^{\prime}\in V^{\prime}:\tilde{\rho}_{\mathcal{M}_{1}}(v^{\prime})+\tilde{ \rho}_{\mathcal{M}_{2}}(v^{\prime})>\beta\)do
13: remove \(v^{\prime}\) from \(V^{\prime}\)
14:ifFoundUnderfull\(=\)False then
15:ProcessStopped\(\leftarrow\)True
16:break from the loop
17:ifProcessStopped\(=\)True then
18:break from the loop
19:\(X\leftarrow\emptyset\)
20:for each remaining element \(v\) in the stream do
21:if\(\tilde{\rho}_{\mathcal{M}_{1}}(v)+\tilde{\rho}_{\mathcal{M}_{2}}(v)<\beta^{-}\)then
22: add \(v\) to \(X\)
23:return the maximum common independent set in \(V^{\prime}\cup X\)
```
**Algorithm 2** Algorithm for computing an intersection of two matroids in a random-order stream
The algorithm, formally described in Algorithm 2, consists of two phases. The first phase, corresponding to Lines 3-18, constructs a subset \(V^{\prime}\) of bounded density \(\beta\) using only an \(\varepsilon\) fraction of the stream, \(V^{\mathrm{early}}\). In the second phase, the algorithm collects the underfull elements in the remaining part of the stream \(V^{\mathrm{late}}\). As in [1] we use the idea that if no underfull element was found in an interval of size \(\alpha\) (see Lines 6-13), then with high probability the number of underfull elements remaining in the stream is bounded by some value \(\gamma=4\log(n)\frac{n}{\alpha}\). The issue is therefore how to choose the right interval size \(\alpha\), because we do not know the order of magnitude of the optimal solution \(\mu(V)\): if we proceed as in [1] by choosing a single fixed interval size \(\alpha\), then if \(\alpha\) is too small, the value of \(\gamma\) will be too big compared to \(\mu(V)\), whereas if the value of \(\alpha\) is too large we will be unable to terminate the first phase of the algorithm within the early fraction of size \(\varepsilon\cdot n\). Therefore, the idea in the first phase of the algorithm is to "guess" the value of \(\log_{2}\mu(V)\) by trying successively larger and larger values of \(i\) (see Line 3). In fact, by setting \(i_{0}=\lceil\log_{2}\mu(V)\rceil\), we know that the number of insertion/deletion operations that can be performed on a \((\beta,\beta^{-})\)-DCS is bounded by \(2^{i_{0}+2}\beta^{2}\) (see the proof of Theorem 23). As a result we know that the first phase should always stop at a time where \(i\) is smaller than or equal to \(i_{0}\), and therefore at a time when \(\alpha_{i}\geq\alpha_{i_{0}}\). Then we can prove that with high probability the number of remaining underfull elements in the stream is at most \(\gamma_{i}=4\log(n)\frac{n}{\alpha_{i}}\).
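For intuition about these two quantities, the small Python sketch below (ours, with arbitrary illustrative numbers) evaluates \(\alpha_{i}\) and the corresponding threshold \(\gamma_{i}=4\log(n)\cdot n/\alpha_{i}\) for increasing guesses \(i\): the intervals shrink while the bound on the leftover underfull elements grows.

```python
import math

def alpha(i, n, k, eps, beta):
    # Interval length used while trying guess i (Line 2 of Algorithm 2).
    return math.floor(eps * n / (math.log2(k) * (2 ** (i + 2) * beta ** 2 + 1)))

def gamma(i, n, k, eps, beta):
    # High-probability bound on the underfull elements left in the stream
    # when the first phase stops during guess i.
    a = alpha(i, n, k, eps, beta)
    return 4 * math.log(n) * n / a if a > 0 else float("inf")

# Illustrative numbers (ours): n elements, rank parameter k, accuracy eps.
n, k, eps, beta = 10**6, 10**3, 0.1, 20
for i in range(6):
    print(i, alpha(i, n, k, eps, beta), round(gamma(i, n, k, eps, beta)))
```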
**Claim 32**.: _With probability at least \(1-\exp(-2\cdot\varepsilon^{2}\cdot\mu(V))\) the late part of the stream \(V^{\mathrm{late}}\) contains at least a \((1-2\varepsilon)\) fraction of the optimal solution. Moreover, in expectation \(V^{\mathrm{late}}\) contains a \((1-\varepsilon)\) fraction of the optimal solution._
Proof.: Consider an optimal solution \(O=\{o_{1},\ldots,o_{\mu(V)}\}\). We define the random variables \(X_{i}=\mathbb{1}_{o_{i}\in V^{\mathrm{early}}}\). Hence we have \(\mathbb{E}[\sum X_{i}]=\varepsilon\cdot|O|\). Moreover, the random variables \(X_{i}\) are negatively associated, so we can use Hoeffding's inequality (see Proposition 31) to get
\[\mathbb{P}\left[\sum_{i=1}^{\mu(V)}X_{i}\geq 2\varepsilon\cdot\mu(V)\right] \leq\exp\left(-\frac{2\cdot\varepsilon^{2}\cdot\mu(V)^{2}}{\mu(V)}\right)= \exp\left(-2\cdot\varepsilon^{2}\cdot\mu(V)\right).\]
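This concentration step is easy to sanity-check numerically; the short simulation below (ours, with arbitrary parameters) estimates how often more than a \(2\varepsilon\) fraction of a fixed optimal solution falls into the early \(\varepsilon\)-fraction of a uniformly random stream order, and compares it to the Hoeffding bound \(\exp(-2\varepsilon^{2}\mu(V))\):

```python
import math
import random

def early_overflow_rate(n=5000, mu=200, eps=0.1, trials=2000, seed=0):
    rng = random.Random(seed)
    early_size = int(eps * n)
    bad = 0
    for _ in range(trials):
        # Elements 0, ..., mu-1 play the role of the optimal solution O.
        early = rng.sample(range(n), early_size)
        in_early = sum(1 for v in early if v < mu)
        if in_early >= 2 * eps * mu:
            bad += 1
    return bad / trials

print("empirical rate:", early_overflow_rate())
print("Hoeffding bound:", math.exp(-2 * 0.1 ** 2 * 200))
```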
Recall that we defined \(i_{0}=\lceil\log_{2}\mu(V)\rceil\). Algorithm 2 works when \(\mu(V)\) is not too big (otherwise we may use intervals of size \(\alpha_{i_{0}}=\lfloor\frac{\varepsilon\cdot n}{\log_{2}(k)\cdot(2^{i_{0}+2} \beta^{2}+1)}\rfloor=0\)). Here we will first argue that this case can be handled anyway.
**Claim 33**.: _We can assume that \(\frac{\varepsilon\cdot n}{\log_{2}(k)\cdot(2^{i_{0}+2}\beta^{2}+1)}\geq 1\)._
Proof.: If this is not the case, then we can just store all the elements of \(V\) as the number of elements \(n\) is bounded by \(\frac{\log_{2}(k)\cdot(2^{i_{0}+2}\beta^{2}+1)}{\varepsilon}=O(\mu(V)\cdot \log(k)\cdot(1/\varepsilon)^{3})\) (as \(\beta\) is \(O(1/\varepsilon)\), see Remark 26). As a result, if at some point of the first phase we have not stopped and we have \(\alpha_{i}=0\), then we store all the remaining elements of \(V^{\mathrm{late}}\) and we will be able to get a \((1-\varepsilon)\) approximation in expectation and a \((1-2\varepsilon)\) approximation with high probability (more precisely, at least \(1-\exp(-2\cdot c\cdot\varepsilon^{5}\cdot n/\log(k))\), for some constant \(c>0\), see Claim 32), using \(O(\mu(V)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) memory.
From now on we will assume that \(\frac{\varepsilon\cdot n}{\log_{2}(k)\cdot(2^{i_{0}+2}\beta^{2}+1)}\geq 1\). Then we can move on to our main algorithm. The following lemma is very similar to the one used in [1].
**Lemma 34**.: _The first phase of Algorithm 2 uses \(O(\beta\cdot\mu(V))\) memory and constructs a subset \(V^{\prime}\subseteq V\), satisfying the following properties:_
1. _The first phase terminates within the first_ \(\varepsilon\cdot n\) _elements of the stream._
2. _When the first phase terminates after processing some element, we have:_ (a) \(V^{\prime}\) _has bounded density_ \(\beta\)_, and contains at most_ \(O(\beta\cdot\mu(V))\) _elements;_ (b) _with probability at least_ \(1-n^{-3}\)_, the total number of_ \((V^{\prime},\beta,\beta^{-})\)_-underfull elements in the remaining part of the stream is at most_ \(\gamma=O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot\beta^{2}\cdot 1/\varepsilon)\)_._
Proof.: First, in each interval of size \(\alpha_{i}\) processed until the first phase terminates (except the last interval), at least one insertion/deletion operation is performed (as described in the proof of Theorem 23), and therefore the total number of such processed intervals is bounded by \(2\beta^{2}\cdot\mu(V)+1\). As a result, the first phase ends with some \(i\leq i_{0}=\lceil\log_{2}\mu(V)\rceil\), and the total number of elements processed in the first phase is therefore bounded by \(\varepsilon\cdot n\cdot\frac{i_{0}}{\log_{2}(k)}\leq\varepsilon\cdot n\). For Property 2(a), as the subset \(V^{\prime}\) we build always keeps a bounded density \(\beta\), Proposition 22 implies that \(V^{\prime}\) uses \(O(\beta\cdot\mu(V))=O(\mu(V)\cdot 1/\varepsilon)\) memory.
Now we turn to the last property. As mentioned previously, the intuition is simple: the algorithm only exits the first phase if it fails to find a single underfull element in an entire interval (Line 14-16), and since the stream is random, such an event implies that there are most likely few underfull elements left in the stream.
To formalize this, we call the \(j\)-th time that Lines 7-13 are processed the _epoch_ \(j\). Let \(\mathcal{A}_{j}\) be the event that FoundUnderfull is set to False in epoch \(j\). Let \(\mathcal{B}_{j}\) be the event that the number of \((V^{\prime},\beta,\beta^{-})\)-underfull elements in the remaining part of the stream is larger than some \(\gamma\). Note that the last property fails to hold if and only if we have \(\mathcal{A}_{j}\wedge\mathcal{B}_{j}\) for some \(j\), so we want to upper bound \(\mathbb{P}[\mathcal{A}_{j}\wedge\mathcal{B}_{j}]\). Let \(V_{j}^{r}\) contain all elements in \(V\) that have not yet appeared in the stream at the _beginning_ of epoch \(j\) (r for remaining). Let \(V_{j}^{e}\) be the elements that appear in epoch \(j\) (e for epoch), and note that \(V_{j}^{e}\) is a subset of size \(\alpha_{i}\geq\alpha_{i_{0}}=\alpha_{\lceil\log_{2}\mu(V)\rceil}=\alpha\) chosen uniformly at random from \(V_{j}^{r}\). Define \(V_{j}^{\prime}\) to be the subset \(V^{\prime}\) at the beginning of epoch \(j\), and define \(V_{j}^{u}\subseteq V_{j}^{r}\) to be the set of remaining underfull elements with respect to \(V_{j}^{\prime}\), \(\beta\), and \(\beta^{-}\). Observe that because of event \(\mathcal{A}_{j}\), the subset \(V^{\prime}\) remains the same throughout epoch \(j\), so an element that is underfull at any point during the epoch will be underfull at the end as well. Thus, \(\mathcal{A}_{j}\wedge\mathcal{B}_{j}\) is equivalent to the event that \(|V_{j}^{u}|>\gamma\) and \(V_{j}^{u}\cap V_{j}^{e}=\emptyset\).
Let \(\mathcal{A}_{j}^{k}\) be the event that the \(k\)-th element of epoch \(j\) is not in \(V_{j}^{u}\). We have that \(\mathbb{P}[\mathcal{B}_{j}\wedge\mathcal{A}_{j}]\leq\mathbb{P}[\mathcal{A}_{j}\,|\,\mathcal{B}_{j}]\leq\mathbb{P}[\mathcal{A}_{j}^{1}\,|\,\mathcal{B}_{j}]\prod_{k=2}^{\alpha}\mathbb{P}[\mathcal{A}_{j}^{k}\,|\,\mathcal{B}_{j},\mathcal{A}_{j}^{1},\ldots,\mathcal{A}_{j}^{k-1}]\), where the second inequality comes from the fact that \(V_{j}^{e}\) has size at least \(\alpha=\alpha_{\lceil\log_{2}\mu(V)\rceil}\).
Now, observe that \(\mathbb{P}[\mathcal{A}_{j}^{1}\,|\,\mathcal{B}_{j}]<1-\frac{\gamma}{n}\) because the first element of the epoch is chosen uniformly at random from the set of \(\leq n\) remaining elements, and the event fails if the chosen element is in \(V_{j}^{u}\), where \(|V_{j}^{u}|>\gamma\) by definition of \(\mathcal{B}_{j}\). Similarly, for any \(k\), \(\mathbb{P}[\mathcal{A}_{j}^{k}\,|\,\mathcal{B}_{j},\mathcal{A}_{j}^{1},\ldots,\mathcal{A}_{j}^{k-1}]<1-\frac{\gamma}{n}\) because conditioning on the previous events \(\mathcal{A}_{j}^{t}\) implies that no element from \(V_{j}^{u}\) has yet appeared in this epoch, so there are still at least \(\gamma\) elements from \(V_{j}^{u}\) left in the stream.
We now set
\[\gamma=4\log(n)\cdot\frac{n}{\alpha}=4\log(n)\cdot n\cdot\left\lfloor\frac{ \varepsilon\cdot n}{\log_{2}(k)\cdot(2^{i_{0}+2}\beta^{2}+1)}\right\rfloor^{-1},\]
and as we assumed that \(\frac{\varepsilon\cdot n}{\log_{2}(k)\cdot(2^{i_{0}+2}\beta^{2}+1)}\geq 1\) (and as a factor of at most \(2\) separates \(\lfloor x\rfloor\) and \(x\) when \(x\geq 1\)) we have \(\gamma=O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\).
Combining the above equations yields that \(\mathbb{P}[\mathcal{B}_{j}\wedge\mathcal{A}_{j}]\leq(1-\frac{\gamma}{n})^{ \alpha}=(1-\frac{4\log(n)}{\alpha})^{\alpha}\leq n^{-4}\). There are clearly at most \(n\) epochs, so union bounding over all of them shows that the last property fails with probability at most \(n^{-3}\), as desired.
Then we can combine the previous results to obtain the following theorem.
**Theorem 9**.: _Let \(1/4>\varepsilon>0\). One can extract from a randomly-ordered stream of elements a common independent subset in two matroids with an approximation ratio of \(3/2+\varepsilon\) in expectation, using \(O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) memory, where \(\mu(V)\) denotes the size of the optimal solution, and \(k\) is the smaller rank of the two given matroids. Moreover the approximation ratio is worse than \(3/2+\varepsilon\) only with probability at most \(\exp(-1/32\cdot\varepsilon^{2}\cdot\mu(V))+n^{-3}\)._
Proof.: Using Lemma 30 on the ground set \(V^{\prime}\cup V^{\mathrm{late}}\) we get \((3/2+\varepsilon)\cdot\mu(V^{\prime}\cup X)\geq\mu(V^{\prime}\cup V^{\mathrm{late}})\). Applying Claim 32, we know that in expectation \((1-\varepsilon)^{-1}\cdot\mu(V^{\prime}\cup V^{\mathrm{late}})\geq\mu(V)\). Hence in expectation we also have
\[(3/2+\varepsilon)\cdot(1-\varepsilon)^{-1}\cdot\mu(V^{\prime}\cup X)\geq\mu(V).\]
Moreover, by Lemma 34, the memory consumption is bounded by \(O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) with probability at least \(1-n^{-3}\). Hence we can decide that, if during the execution of the algorithm at some point the memory consumption reaches the bound defined in Lemma 34 (recall that this bound can be computed as it depends only on the epoch when the first phase stopped), then we discard the remaining elements. As this event happens only with probability at most \(n^{-3}\), this is not harmful for the expectation of the approximation ratio.
Moreover, using Claim 32, we know that \(V^{\prime}\cup X\) contains a \((1-2\varepsilon)^{-1}\cdot(3/2+\varepsilon)\) approximation of the optimal common independent subset with probability at least \(1-\exp(-2\cdot\varepsilon^{2}\cdot\mu(V))\). As the
memory consumption of \(O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) is guaranteed with probability at least \(1-n^{-3}\) (see Lemma 34), then with probability at least \(1-(\exp(-2\cdot\varepsilon^{2}\cdot\mu(V))+n^{-3})\) (by union bound), we can obtain a \((1-2\varepsilon)^{-1}\cdot(3/2+\varepsilon)\) approximation using \(O(\mu(V)\cdot\log(n)\cdot\log(k)\cdot(1/\varepsilon)^{3})\) memory. As \(\varepsilon<1/4\), we have \((1-2\varepsilon)^{-1}\cdot(3/2+\varepsilon)\leq(3/2+8\varepsilon)\), and therefore, to get a \(3/2+\varepsilon\) approximation, we have to run the algorithm with \(\varepsilon^{\prime}=\varepsilon/8\), so that the probability of obtaining an approximation ratio worse than \(3/2+\varepsilon\) is at most \(\exp(-2\cdot(\varepsilon/8)^{2}\cdot\mu(V))+n^{-3}\).
## Appendix A Deferred Proofs
Proof of Lemma 20.: In the following, we will use the notation \(P_{a,b}=U_{a}^{\mathrm{old}}\cap U_{b}^{\mathrm{new}}\) for \(a\), \(b\in\llbracket 1,k\rrbracket\).
We prove (i) by strong induction. We start with the case \(j=1\). Let \(i_{0}\) be the largest index \(i\) such that \(P_{1,i}\neq\emptyset\). Then we know that for all \(v\in U_{1}^{\mathrm{old}}\),
\[\tilde{\rho}_{\mathcal{M}}^{\mathrm{new}}(v)\geq\rho_{\mathcal{M}^{\prime\mathrm{new}}/(\bigcup_{i=1}^{i_{0}-1}U_{i}^{\mathrm{new}})}(U_{i_{0}}^{\mathrm{new}}).\]
For the inductive step, one considers the index \(j\in\llbracket 1,k\rrbracket\) such that \(v\in\operatorname{span}_{\mathcal{M}}(\bigcup_{i=1}^{j}U_{i}^{\mathrm{old}})\); by (i) we know that for all \(v^{\prime}\in\bigcup_{i=1}^{j}U_{i}^{\mathrm{old}}\), the new density \(\tilde{\rho}_{\mathcal{M}}^{\mathrm{new}}(v^{\prime})\) obeys the analogous lower bound. For (iv), observe that the construction of the new density-based decomposition can be
decomposed into two phases, the early phase when the elements of \(U_{\mathrm{big}}\cup\{u^{\mathrm{new}}\}\) are processed, and the late phase when the elements of \(U_{\mathrm{small}}\) are processed. As \(\mathrm{span}_{\mathcal{M}}(U_{\mathrm{big}}\cup\{u^{\mathrm{new}}\})=\mathrm{span}_{\mathcal{M}}(U_{\mathrm{big}})\), the construction of the sets in the late phase is the same no matter whether \(u^{\mathrm{new}}\) is in \(V^{\prime}\) or not. Hence the sets are the same and so are the associated densities. This concludes the proof of (iv).
Proof of Lemma 21.: Consider the behavior when \(u^{\mathrm{old}}\) is added to \(V^{\prime}\backslash\{u^{\mathrm{old}}\}\): it is clear that Lemma 20 applies. As a result, points (i) and (ii) follow easily from Lemma 20 (ii). For (iii), observe that from Lemma 20 (iii) we get that \(\tilde{\rho}^{\mathrm{new}}_{\mathcal{M}}(u^{\mathrm{old}})\leq\tilde{\rho}^{\mathrm{old}}_{\mathcal{M}}(u^{\mathrm{old}})\leq\tilde{\rho}^{\mathrm{new}}_{\mathcal{M}}(u^{\mathrm{old}})+1\), and hence we also obtain (iii) here. For (iv) the bounds are a bit different from what we could get from Lemma 20 (iv), but using ideas similar to those of the previous proof one can easily show the desired result.
Proof of Theorem 23.: Start with an empty subset \(V^{\prime}\). Then apply the following local improvement steps repeatedly on \(V^{\prime}\), until it is no longer possible. If an element in \(V^{\prime}\) violates Property (i) of Definition 3, then remove it from \(V^{\prime}\); similarly, if an element in \(V\backslash V^{\prime}\) violates Property (ii), insert it into \(V^{\prime}\). Note that among the two local improvement steps, the priority is given to the deletion operations.
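To make the local-improvement procedure concrete, here is a minimal Python sketch of it (ours, not from the paper). The densities \(\tilde{\rho}_{\mathcal{M}_{1}}\) and \(\tilde{\rho}_{\mathcal{M}_{2}}\) are passed in as black-box oracles, since evaluating them requires the density-based decompositions (and hence the rank oracles) of the two matroids; all function and parameter names are ours.

```python
def build_dcs(V, density1, density2, beta, beta_minus, max_steps=10**6):
    """Local-search construction of a (beta, beta^-)-DCS, following the
    procedure described in the proof of Theorem 23.

    density1(v, Vp) and density2(v, Vp) are assumed oracles returning the
    density rho~ of element v with respect to the current subset Vp in each
    matroid; their implementation is not reproduced here.
    """
    Vp = set()
    for _ in range(max_steps):  # the potential argument bounds this by 2*beta^2*mu(V)
        # Deletions have priority: remove an element violating Property (i).
        violating = next((v for v in Vp
                          if density1(v, Vp) + density2(v, Vp) > beta), None)
        if violating is not None:
            Vp.remove(violating)
            continue
        # Otherwise insert an element of V \ Vp violating Property (ii).
        candidate = next((v for v in set(V) - Vp
                          if density1(v, Vp) + density2(v, Vp) < beta_minus), None)
        if candidate is None:
            return Vp  # no violated property: Vp is a (beta, beta^-)-DCS
        Vp.add(candidate)
    return Vp
```

The potential function \(\Phi\) introduced below is exactly what guarantees that this loop terminates, after at most \(2\cdot\beta^{2}\cdot\mu(V)\) improvement steps.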
Observe that when no element violates Property (i), all the elements have densities bounded by \(\beta\) in both matroids. To prove that this algorithm terminates in finite time and to show the existence of a DCS, we introduce a potential function:
\[\Phi(V^{\prime})=(2\beta-7)\cdot|V^{\prime}|-\sum_{l\in\{1,2\}}\left[\sum_{j=1 }^{k}\left(\mathrm{rank}_{\mathcal{M}^{\prime}_{l}/(\bigcup_{i=1}^{j-1}U_{l,i} )}(U_{l,j})\cdot(\rho_{\mathcal{M}^{\prime}_{l}/(\bigcup_{i=1}^{j-1}U_{l,i})}( U_{l,j}))^{2}\right)\right]\]
where \(U_{l,1},\ldots,U_{l,k}\) denotes the density-based decomposition of \(V^{\prime}\) in \(\mathcal{M}_{l}\) for \(l\in\{1,2\}\). We can rewrite this function in a more convenient form:
\[\Phi(V^{\prime})=(2\beta-7)\cdot|V^{\prime}|-\sum_{l\in\{1,2\}}\left[\sum_{j=1 }^{k}\rho_{l,j}^{2}\right]\]
where for \(l\in\{1,2\}\), the vector \(\rho_{l}=(\rho_{l,1},\ldots,\rho_{l,k})\) is the list of the densities of each set of the decomposition \(U_{l,1},\ldots,U_{l,k}\) counted with multiplicity equal to their rank (so that, for instance, \(\rho_{\mathcal{M}^{\prime}_{l}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\) appears \(\mathrm{rank}_{\mathcal{M}^{\prime}_{l}/(\bigcup_{i=1}^{j-1}U_{i})}(U_{j})\) times in that vector; we potentially add some zeros in the end so that the vector has exactly \(k\) components).
The execution of the algorithm can be seen as a series of batches of operations, consisting of one insertion operation followed by some number of deletion operations. Each batch has a finite size because we can make only a finite number of deletions when no insertion is performed. At the end of each batch of operations, all the densities are bounded by \(\beta\), hence using Proposition 22 (as for this result to hold it is only required that the densities are bounded by \(\beta\)) we have that \(\Phi\) is bounded by \((2\beta-7)\cdot\beta\cdot\mu(V)\). Then we have to show that \(\Phi\) increases at each local improvement step by at least some constant amount and we will be done.
When Property (i) of Definition 3 is not satisfied by some element in \(u^{\mathrm{old}}\in V^{\prime\mathrm{old}}\), then it is removed to get a new set \(V^{\prime\mathrm{new}}=V^{\prime\mathrm{old}}\backslash\{u^{\mathrm{old}}\}\). Hence from the vectors \(\rho_{l}^{\mathrm{old}}=(\rho_{l,1},\ldots,\rho_{l,k})\) we get new vectors \(\rho_{l}^{\mathrm{new}}=(\rho_{l,1}-\lambda_{l,1},\ldots,\rho_{l,k}-\lambda_{l,k})\), with the following properties:
* \(\lambda_{l,i}\geq 0\) (by Lemma 21 (ii));
* \(\sum_{j=1}^{k}\lambda_{l,j}=1\) for \(l\in\{1,2\}\) (as we always have \(\sum_{j=1}^{k}\rho_{l,j}=|V^{\prime}|\), see Proposition 18);
* \(\lambda_{l,i}>0\Rightarrow\tilde{\rho}^{\mathrm{old}}_{\mathcal{M}_{l}}(u^{ \mathrm{old}})-1\leq\rho_{l,i}\leq\tilde{\rho}^{\mathrm{old}}_{\mathcal{M}_{l}} (u^{\mathrm{old}})\) for \(l\in\{1,2\}\) (by Lemma 21 (iv)).
As a result we get:
\[\Phi(V^{\prime\mathrm{new}})-\Phi(V^{\prime\mathrm{old}})=-(2\beta-7)+\sum_{l \in\{1,2\}}\left[\sum_{j=1}^{k}\rho_{l,j}^{2}-(\rho_{l,j}-\lambda_{l,j})^{2}\right]\]
\[=-2\beta+7+\sum_{l\in\{1,2\}}\left[\sum_{j=1}^{k}2\rho_{l,j}\lambda_{l,j}-\sum_{j=1}^{k}\lambda_{l,j}^{2}\right]\] \[\geq-2\beta+5+\sum_{l\in\{1,2\}}\left[\sum_{j=1}^{k}2\rho_{l,j} \lambda_{l,j}\right]\] \[=-2\beta+5+\sum_{l\in\{1,2\}}\left[\sum_{\widetilde{\rho}_{ \mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{old}})-1\leq\rho_{l,j}\leq \widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{old}})}2\rho_{l,j }\lambda_{l,j}\right]\] \[\geq-2\beta+5+\sum_{l\in\{1,2\}}\left[2\cdot(\widetilde{\rho}_{ \mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{old}})-1)\sum_{\widetilde{\rho}_{ \mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{old}})-1\leq\rho_{l,j}\leq \widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{old}})}\lambda_{l,j}\right]\] \[=-2\beta+5+2\cdot(\widetilde{\rho}_{\mathcal{M}_{1}}^{\mathrm{old }}(u^{\mathrm{old}})+\widetilde{\rho}_{\mathcal{M}_{2}}^{\mathrm{old}}(u^{ \mathrm{old}})-2)\] \[>-2\beta+1+2\beta=1.\]
The first inequality comes from \(\lambda_{l,i}\geq 0\) and \(\sum_{j=1}^{k}\lambda_{l,j}=1\), implying that \(\sum_{j=1}^{k}\lambda_{l,j}^{2}\leq 1\). The last inequality comes from \(\widetilde{\rho}_{\mathcal{M}_{1}}^{\mathrm{old}}(u^{\mathrm{old}})+\widetilde {\rho}_{\mathcal{M}_{2}}^{\mathrm{old}}(u^{\mathrm{old}})>\beta\). Hence we get an increase of \(\Phi\) of at least \(1\).
Similarly, when Property (ii) of Definition 3 is not satisfied by some element in \(u^{\mathrm{new}}\in V\backslash V^{\prime\mathrm{old}}\), then it is added to get a new set \(V^{\prime\mathrm{new}}=V^{\prime\mathrm{old}}\cup\{u^{\mathrm{new}}\}\). Hence from the vectors \(\rho_{l}^{\mathrm{old}}=(\rho_{l,1},\ldots,\rho_{l,k})\) we get new vectors \(\rho_{l}^{\mathrm{new}}=(\rho_{l,1}+\lambda_{l,1},\ldots,\rho_{l,k}+\lambda_{ l,k})\), with the following properties:
* \(\lambda_{l,i}\geq 0\) (by Lemma 20 (ii));
* \(\sum_{j=1}^{k}\lambda_{l,j}=1\) for \(l\in\{1,2\}\) (as we always have \(\sum_{j=1}^{k}\rho_{l,j}=|V^{\prime}|\), see Proposition 18);
* \(\lambda_{l,i}>0\Rightarrow\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^ {\mathrm{new}})\leq\rho_{l,i}\leq\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{ old}}(u^{\mathrm{new}})+1\) for \(l\in\{1,2\}\) (by Lemma 20 (iv)).
As a result we get:
\[\Phi(V^{\prime\mathrm{new}})-\Phi(V^{\prime\mathrm{old}}) =(2\beta-7)-\sum_{l\in\{1,2\}}\left[\sum_{j=1}^{k}(\rho_{l,j}+\lambda_{l,j})^{2}-\rho_{l,j}^{2}\right]\] \[=2\beta-7-\sum_{l\in\{1,2\}}\left[\sum_{j=1}^{k}2\rho_{l,j}\lambda_{l,j}+\sum_{j=1}^{k}\lambda_{l,j}^{2}\right]\] \[\geq 2\beta-9-\sum_{l\in\{1,2\}}\left[\sum_{j=1}^{k}2\rho_{l,j}\lambda_{l,j}\right]\] \[=2\beta-9-\sum_{l\in\{1,2\}}\left[\sum_{\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{new}})\leq\rho_{l,j}\leq\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{new}})+1}2\rho_{l,j}\lambda_{l,j}\right]\] \[\geq 2\beta-9-\sum_{l\in\{1,2\}}\left[2\cdot(\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{new}})+1)\sum_{\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{new}})\leq\rho_{l,j}\leq\widetilde{\rho}_{\mathcal{M}_{l}}^{\mathrm{old}}(u^{\mathrm{new}})+1}\lambda_{l,j}\right]\] \[=2\beta-9-2\cdot(\widetilde{\rho}_{\mathcal{M}_{1}}^{\mathrm{old}}(u^{\mathrm{new}})+\widetilde{\rho}_{\mathcal{M}_{2}}^{\mathrm{old}}(u^{\mathrm{new}})+2)\] \[>2\cdot(\beta-\beta^{-})-13\geq 1.\]
The first inequality comes from \(\lambda_{l,i}\geq 0\) and \(\sum_{j=1}^{k}\lambda_{l,j}=1\), implying that \(\sum_{j=1}^{k}\lambda_{l,j}^{2}\leq 1\). To move to the last line we use that \(\widetilde{\rho}_{\mathcal{M}_{1}}^{\mathrm{old}}(u^{\mathrm{new}})+\widetilde {\rho}_{\mathcal{M}_{2}}^{\mathrm{old}}(u^{\mathrm{new}})<\beta^{-}\), and then that \(\beta\geq\beta^{-}+7\). Hence we also get an increase of \(\Phi\) of at least \(1\).
As a result, a \((\beta,\beta^{-})\)-DCS can be found in at most \(2\cdot\beta^{2}\cdot\mu(V)\) such local improvement steps.
|
2304.14177 | ChatGPT vs State-of-the-Art Models: A Benchmarking Study in Keyphrase
Generation Task | Transformer-based language models, including ChatGPT, have demonstrated
exceptional performance in various natural language generation tasks. However,
there has been limited research evaluating ChatGPT's keyphrase generation
ability, which involves identifying informative phrases that accurately reflect
a document's content. This study seeks to address this gap by comparing
ChatGPT's keyphrase generation performance with state-of-the-art models, while
also testing its potential as a solution for two significant challenges in the
field: domain adaptation and keyphrase generation from long documents. We
conducted experiments on six publicly available datasets from scientific
articles and news domains, analyzing performance on both short and long
documents. Our results show that ChatGPT outperforms current state-of-the-art
models in all tested datasets and environments, generating high-quality
keyphrases that adapt well to diverse domains and document lengths. | Roberto Martínez-Cruz, Alvaro J. López-López, José Portela | 2023-04-27T13:25:43Z | http://arxiv.org/abs/2304.14177v2 | # ChatGPT vs State-of-the-Art Models: A Benchmarking Study in Keyphrase Generation Task
###### Abstract
Transformer-based language models, including ChatGPT, have demonstrated exceptional performance in various natural language generation tasks. However, there has been limited research evaluating ChatGPT's keyphrase generation ability, which involves identifying informative phrases that accurately reflect a document's content. This study seeks to address this gap by comparing ChatGPT's keyphrase generation performance with state-of-the-art models, while also testing its potential as a solution for two significant challenges in the field: domain adaptation and keyphrase generation from long documents. We conducted experiments on six publicly available datasets from scientific articles and news domains, analyzing performance on both short and long documents. Our results show that ChatGPT outperforms current state-of-the-art models in all tested datasets and environments, generating high-quality keyphrases that adapt well to diverse domains and document lengths.
ChatGPT · Text Generation · Keyphrase Generation · Natural Language Processing · Deep Learning · Domain Adaptation · Long Documents
## 1 Introduction
Keyphrase generation (KPG) is the process of automatically identifying or creating a set of phrases that effectively capture the essence of a document or text. These keyphrases provide a succinct summary of the main topics or themes discussed in the text and can be utilized for a range of downstream tasks such as document classification (Hulth and Megyesi (2006)), clustering (Hammouda et al. (2005)), summarization (Qazvinian et al. (2010)), recommendation (Augenstein et al. (2017)), and information retrieval (Sanyal et al. (2019)).
There are two types of keyphrases: extractive (found in the document) and abstractive (not found in the document). Historically, extractive methods, known as Keyphrase Extraction (KPE) and based on sequence tagging models (Nguyen and Kan (2007), Gollapalli et al. (2017), Alzaidy et al. (2019), Rungta et al. (2020), Sahrawat et al. (2020)), have demonstrated the highest accuracy, although they are limited in their ability to predict abstractive keyphrases. KPG offers two primary advantages over KPE: it can predict both extractive and abstractive keyphrases, and it can leverage prompt-based learning to more effectively benefit from multitask learning.
The majority of KPG models follow the text-to-text generation training paradigm, which has greatly benefited from the development of transformer models and pre-trained language models (PLMs). These PLMs are capable of acquiring a comprehensive contextualized representation of the text by undergoing pre-training with a vast corpus of data via diverse self-supervised learning tasks. The current state-of-the-art (SotA) model, KeyBART (Kulkarni et al. (2022)), is based on this training paradigm and employs the transformer-based architecture of BART (Lewis et al. (2019)), a PLM, as its foundation. KeyBART has demonstrated promising performance on various keyphrase generation benchmarks,
surpassing previous state-of-the-art models. However, there is still ample room for improvement in the KPG domain, particularly in the generation of abstractive keyphrases and the effective integration of external knowledge sources.
Text-to-text generation has proven to be an effective approach to facilitate multi-task learning. The incorporation of prompt-based learning, as demonstrated in models such as T5 (Raffel et al. (2020)), has significantly enhanced the performance of text-to-text generation models, particularly in few-shot and zero-shot learning scenarios.
Generative Pre-trained Transformer (GPT) models have garnered significant attention for their ability to generate coherent and context-aware text (Radford and Narasimhan (2018)). The latest GPT model, ChatGPT (Ouyang et al. (2022)), based on GPT-3.5 (Brown et al. (2020)), has demonstrated its impressive ability to generate human-like responses to text-based conversations across various domains and topics. Given the adaptability of ChatGPT to new domains and the transfer learning capabilities from other tasks learned by the model, we believe that it has the potential to achieve state-of-the-art results in the KPG task. Additionally, ChatGPT may also help address a long-standing challenge in the field of KPG - domain adaptation. By leveraging the power of transfer learning and its ability to adapt to new domains, ChatGPT can potentially overcome domain-specific challenges in KPG and enhance the performance and efficiency of the task.
Keyphrase generation from long documents is a persistent challenge in the field. Current approaches often rely on summarizing texts, such as using abstracts, to identify important phrases. However, this method has limitations. Real-world situations may not always provide summaries, leading to reduced algorithm performance on longer texts. Additionally, crucial keyphrases may be missing from the summaries, and contextual information in the original text may not be reflected in them, greatly reducing the algorithm's effectiveness. While SotA approaches utilize contextualized text representations from PLMs, these representations are limited to a maximum number of words, preventing the embedding of long-term word relationships. The greater maximum token limit of ChatGPT, which is four times larger than that of KeyBART, may lead to better performance on long documents.
Our goal is to assess the potential of ChatGPT for KPG by conducting performance tests across diverse topics, such as scientific and news domains, as well as varying document lengths. We intend to determine whether ChatGPT's performance remains consistent across different document lengths and topics. To do so, we compare its results with the SotA algorithms, specifically KeyBART, on six publicly available datasets. Additionally, we provide real-world examples to demonstrate how ChatGPT generates keyphrases from input texts.
The contributions of this paper can be summarized as follows:
* To the best of our knowledge, this is the first attempt to employ ChatGPT for the KPG task in both the news and long-document domains. To evaluate its performance, we compare it against state-of-the-art models across six widely used benchmark datasets. These datasets encompass both short and long scientific documents, as well as articles from the news domain.
* To the extent of our knowledge, no other study has benchmarked keyphrase generation from long documents. Previous studies have relied solely on the title and abstract to generate keyphrases, which may not provide an accurate representation of real-world scenarios where the entire document needs to be processed due to the absence of a summary.
* Our results are comprehensively analyzed, and case studies are presented to showcase the model's strengths and limitations in the KPG task. These case studies use real-world examples to demonstrate how the model leverages knowledge from other tasks and pretraining objectives to enhance keyphrase generation.
## 2 Related Work
### Keyphrase Extraction and Generation
KPE involves selecting relevant phrases from a document, and there are two main approaches: supervised and unsupervised. Unsupervised methods typically use a two-step extraction process that involves heuristically identifying candidate phrases, then sorting and ranking them using graph-based approaches (Mihalcea and Tarau (2004); Bougouin et al. (2013); Wang et al. (2014); Bennani-Smires et al. (2018); Mahata et al. (2018)). Supervised methods use labeled data and can be customized for specific linguistic and contextual characteristics. Earlier supervised approaches relied on manually-engineered features (Hulth (2003); Kim and Kan (2009); Nguyen and Kan (2007)), but a sequence labeling approach using a conditional random field (CRF) was introduced in Gollapalli et al. (2017), and recent methods incorporate pre-trained word embeddings (Alzaidy et al. (2019)) like Word2Vec (Mikolov et al. (2013)) or GloVe (Pennington et al. (2014)) to improve the accuracy. The improved sequence labeling approach uses a bidirectional long short-term memory (BiLSTM) + CRF layer to incorporate contextual information and model classification dependencies.
The transformer architecture (Vaswani et al. (2017)), which has demonstrated improved performance in various natural language processing tasks, has been employed in several works for the KPE task, including TransKP (Rungta et al. (2020)) and TNT-KID (Martinc et al. (2021)). Its ability to embed words in a sequence and provide representations that depend on the word and its context has led to the development of PLMs that specialize in providing contextualized embeddings. When combined with a trained BiLSTM-CRF layer, these embeddings have outperformed previous models, as demonstrated in Sahrawat et al. (2020). Some works, such as KBIR (Kulkarni et al. (2022)), have designed self-supervised objectives specifically for keyphrase extraction tasks to further enhance the representation of these embeddings, resulting in state-of-the-art results.
However, extractive models cannot handle absent keyphrases. To overcome this limitation, KPG introduces a sequence-to-sequence generation method, first presented by Meng et al. (2017) through CopyRNN, a Seq2Seq framework that employs attention and a copy mechanism. Since then, researchers have proposed several enhancements to this methodology. Ye and Wang (2018) explored a semi-supervised method, Chen et al. (2018) investigated a review mechanism to reduce duplicates, and Chen et al. (2019) focused on leveraging title information. Meanwhile, Wang et al. (2019) exploited deeper topics of the document, and Zhao and Zhang (2019) utilized linguistic constraints. Reinforcement learning was introduced by Chan et al. (2019) and Swaminathan et al. (2020). Chen et al. (2020) proposed an exclusive hierarchical decoding framework, while Yuan et al. (2020) introduced a model that generates multiple keyphrases as delimiter-separated sequences. Zhao et al. (2021) proposed separate mechanisms to deal with present and absent keyphrase generation. Huang et al. (2021) presented an AdaGM method to increase the discreteness of keyphrase generation, and Ye et al. (2021) proposed a one2set method for generating diverse keyphrases as a set. Other works, including Chen et al. (2019), Ahmad et al. (2021), and Wu et al. (2021), focused on jointly learning extraction and generation for keyphrase prediction. Wu et al. (2022) introduced prompt-based learning with a non-autoregressive approach, which is constrained to generate absent keyphrases.
The SotA results were achieved by KeyBART (Kulkarni et al. (2022)), which presented a new pre-training technique for BART (Lewis et al. (2019)). Unlike earlier pre-training methods that aimed to remove noise from the input text, KeyBART generates keyphrases related to the input text in concatenated sequential (CatSeq) format.
Keyphrase extraction and generation are crucial tasks in natural language processing, but only a few studies have addressed the challenge of extracting keyphrases from lengthy documents. One such study by Mahata et al. (2022) released two large datasets containing fully extracted text and metadata, evaluating the performance of unsupervised and supervised algorithms for keyphrase extraction. Another notable example is the work by Docekal and Smrz (2022), which proposes a system that chunks documents while maintaining a global context as a query for relevant keyphrase extraction. They employ a pre-trained BERT model to estimate the probability of a given text span forming a keyphrase and find that a shorter context with a query outperforms a longer context without a query. Martinez-Cruz et al. (2023) introduced a specialized approach to enhance keyphrase extraction from lengthy documents. They employed graph embedding techniques on the co-occurrence graph derived from the entire document, enriching the understanding of the document by incorporating a holistic perspective into the Pre-trained Language Model (PLM). The observed improvements underscore the importance of considering a comprehensive view of the full document for effective keyphrase extraction and generation. To enhance keyphrase generation, Garg et al. (2022) investigated the inclusion of information beyond the title and abstract as input in the field of keyphrase generation. Their approach demonstrated improved results, indicating that the model should not solely rely on the summary provided by the title and abstract to predict high-quality keyphrases. These studies highlight the significance of developing effective methods for keyphrase extraction and generation from lengthy documents and offer promising directions for future research.
While previous studies, such as Song et al. (2023), have examined the effectiveness of ChatGPT in the KPG task, they did not assess its performance in full-length documents that exceed the model's maximum input limit without truncation, nor did they investigate its suitability for the news domain. Our study, on the other hand, provides a comprehensive analysis that extensively explores the capabilities of both KeyBART and ChatGPT across various use cases in the KPG task.
### GPT Models
GPT models have been widely used in NLP tasks, such as language generation, language understanding, and question answering. GPT models, including GPT-2 (Radford et al. (2019)), GPT-3 and GPT 3.5 (Brown et al. (2020)), have achieved SotA performance in several NLP tasks and have become the de-facto standard in many applications. They exclusively use the transformer's decoder and are pre-trained in a massive corpora of data using self-supervised learning.
Reinforcement learning from human feedback (RLHF) allows language models to learn from explicit feedback provided by human annotators, leading to improved text quality. Originally developed for training robots (Christiano et al. (2017), Ibarz et al. (2018)), recent studies have shown the benefits of applying RLHF to fine-tune language models (Ziegler et al.
[2020], Stiennon et al. [2022], Wu et al. [2021b], Jaques et al. [2019], Kreutzer et al. [2018], Lawrence and Riezler [2018], Zhou and Xu [2020], Cho et al. [2019], Perez et al. [2019], Madaan et al. [2023]), including GPT models such as ChatGPT (Ouyang et al. [2022]).
Multi-task learning has proven to be advantageous for GPT models since it entails instructing language models and is connected to cross-task generalization research in language models. Research has shown that fine-tuning language models on various NLP tasks with instructions can enhance their performance downstream, making it a powerful method for few-shot and zero-shot learning. These advantages have been corroborated in several studies (Howard and Ruder [2018],Devlin et al. [2019],Dong et al. [2019], McCann et al. [2018],Keskar et al. [2019]).
By leveraging both RLHF and multi-task learning paradigms, ChatGPT has been fine-tuned from the GPT3.5 model to excel in chatbot development, surpassing its predecessors and showcasing its potential to revolutionize the field of conversational AI. While previous studies have benchmarked its performance in various NLP tasks, such as machine translation (Hendy et al. [2023]), there have been no previous studies exploring its potential in the KPG task.
## 3 Experimental Setup
### Datasets
To evaluate the performance of KPG models, we employ the test sets of six publicly available datasets covering both the scientific and news domains, with varying document lengths. The datasets we use are as follows:
* Inspec1 (Hulth [2003]) is a scientific literature dataset that consists of 2,000 abstracts and their corresponding keyphrases, covering various topics. The abstracts are from papers belonging to the scientific domains of Computers and Control and Information Technology published between 1998 and 2002. The dataset has a train, validation and test split that contains 1,000, 500 and 500 samples respectively. Footnote 1: [https://huggingface.co/datasets/midas/inspec](https://huggingface.co/datasets/midas/inspec)
* KP20K2 (Meng et al. [2017]), a large-scale dataset with over 528K articles for training, 20K articles for validation, and 20K articles for testing from the PubMed Central Open Access Subset, covering various domains including medicine, biology, and physics.
* The NUS3 dataset (Nguyen and Kan [2007]) consists of 211 full scientific documents that have been manually annotated with their respective keyphrases. It is used exclusively for evaluation purposes, as the dataset comprises solely a test split. Footnote 3: [https://huggingface.co/datasets/midas/nus](https://huggingface.co/datasets/midas/nus)
* SemEval20104 (Kim et al. [2010]), a dataset comprising 284 English full scientific papers from the ACM Digital Library, which are split into test and train sets containing 100 and 144 articles, respectively. Footnote 4: [https://huggingface.co/datasets/midas/semeval2010](https://huggingface.co/datasets/midas/semeval2010)
* The KPTimes5 (Gallina et al. [2019]) dataset consists of 279,923 news articles from NY Times and 10K from JPTimes, curated by expert editors, and divided into train, validation, and test sets with 259,923, 10,000, and 20,000 samples, respectively. Footnote 5: [https://huggingface.co/datasets/midas/kptimes](https://huggingface.co/datasets/midas/kptimes)
* The DUC20016 dataset (Wan and Xiao [2008]) is a widely recognized corpus of news articles that includes 308 documents and 2,488 manually annotated keyphrases. It should be noted that this dataset only contains a test split and no training data. Footnote 6: [https://huggingface.co/datasets/midas/duc2001](https://huggingface.co/datasets/midas/duc2001)
Relevant statistics from the datasets can be found in Table 1.
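The datasets above are distributed through the Hugging Face hub (see the footnotes); a minimal loading sketch is shown below. The configuration name ("raw") and the field names (`document`, `extractive_keyphrases`, `abstractive_keyphrases`) are our assumptions about the midas dataset schema and should be checked against the corresponding dataset cards.

```python
from datasets import load_dataset

# Repository ID taken from the footnotes above; the "raw" configuration
# name is an assumption and may differ per dataset.
inspec_test = load_dataset("midas/inspec", "raw", split="test")

sample = inspec_test[0]
# Assumed schema: `document` is a list of tokens, and the gold keyphrases
# are split into extractive and abstractive subsets.
text = " ".join(sample["document"])
gold = list(sample["extractive_keyphrases"]) + list(sample["abstractive_keyphrases"])
print(text[:200])
print(gold[:5])
```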
### Baselines
To evaluate ChatGPT's performance, we compare it with several other generative models with respect to their ability to predict keyphrases under different generation frameworks. These models are:
* KeyBART (Kulkarni et al. [2022]), a state-of-the-art model built on BART that features a novel pre-training approach: instead of merely removing noise from the input text, it learns to generate the relevant keyphrases in the CatSeq format (available at [https://huggingface.co/bloomberg/KeyBART](https://huggingface.co/bloomberg/KeyBART)).
* Prompt-Based KPG (Wu et al. (2022)), which utilizes a prompt-based learning method to generate absent keyphrases. The prompt is created from the words that overlap between the absent keyphrase and the document, and a mask-predict decoder completes the keyphrase while adhering to the constraints of the prompt.
* UniKeyphrase (Wu et al. (2021)), which is a unified framework for both present keyphrase extraction and absent keyphrase generation. This framework is based on a pre-trained prefix LM model.
* Purely generative models that use a Seq2Seq approach, including CatSeq (Yuan et al. (2020)) and its enhanced versions CatSeqCorr (Chen et al. (2018)), CatSeqTG (Chen et al. (2019)), and CatSeqD (Yuan et al. (2020)), along with the ExHiRD-h model (Chen et al. (2020)), for comparison.
Relevant information from the main models can be found in Table 2.
### Evaluation Metrics
We used the \(F1@K\) evaluation metric (Kim et al. (2010)), where \(K\) represents the number of predicted keyphrases to be considered. Equations 1, 2, and 3 illustrate how to compute \(F1@K\). Prior to evaluation, we preprocessed the ground truth and predicted keyphrases by converting them to lowercase, stemming them, and removing punctuation, and we used exact matching. Let \(Y\) denote the ground truth keyphrases, and \(\bar{Y}=(\bar{y}_{1},\bar{y}_{2},\ldots,\bar{y}_{m})\) denote the predicted keyphrases. The metrics are defined as follows:
\[Precision@k=\frac{|Y\cap\bar{Y}_{k}|}{min\{|\bar{Y}_{k}|,k\}} \tag{1}\]
\[Recall@k=\frac{|Y\cap\bar{Y}_{k}|}{|Y|} \tag{2}\]
\[F1@k=\frac{2*Precision@k*Recall@k}{Precision@k+Recall@k} \tag{3}\]
| Dataset | Test Size | Long Doc | Domain | Avg. Words | Avg. Extractive KPs | Avg. Abstractive KPs |
| --- | --- | --- | --- | --- | --- | --- |
| Inspec | 500 | No | Scientific | 135 | 6.56 | 3.26 |
| KP20k | 20,000 | No | Scientific | 160 | 2.34 | 2.94 |
| NUS | 211 | Yes | Scientific | 9,287 | 8.02 | 3.08 |
| SemEval2010 | 100 | Yes | Scientific | 8,404 | 9.17 | 6.07 |
| KPTimes | 20,000 | No | News | 643 | 2.72 | 2.3 |
| DUC2001 | 308 | No | News | 847 | 7.14 | 0.92 |

Table 1: Statistics of the test splits for the datasets used in our experiments
| Model | Training Domain | Max Input Tokens |
| --- | --- | --- |
| ChatGPT | Multi-domain | 4,096 (summing input and output tokens) |
| KeyBART | Scientific | 1,024 |
| Prompt Base KPG | Scientific | 384 |
| UniKeyphrase | Scientific | 384 |

Table 2: Insights from the main models used in our experiments
Here, \(\bar{Y}_{k}\) denotes the top \(k\) elements of the predicted keyphrase set \(\bar{Y}\). In our case, we set \(K\) either to \(M\), the total number of keyphrases predicted by the model (\(F1@M\)), or to \(5\), covering the top 5 keyphrases generated by the model (\(F1@5\)).
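To make the evaluation protocol concrete, the sketch below shows one way these metrics can be computed; the lowercasing, punctuation removal, Porter stemming, and exact matching follow the description above, while the function and variable names are our own.

```python
import string
from nltk.stem import PorterStemmer  # Porter stemmer used to normalize keyphrases

stemmer = PorterStemmer()

def normalize(phrase: str) -> str:
    """Lowercase a keyphrase, strip punctuation, and stem every token."""
    phrase = phrase.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(stemmer.stem(tok) for tok in phrase.split())

def f1_at_k(gold: list, predicted: list, k: int) -> float:
    """Compute F1@k with exact matching on normalized keyphrases (Eqs. 1-3)."""
    gold_set = {normalize(p) for p in gold}
    top_k = [normalize(p) for p in predicted[:k]]
    if not gold_set or not top_k:
        return 0.0
    matches = len(gold_set & set(top_k))
    precision = matches / min(len(top_k), k)
    recall = matches / len(gold_set)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# F1@M uses k equal to the number of keyphrases the model actually produced:
# f1_at_m = f1_at_k(gold, predicted, k=len(predicted))
```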
### Setting
For KeyBART, results are generated using beam search with a beam width of 50, the maximum length of the generated sequence restricted to 40 tokens, and a temperature of 0. When the number of input tokens exceeds the model's maximum input limit, the tokens are divided into non-overlapping chunks whose size equals that limit; keyphrases are generated for each chunk, then concatenated, and any duplicates are removed (see the sketch below).
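A minimal sketch of this chunk-and-merge procedure is given below. It assumes the public bloomberg/KeyBART checkpoint and the Hugging Face transformers generation API; the ';' separator used to split the CatSeq-style output, as well as the helper names, are our assumptions rather than the authors' exact implementation.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "bloomberg/KeyBART"
MAX_INPUT_TOKENS = 1024  # KeyBART's input limit (see Table 2)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def keybart_keyphrases(document: str) -> list:
    """Split a document into non-overlapping chunks of MAX_INPUT_TOKENS,
    generate keyphrases for each chunk, then concatenate and deduplicate."""
    ids = tokenizer(document, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + MAX_INPUT_TOKENS] for i in range(0, len(ids), MAX_INPUT_TOKENS)]
    keyphrases = []
    for chunk in chunks:
        inputs = tokenizer(tokenizer.decode(chunk), return_tensors="pt",
                           truncation=True, max_length=MAX_INPUT_TOKENS)
        out = model.generate(**inputs, num_beams=50, max_length=40)
        decoded = tokenizer.decode(out[0], skip_special_tokens=True)
        # Assumed ';'-separated CatSeq-style output
        keyphrases.extend(p.strip() for p in decoded.split(";") if p.strip())
    return list(dict.fromkeys(keyphrases))  # drop duplicates, preserve order
```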
To generate keyphrases with ChatGPT, we use the gpt-3.5-turbo model in chat-completion mode with the prompt illustrated in Figure 1, where the \(text\) variable holds the input text from which the keyphrases are to be generated. The generated sequence is restricted to 40 tokens, with a temperature of 0, a frequency penalty of 0.8, and a presence penalty of 0. Although the model's response may vary in the chat format, it always includes a list of keyphrases presented in one of three ways: comma-separated, enumerated, or itemized; the response is post-processed with regular expressions to obtain a list of keyphrases. Since the experiments involve long documents that exceed the maximum input token limit, the text is split into non-overlapping 2,000-word segments, each submitted with its own prompt. The final step concatenates the results and removes duplicates to produce the complete list of keyphrases (a sketch of this pipeline follows below).
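The sketch below illustrates this pipeline using the openai Python client interface that was current at the time of the study (pre-1.0); the prompt string stands in for the one shown in Figure 1, and the response-parsing regular expression is a simplified example of the post-processing described above.

```python
import re
import openai  # pre-1.0 client interface

def chatgpt_keyphrases(segment: str) -> list:
    """Query gpt-3.5-turbo for the keyphrases of one <=2000-word segment."""
    prompt = f"Extract the keyphrases from the following text:\n{segment}"  # stand-in for Figure 1
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=40,
        temperature=0,
        frequency_penalty=0.8,
        presence_penalty=0,
    )
    answer = response["choices"][0]["message"]["content"]
    # The reply is comma-separated, enumerated ("1. ...") or itemized ("- ...");
    # split on list markers first, then on commas.
    parts = re.split(r"\n\s*(?:\d+[.)]|[-*])\s*", "\n" + answer)
    phrases = [p.strip(" .\n") for part in parts for p in part.split(",")]
    return [p for p in phrases if p]

def document_keyphrases(document: str) -> list:
    """Split a long document into non-overlapping 2000-word segments,
    query each segment, then concatenate and deduplicate the predictions."""
    words = document.split()
    segments = [" ".join(words[i:i + 2000]) for i in range(0, len(words), 2000)]
    predictions = [kp for seg in segments for kp in chatgpt_keyphrases(seg)]
    return list(dict.fromkeys(predictions))
```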
The primary aim of this study is to evaluate the performance of scientific-domain models on datasets of short documents. Unlike prior studies that relied only on the title and abstract of papers in the long-document datasets, we use the entire documents to assess the models' effectiveness on longer texts. Moreover, we compare the models' ability to adapt to different domains by evaluating their performance on news-domain datasets. For both analyses, we compare the results of the current SotA model, KeyBART, and ChatGPT.
We used the first five generated keyphrases for \(F1@5\). If a model produced fewer than five keyphrases, we appended random incorrect keyphrases until there were five predictions. However, because long documents require multiple separate generations for the same sample, this metric does not provide clear insights into the model's performance there; in that scenario we therefore benchmark the results using only \(F1@M\). To compare the ground-truth keyphrases with the generated ones, we normalize both with the Porter stemmer, as in Meng et al. (2017).
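As a small illustration of this padding rule (the filler strings below are arbitrary placeholders of our own choosing, guaranteed not to match any gold keyphrase):

```python
def pad_to_five(predictions: list) -> list:
    """Keep the first five predictions; if fewer were generated, append
    deliberately incorrect fillers so that precision@5 is penalized."""
    padded = predictions[:5]
    padded += [f"<incorrect-filler-{i}>" for i in range(5 - len(padded))]
    return padded
```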
## 4 Results
This section presents the results of our experiments. ChatGPT outperforms the specialized models in all scenarios, with the performance gap widening as the document length increases or as the document's domain moves farther away from the scientific domain.
### Short Scientific Documents
As previously highlighted, the Inspec and KP20k datasets are used to benchmark the models on short scientific documents, a scenario in which all models except ChatGPT are specialized in the domain. The results are shown in Table 3.
Despite all models being specialized in the domain, ChatGPT surpasses them in every case except the present-keyphrase (KPE) task on KP20K, where the baselines' fine-tuning on this dataset explains their exceptionally strong performance. However, ChatGPT outperforms them in the generation of abstractive keyphrases on both datasets. We speculate that this task benefits greatly from ChatGPT's multi-task learning, which enables the model to learn useful knowledge from other tasks and apply it to generate high-quality abstractive keyphrases; this could explain why the specialized models fine-tuned on this distribution were outperformed by ChatGPT.

Figure 1: Example prompt used to generate keyphrases with ChatGPT
### Long Scientific Documents
To assess the performance of the models on long scientific documents, we use the SemEval2010 and NUS datasets, in which the number of tokens per sample is several times larger than the maximum allowed by the models. The results of these experiments are presented in Table 4.
In this scenario, it is evident that ChatGPT outperforms KeyBART by a substantial margin. This can be credited to ChatGPT's higher input token limit, which enables it to capture more contextual information, such as distant word relationships, that is crucial for generating accurate and relevant keyphrases. Furthermore, we hypothesize that the incorporation of other tasks during training has made it possible to build a larger language model without compromising its performance on this specific task; this approach could potentially be used to train larger language models specialized in KPG.
| Task | Model | SemEval2010 F1@M | NUS F1@M |
| --- | --- | --- | --- |
| Present Keyphrases | KeyBART | 0.137 | 0.143 |
| Present Keyphrases | ChatGPT | **0.186** | **0.1996** |
| Absent Keyphrases | KeyBART | 0.019 | 0.010 |
| Absent Keyphrases | ChatGPT | **0.021** | **0.042** |

Table 4: Results for long scientific documents of keyphrase prediction on benchmarks. The bold-faced values indicate the best performances across the board.
| Task | Model | Inspec F1@5 | Inspec F1@M | KP20k F1@5 | KP20k F1@M |
| --- | --- | --- | --- | --- | --- |
| Present Keyphrases | catSeq | 0.225 | 0.262 | 0.291 | 0.367 |
| Present Keyphrases | CatSeqD | 0.219 | 0.263 | 0.285 | 0.363 |
| Present Keyphrases | CatSeqCorr | 0.227 | 0.269 | 0.289 | 0.365 |
| Present Keyphrases | CatSeqTG | 0.229 | 0.270 | 0.292 | 0.366 |
| Present Keyphrases | ExHiRD-h | 0.253 | 0.291 | 0.311 | 0.364 |
| Present Keyphrases | SEG-Net | 0.216 | 0.265 | 0.311 | 0.379 |
| Present Keyphrases | UniKeyphrase | 0.260 | 0.288 | 0.347 | 0.352 |
| Present Keyphrases | Prompt Base KPG | 0.260 | 0.294 | **0.351** | 0.355 |
| Present Keyphrases | KeyBART | 0.278 | 0.301 | 0.301 | **0.398** |
| Present Keyphrases | ChatGPT | **0.352** | **0.403** | 0.232 | 0.251 |
| Absent Keyphrases | catSeq | 0.004 | 0.008 | 0.015 | 0.032 |
| Absent Keyphrases | CatSeqD | 0.006 | 0.011 | 0.015 | 0.031 |
| Absent Keyphrases | CatSeqCorr | 0.005 | 0.009 | 0.015 | 0.032 |
| Absent Keyphrases | CatSeqTG | 0.005 | 0.011 | 0.015 | 0.032 |
| Absent Keyphrases | ExHiRD-h | 0.011 | 0.022 | 0.016 | 0.032 |
| Absent Keyphrases | SEG-Net | 0.009 | 0.015 | 0.018 | 0.036 |
| Absent Keyphrases | UniKeyphrase | 0.012 | 0.022 | 0.032 | 0.058 |
| Absent Keyphrases | Prompt Base KPG | 0.017 | 0.022 | 0.032 | 0.042 |
| Absent Keyphrases | KeyBART | 0.041 | 0.045 | 0.035 | 0.035 |
| Absent Keyphrases | ChatGPT | **0.049** | **0.059** | **0.044** | **0.056** |

Table 3: Results for short scientific documents of keyphrase prediction on benchmarks. The bold-faced values indicate the best performances across the board.
### News Domain
To evaluate the domain-adaptation abilities of the models, we employ the news-domain datasets DUC2001 and KPTimes. These datasets differ significantly in both domain and distribution from the scientific datasets. Both models, however, have seen this domain during pre-training; KeyBART, for example, is built on BART, whose pre-training corpus includes this distribution. The results are displayed in Table 5.
As demonstrated by the results, ChatGPT achieved significantly stronger performance in this domain, surpassing the results of KeyBART by a factor of three or more in every benchmark. The notable performance difference between ChatGPT and KeyBART can be attributed to the fact that although KeyBART's weights include knowledge from the news domain due to BART's pre-training, this knowledge may have been relatively forgotten during KeyBART's pre-training and fine-tuning in the scientific domain. It's worth noting that ChatGPT may not have been explicitly trained on the news domain for the KPG task. However, during its training, other tasks in the news domain were included, and ChatGPT may have reused useful knowledge from these tasks to improve its performance on KPG.
## 5 Case Studies
This section presents several case studies, including both short and long scientific documents, as well as a news example. Our goal is to demonstrate how ChatGPT can enhance the Keyphrase Generation (KPG) task and highlight its benefits in real-world scenarios. For each sample, we used both the KeyBART and ChatGPT models to generate the keyphrases. To make it easier to view our results, we have color-coded the keyphrases in the accompanying images according to the scheme specified in Table 6. It is important to note that many of the keyphrases that were labeled as abstractive are actually extractive. Additionally, in short documents, most of the keyphrases labeled as abstractive appear later in the full document, underscoring the importance of processing the entire long document rather than just its summarized abstract.
### Case Study 1: Short Scientific Documents
#### 5.1.1 Case Study 1.1
This case study presents a short scientific document, specifically a sample of the Inspec test dataset with ID 2166, as shown in Figure 2.
| Highlighting Color | Associated Significance |
| --- | --- |
| Yellow | True keyphrase unpredicted by both models |
| Light Blue | True keyphrase correctly predicted by both models |
| Dark Blue | True keyphrase correctly predicted by KeyBART only |
| Green | True keyphrase correctly predicted by ChatGPT only |

Table 6: Corresponding Significance of Each Highlight Color
| Task | Model | DUC2001 F1@5 | DUC2001 F1@M | KPTimes F1@5 | KPTimes F1@M |
| --- | --- | --- | --- | --- | --- |
| Present Keyphrases | KeyBART | 0.023 | 0.079 | 0.023 | 0.063 |
| Present Keyphrases | ChatGPT | **0.267** | **0.292** | **0.279** | **0.290** |
| Absent Keyphrases | KeyBART | 0.001 | 0.001 | 0.006 | 0.010 |
| Absent Keyphrases | ChatGPT | **0.029** | **0.030** | **0.021** | **0.022** |

Table 5: Results for news of keyphrase prediction on benchmarks. The bold-faced values indicate the best performances across the board.
As observed, KeyBART was unable to predict half of the keyphrases even in its specialized domain. We hypothesize that, due to the document's brevity, the model struggles to identify the significance of the keywords. ChatGPT, however, was not hindered by this limitation and utilized its pre-training knowledge to assign importance to these phrases. We queried ChatGPT directly through its front end to determine whether it was familiar with the keyphrases that only it predicted, and the response can be seen in Figure 3. The model not only recognized the terms but also grouped them together and provided a brief introduction, demonstrating how knowledge gained from other tasks and pre-training objectives can be useful for KPG.
#### 5.1.2 Case Study 1.2
The sample document chosen for this case study is a short scientific piece shown in Figure 4, sourced from the Inspec test dataset and bearing the identification number 2043.
Interestingly, ChatGPT exhibits superior performance in this abstract, which contains no repeated keyphrases. Such documents can be more challenging, as the importance of a word cannot be inferred from its redundant references. It is worth mentioning that these keyphrases are reiterated throughout the entire document, emphasizing the need for a comprehensive understanding of the full text to accurately identify keyphrases.
KeyBART's performance excels in the initial sections of the document but declines in the middle, and neither model successfully predicts any keyphrases in the latter parts. This observation suggests that the final sections are more complex, as the significance of a word cannot be deduced from its earlier mentions. This limitation impacts KeyBART more severely than ChatGPT.
In this instance, as demonstrated in Figure 5, we directly asked ChatGPT if it was familiar with the paper. Although the model could not correctly predict specific details from the document, such as the authors' names, it successfully summarized the paper's main points. This implies that ChatGPT possesses accurate latent knowledge of the article acquired during its pre-training, which in turn enhances its performance in the keyphrase generation task.
Figure 3: Case Study 1.1: ChatGPT’s knowledge in the field directly questioned
Figure 2: Case Study 1.1: Short Scientific Document from Inspec Test Dataset
###### Abstract
The **algorithmic complexity** of the **innermost loops** that determine the complexity of algorithms in **computational electromagnetics (CEM) codes** are analyzed according to their **operation count** and the impact of underlying **computer hardware**. As **memory chips** are much slower than arithmetic processors, codes that involve a high data movement compared to the number of arithmetic operations are executed comparatively slower. Hence, **matrix-matrix multiplications** are much faster than **matrix-vector multiplications**. It is seen that it is not sufficient to compare only the complexity, but also the actual performance of algorithms to judge on faster execution. Implications involve **FDTD loops**, **LU factorizations**, and iterative solvers for dense matrices. Run times on two reference platforms, namely an Athlon 900 MHz and an **HP PA 8600** processor, verify the findings.
Figure 4: Case Study 1.2: Short Scientific Document from Inspec Test Dataset
Figure 5: Case Study 1.2: Direct Inquiry of ChatGPT’s Familiarity with the Paper
#### 5.1.3 Case Study 1.3
This case study features another short scientific article, identified by number 2150 and sourced from the Inspec test dataset, as illustrated in Figure 6.
KeyBART and ChatGPT display comparable performance in this case, with a low intersection between their correct predictions. KeyBART succeeds in accurately predicting generic phrases from the scientific domain, such as "power supply," whereas ChatGPT is more adept at identifying specific terms, like the company name "ABB," likely due to its industry knowledge. Interestingly, both models have several inaccurately predicted keyphrases in common, such as "Data Storage," which could potentially be a keyphrase that the author overlooked. Keyphrase identification can sometimes be ambiguous.
We asked the model directly about ABB, a company involved in data storage re-formatting. Figure 7 displays the results, which show that while the model accurately describes the company, it lacks knowledge about the specific topic described in the article. This instance represents ChatGPT's lowest performance on the Inspec dataset. It seems that the model relies heavily on its inner knowledge of the document, which may explain its lower accuracy in cases where it lacks specific information.
Figure 6: Case Study 1.3: Short Scientific Document from Inspec Test Dataset
Figure 7: Case Study 1.3: Investigating ChatGPT’s Familiarity with the Company and Topic Discussed in the Paper
### Case Study 2: Long Scientific Documents
#### 5.2.1 Case Study 2.1
This case evaluates the models' performance on long scientific documents, specifically a sample of the SemEval2010 test dataset with ID 'C-17'. Its title and abstract are shown in Figure 8.
In this example, the lack of context in the title and abstract made it impossible for KeyBART to identify any keyword. However, due to its pre-training and field knowledge, it was still able to generate coherent abstractive keyphrases such as 'teleconferencing' and 'collaborative virtual environment.' In the case of long documents, the limitations posed by the maximum input tokens may prevent the model from comprehending long-term relationships between words necessary for generating keyphrases. The higher maximum input tokens in ChatGPT have contributed to its superior performance. As shown in Figure 9, keyphrases such as 'SIP' and 'Conference Server' gain greater importance in later parts of the document, such as the conclusion, which cannot be extrapolated without a holistic view of the entire document.
Domain knowledge gained from training on other tasks is another key component that explains ChatGPT's superior performance in such documents. Figure 10 illustrates how the model is capable of providing a coherent response when asked a question derived from the article's title, demonstrating the crucial role of domain knowledge in KPG from long articles.
#### 5.2.2 Case Study 2.2
This example pertains to an extensive scientific paper from the SemEval2010 test dataset, identified as 'I-10'. The title and abstract can be viewed in Figure 11.
Observably, KeyBART struggles to identify 'incremental learning' from the title and fails to recognize the repeated term 'agent.' In contrast, ChatGPT accurately predicts both elements. Nevertheless, KeyBART does partially capture 'agent' with predictions such as 'multi-agent system' and 'autonomous agent.' Furthermore, it accurately anticipates the abstractive keyphrase 'mas-consistency' through alternative expressions like 'mas-consistent learning,' which ChatGPT did not manage to recognize.
Key portions of the document, as well as the conclusion, can be found in Figure 12. Notably, both models failed to identify 'knowledge' as a keyphrase, despite it being repeated more than 10 times throughout the document. This oversight could result from the models lacking the full document context, causing them to perceive 'knowledge' as a common word. Another instance is KeyBART's inability to recognize 'Incremental Learning,' which appears in significant sections
Figure 8: Case Study 2.1: Long Scientific Document from SemEval2010 Test Dataset - Title & Abstract
Figure 9: Case Study 2.1: Key Portions of the Long Scientific Document from SemEval2010 Test Dataset
# SMILE: Sound Multi-agent Incremental LEarning ;-)'
###### Abstract
This article deals with the problem of collaborative learning in a multi-agent system. Here each agent can update incrementally its beliefs \(B\) (the concept representation) so that it is in a way kept _consistent_ with the whole set of information \(K\) (the examples) that he has received from the environment or other agents. We extend this notion of consistency (or soundness) to the whole MAS and discuss how to obtain that, at any moment, a same consistent concept representation is present in each agent. The corresponding protocol is applied to supervised concept learning. The resulting method SMILE (standing for _Sound Multi-agent Incremental LEarning_) is described and experimented here. Surprisingly some difficult boolean formulas are better learned, given the same learning set, by a Multi agent system than by a single agent.
later in the document, such as a subsection title. This limitation might be related to its narrow token context or a lack of familiarity with the paper's specialized area.
To probe the implicit knowledge that ChatGPT holds about the document, we asked it to provide a brief summary of the key concepts the paper covers. The resulting summary is displayed in Figure 13. The model incorrectly characterized the collaborative learning method outlined in the document as reinforcement learning. Despite this error, it correctly identified that the primary focus of the paper is artificial intelligence, indicating that its prediction was not entirely off the mark. Additionally, we directly queried the model about the difference between collaborative learning and reinforcement learning; the results are shown in Figure 14. The model demonstrated a clear understanding of the distinction between the two terms, suggesting that its misidentification of the paper's content may have been an informed guess.
#### 5.2.3 Case Study 2.3
This instance relates to a long scientific document in the SemEval2010 evaluation dataset, designated as 'I-10'. The title and abstract are observable in Figure 15.
In this example, both models accurately identify the keyphrases covered in the document's title and abstract. However, as shown in Figure 16, neither can identify phrases located at crucial points in the document, such as 'Reputation-based adaption' or 'commitment-based semantics', the latter even being included in the conclusion. We speculate that this may be due to the lack of long-term relationship embedding mechanisms in both models.
It's worth noting that KeyBART outperforms ChatGPT in this sample. KeyBART can correctly identify all the keyphrases that ChatGPT predicts correctly and, in addition, correctly predicts the extractive keyphrase 'state transition systems' and comes close to correctly predicting the abstractive keyphrase 'social notion', with predictions such as 'social science' or 'social attitude'.
In Figure 17, we prompt ChatGPT to provide definitions of the keyphrases it did not predict by contextualizing the field they belong to. Surprisingly, ChatGPT can correctly recognize the field and provide accurate definitions for both phrases, despite not being able to identify them as keyphrases initially.
### Case Study 3: News Domain
#### 5.3.1 Case Study 3.1
In this study, we assess the model's ability to generalize across domains by analyzing a sample from the DUC2001 dataset with ID 'AP891006-0029'. As depicted in Figure 18, the article belongs to the sports news domain, which is markedly different from the scientific domain. This allows us to evaluate the model's capacity to handle a broad range of domains beyond scientific literature.
The drop in KeyBART's performance was expected, given that the domain of the sample article is substantially different from its training data. However, the model was still able to accurately generate the scientific keyphrase 'anabolic steroids'.
Figure 13: Case Study 2.2: ChatGPT’s Familiarity with the Paper
Figure 14: Case Study 2.2: Examining ChatGPT’s Comprehension of Reinforcement and Collaborative Learning
Figure 15: Case Study 2.3: Long Scientific Document from SemEval2010 Test Dataset - Title & Abstract
Figure 16: Case Study 2.3: Key Portions of the Long Scientific Document from SemEval2010 Test Dataset
the world-class sprinter who was knocked off track and field's pedestal after testing positive for steroids, says it's wrong for athletes to use the muscle-building substance. "I got caught in Seoul! I lost my gold medal," the Canadian told reporters as legislation to classify anabolic steroids as a controlled substance was introduced on Thursday. "I'm here to tell the people of this country it's wrong to cheat, not to take it. It's bad for your health." Watching Johnson was his chief nemesis, Carl Lewis, the man who was awarded the Olympic gold medal Johnson lost. "I think it's great," Lewis said of the legislation. "They're making a move and it's very positive. I'm happy to see it." However, the flamboyant, pony-tailed Lewis told reporters: "I don't understand why Ben Johnson's here." Lewis said he attended the news conference because he was working on his autobiography, and one of the chapters deals with steroids. He said he didn't intend to upstage Johnson. Reps. Mel Levine, D-Calif., Henry Waxman, D-Calif., and Benjamin Gilman, R-N.Y., invited Johnson to attend as they presented the legislation that would make anabolic steroids a controlled substance similar to the designation given to cocaine and heroin. The lawmakers emphasized the increasing abuse of steroids by college, high school, and even junior high school athletes who believe the substance will enhance their performance. "America is about to have an adolescent time-bomb explode in its hands," Levine said. "But if we act quickly enough to restrict steroid distribution, and to increase the penalties for illicit distribution, we can prevent this plague from spreading." Levine referred to the abuse of steroids as "the silent side of the drug disease in this country." He applauded Johnson's courage for attending the news conference. Johnson later stepped up to the microphones and in a quiet voice with a slight stutter told other athletes not to make the same mistake he did, urging them "to come forward, to come clean." Lewis dismissed reporters' questions that he had used steroids, indicating he would be willing to run against Johnson "if he comes back and he's clean." While Lewis held the spotlight, Johnson slipped away to adjoining congressional offices. "He's not here to compete with Carl Lewis. I hope he will someday," said Ed Futerman, Johnson's lawyer.
The model's incorrectly generated keyphrases, such as 'biomedical and behavioral research' and 'human factors', demonstrate that it still approached the article as if it belonged to the scientific domain, which is consistent with the domain on which the model was trained. ChatGPT, in contrast, benefits from its multi-domain training, which makes it more robust: it can leverage the knowledge acquired from various tasks to generate high-quality keyphrases in any domain included in its training.
As the article covers an important event in sports history, we can directly test the model's knowledge of it. Figure 19 shows that, when posed a simple question, ChatGPT is able to describe the events reported in the article, indicating that the model has latent knowledge that goes beyond the textual content. This capability enables the model to identify relevant keyphrases more accurately.
#### 5.3.2 Case Study 3.2
This case study evaluates the model's ability to generalize across various domains by analyzing a specific instance from the DUC2001 dataset identified as 'WSJ910529-0003'. The article, which belongs to the gossip press domain and discusses a famous actress's medical condition, is shown in Figure 20.
As demonstrated, ChatGPT can accurately predict almost all the keyphrases, whereas KeyBART fails to predict any. KeyBART generates mostly abstractive keyphrases related to the medical domain, such as'medical records' and 'occupational safety', which is expected since it is constrained to generate keyphrases only from the domain included in its training. Therefore, it cannot adapt well to a new domain that is far from its training data.
We evaluated ChatGPT's knowledge by asking it directly about the actress and the events mentioned in the article. The results are shown in Figure 21, which reveals that the model has knowledge about the events, but it struggles with details such as monetary values and locations. This knowledge is obtained from ChatGPT's pre-training on other tasks, and it significantly enhances the model's performance in the KPG task.
#### 5.3.3 Case Study 3.3
This case study examines the model's capacity to generalize across different domains by analyzing a specific instance identified as 'LA103089-0070' from the DUC2001 dataset. The article, which belongs to the aviation or military news domain, is displayed in Figure 22 and is one of the closest articles to the scientific domain that can be found in the dataset.
Figure 18: Case Study 3.1: News from DUC2001 Dataset
The lawsuit, which has been pending for nine months, arose out of two articles published by the Enquirer in June 1990 reporting on Miss Taylor's condition and activities at St. John's Hospital, Santa Monica, Calif., where she was treated last spring for pneumonia. The Enquirer said that after gaining access to all of Miss Taylor's medical records, it is satisfied that the articles reporting on the actress's medical condition and the report that she was drinking were in error. The paper said it published the articles in good-faith reliance on information provided to it, but the information was inaccurate. Iain Calder, Enquirer editor, said in a statement, "we regret the inaccuracies in the articles but are pleased that this dispute has come to an amicable end." Miss Taylor said she feels "completely vindicated," and that after the newspaper's management determined the articles were in error, the Enquirer "acted promptly and in good faith." Miss Taylor initially sought damages of $20 million in Los Angeles Superior Court, according to Neil Papiano, her attorney. Although Mr. Papiano wouldn't specify the size of the settlement, he said "we were persuaded that it was certainly large enough that we shouldn't go to trial." As previously reported, G.P. Group Inc., the Lantana, Fla., publisher of the Enquirer and Star tabloids, plans to raise $350 million by offering 43% of the firm in an initial public offering.
Figure 19: Case Study 3.1: ChatGPT’s knowledge from the events described in the article
Figure 20: Case Study 3.2: News from DUC2001 Dataset
Figure 21: Case Study 3.2: ChatGPT’s Knowledge of the Actress and the Events Described in the Article
Figure 22: Case Study 3.3: News from DUC2001 Dataset
In this example, both models demonstrate a low level of performance, with ChatGPT performing slightly better than KeyBART. KeyBART's predictions consist of both abstractive and extractive keyphrases from the scientific domain, indicating that the model is not resilient enough to function in an untrained domain. Some of its predictions, such as 'air traffic control' or 'suicide prevention', are not relevant to the text. The keyphrases 'crash' and 'victim' are frequently repeated, but neither of the models can identify their importance.
In this sample, we directly asked ChatGPT to explain the reasons behind its poor performance in this specific article. The corresponding prompt and answer are depicted in Figures 23 and 24, respectively. The model attributes its low performance to the lack of a clear focus, limited context, and limited vocabulary, which are plausible explanations that have been repeatedly identified as challenges in KPG.
## 6 Conclusions and Future Work
Our study shows that ChatGPT outperforms its peers on all benchmarks, notably in handling lengthy documents and non-scientific domains. The model's superior performance is attributed to its larger maximum input token limit and its multidomain training, allowing it to gain shared in-domain knowledge from other tasks and utilize it for keyphrase generation. Additionally, the model benefits from a vast training dataset, which facilitates training a larger language model and increasing the maximum number of input tokens without any adverse impact on the KPG task's performance.
These results demonstrate that multitask and multidomain learning, with prompt-based learning playing a crucial role, can enhance keyphrase generation quality and help overcome the domain-adaptation and long-document KPG challenges. Our study also confirms that there is no specific solution for long-document KPG yet: all models experience significant performance degradation in this scenario. Given the increasing importance of long documents in real-world applications, it is crucial to develop more effective methods for embedding long-range relationships between words and giving the model a holistic view of the document.
Figure 23: Case Study 3.3: Directly questioning ChatGPT about its low KPG performance in this article
## 7 Acknowledgments
We would like to express our gratitude to Debanjan Mahata, who served as our mentor and introduced us to the fields of NLP and KPE. His guidance and patience have been invaluable throughout this research project, and we are grateful for his mentorship and support.
We would also like to thank Alejandro Perez and Sergio Gago for providing the computational resources that were essential for developing and testing the ideas presented in this paper. Their generosity and support have been instrumental in the success of this research project.
Finally, we would like to acknowledge the countless individuals and organizations who have contributed to the field of NLP and Keyphrase Extraction, as their work has provided the foundation for this research. We are grateful for their ongoing efforts and dedication to advancing this field, and we hope that this paper will contribute to their ongoing work.
Thank you all for your contributions and support.
|
2305.14538 | Cascaded Beam Search: Plug-and-Play Terminology-Forcing For Neural
Machine Translation | This paper presents a plug-and-play approach for translation with terminology
constraints. Terminology constraints are an important aspect of many modern
translation pipelines. In both specialized domains and newly emerging domains
(such as the COVID-19 pandemic), accurate translation of technical terms is
crucial. Recent approaches often train models to copy terminologies from the
input into the output sentence by feeding the target terminology along with the
input. But this requires expensive training whenever the underlying language
model is changed or the system should specialize to a new domain. We propose
Cascade Beam Search, a plug-and-play terminology-forcing approach that requires
no training. Cascade Beam Search has two parts: 1) logit manipulation to
increase the probability of target terminologies and 2) a cascading beam setup
based on grid beam search, where beams are grouped by the number of
terminologies they contain. We evaluate the performance of our approach by
competing against the top submissions of the WMT21 terminology translation
task. Our plug-and-play approach performs on par with the winning submissions
without using a domain-specific language model and with no additional training. | Frédéric Odermatt, Béni Egressy, Roger Wattenhofer | 2023-05-23T21:48:02Z | http://arxiv.org/abs/2305.14538v1 | # Cascaded Beam Search: Plug-and-Play Terminology-Forcing For Neural Machine Translation
###### Abstract
This paper presents a plug-and-play approach for translation with terminology constraints. Terminology constraints are an important aspect of many modern translation pipelines. In both specialized domains and newly emerging domains (such as the COVID-19 pandemic), accurate translation of technical terms is crucial. Recent approaches often train models to copy terminologies from the input into the output sentence by feeding the target terminology along with the input. But this requires expensive training whenever the underlying language model is changed or the system should specialize to a new domain. We propose **Cascade Beam Search**, a plug-and-play terminology-forcing approach that requires no training. Cascade Beam Search has two parts: 1) logit manipulation to increase the probability of target terminologies and 2) a cascading beam setup based on grid beam search, where beams are grouped by the number of terminologies they contain. We evaluate the performance of our approach by competing against the top submissions of the WMT21 terminology translation task. Our plug-and-play approach performs on par with the winning submissions without using a domain-specific language model and with no additional training.
## 1 Introduction
Terminology translation is a key challenge in modern machine translation systems. While most translation systems are trained to be generalists, applications that require accurate translation of terminology are plentiful (e.g., in the bio-medical or legal domains). In addition, new terms and domains can emerge over time, rendering large pretrained language models outdated (e.g., COVID-19). Not only is it difficult and expensive to come by parallel corpora for specialized or emerging domains, but the periodic retraining or fine-tuning of machine translation models can also be energy intensive.
In this context, the use of word- or phrase-level terminology lists as also used by human professional translators can be an interesting resource to guide translation. Such lists can be created in a timely manner even for newly emerging domains and if used effectively, they offer the possibility for a flexible neural machine translation (NMT) pipeline that can become an expert in any domain.
NMT with terminology constraints has gained significant attention in recent years culminating in the WMT21 shared task: _Machine Translation using Terminologies_(Alam et al., 2021). Participants were asked to translate COVID-related sentences across five different language pairs1.
Footnote 1: english-french, english-korean, english-chinese, english-russian, czech-german
One of the most successful early approaches to terminology translation comes from Hokamp and Liu (2017) in the form of grid beam search (GBS). In this approach beam search is run on multiple levels in parallel, where each level contains the best beams with a given number of fulfilled constraints. This approach can be summarized as trying to place a constraint in every possible position until it finds a good one. Although very effective, GBS' runtime increases linearly with the number of constraints. To mitigate this issue Post and Vilar (2018) propose dynamic beam allocation, which reduces the computational overhead to a constant factor. More recently, Dinu et al. (2019) have argued that GBS can be brittle under realistic conditions and propose instead to train a language model to copy target terminologies, after first adding these to the input. Indeed all the competitors in the WMT21 competition used some version of this approach.
Figure 1: Example of word-level terminology lists and their application in the translation setting.
We argue that it may be time to revisit constrained decoding methods. Our Cascaded Beam Search is based on GBS, but we show that with two important modifications one can achieve significant performance improvements: 1) allowing arbitrary constraint tokenizations and 2) using a cascade level per full constraint instead of per constraint token. These modifications lead to improvements of more than 6 BLEU points across all datasets tested.
To compare against the state-of-the-art in terminology translation, we also evaluate our approach on the WMT21 competition dataset with a generalist underlying multilingual model. Our decoding approach achieves a near \(100\%\) appearance of terminologies, while retaining the BLEU score of the underlying model. This result beats all competitors in terms of terminology appearance, but cannot surpass the BLEU scores of the fine-tuned, competition-winning models, as we are limited by the weaker domain-specific translation quality of the generalist base model.
## 2 Methodology
### Problem Definition
A terminology translation task consists of a set \(D\) of source and reference target sentence pairs \((s,r)\), and a terminology list \(T\) of source and target terminologies. A source terminology can have multiple target translations.
The goal is to translate text such that the output is both a) of high quality and b) incorporates the terminologies correctly. We follow Alam et al. (2021a,b) and use BLEU2(Papineni et al., 2002) to evaluate general translation quality, and we use terminology-specific scores to assess the appearance and placement of the terminologies. We give the most important terminology-specific scores below, with \(h\) denoting the translation candidate (or hypothesis).
Footnote 2: using sacrebleu with its default tokenizers; inputs are detokenized and true-cased
* Exact Match Accuracy (EMA) \[\text{EMA}=\frac{\text{\# matched source terms in }h}{\text{\# source terms}}\] (1)
* Lemmatized Match Accuracy (LMA) is a lemmatized version of EMA, where both the candidate and the target terminologies are lemmatized (Qi et al., 2020).
Figure 2: The Cascaded Beam Search setup. Cascaded beam search can be used with any underlying Language Model. It consists of an optional logit manipulation module and a level distributor followed by standard beam search. The logit module increases the probability of desired tokens, the level distributor passes the hypotheses to the correct level and beam search selects the best candidates per level.
### Beam Search
Given a language model, the goal of a decoding algorithm is to find the output sequence \(\mathbf{y}^{*}\) that maximizes the conditional probability:
\[\mathbf{y}^{*}=\text{argmax}_{\mathbf{y}\in\mathcal{Y}}p(\mathbf{y}\mid\mathbf{ x},\boldsymbol{\theta}) \tag{2}\]
where \(\mathbf{x}\) is the input sentence, \(\boldsymbol{\theta}\) are the parameters of the model, and \(\mathcal{Y}\) is the set of all sequences in the model vocabulary \(V\). Most models split the probability up along the decoding time steps, \(t\), producing a probability distribution that factors as follows:
\[p_{\boldsymbol{\theta}}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{|\mathbf{y}|}p_{ \boldsymbol{\theta}}(y_{t}|\mathbf{x},\mathbf{y}_{<t}) \tag{3}\]
On the one hand, exploring all sequences in the exponentially large space \(\mathcal{Y}\) is infeasible; on the other hand, simple greedy choices at every timestep are locally optimal but likely to result in a globally sub-optimal output. Therefore heuristic approaches such as Beam Search (Lowerre, 1976; Sutskever et al., 2014) have become the de-facto standard in the machine translation world.
Beam search is a pruned search that keeps a set of \(k\) top candidates, looks at all \(k\times|V|\) continuations and selects again the top \(k\) as candidates for the next time step.
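For reference, the selection loop can be sketched in a few lines (a simplified illustration, not an actual MT decoder: `log_prob_fn` is a hypothetical callback returning next-token log-probabilities for a given prefix, and no length normalization, batching, or caching is included).

```python
import heapq

def beam_search(log_prob_fn, bos_id, eos_id, beam_size=5, max_len=50):
    """Plain beam search: keep the `beam_size` best-scoring prefixes, expand
    each with every vocabulary token, and re-select the top candidates."""
    beams = [(0.0, [bos_id])]            # (cumulative log-prob, token sequence)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos_id:        # completed hypothesis: set it aside
                finished.append((score, seq))
                continue
            for tok, lp in enumerate(log_prob_fn(seq)):
                candidates.append((score + lp, seq + [tok]))
        if not candidates:               # every beam has finished
            break
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    pool = finished if finished else beams
    return max(pool, key=lambda c: c[0])[1]
```

Both grid beam search and the cascaded variant below reuse this skeleton and only change which candidates compete with each other for survival.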
### Grid Beam Search
Grid beam search is an extension of beam search that allows for imposing lexical constraints on the output, such as the appearance of specific terms. Given \(c\) constraints, grid beam search stores \(c+1\)_banks_ of candidates, \(\{B_{i}\}_{i=0,\dots,c}\). Each bank \(B_{i}\) contains the top \(k\) candidates that have fulfilled \(i\) constraints. Although effective, GBS' runtime increases linearly with the number of constraints; moreover the variable total beam size is bad for data parallelism. As a solution to these problems, Post and Vilar (2018) propose dynamic beam allocation, where a fixed total number of beams is distributed among the banks. Such decoding methods are also sometimes collectively referred to as constrained beam search.
Neither of these methods allow for a flexible setup where a constraint may be fulfilled by picking one constraint among a set of possible constraints. For example, one might want to let the system translate _cough_ as a noun (_toux_) or a verb (_tousse, tousses, toussons_,...) depending on the context. Li et al. (2021) introduce an extension of constrained beam search, _disjunctive positive constraints_, that allows for such constraints. In this paper we will use their decoding algorithm as a baseline, and for simplicity refer to it as grid beam search+ or GBS+, because terminology lists often contain multiple possible target translations for the same source term, as is the case in more recent datasets such as the dataset for the WMT21 shared task on terminology translation.
### Logit Manipulation
Pascual et al. (2021) introduce a logit modification approach for controlled text generation with keyword constraints. Their approach is aimed at semantically unconstrained text generation such as story writing. In order to encourage certain keywords, they add a vector of cosine similarities to the language model's output distribution so that words similar to the keywords receive a probability boost. As well as boosting the probability of keywords, using the cosine similarity can help generate a context in which the keyword can appear more naturally (i.e., with a higher probability score).
In this paper we take a similar approach, but we adapt it to the terminology translation setting. Instead of using cosine similarity for guidance, we use binary encoding to guide only towards the relevant terminologies. This makes more sense in our setting, as we would like exact matches in the output and there is enough context in the input sentence to encourage the model to generate a natural context for the desired terminology.
To encourage the terminologies to appear in the output we increase the score of any token that either
* continues a target terminology if we are currently producing a target terminology or
* starts a target terminology that has not yet appeared if we are not currently producing a target terminology
We modify the logits by adding \(\alpha/|T^{\prime}|\) to the desired tokens \(T^{\prime}\), and then re-normalize with softmax to get a probability distribution:
\[\text{probs}=\text{softmax}\Big{(}\text{softmax}(x)+\alpha\cdot\frac{\mathds{1 }_{T^{\prime}}}{|T^{\prime}|}\Big{)},\]
where \(x\) is the vector of logits from the language model, \(\mathds{1}\) is the indicator function, and \(\alpha\in\mathbb{R}\) is a constant that can be tuned. The approach is flexible with regards to the tokens that are included
in \(T^{\prime}\). Tokens that start or continue terminologies can be added to \(T^{\prime}\) to encourage their appearance. More analysis on logit manipulation and choices of tokens in \(T^{\prime}\) can be found in Section 4.1.
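For concreteness, the modification amounts to only a few lines; the sketch below is a simplified stand-alone version (plain NumPy instead of the model's own tensors, with `push_token_ids` standing for the ids currently in \(T^{\prime}\)), not the exact implementation used in our experiments.

```python
import numpy as np

def push_terminology(logits, push_token_ids, alpha=0.1):
    """Boost the desired tokens: softmax the logits, add alpha/|T'| to every
    token id in T', and renormalize with a second softmax (Section 2.4)."""
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()
    probs = softmax(np.asarray(logits, dtype=float))
    if len(push_token_ids) > 0:
        probs = probs.copy()
        probs[list(push_token_ids)] += alpha / len(push_token_ids)
    return softmax(probs)
```

Which ids enter `push_token_ids` at a given decoding step depends on the strategy (_push tokenizer_, _push longest_, or _push all_) compared in Section 4.1.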
### Cascaded Beam Search
This paper proposes an inference-time only modified constrained beam search variant that can enforce terminology appearance during a translation task by keeping different sets of beams (so-called banks) for different levels of progress in generating the target terminologies. As visible in Figure 2, cascaded beam search is made up of three parts: the language model, a cascade level distributor and finally a classical beam search procedure per level. It can also be combined with an optional logit manipulation module.
**Language Model**: The proposed method can be applied in a plug-and-play manner to any autoregressive machine translation model that outputs probabilities over a fixed vocabulary.
**Logit Manipulation (optional)**: Cascade beam search can optionally be combined with logit modification as described in Section 2.4.
**Cascade Level Distributor**: The level distributor decides which level each hypothesis should be allocated to based on the next token. The nth level contains hypotheses with n terminologies, either with n complete terminologies or part-way through the nth terminology. Unlike GBS, we consider any token whose characters begin or continue a terminology to be successful. The Cascade Level Distributor is described in more detail in Figure 5 in the appendix. Differences to GBS are described in detail in Section 2.6.
**Beam search per cascade level**: After all continuations are distributed among the cascade levels we apply a standard beam search step per level: We pick the top \(k\) (\(k\): number of beams) hypotheses that don't end with an end of sequence token, <EOS>. We move any top-scoring hypotheses that do end with <EOS> at the highest level to a list of final hypotheses. We stop when we have \(k\) hypotheses that fulfill all the constraints in our list of final hypotheses, or when we reach the maximum sequence length. We then pick the highest scoring sequence amongst the final hypotheses fulfilling the most constraints as our output sequence.
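A single selection step can be sketched as follows (a simplified illustration: the language model is abstracted away, scores are cumulative log-probabilities, and each candidate is assumed to already carry the constraint count assigned to it by the level distributor).

```python
from collections import defaultdict

def cascaded_select(candidates, num_constraints, beam_size):
    """Keep the top `beam_size` hypotheses per cascade level.
    `candidates` is a list of (score, hypothesis, level) triples, where
    `level` counts the terminologies a hypothesis contains (completed, or the
    one it is currently part-way through)."""
    banks = defaultdict(list)
    for score, hyp, level in candidates:
        banks[level].append((score, hyp))
    selected = []
    for level in range(num_constraints + 1):
        bank = sorted(banks.get(level, []), key=lambda c: c[0], reverse=True)
        selected.extend((score, hyp, level) for score, hyp in bank[:beam_size])
    return selected
```

Decoding repeats this step, moving top-scoring `<EOS>`-terminated hypotheses at the highest level into the list of final hypotheses, until `beam_size` of them have accumulated or the maximum length is reached.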
**Complexity**: Since cascaded beam search requires one set of beams per cascade level, the total number of hypotheses grows linearly with the number of terminologies. The total number of beams is \(k(c+1)\). In practice the number of terminologies should be limited or dynamic beam allocation (Post and Vilar, 2018) should be used. For simplicity and better comparability we opt for using the grid beam search setup in our experiments, and we leave runtime optimization through dynamic beam allocation to future work.
Our approach is implemented within the transformers library's generate function and can be flexibly applied to any language model that is ported to this function. The code will be made publicly available at the time of publication.
### Differences to Grid Beam Search
Cascaded beam search has two key differences to grid beam search. Firstly, GBS uses a fixed tokenization of the terminology constraints. This restricts constraint fulfillment to generating the exact sequence of tokens produced by the tokenizer. In contrast to GBS, cascaded beam search allows for any possible tokenization of a terminology constraint. We achieve this by checking whether the alphanumeric characters that make up the token are a successful start or continuation of a terminology. We have weighed the trade-offs of this approach. Of course, checking all the tokens that constitute alphanumeric continuations of a terminology carries an additional computational burden. However, this can be implemented efficiently using Trie-based vocabulary representations. On the other hand, we argue that requiring a fixed tokenization of a lexical constraint can degrade output quality for multilingual language models that operate on huge vocabularies using subword tokenizers, such as WordPiece (Wu et al., 2016) or SentencePiece (Kudo and Richardson, 2018), where words can have a large number of valid tokenizations.
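The character-level check behind this can be sketched as follows (a simplified illustration that lower-cases and strips non-alphanumeric characters, so multi-word terminologies are treated as a single character string; a Trie over the vocabulary, as mentioned above, would avoid the per-token scan).

```python
def alnum(text):
    """Keep only alphanumeric characters, lower-cased."""
    return "".join(ch for ch in text if ch.isalnum()).lower()

def continues_terminology(generated, token_text, terminology):
    """True if the token's characters start or continue the terminology,
    regardless of how the tokenizer would have split it.  `generated` holds
    the terminology characters produced so far ("" if it has not started)."""
    candidate = generated + alnum(token_text)
    target = alnum(terminology)
    # still a prefix of the terminology, or the terminology completed inside the token
    return bool(candidate) and (target.startswith(candidate) or candidate.startswith(target))

def allowed_next_tokens(generated, terminology, vocab):
    """All vocabulary tokens that start or continue the terminology."""
    return [t for t in vocab if alnum(t) and continues_terminology(generated, t, terminology)]
```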
Secondly, we use a single level per terminology constraint, rather than a new level per target terminology token. This is better aligned with the aim of the task and the appearance-based evaluation metrics, where each terminology constraint receives the same weight in the metric regardless of terminology token length. In addition this also reduces the total number of beams and therefore the runtime and memory requirements of our decoding algorithm.
## 3 Experimental Setup
### Datasets
First we compare cascaded beam search against an existing constrained beam search method (Li et al., 2021) on the WMT17 German-English news translation task3. The terminology sets, \(T\), are taken from Dinu et al. (2019), who extracted them from Wiktionary and IATE to create two versions of the terminology translation task.
Footnote 3: [https://www.statmt.org/wmt17/translation-task.html](https://www.statmt.org/wmt17/translation-task.html)
We also evaluate our approach on the WMT21 shared task on terminology translation Alam et al. (2021). As our method requires no retraining we evaluate our approach on the test set splits that were used for the competition. The dataset covers the bio-medical domain with a specific focus on new terminology that appeared during the COVID-19 pandemic. The dataset is annotated on the source side with possible terminology translations if a target terminology is found in the reference target translation. Note that considerable parts of the dataset do not contain terminologies at all.
### Base Models
As our decoding algorithm can be used in a plug-and-play fashion with any neural machine translation model with an autoregressive decoder, we evaluate the performance of cascaded beam search on a set of base models.
**M2M100**: M2M100 Fan et al. (2020) is a many-to-many multilingual translation model that works across 100 languages. Sentences are tokenized using SentencePiece Kudo and Richardson (2018), the model is a transformer based seq-to-seq model made up of an encoder and decoder module. The model is available in two different sizes, M2M100 small with 418M parameters and M2M100 large with 1.2B parameters. Of particular relevance to the WMT21 competition task, is that this model and the associated paper were released in late summer 2020. As a result, the model was exposed to little, if any, training data from after the COVID-19 outbreak. We therefore use M2M100 as a _pre-COVID_ reference model. This makes the WMT21 task, which is focused on translating COVID related texts, a particularly challenging and realistic scenario for terminology translation with this model. One simple confirmation that this is the case, is that the word COVID-19 is tokenized as [CO,V,ID,-,19] by M2M100's sentencepiece tokenizer, which is a comparably high number of tokens for a word that should appear frequently in data after the start of the pandemic.
**NLLB**: "No Language is Left Behind" NLLB Team et al. (2022) was released in July 2022 and can be seen as an extension of the M2M100 project. NLLB covers 200 languages, many of which are considered to be "low resource" languages. The translation model is a sparsely-activated MoE (Mixture of Experts) transformer-based model. As the dataset was mined more recently, it includes data from after the COVID outbreak, so we treat this model as a _post-COVID_ baseline for the WMT21 task. The model also outperforms M2M100 on various translation benchmarks, which should be taken into consideration when looking at the results. Interestingly, NLLB has reverted back to basic sampling methods instead of using beam search, and we report scores attained as such in Tables 4 and 5.
## 4 Experiments
### Logit Modification
Logit modification is a very flexible approach. There are many possible options for the set of guide tokens \(T^{\prime}\), and the strength of forcing parameter \(\alpha\), can also be varied. We compare three different options for \(T^{\prime}\):
* **push tokenizer** manipulates the tokens produced by the tokenizer (in order)
* **push longest** manipulates the _longest_ token that begins or continues a terminology
* **push all** manipulates _all_ tokens that begin or continue a terminology
\begin{table}
\begin{tabular}{c|c|c c|c c c c} \hline \hline Language & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Size} & \multicolumn{5}{c}{Number of Terminologies} \\ Pair & & & 0 & 1 & 2 & \(\geqslant\)n3 & max \\ \hline \multirow{2}{*}{EN-DE} & WIKT & 727 & 0.0\% & 81.7\% & 15.3\% & 2.7\% & 4 \\ & IATE & 414 & 0.0\% & 91.3\% & 8.2\% & 0.5\% & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the WMT17 data split that contains IATE or WIKT terminologies. max refers to the maximum amount of annotated terminologies in a single sample.
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline Language & \multirow{2}{*}{Size} & \multicolumn{5}{c}{Number of Terminologies} \\ Pair & & 0 & 1 & 2 & \(\geqslant\)n3 & max \\ \hline EN-FR & 2100 & 40.2\% & 30.0\% & 15.4\% & 14.4\% & 11 \\ EN-KO & 2100 & 66.9\% & 22.6\% & 6.6\% & 3.9\% & 9 \\ EN-RU & 2100 & 44.3\% & 30.1\% & 15.4\% & 10.2\% & 9 \\ CZ-DE & 3426 & 6.5\% & 61.2\% & 25.8\% & 6.5\% & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the test set splits for the WMT21 shared task on terminology translation dataset.
Logit modification can be used without cascaded beam search as a decoding method on its own. We compare the above options under this setting for analysis. Figure 3 shows the BLEU and EMA scores on the Wiktionary dataset from WMT17 as the forcing strength is varied. The corresponding results for the IATE dataset from WMT17 can be seen in Figure 7 in the appendix.
### Cascaded Beam Search with Logit Modification
We now look at the combination of cascaded beam search with logit modification. In particular we analyse when and how much logit modification helps when applied on top of cascaded beam search. Figure 4 shows how the BLEU score and terminology appearance (EMA) change as we vary the strength of the logit modification, \(\alpha\), for different beam sizes per cascade level.
### Comparison to Constrained Beam Search
Using the WMT17 dataset, we compare cascaded beam search with existing constrained beam search implementations: grid beam search (GBS) (Hokamp and Liu, 2017), and a modification of GBS that allows for multiple targets for a single source terminology (GBS+) (Li et al., 2021). We also include a baseline using standard beam search. For a fair comparison, we use the same underlying translation model, namely M2M100 large (Fan et al., 2020), and the same beam size (\(k=5\)) for all of the constrained decoding algorithms, and we use \(5(c+1)\)4 beams for beam search. Cascaded beam search is applied with logit modification with a guidance strength of \(\alpha=0.2\). We also include logit-modification-only results for comparison. The results are shown in Table 3.
Figure 3: Comparison of BLEU and exact match accuracy (EMA) scores for different logit modification scenarios, when using logit modification without cascaded beam search. _Push longest_ increases the probability of only the longest token that begins/continues a terminology, _push all_ increases the probability for all tokens that begin/continue a terminology and _push tokenizer_ increases the probability only for tokens produced by the tokenizer. All results are on the WMT17 dataset with the Wiktionary based terminology set.
\begin{table}
\begin{tabular}{l|l|c c c|c c c c} \hline \hline Model & Decoding Method & & EMA & LMA & BLEU & & EMA & LMA & BLEU \\ \hline \multirow{4}{*}{M2M100 large} & baseline & & 0.818 & 0.858 & **32.53** & & 0.810 & 0.854 & **32.43** \\ & logit only push longest (\(\alpha=0.1\)) & & 0.846 & 0.883 & 31.71 & & 0.836 & 0.878 & 31.32 \\ & grid beam search & WIKT & 0.977 & **1.000** & 24.99 & IATE & 0.984 & 0.998 & 23.93 \\ & grid beam search + & & 0.977 & **1.000** & 24.99 & & 0.984 & 0.998 & 23.93 \\ & cascaded & & **1.000** & **1.000** & 31.68 & & **1.000** & **1.000** & 31.24 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of translation quality on the WMT17 datasets using different decoding methods. The baseline is standard beam search, grid beam search + refers to the extended decoding algorithm taken from Li et al. (2021) and cascaded beam search is the method we propose. The highest scores are highlighted in bold.
### Comparison to SOTA models on WMT21
We compare Cascaded Beam Search to the winning submissions of the WMT21 shared task on terminology translation: PROMT (Molchanov et al., 2021) for the English-French and English-Russian datasets, and Kakao Enterprises (KEP) (Bak et al., 2021) for the English-Korean and Czech-German datasets. We include M2M100 using standard beam search, to give a baseline for the general translation quality of this base model. Note that the winning submissions used translation models fine-tuned for the bio-medical setting of the task, so we do not expect our model to outperform these submissions in BLEU scores. Therefore the M2M100 with standard beam search serves as a much needed baseline.
Finally we include NLLB as a post-COVID baseline. M2M100 was specifically chosen as a pre-COVID model that would need help with the terminologies, but we also wanted to see how a more recent translation model would perform without any terminology guidance.
The results for the English-French and English-Russian datasets can be seen in Table 4. A full set of results can be found in Table 5 in the appendix.
## 5 Discussion
### Logit Modification
Figure 3 shows the results on the WMT17 Wiktionary dataset under different logit modification settings. Firstly, we can see on the right that logit modification has the desired effect: As we increase the forcing parameter \(\alpha\), the appearance rate of the terminologies goes up, even reaching \(100\%\).
\begin{table}
\begin{tabular}{c l|c c|c} \hline \hline & System & EMA & LMA & BLEU \\ \hline \multirow{8}{*}{EN-FR} & PROMT.soft & 0.959 & 0.972 & 41.22 \\ & M2M100 small baseline & 0.813 & 0.838 & 35.38 \\ & M2M100 small cascaded & 0.985 & 0.990 & 35.21 \\ & M2M100 small GBS+ & 0.989 & 0.993 & 21.86 \\ & M2M100 small DBA/GBS new & 0.983 & 0.993 & 27.92 \\ & M2M100 large baseline & 0.880 & 0.899 & 40.15 \\ & M2M100 large cascaded & **0.991** & **0.994** & 40.51 \\ & M2M100 large GBS+ & 0.994 & 0.996 & 27.76 \\ & M2M100 large DBA/GBS new & 0.994 & 0.997 & 35.53 \\ \cline{2-5} & NLLB & 0.903 & 0.922 & **46.68** \\ \hline \multirow{8}{*}{EN-RU} & PROMT.soft & 0.862 & 0.913 & **31.22** \\ & M2M100 small baseline & 0.682 & 0.725 & 23.89 \\ & M2M100 small cascaded & 0.916 & 0.934 & 23.73 \\ & M2M100 small GBS+ & 0.963 & 0.969 & 12.01 \\ & M2M100 small DBA/GBS new & 0.958 & 0.967 & 18.85 \\ & M2M100 large baseline & 0.753 & 0.797 & 29.09 \\ & M2M100 large cascaded & **0.925** & **0.950** & 29.22 \\ & M2M100 large GBS+ & 0.973 & 0.982 & 16.56 \\ & M2M100 large DBA/GBA new & 0.970 & 0.981 & 27.67 \\ \cline{2-5} & NLLB & 0.782 & 0.827 & 27.76 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of translation quality between the WMT21 competition winners and our system. The highest scores are highlighted in bold.
Figure 4: Comparison of BLEU and exact match accuracy (EMA) scores under different settings when using cascaded beam search (CBS) with logit modification. We vary the strength of the logit modification, \(\alpha\), and the beam sizes per cascade level, \(k\). Note that \(\alpha=0\) corresponds to CBS with no logit modification. All results are on the WMT17 dataset with the Wiktionary based terminology set.
However, on the left we can see that this comes at the cost of a decreasing BLEU score. Despite the boost to BLEU from the appearance of the terminologies, the score still drops by about \(5\) points. There is also no clear sweet spot: the BLEU score decreases just as the EMA increases. _Push longest_ and _push tokenizer_ behave very similarly, but _push all_ seems to lag behind both in increasing EMA and in decreasing BLEU. This makes sense intuitively; in the case of _push all_ the forcing is spread over a much larger set of tokens, so each token in \(T^{\prime}\) receives less of a push overall.
### Cascaded Beam Search with Logit Modification
Figure 4 shows how cascaded beam search with logit modification performs on the WMT17 Wiktionary dataset. Logit modification has the greatest effect on EMA for small values of \(k\). Even with a very low \(\alpha\), we achieve almost \(100\%\) EMA when using only \(k=2\) beams. In contrast, without logit modification \(6\) beams are required to achieve a similar EMA score. In the left plot we see that especially for smaller values of \(k\) there is also an increase in the BLEU score when using logit modification. This higher BLEU score will partly be due to the higher appearance of the terminologies in the output, and not necessarily due to better general translation quality. However, increasing the parameter too far (e.g., \(\alpha=1.0\)) severely damages the BLEU score, especially for larger beam sizes. We show similar results for the IATE dataset from WMT17 in the appendix. Looking at both sets of results, we see that a beam size of around \(3\) to \(5\) and a guidance parameter of \(\alpha=0.1\) gives the best combined scores for these datasets.
These results assure us that the combination of these two decoding approaches can give the best terminology translation results, especially when a restricted computational budget is available.
### Comparison to Constrained Beam Search
Table 3 shows a comparison of cascaded beam search with GBS and a modified version referred to as GBS+. Note that since this dataset does not have multiple target translations for any terminology, the results for GBS and GBS+ are identical. The beam search baseline achieves an exact terminology appearance rate of around \(81\%\) and a BLEU score of around \(32.5\) on the datasets. We can see that GBS reaches an almost perfect EMA score on both datasets, but loses around \(7\) BLEU points compared to the baseline. On the other hand cascaded beam search reaches exactly a \(100\%\) appearance rate, and loses only around \(1\) BLEU point on the datasets, clearly outperforming GBS.
### Comparison to SOTA models on WMT21
The BLEU score of the M2M100 baselines is significantly lower for most language pairs when compared to the competition winning models. This is to be expected given that M2M100 has not been tuned to the task domain. For the EN-FR language pair NLLB reaches the impressive BLEU score of \(46.68\), but otherwise the competition winners have the highest BLEU scores overall. On the other hand, the exact match accuracy (EMA) and lemmatized match accuracy (LMA) are already relatively high for the baselines, but rise to values close to \(100\%\) for Cascaded Beam Search. Indeed cascaded beam search attains the highest EMA and LMA scores in all language pairs.
What is again striking is that cascaded beam search clearly outperforms GBS in terms of BLEU whilst also attaining higher terminology appearance rates. Moreover, even when compared to the M2M100 baselines, a trend of slightly increased BLEU scores can be seen when using cascaded beam search.
## 6 Conclusion
We introduce two decoding methods for terminology translation: cascade beam search and logit modification. Both methods can improve terminology appearance on terminology translation datasets. However we show how they can be combined to clearly outperform existing decoding approaches. Cascade beam search with logit modification can be combined with a pre-trained generalist multilingual model and still achieve competitive results on domain-specific tasks. This makes cascade beam search a very versatile alternative to the current state-of-the-art methods that require fine-tuning. Finally, the analysis shows that even with very small beam sizes, logit modification helps cascade beam search attain its potential with lower computational requirements.
We feel that inference-time only decoder modifications, like cascaded beam search or grid beam search, might be overlooked by current research in terminology translation and can be used to greater effect.
### Limitations
Whilst we have given several reasons that could explain why cascaded beam search performs significantly better than standard grid beam search, a detailed analysis and ablation study would help further our understanding.
In addition, like grid beam search, cascaded beam search also has an unfavourable runtime that grows linearly in the number of constraints. Although we restricted ourselves to this setting for simplicity and comparability, dynamic beam allocation should be added to improve the runtime. We leave this analysis to future work.
## Ethics Statement
We understand that terminology translation can be used for distributing misinformation more widely or generating harmful content. However, we believe that further research into automatic translation can equip us with the necessary tools to identify, correct and prevent malicious use of these methods. Moreover, terminology translation is a valuable tool for ensuring accurate translations and reducing inadvertent misinformation. Correct translations are especially important in specialized and critical domains, such as, translating COVID advice or heavy machinery instructions. Successful methods can also be a valuable asset for low resource languages, where terminology dictionaries are much easier to come by than large amounts of domain-specific training data. Finally, we consider the environmental impacts of our method. As with grid beam search, the approach can be runtime intensive, and we encourage further work to reduce this computational overhead. However on the positive side, the method is completely plug-and-play. This means it is able to make use of pre-trained language models that require extensive training. This can significantly reduce the energy requirements for setting up a terminology translation system.
|
2307.02523 | Fixed-point tensor is a four-point function | Through coarse-graining, tensor network representations of a two-dimensional
critical lattice model flow to a universal four-leg tensor, corresponding to a
conformal field theory (CFT) fixed-point. We computed explicit elements of the
critical fixed-point tensor, which we identify as the CFT four-point function.
This allows us to directly extract the operator product expansion coefficients
of the CFT from these tensor elements. Combined with the scaling dimensions
obtained from the transfer matrix, we determine the complete set of the CFT
data from the fixed-point tensor for any critical unitary lattice model. | Atsushi Ueda, Masahito Yamazaki | 2023-07-05T18:00:00Z | http://arxiv.org/abs/2307.02523v2 | # Fixed-point tensor is a four-point function
###### Abstract
Through coarse-graining, tensor network representations of a two-dimensional critical lattice model flow to a universal four-leg tensor, corresponding to a conformal field theory (CFT) fixed-point. We computed explicit elements of the critical fixed-point tensor, which we identify as the CFT four-point function. This allows us to directly extract the operator product expansion coefficients of the CFT from these tensor elements. Combined with the scaling dimensions obtained from the transfer matrix, we determine the complete set of the CFT data from the fixed-point tensor for any critical unitary lattice model.
_Introduction.--_ Renormalization group (RG) [1; 2; 3] is one of the most profound concepts in contemporary physics. RG theory has significantly deepened our understanding of the universality of critical phenomena [4; 5]. We now understand that each universality class is described by an RG fixed-point (FP) theory under the RG transformation, a theory which can be represented [6; 7] as a conformal field theory (CFT) [8]. Universal behavior, such as critical exponents, can then be elucidated from the CFT data, which include central charges, scaling dimensions, and operator product expansion (OPE) coefficients [9; 10; 11]. It is therefore of paramount importance to identify this CFT data for a given ultraviolet (UV) theory (such as a lattice model) [12].
While the analysis of the real-space RG transformation has a long history [13], tensor network renormalization (TNR) [14; 15; 16; 17; 18; 19; 20] has recently emerged as a reliable numerical implementation of the real-space RG. The application of TNR has demonstrated that the tensor-network representation of the Boltzmann weights converges to a FP tensor, representing the RG fixed point.
There are several motivations for studying the FP tensors.
First, we expect that the FP tensor encodes the CFT data of the FP theory. Gu and Wen have established a method for calculating the central charge and scaling dimensions for fixed-point tensors, a procedure that has since become standard [21]. It remains an intricate and challenging problem, however, to compute the OPE coefficients of the FP CFT [22; 23; 24; 25].
Second, determination of the fixed-point tensor can facilitate concrete realizations of the RG flow. Recently, Kennedy and Rychkov initiated a rigorous study of the RG using tensor networks [26; 27]. Employing simple low-temperature and high-temperature fixed-point tensors, they successfully demonstrated the stability of the corresponding fixed points. Nevertheless, the application of similar arguments to critical fixed points remains unachieved, given that even their tensor network representations are not fully understood.
Third, precise expressions of the fixed-point tensors will serve as a robust benchmark for evaluating the precision of different tensor-network algorithms. A number of algorithms boasting increased accuracy have been developed to determine the FP tensor, but there remain uncertainties in selecting the superior option due to our limited understanding of the exact expression of the fixed-point tensor.
In this Letter, we introduce an exact tensor network representation of critical RG fixed points, thereby solving the problem of numerically determining the full defining data of the FP CFT. We anticipate that our findings will serve as a pivotal contribution in practical computations of the FP theory on the one hand, and towards the rigorous substantiation of RG theory, on the other.
_Fixed-point tensor.--_ To simulate two-dimensional statistical models, we use the tensor network methods, where the local Boltzmann weight is represented as a four-legged tensor \(T^{(0)}\). We obtain the transfer matrix in the \(y\)-direction if we contract \(L\) copies of the four-leg tensors along a circle in the \(x\)-direction; we obtain the partition function \(Z(L,T^{(0)})\) if we contract \(L\times L\) copies along the torus in the \(x,y\)-directions. We can also contract \(L\times L\) copies of \(T^{(0)}\) in the \(x,y\)-directions, but with endpoints un-contracted (as in the right-hand side of the Figure below). In the limit \(L\to\infty\), this contracted tensor converges to a universal rank-four tensor \(T^{*}\) with an infinite bond dimension that corresponds to the fixed-point of the RG transformation:
[pictorial equation: the contraction of \(L\times L\) copies of \(T^{(0)}\) with four groups of open boundary legs, converging to the four-leg fixed-point tensor \(T^{*}\); rendered as a figure in the original]
This tensor \(T^{*}\) is called the FP tensor.
If the original tensor \(T^{(0)}\) has D\({}_{4}\) symmetry, \(T^{*}\) also respects it. This allows the decomposition of the FP tensor into a pair of two identical three-leg tensors \(S^{*}\):
[pictorial equation (1): the four-leg tensor \(T^{*}\) is split along a diagonal into two identical three-leg tensors \(S^{*}\) contracted over a shared bond; rendered as a figure in the original]
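For concreteness, this diagonal splitting can be carried out with a few lines of NumPy (an illustrative sketch, not the code from the repository; it assumes the \((\alpha\beta)\times(\gamma\delta)\) reshaping of the D\({}_{4}\)-symmetric tensor is symmetric and positive semidefinite so that a real square root exists, and pairing the legs the other way gives the crossed channel).

```python
import numpy as np

def split_four_leg(T, keep=None):
    """Split a D4-symmetric four-leg tensor T[a,b,c,d] into two identical
    three-leg tensors S[a,b,m] contracted over a shared bond m (cf. Eq. (1))."""
    chi = T.shape[0]
    M = T.reshape(chi * chi, chi * chi)        # rows: (a,b), columns: (c,d)
    M = 0.5 * (M + M.T)                        # symmetrize against round-off
    w, U = np.linalg.eigh(M)
    order = np.argsort(w)[::-1]                # sort by decreasing eigenvalue
    w, U = w[order], U[:, order]
    if keep is not None:                       # optional truncation of the new bond
        w, U = w[:keep], U[:, :keep]
    S = (U * np.sqrt(np.clip(w, 0.0, None))).reshape(chi, chi, -1)
    # consistency check:  np.einsum('abm,cdm->abcd', S, S)  should reproduce T
    return S
```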
The FP tensor \(T^{*}\) has gauge degrees of freedom that change the basis of each leg. The insertion of the gauge transformation (unitary operators) does not change the spectral property of the FP tensor. In the following, we fix the gauge so that each index of the FP tensor is labeled by the eigenstates of the Hamiltonian \(L_{0}+\bar{L}_{0}\) on a cylinder, where \(L_{n}\) (\(\bar{L}_{n}\)) are the standard generators of the left-moving (right-moving) Virasoro algebras. By the state-operator correspondence, we can label these states by a set of operators \(\phi_{\alpha}\), among which we will find the identity operator \(\phi_{1}\) with the lowest scaling dimension. [28] In tensor-network representations, the projector to this basis can be found by diagonalizing the transfer matrix as follows [21]:
[pictorial equation (2): the row-to-row transfer matrix built from the tensors is diagonalized, and its eigenvectors define the projector onto the eigenbasis of \(L_{0}+\bar{L}_{0}\); rendered as a figure in the original]
In the following, we choose the states \(\alpha,\beta,\dots\) to be primary operators.
_Main Results.--_ Let us now state the main results of this paper. First, the three-leg tensor \(S^{*}\) is proportional to the three-point functions of the FP CFT on the complex plane:
\[\frac{S^{*}_{\alpha\beta\gamma}}{S^{*}_{111}}=\langle\phi_{\alpha}(-x_{S}) \phi_{\beta}(ix_{S})\phi_{\gamma}(0)\rangle_{\rm pl}. \tag{3}\]
Second, the four-leg FP tensor determines the four-point functions of the FP CFT as
\[\frac{T^{*}_{\alpha\beta\gamma\delta}}{T^{*}_{1111}}=\langle\phi_{\alpha}(-x_ {T})\phi_{\beta}(ix_{T})\phi_{\gamma}(x_{T})\phi_{\delta}(-ix_{T})\rangle_{ \rm pl}. \tag{4}\]
These equalities hold when we choose the values \(x_{S}=e^{\pi/4}\) and \(x_{T}=e^{\pi/2}/2\).
We can now reproduce the _full_ defining data for the FP CFT. Recall that we can extract the scaling dimensions \(\Delta_{\alpha}\) operators from Eq. (2). The remaining data is the OPE coefficients \(C_{\alpha\beta\gamma}\) of the operators \(\phi_{\alpha}\), which can be extracted by applying a conformal transformation to Eq. (3):
\[\frac{S^{*}_{\alpha\beta\gamma}}{S^{*}_{111}} =\frac{C_{\alpha\beta\gamma}}{x_{S}^{\Delta_{\beta}+\Delta_{ \gamma}-\Delta_{\alpha}}x_{S}^{\Delta_{\gamma}+\Delta_{\alpha}-\Delta_{\beta}} (\sqrt{2}x_{S})^{\Delta_{\alpha}+\Delta_{\beta}-\Delta_{\gamma}}},\] \[=\frac{2^{\Delta_{\gamma}}C_{\alpha\beta\gamma}}{(\sqrt{2}x_{S})^ {\Delta_{\alpha}+\Delta_{\beta}+\Delta_{\gamma}}}. \tag{5}\]
Equation (1) represents the equivalence of two different decompositions (\(s\)- and \(t\)-channels) of the four-point function into a pair of three-point functions, i.e. the celebrated crossing relation of the CFT.
To better understand Eqs. (3-4), we apply conformal transformations to the two equations to obtain
\[\frac{S^{*}_{\alpha\beta\gamma}}{S^{*}_{111}} =e^{-\frac{\pi}{4}(\Delta_{\alpha}+\Delta_{\beta}+\Delta_{\gamma} )}\langle\phi_{\alpha}(-1)\phi_{\beta}(i)\phi_{\gamma}(0)\rangle_{\rm pl}, \tag{6}\] \[\frac{T^{*}_{\alpha\beta\gamma\delta}}{T^{*}_{1111}} =\left(\frac{e^{\frac{\pi}{2}}}{2}\right)^{-\Delta_{\rm tot}} \langle\phi_{\alpha}(-1)\phi_{\beta}(i)\phi_{\gamma}(1)\phi_{\delta}(-i) \rangle_{\rm pl}, \tag{7}\]
where \(\Delta_{\rm tot}\equiv\Delta_{\alpha}+\Delta_{\beta}+\Delta_{\gamma}+\Delta_{\delta}\).
Equations (6-7) naturally arise from conformal mappings [29; 30]. Once we fix the basis for the fixed-point (FP) tensor, each index corresponds to the states of CFT. Utilizing state-operator correspondence, the normalized wave function of the first index of \(S^{*}\), for instance, is created by inserting \(\phi_{\alpha}\) in the future infinity of the cylinder as follows:
\[|\phi^{1}\rangle=\left(\frac{2\pi}{L}\right)^{-\Delta_{\alpha}}\lim_{z\to\infty}e^{2\pi z\Delta_{\alpha}/L}\phi_{\alpha}(z)|I^{\rm cyl}\rangle,\]
where \(|I^{\rm cyl}\rangle\) represents the ground state corresponding to the identity operator. Subsequently, the FP tensors \(S^{*}\) and \(T^{*}\) can be expressed by the path integral on the manifolds \(\Sigma_{S}\) and \(\Sigma_{T}\), respectively, as illustrated in Fig. 1. Then, the FP-tensor elements are
\[\frac{S^{*}_{\alpha\beta\gamma}}{S^{*}_{111}} =\langle\phi_{\alpha}(\infty)\phi_{\beta}(i\infty)\phi_{\gamma}(- (1+i)\infty)\rangle_{\Sigma_{S}}, \tag{8}\] \[\frac{T^{*}_{\alpha\beta\gamma\delta}}{T^{*}_{1111}} =\langle\phi_{\alpha}(-\infty)\phi_{\beta}(i\infty)\phi_{\gamma}( \infty)\phi_{\delta}(-i\infty)\rangle_{\Sigma_{T}}. \tag{9}\]
\(\Sigma_{S}\) and \(\Sigma_{T}\) can be mapped to the complex plane \(w\) (cf. [31]) by using
\[z_{S} =\frac{L}{2\pi}[-\ln(w-i)-i\ln(w+1)+(1+i)\ln w], \tag{10}\] \[z_{T} =\frac{L}{2\pi}\left[\ln\left(\frac{w+i}{w-i}\right)+i\ln\left( \frac{w-1}{w+1}\right)\right]. \tag{11}\]
Each operator in the \(z\)-coordinate transforms accordingly as
\[\frac{S^{*}_{\alpha\beta\gamma}}{S^{*}_{111}} =\langle\phi_{\alpha}(-1)\phi_{\beta}(i)\phi_{\gamma}(0)\rangle_{\text{pl}}\prod_{n\in(\alpha,\beta,\gamma)}|J_{n}|^{\Delta_{n}},\] \[\frac{T^{*}_{\alpha\beta\gamma\delta}}{T^{*}_{1111}} =\langle\phi_{\alpha}(-1)\phi_{\beta}(i)\phi_{\gamma}(1)\phi_{\delta}(-i)\rangle_{\text{pl}}\prod_{n\in(\alpha,\beta,\gamma,\delta)}|J_{n}|^{\Delta_{n}},\]
where \(|J_{n}|=\left(\frac{2\pi}{L}\right)^{-1}\lim_{z\to\zeta\infty}e^{2\pi z\bar{\zeta}/(L|\zeta|)}|w^{\prime}(z)|\), and \(\zeta\infty\) is the coordinate of the index in the original manifold. The resulting \(|J_{n}|\) are \(e^{-\pi/4}\) and \(2e^{-\pi/2}\), respectively, being consistent with Eqs. (6-7). Detailed calculations are presented in the supplemental material.
_Numerical fixed point tensor.--_ Let us provide numerical confirmations of our main results using tensor renormalization group (TRG) [14]. TRG is a numerical technique devised to calculate effective \(L\times L\) tensor networks. In our study, our interest lies in computing those of large system sizes to obtain a tensor that is as close as possible to the FP tensor. However, performing an exact contraction is exponentially difficult, prompting us to focus on extracting low-lying spectral properties. TRG seeks to circumvent this issue by employing the principles of the renormalization group theory. Each coarse-graining step entails decompositions and recombinations as depicted in Fig. 2. Truncation, parameterized by the bond dimension \(D\), is performed to maintain the tractability of numerical computation. However, it is important to note that this scheme is considered _exact_ when \(D=\infty\), and thus, employing larger \(D\) improves the numerical accuracy. Additionally, we impose D\({}_{4}\) symmetry in TRG. The details can be found in the supplemental material.
_Tests on critical lattice models.--_Let us first test the value \(x_{S}=e^{\pi/4}\) in Eq. (6), by computing \(x_{S}\) from the critical Ising and 3-state Potts models. Given Eq. (6), we can numerically compute the OPE coefficients \(C_{\alpha\beta\gamma}\) from Eq. (5). We define \(x_{S}(L)\) by solving Eq. (5) to be
\[x_{S}(L)\equiv\frac{1}{\sqrt{2}}\left(\frac{2^{\Delta_{\gamma}}C_{\alpha\beta \gamma}}{S_{\alpha\beta\gamma}(L)}\right)^{1/(\Delta_{\alpha}+\Delta_{\beta} +\Delta_{\gamma})}. \tag{12}\]
Each model has a primary operator \(\epsilon\), called the energy and the thermal operator, respectively. Since \(C_{\epsilon\epsilon 1}=1\), \(x_{S}(L)\) can be computed from the finite-size three-leg tensor element \(S_{\epsilon\epsilon 1}(L)\).
Figure 3 shows the value of \(x_{S}(L)\) obtained from TRG at the bond dimension \(D=96\). The numerically-derived \(x_{S}(L)\)'s for both models converge to the theoretical value of \(e^{\pi/4}\). The noticeable increase in amplitude for the 3-state Potts model at \(L>10^{2}\) is attributed to the effect
Figure 2: The pictorial description of the tensor renormalization group. The decomposition is done so that \(T^{(n)}\) is a good approximation of the local Boltzmann weights of \(L=\sqrt{2}^{n}\). In the tensor network renormalization(TNR) scheme, filtering of local entanglement is introduced.
Figure 3: Estimation of \(x_{S}(L)\) from TRG at \(D=96\). The values of \(x(L)\) from both the Ising and 3-state Potts model converge to the theoretical value \(x_{S}=e^{\pi/4}\) denoted by a black dotted line. We plot \(x_{S}=2.23035\) obtained from Loop-TNR [17] on the critical 9-state clock model [23] with a lime dashed line. The 3-state Potts model exhibits a deviation for \(L>100\) because simulating systems with higher central charges involves larger numerical errors.
of the finite bond dimension. It is worth noting that our value for \(x_{S}\) deviates slightly from the value \(x_{S}=2.23035\) from a previous study on the 9-state clock model [23]. We speculate that this minor deviation is due to the finite bond-dimension effect because higher central charges lead to more pronounced numerical errors [24]. For the system size \(L=2048\) and bond dimension \(D=96\), we ascertain \(x_{S}=2.193257\) for the Ising model, a value remarkably close to \(e^{\pi/4}=2.193280\).
Once we are certain of the value \(x_{S}=e^{\pi/4}\), we can verify Eq. (6) for all the OPE coefficients, which are computed from the three-leg tensor \(S\) as
\[C_{\alpha\beta\gamma}(L)=(\sqrt{2}\,e^{\pi/4})^{\Delta_{\alpha}+\Delta_{\beta }+\Delta_{\gamma}}2^{-\Delta_{\gamma}}S_{\alpha\beta\gamma}(L). \tag{13}\]
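In code, Eqs. (12) and (13) amount to the following (an illustrative sketch, not the analysis script from the repository; `S_ratio` denotes a finite-size tensor element already normalized by \(S_{111}(L)\), as in Eq. (5)).

```python
import numpy as np

X_S = np.exp(np.pi / 4.0)          # theoretical value of x_S

def x_s_estimate(S_ratio, C, deltas):
    """Finite-size estimate x_S(L), Eq. (12)."""
    d_a, d_b, d_g = deltas
    return (2.0 ** d_g * C / S_ratio) ** (1.0 / (d_a + d_b + d_g)) / np.sqrt(2.0)

def ope_estimate(S_ratio, deltas):
    """Finite-size OPE coefficient C_{abg}(L), Eq. (13), using x_S = e^{pi/4}."""
    d_a, d_b, d_g = deltas
    return (np.sqrt(2.0) * X_S) ** (d_a + d_b + d_g) * 2.0 ** (-d_g) * S_ratio

# For the Ising energy operator (Delta_eps = 1, C_{ee1} = 1), x_s_estimate
# applied to S_{ee1}(L) should approach e^{pi/4} ~ 2.19328 as L grows.
```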
The results are exhibited in Fig. 4. The finite-size effect originates from the twist operator at the branch points [29; 30], whose scaling is universal. The detailed analysis is discussed in the supplemental material.
We next computed four-point tensors \(T_{\alpha\beta\gamma\delta}\) and compared with the theoretical values from Eq. (7), where the explicit forms of the four-point functions are listed in the supplemental material. The result is consistent up to two digits for most tensor elements, as shown in Table 1. The exceptions are \(T_{\sigma\sigma\sigma\sigma}\) and \(T_{\sigma\sigma 11}\), whose numerical values deviate approximately 5% from the theoretical values. As for \(T_{\sigma\sigma\epsilon 1}\), the deviation is almost 24%. This discrepancy, however, can be attributed to finite-size effects and becomes negligible for infinite system sizes. To illustrate this, we define the finite-size deviation as
\[\delta T_{\alpha\beta\gamma\delta}\equiv T_{\alpha\beta\gamma\delta}^{*}-T_{ \alpha\beta\gamma\delta}(L).\]
Figure 5 presents the values of \(\delta T_{\sigma\sigma\sigma\sigma}(L)\), \(\delta T_{\sigma\sigma\epsilon 1}(L)\), and \(\delta T_{\sigma\sigma 11}(L)\) obtained from TRG calculations. A clear power-law decay with respect to the system size is observed, supporting the claim that the large deviations for those elements are finite-size effects. However, it is worth mentioning that the exponent closely approximates \(\sim L^{-1/3}\), hinting at the existence of an underlying theory that might account for this.
_Acknowledgement_-- We would like to thank Jacob Bridgeman, Clement Delcamp, Jutho Haegeman, Rui-Zhen Huang, Kansei Inamura, Andreas Lauchli, Laurens Lootens, Masaki Oshikawa, Slava Rychkov, Luca Tagliacozzo, Frank Verstraete and Yunqin Zheng for helpful discussions. A. U. is supported by the MERIT-WINGS Program at the University of Tokyo, the JSPS fellowship (DC1). He was supported in part by MEXT/JSPS KAKENHI Grants No. JP21J2052. M. Y. was supported in part by the JSPS Grant-in-Aid for Scientific Research (19H00689, 19K03820, 20H05860, 23H01168), and by JST, Japan (PRESTO Grant No. JPMJPR225A, Moonshot R&D Grant No. JPMJMS2061).
_Source Availability._- Our numerical data and analysis codes for the Ising fixed-point are publicly available at [https://github.com/dartsushi/TRG_D4_symmetry](https://github.com/dartsushi/TRG_D4_symmetry).
|
2306.12419 | A framework for statistical modelling of the extremes of longitudinal
data, applied to elite swimming | We develop methods, based on extreme value theory, for analysing observations
in the tails of longitudinal data, i.e., a data set consisting of a large
number of short time series, which are typically irregularly and
non-simultaneously sampled, yet have some commonality in the structure of each
series and exhibit independence between time series. Extreme value theory has
not been considered previously for the unique features of longitudinal data.
Across time series the data are assumed to follow a common generalised Pareto
distribution, above a high threshold. To account for temporal dependence of
such data we require a model to describe (i) the variation between the
different time series properties, (ii) the changes in distribution over time,
and (iii) the temporal dependence within each series. Our methodology has the
flexibility to capture both asymptotic dependence and asymptotic independence,
with this characteristic determined by the data. Bayesian inference is used
given the need for inference of parameters that are unique to each time series.
Our novel methodology is illustrated through the analysis of data from elite
swimmers in the men's 100m breaststroke. Unlike previous analyses of
personal-best data in this event, we are able to make inference about the
careers of individual swimmers - such as the probability an individual will
break the world record or swim the fastest time next year. | Harriet Spearing, Jonathan Tawn, David Irons, Tim Paulden | 2023-06-21T17:57:59Z | http://arxiv.org/abs/2306.12419v1 | A framework for statistical modelling of the extremes of longitudinal data, applied to elite swimming
###### Abstract
We develop methods, based on extreme value theory, for analysing observations in the tails of longitudinal data, i.e., a data set consisting of a large number of short time series, which are typically irregularly and non-simultaneously sampled, yet have some commonality in the structure of each series and exhibit independence between time series. Extreme value theory has not been considered previously for the unique features of longitudinal data. Across time series the data are assumed to follow a common generalised Pareto distribution, above a high threshold. To account for temporal dependence of such data we require a model to describe (i) the variation between the different time series properties, (ii) the changes in distribution over time, and (iii) the temporal dependence within each series. Our methodology has the flexibility to capture both asymptotic dependence and asymptotic independence, with this characteristic determined by the data. Bayesian inference is used given the need for inference of parameters that are unique to each time series. Our novel methodology is illustrated through the analysis of data from elite swimmers in the men's 100m breaststroke. Unlike previous analyses of personal-best data in this event, we are able to make inference about the careers of individual swimmers - such as the probability an individual will break the world record or swim the fastest time next year.
Keywords: Bayesian inference, elite swimming, extremal dependence, extreme value theory, longitudinal data, panel data, ranking, records, sports modelling.
## 1 Introduction
Traditional statistical techniques are designed to describe the behaviour of the "typical" data and many analyses involve the identification and removal of observations from the tails of the
data to improve robustness. But what if the data of most interest _are_ those observations in the tails? When considering natural disasters such as flooding, stresses or corrosion on a structure, financial crises, or sporting records, it is precisely these _extreme_ values that are most pertinent. _Extreme value theory_ (EVT) is a branch of statistics specifically designed to model such extreme or rare events, with the methods having a strong probabilistic framework based on asymptotic justifications. This paper presents novel methodology for the analysis of longitudinal data where the extreme values are of primary interest.
Early EVT methods describe the extremal behaviour of independent univariate random variables, possibly in the presence of covariates, with the book of Coles (2001) an accessible introduction. Since then, the extremal properties of ever richer data structures have been studied. For univariate stationary processes the following features have been considered: long- and short-range dependence (Ledford and Tawn, 2003), Markov structure (Winter and Tawn, 2017), and hierarchical clustered data (Smith and Goodman, 2000; Dupuis et al., 2023; Momoki and Yoshida, 2023). For multivariate extreme value problems, structure has been identified and exploited through the use of graphical structures (Engelke and Hitz, 2020) and models for conditional structures through asymptotic independence (Heffernan and Tawn, 2004). Various approaches have also been developed for spatial and spatio-temporal extreme events, such as \(r\)-Pareto processes (de Fondeville and Davison, 2022), spatial conditional asymptotically independent processes (Wadsworth and Tawn, 2022), and spatial mixture processes (Richards et al., 2023).
Currently there is no EVT methodology to model longitudinal (or panel) data. Such data comprises a number of _subjects_, with each subject recording a time series of responses (Diggle et al., 2002). Specifically, there are a set of subjects, \(\mathcal{I}\), with a subject \(i\) having responses \(\mathcal{J}_{i}\), for all \(i\in\mathcal{I}\). The response \(X_{i,j}\) belonging to subject \(i\), occurs at time \(t_{i,j}\in\mathbb{R}\), for all \(j\in\mathcal{J}_{i},\ i\in\mathcal{I}\). The typical assumptions made about the collection \(\{X_{i,j}:j\in\mathcal{J}_{i},\ \text{for}\ i\in\mathcal{I}\}\) are that: the \(X_{i,j}\) are independent over different \(i\in\mathcal{I}\), irrespective of \(j\), but they are potentially dependent across \(j\in\mathcal{J}_{i}\) for any given \(i\in\mathcal{I}\); there are a large number of subjects relative to the number of responses per subject; and the distribution of \(X_{i,j}\) varies with \(t_{i,j}\) similarly over subjects.
For analysing the extremes of longitudinal data, the _sample_\(\mathcal{I}\) comprises those subjects with at least one extreme observation within the observed time-frame. We distinguish between this sample of subjects \(\mathcal{I}\), and the _population_ of extreme subjects, which includes those subjects with extreme responses that are exclusively outside the observed time-frame; i.e., the subjects
may have either no responses at all, or have responses that are exclusively non-extreme. In applications where subjects exhibit non-stationarity, future extreme events change from being from subjects in \(\mathcal{I}\) to responses on subjects in the broader population.
Longitudinal data analyses arise most commonly in designed trials (e.g., in clinical or corrosion contexts) whereby multiple subjects (e.g., patients or material coupon samples) have a single quantity (e.g., blood pressure or corrosion, respectively) measured over time. There has been no extreme value modelling of clinical and corrosion data which captures the full specification of such data. For example, Southworth and Heffernan (2012) and Laycock and Scarf (1993) do not consider repeated measurements on the same subject. Fougeres et al. (2006) do consider multiple observations per coupon but assume that observations from the same coupon are IID. Further differences between our approach and papers which model extremes of longitudinal/panel data are outlined in the supplementary material. Our paper aims to be the first foray into developing broadly usable EVT methods for longitudinal data, with the flexibility to model both asymptotically dependent and asymptotically independent temporal extremal dependence structures and to capture trends in the means of subjects' responses over time.
Extreme value analysis of longitudinal data is important in athletics and swimming, with clear relevance for studying the progression of records and predicting who will be fastest next year. Athletes/swimmers (subjects) all strive to be fastest in their event, with their personal career progression having stages of improvement and decline with age, and with them competing at irregular and non-synchronised times. These subject-specific trends arise whilst overall performances by the elite athletes/swimmers are improving over time.
The application of EVT methods is not new for sports' data. EVT is used by Stephenson and Tawn (2013) to model athletics times data and by Strand and Boes (1998) to estimate the peak age of competitive 10K road race runners. Spearing et al. (2021) use EVT to model the evolution of elite swimming over time, including the effect of different swim-suit technologies, and combine data across different swimming strokes, gender categories and distances through the use of a data-based covariate. These models do not attempt to model dependence structure - either they assume that performances from the same subject are independent of each other, or only incorporate each subject's best performance into the data set. Each approach leads to incomplete inference: the former produces an underestimation of standard errors and confidence interval widths when the independence assumptions are invalidated; and the latter uses a smaller data set than is available, leading to inefficient inference. However, the true limitation of these
simplifications runs deeper. The lack of any longitudinal structure in these models means that no statistical inference can be conducted on any facet involving individual competitors.
We illustrate our novel EVT methodology for longitudinal data in the context of elite swimming, for the men's 100m breaststroke (long course) event. A swimmer is defined as elite if they have ever produced a swim-time less than a certain threshold \(u\). The selection of this threshold \(u\) is discussed in the supplementary material; here it is taken as the 200th fastest personal-best swim-time in the men's 100m breaststroke event, which is \(u=61.125\) seconds. In our approach (i) all the available recorded swims from each elite swimmer are modelled, irrespective of whether they are below or above \(u\), (ii) the swimmer who produced each swim-time is accounted for, as is the age at which it was achieved, (iii) the dependence between swim-times from the same swimmer is captured, with this dependence allowed to weaken as the inter-swim-time increases.
Figure 1 depicts the competition-best swim-times for five of the 200 elite swimmers who epitomise the range of career trajectories. Of these swimmers, Adam Peaty holds the current world record and so, the fastest personal-best (PB). Ilya Shymanovich has the 2nd fastest PB in the data, Sakci Hueseyin 8th, and Sakimoto Hiromasa the 101st. Takahashi has the 196th fastest PB, which is only just faster than \(u\) with that being their only swim faster than \(u\). The performances, and _career trajectories_ of the top two swimmers differ. Peaty is consistently fast, producing the seven fastest times of the competition-best dataset, and with all his performances faster than \(u\). Conversely, Shymanovich is in a clear progression stage of his career, moving from being slower than \(u\) to consistently faster. The figure illustrates there to be differing strategies for which, and how many, competitions swimmers compete in.
Figure 1: Data for swim-times (in seconds) plotted against the date when they were achieved, for the men's 100m breaststroke (long course) event. All competition-best performances are shown for five swimmers over time. The dashed line indicates the threshold \(u\).
Now consider the marginal distribution of the extreme swimming values, i.e., the values below \(u=61.125\). To motivate a possible model for these values we draw on EVT, which provides an asymptotic justification for using the generalised Pareto distribution (GPD); however, we have no justifiable parametric model for observations slower than \(u\). In modelling the extremes of longitudinal data, it is desirable that the extreme data be the most influential. Therefore, observations slower than \(u\) are treated as censored at the level of the threshold. As a consequence, all but one of Takahashi's observations are censored, whereas all of Peaty's observations can be modelled with the GPD. Critically, the values slower than the threshold are not lost as they provide marginal information about the rate of performing better than \(u\) and they inform about the dependence structure for individual swimmers through information about patterns of better and worse performances relative to \(u\).
Conventional presentations of EVT pertain to the largest values, i.e., the upper tail, yet the best swim-times are the smallest, i.e., in the lower tail. By applying our methodology to _negative_ swim-times, standard EVT results can be utilised. So, throughout we present theory and methods for the upper extremes of longitudinal data. Section 2 presents the extensions of univariate EVT to cover the time series aspect of each subject's data and illustrates how the level of subject variation induces both _asymptotic dependence_ and _asymptotic independence_. Section 3 contains the main contribution of the paper - a novel approach to the modelling of the extremes of longitudinal data. Section 4 presents the general Bayesian inference framework and Section 5 details how this modelling and inference framework can be applied to the elite swimming data, and provides examples of particular inferences and predictions that are available using our methodology. A discussion and future work are in Section 6.
## 2 Motivating Theory
### 2.1 Univariate extremes
In its simplest form, univariate extreme value theory (EVT) applies to independent and identically distributed (IID) random samples \(Y_{1},\ldots,Y_{n}\), where each variable has continuous distribution function \(F\). The block maxima and peaks over threshold methods are the two core approaches in univariate EVT (Coles, 2001). We are interested in formulating a theoretically justified marginal extreme value model for temporally dependent variables and describing the dependence structure induced by within- and across-subject observations for longitudinal
data. We also consider a stationary process \(X_{1},\ldots,X_{n}\), which has the same marginal distribution function \(F\) but satisfies conditions restricting its long-range dependence so that distant values behave as effectively independent; see Leadbetter et al. (2012) for the precise form of these conditions and discussion of the limit results (1) and (2). Under such conditions, the following results hold. If \(M_{Y,n}:=\max\{Y_{1},\ldots,Y_{n}\}\) and there exist norming sequences \(a_{n}>0\) and \(b_{n}\), such that
\[\Pr\left\{\frac{M_{Y,n}-b_{n}}{a_{n}}\leq x\right\}=F^{n}(a_{n}x+b_{n})\to G(x ),\text{ as }n\to\infty, \tag{1}\]
where the limiting distribution \(G(x)\) is non-degenerate, then \(G(x)\) must be a generalised extreme value (GEV) distribution, which has the form \(G(x)=\exp\left(-[1+\xi(x-\mu)/\sigma]_{+}^{-1/\xi}\right)\), where \(\mu,\ \xi\in\mathbb{R},\ \sigma\in\mathbb{R}^{+}\), are the location, shape and scale parameters respectively and with the notation \(y_{+}:=\max(y,0)\). Then for \(M_{X,n}:=\max\{X_{1},\ldots,X_{n}\}\), if \((M_{X,n}-b_{n})/a_{n}\) has a non-degenerate limit distribution, as \(n\to\infty\), it follows that
\[\Pr\left\{\frac{M_{X,n}-b_{n}}{a_{n}}\leq x\right\}\to[G(x)]^{\theta},\text{ as }n\to\infty, \tag{2}\]
where \(0<\theta\leq 1\) is the extremal index, a measure of extremal temporal dependence.
We are primarily interested in having an asymptotically motivated model for the upper tail behaviour of \(\{X_{t}\}\) and \(\{Y_{t}\}\). These models are derived directly from the limiting distribution of block maxima identified above. First, denote \(D_{G}:=\{x\in\mathbb{R}:0<G(x)<1\}\) and let both \(x\) and \(u\) be in \(D_{G}\) with \(x>u\). Then, as \(n\to\infty\), applying a Taylor series approximation to limit (1) gives, \(n[1-F(a_{n}x+b_{n})]\to-\log G(x)=[1+\xi(x-\mu)/\sigma]_{+}^{-1/\xi}\) and for \(Y\sim F\),
\[\Pr\{Y>a_{n}x+b_{n}|Y>a_{n}u+b_{n}\}\to\log G(x)/\log G(u)=:\bar{H}_{u}(x), \tag{3}\]
with \(\bar{H}_{u}(x):=1-H_{u}(x)\), and where the distribution function \(H_{u}\) is given by
\[H_{u}(x)=1-\left[1+\xi\left(\frac{x-u}{\sigma_{u}}\right)\right]_{+}^{-\frac{1}{\xi}}, \tag{4}\]
where \(\sigma_{u}=\sigma+\xi(u-\mu)\). The distribution function \(H_{u}\) is termed the generalised Pareto distribution (GPD), denoted GPD\((\sigma_{u},\xi)\), with threshold \(u\), shape parameter \(\xi\in\mathbb{R}\) and scale parameter \(\sigma_{u}\in\mathbb{R}_{+}\). For \(\xi<0\), there exists a finite value \(x^{H}=u-\sigma_{u}/\xi:\ H_{u}(x)=1,\ \forall x>x^{H}\), whereas for \(\xi\geq 0,\ H_{u}(x)<1,\ \forall x<\infty\). This GPD result is powerful as it holds as the limit distribution for a very broad class of continuous distributions \(F\).
The same GPD\((\sigma_{u},\xi)\) limit distribution holds for \(\Pr\{X>a_{n}x+b_{n}|X>a_{n}u+b_{n}\}\) as \(n\to\infty\) with \(X\sim X_{i}\). Additionally Leadbetter (1991) gives that for an arbitrary cluster maxima \(X_{C}\) of \(\{X_{t}\}\), then \(\Pr\{X_{C}>a_{n}x+b_{n}|X_{C}>a_{n}u+b_{n}\}\) as \(n\to\infty\), is also GPD\((\sigma_{u},\xi)\). This has
motivated the use of the generalised Pareto distribution as a statistical model for cluster maxima (Davison and Smith, 1990), but for our purposes shows the connection between the tail of the distribution for all swims and competition maxima.
In practice the limit distribution (3) is assumed to hold exactly for some finite \(n\), or equivalently for some fixed threshold \(a_{n}u+b_{n}\), corresponding to a high quantile of \(Y\) or \(X\). A consequence is that the limit distribution \(H_{u}\) gives an asymptotic model, determined by only two parameters, for the distribution of exceedances above a threshold \(u\), no matter the form of the marginal distribution \(F\). To complete the description of the tail of the marginal distribution we define the marginal probability of a threshold exceedance, \(\lambda_{u}:=\Pr(X>u)\). The optimal choice of \(u\) is determined by bias-variance trade-off arguments (Scarrott and MacDonald, 2012).
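For concreteness, the tail model above can be evaluated directly from its two GPD parameters together with the exceedance probability \(\lambda_{u}\). The following is a minimal illustrative sketch (our own code, with arbitrary parameter values rather than estimates):

```python
import numpy as np

def gpd_tail_survival(x, u, sigma_u, xi, lambda_u):
    """Pr(X > x) for x > u: lambda_u * [1 + xi*(x - u)/sigma_u]_+^(-1/xi)."""
    z = np.maximum(1.0 + xi * (x - u) / sigma_u, 0.0)
    return lambda_u * z ** (-1.0 / xi)

def upper_endpoint(u, sigma_u, xi):
    """Finite upper endpoint x^H = u - sigma_u/xi when xi < 0, else infinity."""
    return u - sigma_u / xi if xi < 0 else np.inf

# Arbitrary illustrative values
u, sigma_u, xi, lam = 0.0, 1.0, -0.2, 0.4
print(gpd_tail_survival(1.5, u, sigma_u, xi, lam))   # tail probability beyond x = 1.5
print(upper_endpoint(u, sigma_u, xi))                # = 5.0 for these values
```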
### 2.2 Extremal dependence: measures and modelling strategies
To account for dependence between the extreme responses from a given subject, we draw on knowledge of generic extremal dependence measures and the associated modelling strategies before considering the specific features that are unique to longitudinal data.
When modelling dependence between the extremes of two variables the typical approach involves first deciding on the _form_ of extremal dependence, and then looking for an appropriate model formulation subject to that form (Coles et al., 1999). For bivariate extremes, with continuous random variables \((X_{1},X_{2})\) with marginal distributions \(F_{1}\) and \(F_{2}\), respectively, the two forms of extremal dependence in the upper tail are determined by _the coefficient of asymptotic dependence_\(\chi:=\lim_{q\uparrow 1}\chi(q)\) where, for \(0<q<1\),
\[\chi(q):=\Pr\{F_{1}(X_{1})>q\ |\ F_{2}(X_{2})>q\}=\Pr\{F_{1}(X_{1})>q,F_{2}(X_{2 })>q\}/(1-q), \tag{5}\]
with _asymptotic dependence_ given by \(0<\chi\leq 1\) and _asymptotic independence_ by \(\chi=0\). In essence, asymptotic dependence allows the very largest values of \(X_{1}\) and \(X_{2}\) to occur together, unlike for asymptotic independence. This interpretation is made precise by looking at the limiting distribution of normalised componentwise maxima of IID vectors \(\{(X_{1i},X_{2i}):i=1,\ldots,n\}\), such that the marginal limiting distributions are non-degenerate. Then, the two variables are termed asymptotic dependent, or asymptotic independent, if that limiting distribution exhibits dependence, or independence, respectively. Variables may exhibit extremal dependence without asymptotic dependence, with this dependence measured by the _coefficient of asymptotic
_independence_, \(\bar{\chi}:=\lim_{q\uparrow 1}\bar{\chi}(q)\in(-1,1]\), where for \(0<q<1\),
\[\bar{\chi}(q):=\frac{2\log\Pr\{F_{2}(X_{2})>q\}}{\log\Pr\{F_{1}(X_{1})>q,F_{2}(X_ {2})>q\}}-1, \tag{6}\]
with independent variables giving \(\bar{\chi}=0\), and \(0<\bar{\chi}<1\) (\(\bar{\chi}<0\)) corresponding to positive (negative) extremal dependence within the asymptotic independence case, and \(\bar{\chi}=1\) under asymptotic dependence. Both \(\chi\) and \(\bar{\chi}\) are invariant to the marginal distributions, so in terms of models for the joint distribution it is helpful to consider different copulas (Nelsen, 2007).
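Both coefficients can be estimated empirically at a fixed quantile level \(q\) by replacing the probabilities in (5) and (6) with sample proportions, with ranks standing in for the unknown margins. A minimal illustrative sketch (our own code, not from the paper):

```python
import numpy as np

def chi_chibar(x1, x2, q):
    """Empirical chi(q) and chibar(q) for paired samples x1, x2 at quantile level q."""
    n = len(x1)
    # Rank transform to approximately uniform margins
    u1 = np.argsort(np.argsort(x1)) / (n + 1.0)
    u2 = np.argsort(np.argsort(x2)) / (n + 1.0)
    p_joint = np.mean((u1 > q) & (u2 > q))          # Pr{F1(X1) > q, F2(X2) > q}
    chi_q = p_joint / (1.0 - q)                     # expression (5)
    chibar_q = 2.0 * np.log(1.0 - q) / np.log(p_joint) - 1.0  # expression (6)
    return chi_q, chibar_q

rng = np.random.default_rng(1)
z = rng.normal(size=5000)
x1, x2 = z + rng.normal(size=5000), z + rng.normal(size=5000)  # a dependent pair
print(chi_chibar(x1, x2, q=0.95))
```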
Fougeres et al. (2009) use the bivariate extreme value distribution copula with logistic(\(\alpha\)) dependence structure, which has \((\chi,\bar{\chi})=(2-2^{\alpha},1)\) for \(0\leq\alpha<1\) and \((\chi,\bar{\chi})=(0,0)\) when \(\alpha=1\). This copula is restrictive as it cannot capture positive dependence within the asymptotic independence case. The Gaussian copula has \((\chi,\bar{\chi})=(0,\rho)\) for correlation parameter \(-1<\rho<1\) (Coles et al., 1999); though it does not offer asymptotic dependence, it gives flexibility and parsimony for asymptotic independence structures, and it benefits from closed-form conditional distributions for simulating the time series features of longitudinal data.
Given these properties, within-subject measurements were modelled via a Gaussian copula, see Section 3.2. This may appear restrictive, but we demonstrate in Section 2.3 that, due to the variation across subjects, any level of asymptotic dependence or asymptotic independence can be approximated for the longitudinal data using this copula. This flexibility is not possible if starting with an asymptotically dependent copula.
### 2.3 Measures of longitudinal data extremal dependence
Consider a special case of the set up of Section 1, with a stationary continuous time process for each subject \(i\in\mathcal{I}\) being \(\{X_{i}(t)\}\) for all \(t\) which are observed at a set of identical and equally spaced time points across the \(n\) subjects. Denote \(X_{i,j}=X_{i}(t_{i,j})=X_{i}(t_{j})\), where \(t_{j}\) is the \(j\)th time point. We assume that the marginal distribution of the \(i\)th subject is \(F_{i}(\cdot)=F(\cdot;\alpha_{i})\) where \(F\) is a common continuous distribution function family with parameter \(\alpha_{i}\in\mathbb{R}\) which varies over \(i\in\mathcal{I}\). We term \(\alpha_{i}\) the _attribute_ of subject \(i\), with the property that \(F(x;\alpha_{i})>F(x;\alpha_{j})\) for all \(x\in\mathbb{R}\) for all \(\alpha_{i}>\alpha_{j}\). Increasing the attribute of a subject makes the quantiles of its response distribution larger. Given the potential heterogeneity between subjects, a basic application of the coefficient of asymptotic dependence for within-subject dependence at time-lag \(\tau\), for all \(\tau\in\mathbb{R}\) for each subject \(i\in\mathcal{I}\) is:
\[\chi_{i}(\tau):=\lim_{q\uparrow 1}\Pr(F(X_{i}(\tau);\alpha_{i})>q\mid F(X_{i} (0);\alpha_{i})>q), \tag{7}\]
or the equivalent asymptotic independence measure \(\bar{\chi}_{i}(\tau)\). These measures do not provide a global description of the dependence across all subjects in \(\mathcal{I}\), with two such measures being discussed in the supplementary material.
To study how subject attributes determine extremal dependence of longitudinal data, consider all \(n\) independent subjects having responses at only two time points - which are the same across subjects - and the responses per subject are independent, except for subject \(n\). Additionally all subjects have identical attributes except for subject \(n\). In the notation of Section 1, \(\mathcal{J}_{i}=\{1,2\}\) for all \(i\in\mathcal{I}\), \(X_{i,j}\sim N(0,1)\) for \(i=1,\ldots,n-1\) and \(j=1,2\) are mutually independent, while subject \(n\) has a potentially different mean, namely \(X_{nj}\sim N(\alpha_{n},1)\) for \(j=1,2\) and \((X_{n1},X_{n2})\) are bivariate Normal with correlation \(0\leq\rho<1\), which with standard margins has joint distribution function denoted by \(\Phi_{2}(\cdot,\cdot;\rho)\). Thus here \(F(x;\alpha_{i})=\Phi(x-\alpha_{i})\), with attributes \(\alpha_{1}=\ldots=\alpha_{n-1}=0\) and \(\alpha_{n}\).
The subject-specific dependence measures at lag \(\tau=1\) are \((\chi_{i1},\bar{\chi}_{i1})=(0,0)\) for subjects \(i=1,\ldots,n-1\) due to the independence assumption, and due to the bivariate Normal distribution for subject \(n\) we have \((\chi_{n1},\bar{\chi}_{n1})=(0,\rho)\). So every subject exhibits asymptotic independence between its two responses, although subject \(n\)'s responses are not independent. When studying the across-population behaviour, we investigate two cases for \(\alpha_{n}\): (i) \((2\log n)^{1/2}/\alpha_{n}=o(1)\) as \(n\rightarrow\infty\) and (ii) \(\alpha_{n}/(2\log n)^{1/2}=o(1)\) as \(n\rightarrow\infty\); the latter includes both \(\alpha_{n}\rightarrow\infty\) as \(n\rightarrow\infty\) and \(\alpha_{n}=0\) for all \(n\). We will show that cases (i) and (ii) lead to results which are consistent with asymptotic dependence and asymptotic independence respectively.
Consider the dependence of the componentwise maxima \((M_{n,1},M_{n,2})\), over the two time points, i.e., \(M_{n,j}:=\max\left(\left\{X_{i,j}:i\in\mathcal{I}\right\}\right)\), for \(j=1,2\) and \(n\rightarrow\infty\) for case (i). For the two marginal maxima we have that, for any \(x\in\mathbb{R}\), \(\Pr\{M_{nj}-\alpha_{n}<x\}=\left[\Phi(\alpha_{n}+x)\right]^{n-1}\Phi(x) \rightarrow\Phi(x)\) as \(n\rightarrow\infty\), i.e., a non-degenerate Gaussian limit. This result follows from Section 2.1 since for \(\alpha_{n}\) in case (i), \(n[1-\Phi(\alpha_{n}+x)]\to 0\) for all \(x\in\mathbb{R}\). The reason for this convergence follows from univariate extreme value results for standard Gaussian variables, i.e., \(n[1-\Phi(a_{n}y+b_{n})]\rightarrow\exp(-y)\) for \(a_{n}=(2\log n)^{-1/2}\) and \(b_{n}=(2\log n)^{1/2}+o(1)\) for \(y\in\mathbb{R}\)(Leadbetter et al., 2012). Now consider the joint probability, for \((x,y)\in\mathbb{R}^{2}\), as \(n\rightarrow\infty\), given by
\[\Pr\{M_{n1}-\alpha_{n}<x,M_{n2}-\alpha_{n}<y\}=\left[\Phi(\alpha_{n}+x)\Phi( \alpha_{n}+y)\right]^{n-1}\Phi_{2}(x,y;\rho)\rightarrow\Phi_{2}(x,y;\rho), \tag{8}\]
where the non-degenerate limit arises using the same logic as for the marginal convergence. The joint maxima are asymptotically dependent when \(\rho>0\), with the limit not restricted to being a bivariate extreme value distribution as the variables are not identically distributed. Case (ii)
for the \(\alpha_{n}\) gives that \(\Pr\{(M_{nj}-b_{n})/a_{n}<x\}\to G(x)\), where \(G(x)=\exp[-\exp(-x)]\), and
\[\Pr\{(M_{n1}-b_{n})/a_{n}<x,(M_{n2}-b_{n})/a_{n}<y\}\to G(x)G(y)\]
as \(n\to\infty\). These limits show a change in the marginal limit distribution from Gaussian to Gumbel and independence of the limiting componentwise maxima, so asymptotic independence.
These two asymptotic regimes for longitudinal data illustrate that the nature of extremal dependence is different for this framework than for stationary series. Specifically, they demonstrate that asymptotic dependence per subject is not essential to achieve asymptotic dependence for longitudinal data; asymptotic dependence can be achieved by having subjects with a heavy tailed attribute distribution; and that both asymptotic dependence and asymptotic independence can be achieved from a simple Gaussian copula. Critical to the form of extremal dependence is the level of between-subject variation (via the attribute variation) relative to the within-subject variation. Here in case (i) \(\alpha_{n}\) dominates the maximum of the responses over all other subjects but not in case (ii).
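A small simulation of the two-time-point example above makes the contrast concrete: under case (i) the special subject's attribute \(\alpha_{n}\) dominates and the componentwise maxima inherit its correlation \(\rho\), whereas under case (ii) (here \(\alpha_{n}=0\)) they are close to independent. A minimal sketch, with dependence summarised by the correlation of the two maxima (the values of \(n\), \(\rho\) and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_maxima(n, alpha_n, rho, reps=5000):
    """Componentwise maxima (M_n1, M_n2) of the two-time-point toy model."""
    x = rng.normal(size=(reps, n - 1, 2))                   # independent N(0,1) subjects
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xn = rng.multivariate_normal([alpha_n, alpha_n], cov, size=reps)  # the special subject n
    return np.maximum(x.max(axis=1), xn)

n, rho = 500, 0.7
m_dep = simulate_maxima(n, alpha_n=3.0 * np.sqrt(2 * np.log(n)), rho=rho)  # case (i)
m_ind = simulate_maxima(n, alpha_n=0.0, rho=rho)                           # case (ii)
print("case (i) corr of maxima :", np.corrcoef(m_dep.T)[0, 1])  # close to rho
print("case (ii) corr of maxima:", np.corrcoef(m_ind.T)[0, 1])  # close to 0
```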
## 3 Extremal Model for Longitudinal Data
### 3.1 Population Marginal Model
When developing a marginal model for the population of longitudinal random variables \(\{(X_{i,j},t_{i,j}):j\in\mathcal{J}_{i},i\in\mathcal{I}\}\), we make the critical decision to ignore the subject-specific nature of the data, as is conventional in previous extremal analyses. We refer to this characteristic as _subject-ignorant_. Instead, the information regarding specific subjects is captured through our dependence modelling in Section 3.2. The reasons for this strategy are three-fold. Firstly, the number of observations per subject, i.e., \(|\mathcal{J}_{i}|\) for subject \(i\), is likely to be small in most applications and so a separate marginal model (see Section 2.1) per subject for the data in the tails is an unrealistic target, even with some pooling (Dupuis et al., 2023). Secondly, modelling the tail of a population using a single GPD enables inference to be made about trends in the population as a whole (Spearing et al., 2021). Thirdly, this enables application-specific structure identified from previous GPD analyses, which ignore subject knowledge, to be exploited.
Given the above strategy, consider a generic pair \((X,t)\), written as \(X_{t}\). For a selected constant over time threshold \(u\), there are three features of the distribution of \(X_{t}\) we describe: the behaviour above the threshold \(u\), the probability of \(X_{t}\) exceeding \(u\), and the distribution of \(X_{t}\) being below \(u\). The latter is not typically studied in extremes of a univariate variable, but
keeping track of the behaviour below the threshold is important here for dependence modelling of within-subject data in Section 3.2.
Above the threshold \(u\) we assume that, for \(x>0\), \(\Pr\{X_{t}-u<x|X_{t}>u\}\) follows a GPD\((\sigma_{u}(t),\xi)\), as given by expression (4). Although \(X_{t}\) is potentially complex in its variation over \(t\), temporal variation is assumed to enter only through \(\sigma_{u}(t)\), a typical and pragmatic approach (Coles, 2001). The probability of exceeding the threshold, \(\Pr\{X_{t}>u\}=:\lambda_{u}(t)\), is also allowed to vary with time. Modelling approaches for how \((\sigma_{u}(t),\lambda_{u}(t))\) vary with \(t\) include parametric (see Section 5.2), non-parametric, and machine learning approaches (Richards and Huser, 2022).
The \(X_{t}\), conditionally on being below \(u\), are assumed to follow some unknown but continuous density function \(h_{t}:(-\infty,u]\to\mathbb{R}_{+}\), with \(\int_{-\infty}^{u}h_{t}(s)\,\mathrm{d}s=1\), where \(h_{t}\) does not depend on \((\lambda_{u},\sigma_{u},\xi)\). Combining all these models gives the distribution function \(F_{X_{t}}\) of \(X_{t}\) as
\[F_{X_{t}}(x)=\begin{cases}1-\lambda_{u}(t)\left[1+\xi(x-u)/\sigma_{u}(t) \right]_{+}^{-\frac{1}{\xi}},&x>u,\\ \left[1-\lambda_{u}(t)\right]\int_{-\infty}^{x}h_{t}(s)\,\mathrm{d}s,&x\leq u.\end{cases} \tag{9}\]
As with the vast majority of extreme value modelling we avoid imposing a structure on the distribution of \(X_{t}<u\), i.e., the density \(h_{t}\) here. Even if a parametric model for \(h_{t}\) had no parameters in common with those in the GPD or \(\lambda_{u}\) models, there is a risk of bias from misspecifying \(h_{t}\) in the longitudinal setting due to the dependence between values \(X_{i,j}\) and \(X_{ij^{\prime}}\) for \(j^{\prime}\neq j\), where \(X_{i,j}<u<X_{ij^{\prime}}\). In such cases, errors in modelling below the threshold can induce errors above the threshold to compensate. Therefore, any actual value \(X_{i,j}\) below \(u\) is instead treated as censored, i.e., as a realisation of the event \(X_{i,j}<u\).
### 3.2 Dependence Structure in a Latent Space
The focus now turns to modelling the dependence structure of random variables \(\{(X_{i,j},t_{i,j}):j\in\mathcal{J}_{i},i\in\mathcal{I}\}\). Specifically, we need to allow for temporal dependence between within-subject variables and independence between across-subject variables, so unlike in Section 3.1 knowledge of each subject's contribution to the data is accounted for. The formulation of these models builds on the findings of Section 2.3, which showed that multivariate Gaussian distributions for within-subject variations combined with an attribute distribution that has the capacity for both heavier and shorter tails than the within-subject Gaussian distribution, provide sufficient flexibility to allow for both extremal dependence forms.
The adopted modelling strategy bears likeness to that of Huser and Wadsworth (2019), i.e., focusing on the joint structure of variables, without concern for its implications on the marginals
at that stage. Subsequently, in Section 3.3, the marginal distributions of this model are linked to the formulation in Section 3.1. In particular, a model is adopted in terms of variables \(\{(Z_{i,j},t_{i,j}):j\in\mathcal{J}_{i},i\in\mathcal{I}\}\), where \(Z_{i,j}=T_{t}(X_{i,j})\) for a function \(T_{t}\) defined in Section 3.3, and we refer to the stochastic model for the \(\{Z_{i,j}\}\) as a model in the _latent space_.
In the latent space we develop a model for responses from the same subject, e.g., \(\{(Z_{i,j},t_{i,j}):j\in\mathcal{J}_{i}\}\) for subject \(i\). We follow standard Gaussian modelling assumptions of longitudinal data analysis (Diggle et al., 2002). The subject-specific model takes \(Z_{i,j}\), across \(j\in\mathcal{J}_{i}\), as realisations of a Gaussian process \(Z_{i}(t)\) over time \(t\in\mathbb{R}\) observed at the times \(\boldsymbol{t}_{i}:=\{t_{i,j}:j\in\mathcal{J}_{i}\}\). Specifically,
\[Z_{i}(t)\sim\mathcal{GP}\left(\mu_{i}(t),\nu_{i}^{2}K_{\boldsymbol{\kappa}}( \cdot,\cdot)\right),\text{ for all }t\in\mathbb{R}, \tag{10}\]
where the _mean function_\(\mu_{i}(t):\mathbb{R}\rightarrow\mathbb{R}\) is a subject-specific time-dependent mean, \(\nu_{i}>0\) is a homogeneous subject-specific standard deviation, and \(K_{\boldsymbol{\kappa}}\) is a stationary kernel, which is shared over subjects, and which dictates the _subject-conditional correlation_ between the process at any times \(t\in\mathbb{R}\) and \(t^{\prime}\in\mathbb{R}\) with hyper-parameters \(\boldsymbol{\kappa}\). The term \(\mu_{i}(t)\) allows for the statistical properties of individual subjects to evolve over time separately from that of the population marginal model, as is the case for many applications in longitudinal analysis. To avoid overparametrisation over individuals it is reasonable to assume that
\[\mu_{i}(t;\boldsymbol{\theta}_{i},\boldsymbol{\gamma})=\alpha_{i}+\mu(t,\tau_ {i};\boldsymbol{\gamma}),\text{ for all }t\in\mathbb{R}, \tag{11}\]
for a subject-ignorant function \(\mu\leq 0\) with parameters \(\boldsymbol{\gamma}\), subject-specific parameters \(\boldsymbol{\theta}_{i}=(\alpha_{i},\tau_{i})\) and covariates (which are ignored in this formulation, but are used in Section 5.2). To ensure that \(\alpha_{i}\) is identifiable, the maximum of the function \(\mu\), over \(t\), is set to zero, i.e., \(\alpha_{i}=\max_{t\in\mathbb{R}}\mu_{i}(t;\boldsymbol{\theta}_{i})\). Then \(\alpha_{i}\) is the \(i\)th subject's _attribute_, as in Section 2.3. When \(\mu\equiv 0\) in model (10) the subject-specific dependence measures are \((\chi_{i\tau},\bar{\chi}_{i\tau})=(0,K_{\boldsymbol{\kappa}}(0,\tau))\), for all \(i\in\mathcal{I}\).
The form of the stationary kernel is application specific. A powered exponential is used
\[K_{\boldsymbol{\kappa}}(t,t^{\prime})=\exp(-\kappa_{0}|t-t^{\prime}|^{\kappa_ {1}}), \tag{12}\]
with \(\boldsymbol{\kappa}=(\kappa_{0},\kappa_{1})\in\mathbb{R}_{+}\times[0.5,2]\) in Section 5, where larger \(\kappa_{0}\) gives less subject-conditional dependence (with the limit \(\kappa_{0}\rightarrow\infty\) giving subject-conditional independence); and \(\kappa_{1}\) influences the local smoothness of the process, with larger \(\kappa_{1}\) giving a smoother process, with the limit \(\kappa_{1}\to 2\) corresponding to a process which is infinitely differentiable, and when \(\kappa_{1}=1\) the process is Markov. Other well-established kernels, e.g., the Matern family (Diggle et al., 2002), were
trialled in exploratory analysis for the application in Section 5 but made no practical differences due to having few observations per subject and none at short time lags.
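As a concrete illustration, the kernel (12) and a draw of one subject's latent process from (10) can be coded in a few lines (illustrative code; the parameter values and observation times are arbitrary, not fitted quantities):

```python
import numpy as np

def powered_exp_kernel(t, t_prime, kappa0, kappa1):
    """K_kappa(t, t') = exp(-kappa0 * |t - t'|^kappa1), returned as a matrix."""
    d = np.abs(np.subtract.outer(t, t_prime))
    return np.exp(-kappa0 * d ** kappa1)

def simulate_subject(t_obs, mu_fun, nu, kappa0, kappa1, rng):
    """Draw Z_i(t) at times t_obs from the Gaussian process in (10) for one subject."""
    K = powered_exp_kernel(t_obs, t_obs, kappa0, kappa1)
    cov = nu**2 * K + 1e-9 * np.eye(len(t_obs))   # small jitter for numerical stability
    return rng.multivariate_normal(mu_fun(t_obs), cov)

rng = np.random.default_rng(2)
t_obs = np.array([0.0, 0.1, 0.4, 0.9, 1.5])       # observation times (e.g., years)
mu_fun = lambda t: -0.02 * (t - 0.7) ** 2         # a toy subject-specific mean function
z = simulate_subject(t_obs, mu_fun, nu=0.5, kappa0=2.0, kappa1=1.0, rng=rng)
print(z)
```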
Conditioning on the latent model parameters, the marginal distribution of \(Z\), an arbitrary observation from the longitudinal data in the latent space, with \(n_{i}:=|\mathcal{J}_{i}|\) and \(n=\sum_{i\in\mathcal{I}}n_{i}\), is
\[G_{Z}(z)=\frac{1}{n}\sum_{i\in\mathcal{I}}\sum_{j=1}^{n_{i}}\Phi\left(\frac{z- \mu_{i}(t_{i,j})}{\nu_{i}}\right). \tag{13}\]
So the marginal distribution of \(Z\) is a Gaussian mixture over subjects and observation times. The marginal variation across subjects, as in Section 2.3, is captured exclusively through the distribution of the attributes \(\{\alpha_{i}:i\in\mathcal{I}\}\). All \(\alpha_{i}\) are taken to be independent and identically distributed over subjects with \(\alpha_{i}\sim N(0,V_{\alpha}^{2})\) for all \(i\in\mathcal{I}\), for a given fixed value of \(V_{\alpha}>0\).
From Section 2.3, it is clear that the ratio between the variance \(V_{\alpha}^{2}\) of the \(\{\alpha_{i}\}\) and the within-subject variance, i.e., \(\nu_{i}^{2}\) for subject \(i\), determines whether the longitudinal data exhibit asymptotic dependence or asymptotic independence. Hence \(V_{\alpha}\) can be fixed to any chosen value, since the \(\{\nu_{i}\}\) are estimated from the data, and so their values adapt proportionally to the choice of \(V_{\alpha}\). Thus the data determine the form of longitudinal data extremal dependence.
### 3.3 Transforming Margins between Observed and Latent Spaces
The probability integral transform (14) links the observation scale of \(X\) to and from the latent space of \(Z\) defined in Sections 3.1 and 3.2 respectively. For \(F_{X_{t}}\) and \(G_{Z}\) defined by expressions (9) and (13), respectively the variables \(X_{i,j}\) and \(Z_{i,j}\), both at time \(t_{i,j}\), are linked by
\[G_{Z}(Z_{i,j})=F_{X_{t_{i,j}}}(X_{i,j}),\text{ so }Z_{i,j}:=T_{t}(X_{i,j})=G_{Z}^ {-1}\{F_{X_{t_{i,j}}}(X_{i,j})\} \tag{14}\]
for \(T_{t}\) as in Section 3.2. For \(X_{i,j}\) above the threshold on the original margins, the transform is
\[Z_{i,j}=G_{Z}^{-1}\left\{1-\lambda_{u}(t_{i,j})\left[1+\xi(X_{i,j}-u)/\sigma_{ u}(t_{i,j})\right]_{+}^{-\frac{1}{\xi}}\right\}, \tag{15}\]
whereas when these points are below the threshold,
\[Z_{i,j}=G_{Z}^{-1}\left\{\left[1-\lambda_{u}(t_{i,j})\right]\int_{-\infty}^{X _{i,j}}h_{t_{i,j}}(s)\,\mathrm{d}s\right\}.\]
The threshold \(u\) in the observation space becomes time-varying in the latent space, i.e., \(u_{Z}(t)=G_{Z}^{-1}\left\{1-\lambda_{u}(t)\right\}\). As the density function \(h_{t}\) is unknown and we do not want to model it, a censoring approach was proposed in Section 3.1. For this range of \(X_{i,j}\), the random variable \(V_{i,j}:=\int_{-\infty}^{X_{i,j}}h_{t_{i,j}}(s)\,\mathrm{d}s\) is Uniform(0,1) distributed. So the auxiliary variable \(V_{i,j}\sim\text{Uniform}(0,1)\) is introduced into the transformation when \(X_{i,j}<u\), to give \(Z_{i,j}=G_{Z}^{-1}\left\{\left[1-\lambda_{u}(t_{i,j})\right]V_{i,j}\right\}.\)
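In code, the transformation (14)-(15) amounts to composing the fitted tail distribution with a numerical inverse of the Gaussian-mixture distribution (13). A minimal illustrative sketch (the mixture means `mu_ij`, the scale `nu` and the tail parameters are placeholders for fitted quantities):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def G_Z(z, mu_ij, nu):
    """Latent marginal CDF (13): a Gaussian mixture over all observation pairs (i, j)."""
    return np.mean(norm.cdf((z - mu_ij) / nu))

def G_Z_inv(p, mu_ij, nu):
    """Numerical inverse of G_Z via root finding on a wide bracket."""
    lo, hi = mu_ij.min() - 10 * nu, mu_ij.max() + 10 * nu
    return brentq(lambda z: G_Z(z, mu_ij, nu) - p, lo, hi)

def to_latent(x, exceeds_u, u, sigma_u, xi, lam, v, mu_ij, nu):
    """Map one observation to the latent space: (15) above u, censored version below."""
    if exceeds_u:
        p = 1.0 - lam * (1.0 + xi * (x - u) / sigma_u) ** (-1.0 / xi)
    else:
        p = (1.0 - lam) * v   # v ~ Uniform(0,1) is the auxiliary variable
    return G_Z_inv(p, mu_ij, nu)

# Toy usage with placeholder values
mu_ij = np.array([-1.0, 0.0, 0.5, 1.5])   # mixture means mu_i(t_ij) over all (i, j)
z = to_latent(x=1.2, exceeds_u=True, u=1.0, sigma_u=0.8, xi=-0.2,
              lam=0.4, v=None, mu_ij=mu_ij, nu=0.5)
print(z)
```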
For making joint inferences across marginal and dependence structure parameters the likelihood functions in Section 4 require the Jacobian terms for these transformations. In each term the marginal density in the latent space is required, i.e.,
\[g_{Z}(z;\boldsymbol{\theta},\boldsymbol{\gamma},\boldsymbol{\nu})=\frac{1}{n} \sum_{i\in\mathcal{I}}\sum_{j=1}^{n_{i}}\frac{1}{\nu_{i}}\phi\left(\frac{z-\mu_ {i}(t_{i,j};\boldsymbol{\theta}_{i},\boldsymbol{\gamma})}{\nu_{i}}\right),\]
where \(\boldsymbol{\nu}:=\{\nu_{i}:i\in\mathcal{I}\}\) and \(\boldsymbol{\theta}:=\{\boldsymbol{\theta}_{i}:i\in\mathcal{I}\}\). For a realisation \(x\) of \(X\) (or \(v\) of \(V\)) when the observation is above (or below) \(u\), respectively, the associated realised value \(z\) of \(Z\) is obtained using the transformations above. For \(\boldsymbol{\sigma}\) and \(\boldsymbol{\beta}\) being parameters of the model for \(\sigma_{u}(t)\) and \(\lambda_{u}(t)\) respectively, the Jacobian terms at time \(t\) for above (\(J_{+}\)) and below (\(J_{-}\)) the threshold are
\[J_{+}(x;t,\xi,\boldsymbol{\sigma},\boldsymbol{\beta},\boldsymbol{ \theta},\boldsymbol{\gamma},\boldsymbol{\nu}) = \frac{\lambda_{u}(t;\boldsymbol{\beta})}{\sigma_{u}(t;\boldsymbol {\sigma})g_{Z}(z;\boldsymbol{\theta},\boldsymbol{\gamma},\boldsymbol{\nu})}[1+ \xi(x-u)/\sigma_{u}(t;\boldsymbol{\sigma})]_{+}^{-\frac{1}{\xi}-1},\] \[J_{-}(v;t,\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{ \gamma},\boldsymbol{\nu}) = [1-\lambda_{u}(t;\boldsymbol{\beta})]/g_{Z}(z;\boldsymbol{\theta},\boldsymbol{\gamma},\boldsymbol{\nu}). \tag{16}\]
### 3.4 Predicting future extreme events in longitudinal data
In accounting for the longitudinal structure, predictions of extreme events regarding individual subjects are ascertainable, e.g., a new record by a particular subject \(i\in\mathcal{I}\). Such inferences incorporate each subject's mean function over time and temporal dependence, with both aspects described by the Gaussian process model of Section 3.2, which gives analytical solutions to such probabilities via closed form conditional distributions. The supplementary material provides an example prediction, namely, the probability of a subject \(i\in\mathcal{I}\) breaking the current record response \(r\) in some future time period, with the probability derived under an idealised scenario.
The evaluation of such probabilities under any realistic scenario is most simply conducted through Monte Carlo methods, simulating over different realisations of the longitudinal process for the fitted model. When subject-specific mean functions are non-constant, decaying eventually over time, then in the longer-term the extreme events are more likely to be due to subjects not yet observed in \(\mathcal{I}\). However, in the short-term these future extreme events are most likely to be obtained by current subjects in \(\mathcal{I}\), followed by a transitional medium-term where extremes arise from a mixture of these populations of subjects. In the supplementary material we provide a simulation framework that integrates information across the three classes of future subjects: those subjects in \(\mathcal{I}\), indexed by \(\mathcal{I}^{c}\) with \(\mathcal{I}^{c}\subseteq\mathcal{I}\), which are still producing at least one response above \(u\) in the future time window; those subjects \(\mathcal{I}^{f}\), which produced responses exclusively below the threshold within the observed time-frame and so \(\{\mathcal{I}^{f}\cap\mathcal{I}\}=\emptyset\), but in the
future produce a response above \(u\); and those subjects \(\mathcal{I}^{n}\) with no recordings at all within the observed time-frame but which in the future period produce at least one response above \(u\).
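As an illustration of the latent-space route to such predictions, the sketch below conditions one subject's Gaussian process on its observed latent values, simulates it at hypothesised future swim times, and counts how often the maximum exceeds a latent level `z_record` corresponding to the current record. This is a simplified version of the full procedure (one parameter draw rather than the full posterior, fixed future times, and a pre-computed `z_record`); all inputs are placeholders:

```python
import numpy as np

def conditional_gp_draws(t_obs, z_obs, t_new, mu_fun, nu, kern, reps, rng):
    """Draws of Z(t_new) | Z(t_obs) = z_obs for the Gaussian process model (10)."""
    K_oo = nu**2 * kern(t_obs, t_obs) + 1e-9 * np.eye(len(t_obs))
    K_no = nu**2 * kern(t_new, t_obs)
    K_nn = nu**2 * kern(t_new, t_new)
    A = K_no @ np.linalg.inv(K_oo)
    mean = mu_fun(t_new) + A @ (z_obs - mu_fun(t_obs))
    cov = K_nn - A @ K_no.T + 1e-9 * np.eye(len(t_new))
    return rng.multivariate_normal(mean, cov, size=reps)

def record_probability(draws, z_record):
    """Proportion of replications whose best future latent value beats the record level."""
    return np.mean(draws.max(axis=1) > z_record)

# Placeholder usage
kappa0, kappa1 = 2.0, 1.0
kern = lambda a, b: np.exp(-kappa0 * np.abs(np.subtract.outer(a, b)) ** kappa1)
rng = np.random.default_rng(3)
t_obs = np.array([0.0, 0.5, 1.0])
z_obs = np.array([1.2, 1.6, 1.4])
t_new = np.array([1.2, 1.5, 1.8])
mu_fun = lambda t: 1.5 - 0.02 * (t - 1.0) ** 2   # a toy mean for this subject
draws = conditional_gp_draws(t_obs, z_obs, t_new, mu_fun, nu=0.5, kern=kern,
                             reps=10000, rng=rng)
print(record_probability(draws, z_record=2.5))
```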
## 4 Inference
The likelihood is constructed in two steps. First, the parameters \((\xi,\boldsymbol{\sigma},\boldsymbol{\beta})\) and the vector of auxiliary variables for the marginal variables in the observed space are assumed known, so only the parameters affecting the latent space need to be estimated. Then the uncertainty in these marginal parameters and auxiliary variables is accounted for. For deriving the likelihood in the latent space for a given subject \(i\) with observations \((\boldsymbol{Z}_{i},\boldsymbol{t}_{i}):=\{(Z_{i,j},t_{i,j}):j\in\mathcal{J}_{i}\}\), we define the correlation matrix of all of subject \(i\)'s observations as \(\Sigma_{\boldsymbol{\kappa}}^{i}:=K_{\boldsymbol{\kappa}}(\boldsymbol{t}_{i},\boldsymbol{t}_{i})\), i.e., the \((j,k)^{th}\) entry \(\Sigma_{\boldsymbol{\kappa}}^{i,(j,k)}:=K_{\boldsymbol{\kappa}}(t_{i,j},t_{i,k})\) is the correlation between \(Z_{i,j}\) and \(Z_{i,k}\). As responses for a subject are from a multivariate Gaussian distribution and different subjects are independent, the likelihood in the latent space for responses \(\boldsymbol{z}:=\{\boldsymbol{z}_{i}:i\in\mathcal{I}\}\) is
\[L_{\ell}\left(\boldsymbol{z};\boldsymbol{t},\boldsymbol{\theta},\boldsymbol{ \gamma},\boldsymbol{\nu},\boldsymbol{\kappa}\right)\propto\prod_{i\in \mathcal{I}}\nu_{i}^{-n_{i}}|\Sigma_{\boldsymbol{\kappa}}^{i}|^{-1/2}\exp\left( -\frac{1}{2}\tilde{\boldsymbol{z}}_{i}^{T}\left(\Sigma_{\boldsymbol{\kappa}}^{i}\right)^{-1} \tilde{\boldsymbol{z}}_{i}\right) \tag{17}\]
where \(\boldsymbol{t}:=\{\boldsymbol{t}_{i}:i\in\mathcal{I}\}\) and \(\tilde{\boldsymbol{z}}_{i}:=\{[z_{i,j}-\mu_{i}(t_{i,j};\boldsymbol{\theta}_{i},\boldsymbol{\gamma})]/\nu_{i}:j\in\mathcal{J}_{i}\}\) for all \(i\in\mathcal{I}\). The full likelihood requires the Jacobian terms, from expression (16), which control the transformations between the two spaces and account for parameters for the margins in the observational space being unknown. Let the sets of observations which are below and above the threshold be \(\mathcal{L}_{-}:=\{(i,j):X_{i,j}\leq u:j\in\mathcal{J}_{i},i\in\mathcal{I}\}\) and \(\mathcal{L}_{+}:=\{(i,j):X_{i,j}>u:j\in\mathcal{J}_{i},i\in\mathcal{I}\}\) respectively. The full likelihood of parameters \(\boldsymbol{\Theta}:=(\xi,\boldsymbol{\sigma},\boldsymbol{\beta},\boldsymbol{ \theta},\boldsymbol{\gamma},\boldsymbol{\nu},\boldsymbol{\kappa})\) and auxiliary variables is
\[L(\boldsymbol{x},\boldsymbol{v};\boldsymbol{t},\boldsymbol{ \Theta})\propto L_{\ell}\left(\boldsymbol{z};\boldsymbol{t},\boldsymbol{ \theta},\boldsymbol{\gamma},\boldsymbol{\nu},\boldsymbol{\kappa}\right)\times\] \[\left(\prod_{(i,j)\in\mathcal{L}_{-}}J_{-}(v_{i,j};t_{i,j}, \boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\gamma},\boldsymbol{\nu}) \right)\left(\prod_{(i,j)\in\mathcal{L}_{+}}J_{+}(x_{i,j};t_{i,j},\xi, \boldsymbol{\sigma},\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{ \gamma},\boldsymbol{\nu})\right), \tag{18}\]
where \(\boldsymbol{v}:=\{v_{i,j}:(i,j)\in\mathcal{L}_{-}\}\) and \(\boldsymbol{z}\) is a function of \(\boldsymbol{x}\) and \(\boldsymbol{v}\), as identified in Section 3.3.
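A direct numerical evaluation of the latent-space contribution (17) is straightforward; the sketch below computes the per-subject multivariate normal terms with `scipy` (illustrative code; the full likelihood (18) would additionally multiply in the Jacobian terms (16)):

```python
import numpy as np
from scipy.stats import multivariate_normal

def latent_loglik_subject(z_i, t_i, mu_fun_i, nu, kappa0, kappa1):
    """Log of subject i's term in (17): a multivariate normal at the observed times."""
    d = np.abs(np.subtract.outer(t_i, t_i))
    Sigma = np.exp(-kappa0 * d ** kappa1)             # correlation matrix Sigma_kappa^i
    cov = nu**2 * Sigma + 1e-9 * np.eye(len(t_i))     # scale by nu^2, add jitter
    return multivariate_normal(mean=mu_fun_i(t_i), cov=cov).logpdf(z_i)

def latent_loglik(z, t, mu_funs, nu, kappa0, kappa1):
    """Sum of (17) over independent subjects; z, t, mu_funs are dicts keyed by subject."""
    return sum(latent_loglik_subject(z[i], t[i], mu_funs[i], nu, kappa0, kappa1)
               for i in z)
```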
With two parameters per subject, limited data per subject, and many subjects, asymptotics-based likelihood inference and its associated uncertainty evaluation is not supported. Avoiding such asymptotics via bootstrap sampling also has complications due to the auxiliary variables, and since subjects with limited data are likely to be omitted in replicate samples. So, we adopt a Bayesian inference framework, which provides full uncertainty quantification of all parameters and auxiliary variables simultaneously.
Let the parameters \(\mathbf{\Theta}\) have prior distribution \(\pi_{\mathbf{\Theta}}\), and let the priors \(\pi_{V_{i,j}}(v)\), for all \((i,j)\in\mathcal{L}_{-}\), be Uniform\((0,1)\) and independent across these variables. Then, the full posterior distribution is \(\pi\left(\mathbf{\Theta},\mathbf{v}|\mathbf{x},\mathbf{t}\right)\propto\pi_{\mathbf{\Theta}}\left( \mathbf{\Theta}\right)L(\mathbf{x},\mathbf{v};\mathbf{t},\mathbf{\Theta})\). In Section 5.3 we present the prior \(\pi_{\mathbf{\Theta}}\) for our analysis of elite swimming data.
Inference and diagnostics were conducted using the Python package _PyMC_(Salvatier et al., 2016), with the supplementary material containing more extensive computational details. To attain inference for future predictions the full prediction uncertainty is propagated through the inference. Given future simulated time-stamps \(\mathbf{t}_{i}^{*}\) of responses by a subject \(i\), which are randomly generated by the process described in Section 3.4, the variables \(Z_{i}(t)\sim\mathcal{GP}\left\{\mu(t;\mathbf{\theta}_{i},\mathbf{\gamma}),K_{\mathbf{\kappa }}(\cdot,\cdot)\right\}\), are simulated jointly for \(t\) over \(\mathbf{t}_{i}^{*}\), for each random sample from the joint posterior \(\pi\left(\mathbf{\Theta},\mathbf{v}|\mathbf{x},\mathbf{t}\right)\). The sample is then transformed back to its original margins.
For an observation below the threshold - which is by definition not extreme - the actual value on the original margins is unimportant for inference of extreme events. Only the time of occurrence and the knowledge that they are below the threshold are relevant. However, for visualisation purposes it is useful to have some estimate of non-extreme values on the original scale, see Figure 4. In this case the empirical CDF is used, though it is acknowledged that this does not include the uncertainty in the distribution on the original margins.
## 5 Application
### 5.1 Data
The data analysed constitute men's 100m breaststroke results in FINA competitions in the period 2012-2019, obtained from the FINA website. Strategic decisions were made about which data to analyse. Only each swimmer's best time swum per competition was selected, i.e., one swim per competition; we chose to analyse negative swim-times, and then negate any estimated quantiles in order to provide results for actual swim-times; the threshold was selected as the 200th fastest personal best (PB) over the period 2001-18, giving the (negative) extreme threshold as \(u=-61.125\) seconds; and we excluded data from all swimmers with seven or fewer (\(m\leq 7\)) swims. The reasons for these choices are discussed in the supplementary material. The resultant data that we used for analysis contained 120 swimmers, with 1435 total responses.
### 5.2 Modelling applied to swimming
From findings in Spearing et al. (2021), the conditional distribution of extreme swim-times \(\Pr\{X<x|X>u\}\) for large \(u\) can be treated as identically distributed over time, and so we take \(\sigma_{u}(t)=:\sigma_{u}\in\mathbb{R}_{+},\ \forall t\), i.e., \(\boldsymbol{\sigma}=\sigma_{u}\). The common temporal trend across the population of elite breaststroke swimmers can then be captured through the probability of exceeding the threshold \(\lambda_{u}\), via a smooth monotonically increasing function for \(\lambda_{u}\). A logit-linear functional form for \(\lambda_{u}\) was found appropriate for the change in \(\lambda_{u}\) over \(t\). Specifically, for a swim-time in year \(t\in\{2012,\ldots,2020\}\) and parameters \(\boldsymbol{\beta}:=(\beta_{0},\beta_{1})\in\mathbb{R}^{2}\), we take
\[\lambda_{u}(t;\boldsymbol{\beta})=\exp(\beta_{0}+\beta_{1}t)/[1+\exp(\beta_{0 }+\beta_{1}t)]. \tag{19}\]
In elite swimming, the subject-specific trend captures a swimmer's _career trajectory_ - the tendency for athletes to enter elite sports as relatively inexperienced, improve until some individual _peak_ ability, and then decline before leaving the sport. Swimmers tend to improve rapidly towards their peak mean performance \(\alpha_{i}\), at an age of \(\tau_{i}\), as they mature physically, and then stop competing within a few years of reaching this peak. Here we allow the time at which peak mean performance is achieved to vary over swimmers to allow for their differences in maturity. The lack of data in the decline of the career trajectory enables the parsimonious assumption of a symmetric career trajectory about the peak. From what can be identified from the data, after transformation to the latent space, a quadratic mean trend in the age of the swimmer, with negative curvature governed by \(\gamma>0\) below, seems a reasonable approximation to this mean performance progression. By including the covariate \(b_{i}\in\mathbb{R}\), swimmer \(i\)'s birth date, we have \(t-b_{i}\), for \(t>b_{i}\), as the age of swimmer \(i\) at time \(t\). Thus, the mean function in the latent space is
\[\mu_{i}(t;b_{i},\boldsymbol{\theta}_{i},\gamma)=\alpha_{i}-\gamma(t-b_{i}-\tau _{i})^{2},\ \text{for all}\ t\in\mathbb{R},\]
for all \(i\in\mathcal{I}\), where \(\boldsymbol{\theta}_{i}=(\alpha_{i},\tau_{i})\in\mathbb{R}\times\mathbb{R}_{+}\), and here \(\boldsymbol{\gamma}=\gamma>0\). We have no swimmer-specific parameter for \(\gamma\) given the limited number of swims per swimmer. There was no evidence for variation over swimmers in their across-swim variability, so we took \(\nu_{i}=:\nu\in\mathbb{R}_{+},\ \forall i\in\mathcal{I}\). A different \(\mu_{i}\) per swimmer seems sufficient to capture the across-swimmer effects.
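For concreteness, the two swimming-specific components, the logit-linear exceedance probability (19) and the quadratic career trajectory, can be written as simple functions (illustrative code; the parameter values are placeholders, not posterior estimates, and the recentring of time is our own convenience):

```python
import numpy as np

def lambda_u(t, beta0, beta1):
    """Logit-linear threshold exceedance probability (19)."""
    eta = beta0 + beta1 * t
    return np.exp(eta) / (1.0 + np.exp(eta))

def mu_career(t, birth_date, alpha_i, tau_i, gamma):
    """Latent-space career trajectory: peak value alpha_i at age tau_i, negative curvature."""
    age = t - birth_date
    return alpha_i - gamma * (age - tau_i) ** 2

years = np.arange(2012, 2021)
# Time is recentred to 2012 here purely for numerical convenience (our choice)
print(lambda_u(years - 2012, beta0=-0.6, beta1=0.13))
print(mu_career(2018.5, birth_date=1994.0, alpha_i=5.0, tau_i=25.0, gamma=0.02))
```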
### 5.3 Prior specification
The supplementary material gives the DAG for the model for this swimming application. The priors are assumed to be mutually independent across all components of \(\boldsymbol{\Theta}\), i.e.,
\[\pi_{\boldsymbol{\Theta}}\left(\boldsymbol{\Theta}\right)=\pi\left(\xi\right)\pi (\sigma_{u})\pi\left(\boldsymbol{\beta}\right)\left(\prod_{i\in\mathcal{I}}\pi \left(\alpha_{i}\right)\pi\left(\tau_{i}\right)\right)\pi\left(\gamma\right) \pi\left(\nu\right)\pi\left(\boldsymbol{\kappa}\right). \tag{20}\]
We now explain our choices of these marginal priors in the sequence shown in expression (20).
Discussion on priors for GPD parameters goes back to Coles and Tawn (1996). The shape parameter prior being \(\text{logit}\left(\xi+1\right)\sim\mathcal{N}(\text{logit}(0.8),0.3)\) restricts the domain of the shape parameter to \(-1<\xi<0\). The constraint \(\xi>-1\) avoids estimates of the GPD implying the best possible time has already been achieved, whilst \(\xi<0\) imposes a finite limit on the fastest possible performance. Analysis of 2001-2019 elite swimmers' PB data found strong evidence of a common negative shape parameter for all swimming distances, strokes and gender categories (Spearing et al., 2021). For the GPD scale parameter prior we exploit knowledge from Spearing et al. (2021) that this parameter, estimated using PB data, was close to 1: so \(\sigma_{u}\sim\text{Gamma}(25,25)\) enforces positivity, has the required mean, and a standard deviation of 0.2. For the threshold exceedance rate parameters \(\boldsymbol{\beta}\), the priors \(\beta_{0}\sim N(0,0.5)\), and \(\beta_{1}\sim\text{Gamma}(0.1,0.1)\) are imposed. The latter reflects the improvement of elite swimmers (Spearing et al., 2021), and when combined with the former gives exceedance rates in the range \((0.1,0.9)\).
Considering the priors for the latent space parameters, we take \(\alpha_{i}\sim N(0,V_{\alpha}^{2})\), with \(V_{\alpha}=6\). The priors \(\tau_{i}\sim N\left(25,2.5^{2}\right)\) reflect that a swimmer's peak age is roughly 25 years, with a high probability of being in the interval \((17.5,32.5)\). The prior \(\gamma\sim\text{Gamma}(0.5,0.5)\) provides weak information with a preference for \(\gamma\) to be close to 0, to ensure that a posterior with \(\gamma>0\) is not a prior artefact. As it is anticipated that there is greater variance between swimmers than within any swimmer's performances, we take \(\nu\sim\text{Gamma}(1,1)\), which has a smaller variance than \(V_{\alpha}^{2}\). For the kernel parameters, taking \(\kappa_{0}\sim\text{Gamma}(0.5,0.5)\) and \(\text{logit}\{(\kappa_{1}-0.5)/1.5\}\sim N(\text{logit}\{(1-0.5)/1.5\},2)\) enforces \(\kappa_{0}>0\) and allows exploration over \(\kappa_{1}\in(0.5,2)\).
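A minimal sketch of how these priors could be encoded in PyMC is given below. This is our own illustration, not the authors' code: it uses current PyMC syntax (the paper cites Salvatier et al., 2016), handles the bounded parameters by placing Normal priors on transformed scales, assumes the second arguments of the Normal priors above are standard deviations, and centres the \(\kappa_{1}\) prior at \(\kappa_{1}=1\).

```python
import numpy as np
import pymc as pm

n_swimmers = 120  # number of subjects in I

with pm.Model() as priors_only:
    # GPD shape: logit(xi + 1) ~ N(logit(0.8), 0.3), so -1 < xi < 0
    xi_raw = pm.Normal("xi_raw", mu=np.log(0.8 / 0.2), sigma=0.3)
    xi = pm.Deterministic("xi", pm.math.invlogit(xi_raw) - 1.0)

    sigma_u = pm.Gamma("sigma_u", alpha=25, beta=25)        # mean 1, sd 0.2
    beta0 = pm.Normal("beta0", mu=0.0, sigma=0.5)           # assuming 0.5 is the sd
    beta1 = pm.Gamma("beta1", alpha=0.1, beta=0.1)

    # Latent-space parameters
    alpha = pm.Normal("alpha", mu=0.0, sigma=6.0, shape=n_swimmers)   # V_alpha = 6
    tau = pm.Normal("tau", mu=25.0, sigma=2.5, shape=n_swimmers)
    gamma = pm.Gamma("gamma", alpha=0.5, beta=0.5)
    nu = pm.Gamma("nu", alpha=1.0, beta=1.0)

    kappa0 = pm.Gamma("kappa0", alpha=0.5, beta=0.5)
    # kappa1 constrained to (0.5, 2) via a logit transform of (kappa1 - 0.5)/1.5,
    # with the prior mean chosen (our assumption) so that kappa1 is centred at 1
    kappa1_raw = pm.Normal("kappa1_raw", mu=np.log((1 / 3) / (2 / 3)), sigma=2.0)
    kappa1 = pm.Deterministic("kappa1", 0.5 + 1.5 * pm.math.invlogit(kappa1_raw))
```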
### 5.4 Results
#### 5.4.1 Subject-specific Inference
The within-subject features of the model provide information about individual swimmers as well as playing a key role in determining the dependence structure across the elite breaststroke swimmers. As identified in Section 2.3, there are two features of the subject-specific
behaviour which affect the extremal dependence of these data: the subject-specific variation in the attributes \(\{\alpha_{i}:i\in\mathcal{I}\}\); and the within-subject dependence, given by the Gaussian process.
The marginal posterior distributions of the parameters \(\boldsymbol{\theta}_{i}\) are shown in Figure 2 for the top ten swimmers, as defined in Section 5.4.4, a ranking that strongly correlates with the ten largest posterior mean \(\alpha_{i}\) values. With the exception of the posterior for Adam Peaty's \(\alpha_{i}\), there is considerable overlap between the other nine posteriors, with Peaty's having both a larger mean and 50% of the variation of the others. The larger mean is not surprising as Peaty holds the 7 fastest times, and 11 of the top 20, for the competition-best data, together with all the top 20 times over all swims. The posteriors for the \(\tau_{i}\) for these swimmers are broadly more self-consistent, with almost all posterior mass for the peak performance age in the range \((25,35)\) years, though both Peaty and Andrew Michael have lower peak ages, with Peaty almost certainly peaking before the age of 30 (he is 29 at the time of writing).
What is intriguing is that the posterior of \(\alpha_{i}\) for Nicolo Martinenhi has upper quantiles which exceed the same quantiles for Peaty's \(\alpha_{i}\), despite his median being smaller than Peaty's. We explored three possible causes for this. Firstly, it could be that Martinenhi produced highly variable swim times, indicating that he is capable of better swims than Peaty; this is unlikely as only two of Peaty's swim-times are slower than Martinenhi's PB. Secondly, the posterior uncertainty of Martinenhi's \(\alpha_{i}\) could be due to having fewer swims in the database relative to Peaty, but he has 14 better than the threshold, which is comparable to Peaty's 17. The most likely explanation is that Martinenhi is relatively young - five years younger than Peaty - being aged 20 years in his most recent database entry. For younger swimmers it is difficult to disentangle between peak age and attribute, which is evidenced by Martinenhi having the largest posterior correlation, of 0.89, between \((\alpha_{i},\tau_{i})\) among the top ten swimmers, e.g., for Peaty this is 0.50. Martinenhi's large uncertainty in peak age is contributing to the uncertainty in his attribute; his peak is still to come - but we are uncertain in its level.
The 95% highest posterior density interval (HPDI) for the subject-specific quadratic trend curvature \(\gamma\) is \((0.015,0.029)\), showing that there is strong evidence of a rising and falling career trajectory, especially given the prior favours \(\gamma\) being arbitrarily close to 0. The 95% HPDI for the ratio of within-subject to across-subject variation, i.e., \(\nu/V_{\alpha}\), is \((0.17,0.18)\), so the majority of the variation in the extremes of these longitudinal data is explained by swimmer identification. Furthermore, with Peaty having by far the largest \(\alpha_{i}\), Section 2.3 indicates there will be asymptotic dependence, irrespective of the within-subject dependence \(\rho(\tau)\) at lag \(\tau\).
The posterior mean and pointwise 95% HPDI are shown in Figure 2 (right) for the measure of subject-specific asymptotic independence \(\bar{\chi}_{i,\tau}=\rho(\tau)\), for lag \(\tau\in[5,365]\) days. This inference indicates that at 50 days there is reasonable dependence per swimmer and even at 6 months lag there is non-negligible subject-conditional dependence.
#### 5.4.2 Subject-ignorant Marginal Inference
The joint posterior inferences for the subject-ignorant marginal distribution parameters for the GPD and tail exceedance probabilities \((\sigma_{u},\xi,\boldsymbol{\beta})\) are derived from the full model joint posterior. The posterior mean of \(\xi\) and its 95% HPDI are \(-0.22\) \((-0.25,-0.20)\), providing strong evidence for a negative shape parameter. For \(\beta_{1}\) these values are \(0.13\) \((0.09,0.16)\), showing that the rate of achieving extreme elite performances by swimmers indexed by \(\mathcal{I}\) is increasing over the time window, with the posterior mean and 95% HPDI for \(\lambda_{u}(t)\) being \(0.34\) \((0.30,0.38)\) for 2012 and \(0.55\) \((0.51,0.59)\) for 2019, a substantial difference in behaviour.
As described in Section 2, when \(\xi<0\) there is an estimated upper endpoint \(x_{H}=u-\sigma_{u}/\xi\), which for swimming is the best performance humanly possible, given the current technology, in the event (Huub and Trultens, 2005; Nevill et al., 2007). Figure 3 shows the posterior distribution of \(x_{H}\), and the closeness of Peaty's current world record to this. The posterior places the endpoint closer to the current record than a similar analysis of PB data (Spearing et al., 2021), with that analysis pooling information across events.
The expected value of the next world record swim-time is obtained by exploiting the _threshold-stability_ property of a GPD (Coles, 2001). Since the (negative) current world record
Figure 2: Subject-specific posterior inferences. For the top 10 swimmers, the posteriors of these swimmers’ attributes \(\alpha_{i}\) (left) and peak ages \(\tau_{i}\) (middle). The colours identify swimmers as defined in Figure 5 (left). The posterior mean and 95% HPDI for the subject-specific asymptotic independence measure \(\bar{\chi}_{i,\tau}\) against time lag \(\tau\) in days (right).
\(r=-56.88>u\), exceedances above \(r\) follow a GPD, i.e., letting \(X_{r_{+}}:=\{X:X>r\}\), then \(X_{r_{+}}-r\sim\text{GPD}(\sigma_{r}=\sigma_{u}+\xi(r-u),\xi)\) and the expected next world record time is \(\mathbb{E}[X_{r_{+}}]=r+\sigma_{r}/(1-\xi)\). Figure 3 (left) shows the posterior distribution of \(\mathbb{E}[X_{r_{+}}]\). Although it has some overlap with the posterior of \(x_{H}\), the posterior of \(\mathbb{E}[X_{r_{+}}]\) is much nearer Peaty's current record than \(x_{H}\). The simplicity of \(\mathbb{E}[X_{r_{+}}]\) arises as both \(\xi\) and \(\sigma_{u}\) are constant over time and the expectation is not conditional on the current swimmers' performances, with the latter considered in Section 5.4.4. An indication of when this next record is likely to be achieved is given in Figure 3 (right), where we present the posterior for the rate \(\lambda_{r}(t)\) per future year \(t\) of swims by elite swimmers beating Peaty's record \(r\). Here \(\lambda_{r}(t)=s_{t}\lambda_{u}(t)[1+\xi(r-u)/\sigma_{u}]^{-1/\xi}\), where \(s_{t}\) is the total number of swims per year by elite swimmers. The posterior mean and 95% HPDI are shown for \(\lambda_{r}(t)\) over the window \(2023-30\), with \(s_{t}=s_{2019}\) for \(t>2019\).
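The record-related quantities above follow directly from the fitted GPD parameters, so they can be evaluated draw-by-draw over the posterior. Below is a minimal sketch of such a computation, assuming arrays of posterior draws for \((\sigma_{u},\xi)\) are already available; the threshold value, array names and function names are illustrative, not those used in the actual analysis.

```python
import numpy as np

def record_summaries(sigma_u, xi, u, r):
    """Per-draw GPD summaries on the negated swim-time scale (larger = faster).

    sigma_u, xi : posterior draws of the GPD scale and shape.
    u, r        : threshold and negated current world record, with u < r.
    """
    x_H = u - sigma_u / xi                      # upper endpoint (finite for xi < 0)
    sigma_r = sigma_u + xi * (r - u)            # threshold stability at level r
    expected_next = r + sigma_r / (1.0 - xi)    # E[X | X > r]
    return x_H, expected_next

def record_rate(sigma_u, xi, u, r, lambda_u_t, s_t):
    """Rate of swims beating the record r in a year with s_t swims and
    threshold-exceedance rate lambda_u_t (the lambda_r(t) expression above)."""
    return s_t * lambda_u_t * (1.0 + xi * (r - u) / sigma_u) ** (-1.0 / xi)

# Illustrative use with made-up posterior draws (u is a hypothetical threshold).
rng = np.random.default_rng(1)
sigma_u = rng.normal(1.5, 0.1, 1000)
xi = rng.normal(-0.22, 0.01, 1000)
u, r = -59.0, -56.88
x_H, e_next = record_summaries(sigma_u, xi, u, r)
```

Applying these functions to every posterior draw yields samples of \(x_{H}\), \(\mathbb{E}[X_{r_{+}}]\) and \(\lambda_{r}(t)\) from which intervals like those in Figure 3 could be summarised.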
#### 5.4.3 Model Diagnostics
Diagnostics for the marginal GPD element of our model are well-established, so here novel diagnostics for the subject-specific characteristics of the data are presented. The diagnostics are shown on the observed scale, so observations can be compared with predictive distributions for the associated swim-dates. Figure 4 shows the observations over time for six top swimmers, identified in Figure 5. All these swimmers have performances that are generally improving over time, and with some slower than the threshold. As such slow swims are treated as censored at the threshold, modelling these precise values is not of great importance, with the prime focus
Figure 3: Left: the posterior distributions for the expected next record swim-time (blue) and ultimate swim-time (orange) for the men’s 100m breaststroke in seconds. Peaty’s current record time (black vertical line). Right: posterior mean (solid line) and 95% HPDI (dashed lines) of the rate \(\lambda_{r}(t)\) of swims by elite swimmers beating Peaty’s current record in year \(t\).
concerning swim-times better than the threshold.
A sample of size 400 was generated from the posterior predictive distribution for each past swim-date of each of these swimmers. Figure 4 presents these samples underlaid beneath the corresponding observations. In a well-fitting model, each observation should appear as a representative member of these samples. The posterior predictive samples indicate that the model fits well, as most observations are reasonably central to their associated distribution for all swims better than the threshold, and even for the swims not as good as the threshold. They also capture the career trajectory evident in the data. Perhaps as is to be expected, Peaty's three best swim-times, each world records when achieved, are in the tails of their associated predictive distributions. For weaker swims, Martinenghi and Shymanovich have performances which are unexpectedly slow relative to what our model would anticipate.
Figure 4 also shows samples for these predictive distributions in the future, as the points from 2020-32, obtained under a stochastic model for the number and dates of future swims assuming that the swimmers continue to compete at current rates (see the supplementary material for details). As most of these future samples improve or stay reasonably static over time, this illustrates that these swimmers are early in their careers. In contrast, for Peaty there is a decay of performances from 2024. On this figure are the posterior mean and 95% HPDI for each swimmer's \(\tau_{i}\), which cover the period where the predictive samples plateau.
#### 5.4.4 Subject-specific Predictions for Current Swimmers
Here we make predictive inference for future extreme events linked to specific swimmers, thus illustrating the novelty of inferences that are possible using our longitudinal extreme value model. Section 3.4 identified three groups \((\mathcal{I}^{c},\mathcal{I}^{f},\mathcal{I}^{n})\) of swimmers to consider when predicting future extreme events, and the supplementary material sets out the Monte Carlo strategies for the evaluation of the corresponding posterior distributions. To avoid the extra assumptions that are required to study groups \(\mathcal{I}^{f}\) and \(\mathcal{I}^{n}\), only swimmers in \(\mathcal{I}^{c}\) who have recordings in the most recent year of data are studied. From our model and posterior predictive inference, standard extreme value properties, e.g., the distribution of the annual maxima, are simple to derive; however in sport, extreme events are mostly concerned with breaking records. Therefore, we focus on beating the current world record and setting PB times. Throughout, the future behaviour of swimmers is assumed consistent with the past data, so illness or sudden retirement are not accounted for, e.g., we ignore that Peaty has absences from the sport since 2021.
First consider the beating of the current world record. The joint posterior predictive distributions in Figure 4 provide samples of future longitudinal data for the swimmers. There is a posterior predictive probability of 0.53 that the world record is beaten by a swimmer in \(\mathcal{I}^{c}\) in the next 12 years. The probability of the record being broken in this window would be larger if we also accounted for the groups \(\mathcal{I}^{f}\) or \(\mathcal{I}^{n}\). Figure 5 (left) splits this probability to show the posterior predictive probability for swimmer \(i\) beating the record, for the 10 most likely swimmers in \(\mathcal{I}^{c}\). This gives a novel _ranking_ method for swimmers within an event, as it focuses on the future potential of swimmers (through accounting for their future career trajectory) more than their past achievement (the exclusive focus of typical ranking methods). Perhaps unsurprisingly, Figure 5 (left) shows that Peaty is ranked the highest, i.e., the most likely to first beat his own world record of the swimmers in \(\mathcal{I}^{c}\), with a predictive probability of 0.24. Martinenghi is ranked second, as expected given Figure 2 (middle), with a predictive probability of 0.09.
To assess how soon these swimmers can first beat the current record, Figure 5 (middle) shows the predictive distribution of the year in which a swimmer will be the first of the current
Figure 4: Within-subject diagnostics for six top swimmers: observed swim-dates and performance in seconds (black dots); posterior predictive distributions samples (coloured dots) for the dates of their swims in the past, and for future simulated swim dates. The threshold \(u\) is the horizontal line and the posterior mean and 95% HPDIs for the peak age \(\tau_{i}\) are vertical lines.
swimmers to beat the record. These posteriors are shown for the top six ranked swimmers in Figure 5 (left). These results show that if Peaty does break his record, it is most likely to happen within the next four years, due to his age exceeding his peak age subsequently. In contrast, Martinenghi is most likely to beat the current record in 4-10 years. Figure 5 (right) shows the posterior distribution of the best time for each swimmer in the future window. These distributions show that there is a reasonable chance of each swimmer beating their current PB. Peaty is less likely to do this than the other five swimmers shown, who all have a high posterior probability of beating their current PBs. This finding is not surprising, as swimmers that are currently near their peak have a limited chance of beating their PBs whereas younger swimmers have the largest chance of setting new PBs as they are still improving.
## 6 Discussion
This article proposes the first analysis of extreme values for data arising from a longitudinal structure comprising multiple subjects, each with a time series of responses. Although much new asymptotic theory remains to be developed, as the number of subjects and the lengths of their time series tend to infinity at potentially different rates, our focus has been on setting out the framework for statistical modelling and associated inference. Furthermore, we have shown that this framework provides a basis for novel analysis of elite swimming data, and have illustrated the additional challenges that arise in practice, e.g., non-stationarity over
Figure 5: Left: predictive probability that each swimmer will be the next swimmer in \(\mathcal{I}^{c}\) to beat the current world record for the 10 most likely. Middle: the posterior distributions for each swimmer for the time at which they are the first of the swimmers in \(\mathcal{I}^{c}\) to beat the current record. Right: the posterior distributions of the expected PBs of all future times (vertical lines showing current PBs). Swimmers are identified from the colours in the left panel.
subjects, subjects with very limited data, and the need to model subjects not in the data.
This generic framework for longitudinal data analysis involving extreme values contains a set of modelling decisions which are application specific. Core examples are the choice of functional forms for the subject-specific mean function \(\mu_{i}\) for all \(i\in\mathcal{I}\), the threshold exceedance rate function \(\lambda_{u}\), and the GPD scale parameter function \(\sigma_{u}\). In our swimming application, fully parametric functional forms were established from prior application-specific knowledge. For the period of data we analysed, \(\lambda_{u}\) was modelled to be monotonically increasing, reflecting knowledge that the quality of swimmers has been improving generally in this period. However, if data prior to 2010 were used, a monotonic form would be inappropriate due to the phasing out of performance-enhancing full-body swim-suits, see Spearing et al. (2021).
For swimmers with fewer than \(m\) measurements, a decision must be made between including them all or discarding them from the analysis, at the cost of high computational inefficiency or bias respectively. Although we developed a pragmatic compromise, another possibility could cluster each subject with \(m\) or fewer responses with a subject with more than \(m\) responses. Subjects in the same cluster would have a common \((\alpha_{i},\tau_{i})\) but different ages and performances. This approach benefits from using all data for inference, but it is still likely to be computationally demanding given the complexity of cluster allocation when no simple rule is available.
An entirely novel aspect of our inference has been the subject-specific features such as the variation across subjects being modelled through attributes \(\{\alpha_{i}:i\in\mathcal{I}\}\). Although Gaussian marginals are leveraged on the grounds of the parsimony of conditional and unconditional Gaussian processes, this choice is rather unimportant to the outcomes of the inference. This is due to the weak common prior across attributes, resulting in a posterior which is driven by the data. The resulting posterior for a new subject's \(\alpha_{i}\) is a Gaussian mixture model; where it is recognised that this reflects only subjects capable of achieving measurements above a high threshold, and is not applicable to the population as a whole. Despite this restriction to the extreme subjects, our analysis shows that the variation between attributes for swimmers is substantially larger than natural variation of extreme times for any selected swimmer. The analysis has disentangled the variations of the longitudinal data to better inform future inference for extremes and records, both unconditionally and conditionally, for the current elite swimmers.
## Acknowledgements
Spearing gratefully acknowledges funding of the EPSRC funded STOR-i Centre for Doctoral Training (grant number EP/L015692/1), and ATASS Sports.
|
2307.04152 | Star formation history of $\rm{0.1\leq\,\textit{z}\,\leq\,1.5}$
mass-selected galaxies in the ELAIS-N1 Field | We measure the specific star formation rates of \textit{K}-band selected
galaxies from the ELAIS-N1 by stacking GMRT data at 610 MHz. We identify a
sample of SFGs, spanning $\rm{0.1\leq\,\textit{z}\,\leq\,1.5}$ and
$\rm{10^{8.5}<\,{\textit{M}_{\star}}/{\textit{M}_{\odot}}<10^{12.4}}$, using a
combination of multi-wavelength diagnostics obtained from the deep LoTSS
multi-wavelength catalogue. We measure the flux densities in the radio map and
estimate the radio SFR in order to probe the nature of the galaxies below the
noise and confusion limits. The massive galaxies in our sample have the lowest
sSFRs which is in agreement with previous studies. For the different
populations, we show that the sSFR-mass relation steepens with redshift, with
an average slope of $\rm{\langle \beta_{All} \rangle\,=\, -0.49\pm0.01}$ for
the whole sample, and $\rm{\langle \beta_{SFG} \rangle\,=\, -0.42\pm0.02}$ for
the SFGs. Our results indicate that galaxy populations undergo 'downsizing',
whereby most massive galaxies form their stars earlier and more rapidly than
low mass galaxies. Both populations show a strong decrease in their sSFR toward
the present epoch. The sSFR evolution with redshift is best described by a
power law $\rm{(1\,+\,\textit{z})^\textit{n}}$, where $\rm{\langle
\textit{n}_{ALL}\rangle\sim4.94\pm0.53}$ for all galaxies, and $\rm{\langle
\textit{n}_{SFG}\rangle \sim3.51\pm0.52}$ for SFGs. Comparing our measured
sSFRs to results from literature, we find a general agreement in the
\textit{sSFR-M$_{\star}$} plane. | E. F. Ocran, M. Vaccari, J. M. Stil, A. R. Taylor, C. H. Ishwara-Chandra, Jae-Woo Kim | 2023-07-09T11:06:35Z | http://arxiv.org/abs/2307.04152v1 | # Star formation history of \(0.1\leq z\leq 1.5\) mass-selected galaxies in the ELAIS-N1 Field
###### Abstract
We measure the specific star formation rates of \(K\)-band selected galaxies from the ELAIS-N1 by stacking GMRT data at 610 MHz. We identify a sample of SFGs, spanning \(0.1\leq z\leq 1.5\) and \(10^{8.5}<M_{\star}/M_{\odot}<10^{12.4}\), using a combination of multi-wavelength diagnostics obtained from the deep LoTSS multi-wavelength catalogue. We measure the flux densities in the radio map and estimate the radio SFR in order to probe the nature of the galaxies below the noise and confusion limits. The massive galaxies in our sample have the lowest sSFRs which is in agreement with previous studies. For the different populations, we show that the sSFR-mass relation steepens with redshift, with an average slope of \(\langle\beta_{\rm{All}}\rangle=-0.49\pm 0.01\) for the whole sample, and \(\langle\beta_{\rm{SFG}}\rangle=-0.42\pm 0.02\) for the SFGs. Our results indicate that galaxy populations undergo 'downsizing', whereby most massive galaxies form their stars earlier and more rapidly than low mass galaxies. Both populations show a strong decrease in their sSFR toward the present epoch. The sSFR evolution with redshift is best described by a power law \((1+z)^{\eta}\), where \(\langle n_{\rm{ALL}}\rangle\sim 4.94\pm 0.53\) for all galaxies, and \(\langle n_{\rm{SFG}}\rangle\sim 3.51\pm 0.52\) for SFGs. Comparing our measured sSFRs to results from literature, we find a general agreement in the _sSFR-\(M_{\star}\)_ plane.
keywords: Galaxy: evolution -- radio continuum: galaxies.
## 1 Introduction
Radio surveys have now reached sufficient areal coverage that they are dominated by the same galaxies detected by infrared (IR), optical and X-ray surveys, and have become increasingly important in studies of galaxy evolution. The galaxy populations that lie beneath the sensitivity limits of the current deepest surveys have become an important area of study in recent years (e.g. see White et al., 2007; Garn & Alexander, 2009; Dunne et al., 2009; Stil et al., 2014; Zwart et al., 2014, and references therein). Deep radio surveys are able to probe the galaxy star formation rate (SFR) through synchrotron emission originating from cosmic-ray electrons accelerated in the magnetic fields of supernova remnants, which are the result of massive star formation (Helou et al., 1985).
The relation between SFR and 1.4 GHz luminosity is calibrated to the far-infrared-radio correlation (e.g. Condon, 1992; Haarsma et al., 2000; Yun et al., 2001; Condon et al., 2002). At radio wavelengths, observations are not obscured by dust, and their higher angular resolution, as compared to infrared surveys, significantly reduces source confusion. However, emission from active galactic nuclei (AGN) represents a significant source of contamination (see Zwart et al., 2014). Ocran et al. (2021) compared the SFR derived from the IR luminosity and the radio power to show that the two are equivalently good tracers of star formation in non-active star-forming galaxies (SFGs) and also for the host galaxies of radio-quiet (RQ) AGN. They studied the correlation between galaxy SFR and stellar mass at different redshifts for SFGs, RQ, and radio-loud (RL) AGN and found that the vast majority of their sources lie on the star formation main sequence (hereafter, MS) when using infrared star formation rates.
The MS of star forming galaxies is a fundamental relation in galaxy evolution which relates the star formation of galaxies to their stellar mass (see, Noeske et al., 2007; Elbaz et al., 2007; Pannella et al., 2009; Oliver et al., 2010; Reddy et al., 2012; Whitaker et al., 2012, 2014; Popesso et al., 2019, 2019). However, in the literature there is no common agreement on the form of the MS. There is contention over whether the MS is linear across all redshifts (see, Wuyts et al., 2011, 2014; Speagle et al., 2014; Schreiber et al., 2015; Pearson et al., 2018), or has a flattening or turn-over at stellar masses \(\log_{10}(M_{\star}/M_{\odot})>10.5\) (see, Whitaker et al., 2014; Lee et al., 2015; Leslie et al., 2020; Thorne et al., 2021). The specific SFR (SFR divided by stellar mass, sSFR) provides a measure of the current star formation activity relative to the past history (Sandles et al., 2022). Studies have shown that the galaxy stellar mass function at high masses evolves fairly slowly up to \(z\sim 0.9\), and then more rapidly up to at least \(z\sim 2.5\), suggesting that the majority of stellar mass assembly took place at \(z\gtrsim 1\) (see, Feulner et al., 2007; Pozzetti
et al., 2007). At low masses, Cassata et al. (2007) showed that the mass of a galaxy plays an important role in star formation at \(z\lesssim 1\). However, at \(z\lesssim 0.2\), ongoing star formation in massive galaxies is almost entirely absent (see, Thorne et al., 2021). The slope of the \(sSFR-M_{\star}\) relation as a function of redshift, on mass-dependent timescales, has been found to decline significantly but smoothly (see, Speagle et al., 2014). Moreover, at higher redshifts, \(z\gtrsim 3\), the sSFR evolution flattens, continuing to increase with only a relatively shallow slope, as noted in Behroozi et al. (2013). Studies like Davidzon et al. (2018) have used the differential evolution of the galaxy stellar mass function to infer the sSFR evolution of galaxies.
The sensitivities achieved by SKA pathfinders and eventually the SKA itself will have a huge impact on our understanding of star formation in galaxies and its co-evolution with supermassive black holes (Padovani, 2011). Improvements in both depth and sky coverage are being made with these surveys, with narrow, but very deep surveys such as the MeerKAT MIGHTEE (Jarvis et al., 2016) and wide-area radio data such as the Evolutionary Map of the Universe (EMU) (Norris, 2011). These new surveys are probing unexplored volumes of the Universe. Studies have shown that at the faintest radio flux densities (\(\mathrm{S_{1.4}}<10\,\mathrm{mJy}\)), conflicting results emerge regarding whether there is a flattening of the average spectral index between a low radio frequency (325 or 610 MHz) and 1.4 GHz (see, Randall et al., 2012). More comprehensive observations of the shape of the radio spectrum, extending to lower frequencies, will ensure a maximum scientific return when combined with deep radio continuum data at GHz frequencies.
Stacking is a common tool which has been used to investigate the star formation properties of galaxies at far greater sensitivity by combining many observations of individual galaxies at the expense of any specific knowledge of the individual galaxies that make up the stack. For example, Dunne et al. (2009) used stacking of 610 MHz and 1.4 GHz data from the VLA and the Giant Metrewave Radio Telescope (GMRT) to investigate the star formation history of \(BzK\)-selected galaxies from the UKIRT Infrared Deep Sky Survey (UKIDSS-UDS) and computed stellar masses using the absolute \(K\)-band magnitude. Karim et al. (2011) calculated stellar masses using SED fitting from their photometric-redshift fitting by selecting galaxies at 3.6 \(\mu\)m, and stacked 1.4 GHz VLA data (A and C arrays), with a noise of 8 \(\mu\)Jy at the centre of their 1.72 deg\({}^{2}\) map. They found a good agreement in their radio-derived sSFR-redshift evolution between their studies and that of Dunne et al. (2009), however the dependence of sSFR on stellar mass was found to be much shallower for the UKIDSS data than for COSMOS. Zwart et al. (2014) stacked deep (17.5 \(\mu\)Jy) VLA radio observations at the positions of \(\mathrm{K_{s}}\)-selected sources in the VIDEO field for \(K_{\mathrm{s}}<\ 23.5\) and sensitive to \(0\ <\ z\ \lesssim\ 5\). They found that sSFR falls with stellar mass, in agreement with the 'downsizing' paradigm. Leslie et al. (2020) measured the MS using mean stacks of 3 GHz radio continuum images to derive average SFRs for \(\sim\)200,000 mass-selected galaxies at \(z>0.3\) in the COSMOS field. They described the MS by adopting a new model that incorporates a linear relation at low stellar mass (\(\log(M_{\star}/M_{\odot})<10\)) and a flattening at high stellar mass that becomes more prominent at low redshift (i.e. \(z\ <\ 1.5\)).
We present an independent stacking analysis of radio data from the GMRT surveys of the ELAIS-N1 region. We stack by mass and redshift bins respectively, for sources drawn from the rich LOFAR Two-metre Sky Survey (LoTSS) (Shimwell et al., 2017) deep field multi-wavelength ancillary data available in the field. We calibrate 610 MHz rest-frame luminosity as a SFR indicator following Garn et al. (2009), allowing us to turn radio luminosity estimates into SFR function estimates. We provide a coherent, uniform measurement of the evolution of the logarithmic specific star formation rate (sSFR)-stellar mass (\(M_{\star}\)) relation, for star forming and all galaxies out to \(z\sim 1.5\). Using median stacked images at 610 MHz, we derive average SFRs and sSFRs for \(\sim 77,047\) mass-selected galaxies, spanning \(0.1\leq z\ \leq\ 1.5\) and \(10^{8.5}<M_{\star}/M_{\odot}<10^{12.4}\) in the ELAIS-N1. We aim to answer how the sSFRs change as a function of stellar mass and redshift with regards to a deep 610 MHz low-frequency continuum survey, which are complimentary to sSFRs derived at the high frequency observations.
The paper is arranged as follows: The datasets used in this work are described in Section 2. In Section 3 we describe the prescription we used in selecting the sample for our analyses. Section 4 presents the analyses and results from our stacking experiment. We compare the synergies between our work and findings from the literature in Section 5.1. We then provide our discussions and a summary of our work in Sections 5 and 6 respectively. We assume a flat cold dark matter (\(\Lambda\)CDM) cosmology with \(\Omega_{\Lambda}=0.7\), \(\Omega_{\mathrm{m}}=0.3\) and \(\mathrm{H_{0}}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\mathrm{S_{\nu}}\propto\nu^{\alpha}\) for calculation of intrinsic source properties.
## 2 Datasets
In this section, we discuss the different datasets we use for our investigation. These data are all taken from publicly available catalogs.
### Radio Datasets
We employ the 610 MHz wide-area survey (Ishwara-Chandra et al., 2020) of the ELAIS-N1 (European Large Area ISO Survey North 1) (Oliver et al., 2000) region, obtained using the GMRT. The observations are arranged in a hexagonal pointing configuration centred on \(\alpha=16^{\mathrm{h}}\ 10^{\mathrm{m}}\ 30^{\mathrm{s}}\), \(\delta=54^{\circ}\ 35^{\prime}\ 00^{\prime\prime}\). The 610 MHz wide-area survey consists of 51 pointings, mosaicked to create an image of ELAIS-N1 covering \(\sim 12.8\) deg\({}^{2}\). The FWHM of the synthesized beam varies between 4.5 and 6 arcsec. Before mosaicking, the image from each field was smoothed to a circular Gaussian beam with FWHM of 6 arcsec. The final rms in the total intensity mosaic image is \(\sim 40\mu\)Jy beam\({}^{-1}\). Ishwara-Chandra et al. (2020) indicated that this is equivalent to \(\sim 20\ \mu\)Jy beam\({}^{-1}\) rms noise at 1.4 GHz for a spectral index of -0.75, which is several times deeper than the VLA FIRST survey at similar resolution. The resulting mosaic is about \(3.6\times 3.6\) deg\({}^{2}\) (see, Ishwara-Chandra et al., 2020). The criterion Ishwara-Chandra et al. (2020) used to distinguish point sources from resolved sources resulted in about 60 per cent of sources being classified as unresolved, using \(\mathrm{S_{total}/S_{peak}}<1\) (see Prandoni et al., 2001) plus an extra term to fit the envelope. By considering a total to peak flux ratio <1.5, \(\sim\)75 per cent of sources were found to be unresolved.
### The LOFAR science-ready multi-wavelength catalogue of the ELAIS-N1
The LOW Frequency ARray (LOFAR; van Haarlem et al., 2013) Two-metre Sky Survey (LoTSS) deep field multi-wavelength data we use is only briefly described here, for much greater detail, the reader is referred to Shimwell et al. (2017, 2019, 2022), Kondapally et al. (2021) and subsequent data release papers. LoTSS is an ongoing sensitive, high-resolution \(120-168\) MHz survey of the entire northern sky for which the first full-quality public data release covers 424 square degrees with a median rms noise of 71 \(\mu\)Jy at 150 MHz (Sabater et al., 2019; Williams et al., 2019). The second data release covers 27% (i.e. split into two regions spanning 4178 and 1457 square degrees) of the northern sky with a central frequency of 144 MHz down to a median
rms sensitivity of \(83\,\mu\)Jy beam\({}^{-1}\) (see, Shimwell et al., 2022). The ELAIS-N1 field is the deepest of the LoTSS deep fields to date and one of the areas that have the most extensive ancillary data (Sabater et al., 2021).
The LOFAR team has provided science-ready multi-wavelength data in three fields along with the full complementary optical/IR catalogue presented by Kondapally et al. (2021). The photometric redshift estimates for all plausible counterparts were produced from a complete, homogeneous sample of objects measured across optical to IR wavelengths. This is achieved by building a forced, matched aperture, multi-wavelength catalogue in each field spanning the UV to mid-infrared wavelengths using the latest deep datasets. The full details of the photo-\(z\) estimation are presented in a companion release paper (see, Duncan et al., 2021, for more details).
### Stellar masses
Galaxy stellar masses were obtained from the science-ready multi-wavelength catalogue (see Duncan et al., 2021, for more details). This is the total stellar mass of a galaxy in units of solar mass and was estimated using the Python-based SED fitting code previously used by Duncan et al. (2014, 2019). Stellar population synthesis models of Bruzual & Charlot (2003) for a Chabrier (2003) initial mass function (IMF) were generated for composite stellar population models with three different stellar metallicities of \(\mathrm{Z}=0.1,0.4,1.0\ \mathrm{Z}_{\odot}\). Duncan et al. (2021) used a grid of star-formation histories based on the double power-law model with the priors on the range of power-law slopes and turnover ages taken from Carnall et al. (2019). They argued this provides sufficient flexibility to accurately describe the star-formation histories of a wide range of possible formation and quenching mechanisms. A simple prescription for nebular emission is included in the model SEDs. Further details of the assumed emission line ratios for Balmer and metal lines, as well as the nebular continuum prescription, can be found in Duncan et al. (2014). They also incorporate dust attenuation following the two-component dust model of Charlot & Fall (2000). The ELAIS-N1 field is complete to significantly lower masses when using the K band to select samples at \(z\gtrsim 1\), where deeper NIR observations are provided by the UKIDSS Deep Extragalactic Survey (DXS) (Lawrence et al., 2007). From simple estimations of the galaxy stellar mass functions (SMFs) within the ELAIS-N1 field and comparison with the literature, Duncan et al. (2021) validated that the stellar masses provide reliable and self-consistent estimates suitable for statistical studies across the whole field.
Following Duncan et al. (2021), we empirically estimate the stellar mass completeness (Pozzetti et al., 2010; Ilbert et al., 2013; Davidzon et al., 2013; Laigle et al., 2016; Davidzon et al., 2017). This is determined by the \(3\sigma\) magnitude limit, \(\mathrm{K}_{\mathrm{lim}}=22.7\) mag. In Figure 1 we show the distribution of stellar mass (\(M_{\star}\)) with redshift (\(z\)) for the galaxies in the ELAIS-N1 field. For each redshift bin, we estimate the stellar mass completeness \(M_{\mathrm{lim}}\) within which \(90\) per cent of the galaxies lie. The measured stellar masses for the sample are scaled to the magnitude limit: \(\log_{10}M_{\mathrm{lim}}=\log_{10}M-0.4(K_{\mathrm{lim}}-K)\), and the mass completeness limit is derived from the \(95\)th percentile of the scaled \(M_{\mathrm{lim}}\) mass distribution. The black circles in Figure 1 represent the mass limit in each redshift bin and the solid green curve represents the fit to the mass limit \(M_{\mathrm{lim}}\). Table 1 presents the calculated mass limits for each redshift bin in this study of the ELAIS-N1 LoTSS Deep Field for the full sample. Our sample should be largely complete above these cuts. The distributions are generally consistent among different fields, supporting the self-consistency of our results.
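As a concrete illustration of this procedure, the sketch below computes the empirical completeness limit per redshift bin in the spirit of Pozzetti et al. (2010); the array names, bin edges and function name are our own illustrative assumptions rather than the released column names.

```python
import numpy as np

def mass_completeness_limits(z, log_mass, K, z_edges, K_lim=22.7, pct=95.0):
    """Empirical stellar-mass completeness limit per redshift bin,
    using the scaling log10(M_lim) = log10(M) - 0.4 * (K_lim - K).

    z, log_mass, K : arrays of redshift, log10 stellar mass and K magnitude.
    z_edges        : bin edges, e.g. [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.5].
    """
    limits = []
    for lo, hi in zip(z_edges[:-1], z_edges[1:]):
        sel = (z >= lo) & (z < hi)
        log_mlim = log_mass[sel] - 0.4 * (K_lim - K[sel])
        limits.append(np.percentile(log_mlim, pct))   # 95th percentile per bin
    return np.array(limits)
```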
## 3 Sample selection
In this section, we describe the sample and, the multi-wavelength diagnostics employed to obtain a census of galaxies showing evidence of hosting an AGN within this sample.
Previous studies of the sSFR-\(M_{\star}\) plane indicate that galaxies reside in two populations. The first is the SFG population whose SFR is positively correlated with stellar mass out to redshifts of \(z\sim 4\) (see, Karim et al., 2011; Schreiber et al., 2015; Tomczak et al., 2016; Leslie et al., 2020). The second population consists of quiescent galaxies that are not actively forming stars and typically reside at the high-mass end (i.e. they have lower sSFR) (Renzini & Peng, 2015). In this Section, we describe how we separate quiescent galaxies, which are systems with little or no ongoing star-formation and large surface stellar mass density, from SFGs (see, Patel et al., 2011, 2012). This method is most efficient in excluding quiescent galaxies (see Leja et al., 2019, 2019). These quiescent galaxies (QGs) have very low SFR by definition, and they are preferentially found at high \(M_{\star}\) (see, Schreiber et al., 2015).
The sources classified as SFGs are those sources in our redshift- and mass-selected bins that satisfy the color cuts in the diagnostics employed in this study.
\begin{table}
\begin{tabular}{c c} \hline Bin & \(M_{\mathrm{lim}}^{\mathrm{min}}\) \\ \hline \(0.1-0.3\) & \(8.96\) \\ \(0.3-0.5\) & \(9.42\) \\ \(0.5-0.7\) & \(9.75\) \\ \(0.7-0.9\) & \(10.01\) \\ \(0.9-1.1\) & \(10.25\) \\ \(1.1-1.5\) & \(10.47\) \\ \hline \end{tabular}
\end{table}
Table 1: Mass limits for each redshift bin in this study of ELAIS-N1 LoTSS Deep Field, for the full sample.
Figure 1: Observed stellar mass distribution as a function of redshift for the ELAIS-N1 LoTSS Deep Field. The background density plot shows the mass distribution of sources with the solid black circles showing the 90 per cent mass completeness limit of the \(K\) reference band in the ELAIS-N1 field derived empirically (see, Pozzetti et al., 2010) at the median redshift in each bin. Here we are plotting \(\sim 80\) per cent of the data points. The solid green curve represents the fit to \(M_{\mathrm{lim}}\), the mass limit.
### The Sample
Extensive details of the photometric redshift and stellar mass (limited to \(z<1.5\)) estimation included in the LOFAR science-ready multi-wavelength data are outlined in Duncan et al. (2021). We followed the prescription by Jarvis et al. (2013) in order to remove sources that could be spectroscopically and photometrically flagged as stars. This star galaxy separation criterion clearly segregates the two types of objects in rest-frame \(J\) - \(K\) versus \(u\) - \(J\) color space. We found that galaxies dominate the catalog at \(K>22.7\) mag.
We then applied the prescription below to select our sample:
\[\left(\frac{z_{1,\rm max}\ -\ z_{1,\rm min}}{1+z_{1,\rm median}} \right)\times 0.5<0.1\,\&\,\,\left(\frac{S_{\rm K}}{S_{\rm Kerr}}\right)>5\, \&\,\,(z_{\rm best}\leq 1.5)\] \[\&\,\,(10^{8.5}<M_{\star}/M_{\odot}<10^{12.4}) \tag{1}\]
Here \(z_{\rm best}\) is the best available redshift estimate, including spectroscopic redshifts when available and photometric redshift (photo-\(z\)) estimates otherwise. \(z_{1,\rm median}\) is the primary redshift peak used when calculating the photo-\(z\), whereas \(z_{1,\rm min}\) and \(z_{1,\rm max}\) are the lower and upper bounds of the primary 80% highest probability density (HPD) credible interval (CI) peak respectively. The \(S_{\rm K}\) and \(S_{\rm Kerr}\) represent the NIR UKIDSS Deep Extragalactic Survey (DXS) (hereafter UKIDSS-DXS) DR10 \(K\)-band flux and flux error respectively. This NIR data covers a maximum area of around 8 deg\({}^{2}\) with 3- and 5-\(\sigma\) magnitude depths of 22.7 mag and 22.1 mag respectively, in ELAIS-N1. Following Equation 1, we select \(\sim\)77,047 sources which constitute our sample. Further selection cuts (i.e. SFG/AGN/quiescent galaxy separation) applied to the sample are described in subsequent sections.
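For concreteness, the selection of Equation 1 reduces to a simple boolean mask; the sketch below assumes the catalogue has been loaded into a dictionary of numpy arrays, and the column names are illustrative rather than the exact names used in the released catalogue.

```python
import numpy as np

def select_sample(cat):
    """Boolean mask implementing the cuts of Equation 1.

    cat is a dict of numpy arrays; column names are illustrative."""
    photoz_width = 0.5 * (cat["z1_max"] - cat["z1_min"]) / (1.0 + cat["z1_median"])
    return (
        (photoz_width < 0.1)                               # reliable photo-z
        & (cat["S_K"] / cat["S_K_err"] > 5.0)              # K-band S/N > 5
        & (cat["z_best"] <= 1.5)
        & (cat["log_mass"] > 8.5) & (cat["log_mass"] < 12.4)
    )

# Example: mask = select_sample(cat); n_selected = mask.sum()
```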
### AGN removal using IRAC colour diagnostics
AGN are known to show flux variability over all observable timescales and across the entire electromagnetic spectrum; hence a combination of multi-wavelength diagnostics is usually employed (see Villarroel and Korn, 2014).
We applied the original AGN flag given in the LoTSS multi-wavelength catalogue (from Best et al., 2023) in order to remove galaxies showing evidence of hosting an AGN. This AGN flag in the catalogue incorporates:
* optAGN : sources included in the Million Quasar Catalog compilation or spectroscopically identified AGN.
* IRAGN : when a source satisfies Donley et al. (2012) IR AGN criterion given by: \[x\ =\ \log_{10}\left(\frac{f_{5.8\mu m}}{f_{3.6\mu m}}\right),\ \ y=\log_{10} \left(\frac{f_{8.0\mu m}}{f_{4.5\mu m}}\right)\] (2)
\[x\ \geq\ 0.08\ \wedge\ y\geq 0.15\] \[\wedge y\geq(1.21\times x)-0.27\] \[\wedge y\leq(1.21\times x)+0.27\] \[\wedge f_{4.5\mu m}>f_{3.6\mu m}\ \wedge\ f_{5.8\mu m}>f_{4.5\mu m}\ \wedge\ f_{8.0\mu m}>f_{5.8\mu m} \tag{3}\]
* XrayAGN : when a source has an X-ray counterpart.
Following the _optAGN_, _IRAGN_ and _XrayAGN_ flags, we select 428 sources satisfying equation 1 as candidate AGN. It is important to note that every technique for selecting AGN is affected by selection biases, and these are no exception. The colour selection means that objects whose observed mid-infrared colours are not dominated by thermal emission from AGN will be missing from the sample. Figure 2 presents the IRAC colour-colour diagram showing the separation between AGN (red scatter contour) and SFG (blue scatter contour). For comparison, we show the Donley et al. (2012) colour selection criterion given by Equation 3, represented by the solid black wedge in the left panel. The solid grey and dotted dashed grey lines indicate the Lacy et al. (2004) and Lacy et al. (2007) wedges respectively. We also compare the AGN selection to the Stern et al. (2005) IRAC [3.6] - [4.5] versus [5.8] - [8.0] criterion, in Vega magnitudes, shown in the right panel of Figure 2.
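A compact way to apply the Donley et al. (2012) wedge of Equations 2-3 is sketched below; the function and argument names are illustrative, and the fluxes only need to share a common unit.

```python
import numpy as np

def donley_irac_agn(f36, f45, f58, f80):
    """Donley et al. (2012) IRAC power-law AGN selection (Equations 2-3)."""
    x = np.log10(f58 / f36)
    y = np.log10(f80 / f45)
    in_wedge = (
        (x >= 0.08) & (y >= 0.15)
        & (y >= 1.21 * x - 0.27) & (y <= 1.21 * x + 0.27)
    )
    # Monotonically rising mid-IR SED, as required by the criterion
    monotonic = (f45 > f36) & (f58 > f45) & (f80 > f58)
    return in_wedge & monotonic
```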
### SFG/Quiescent galaxy separation
We aim to create well-defined, unbiased samples of the SFG population. Following Schawinski et al. (2014), one can separate red, green valley, and blue cloud populations using the dust-corrected colour-mass diagram for all redshift bins using: \(u\) - \(r(M_{\star})=-0.24+0.25\times M_{\star}\) and \(u\) - \(r(M_{\star})=-0.75\) + 0.25 \(\times\)\(M_{\star}\). Figure 3 presents the \(u\) - \(r\) color-stellar mass dust-corrected diagram for our sample. Galaxies with
Figure 3: The \(u\) - \(r\) galaxy color-mass diagram in bins of redshift as density contours. The galaxy color-mass diagram showing blue, star-forming galaxies are at the bottom, in the blue cloud region. The red, quiescent/passively evolving galaxies are at the top, in the red sequence. The “green valley” is the transition zone in between. The dotted vertical lines indicate mass completeness limits for each redshift bin.
Figure 2: IRAC colour–colour diagram showing the separation between AGN (red scatter contour) and SFG (blue scatter contour). The solid black wedge in the left panel indicates the Donley et al. (2012) wedge. Also shown are solid grey lines in the left and right panels indicating the colour selection wedges of Lacy et al. (2004) and Stern et al. (2005) respectively. The dotted dashed grey line in the left panel indicates the Lacy et al. (2007) wedge.
"green" or intermediate colors are those galaxies in which star formation is in the process of turning off, but still have some ongoing star formation - indicating the process only shut down a short while ago, \(\sim 10^{8}\) years (Bell et al., 2004). The "green valley" region represents the crossroads of galaxy evolution. Galaxies that constitute this population are between the blue star-forming galaxies (the "blue cloud") and the red, passively evolving galaxies (the "red sequence"). The colour bimodality is weakly evident in the redshift bins at 0.1 - 0.3 and 0.3 - 0.5 of the rest-frame \(u\) - \(r\) color distribution. Subsequent redshift bins exhibit a unimodal distribution peaking in the blue (these are the main sequence star-forming galaxies). In particular, the colour-mass or colour- magnitude diagrams do not exhibit strong colour bimodality seen in of the _UVJ_ or _NUVrJ_ diagrams (see, Williams et al., 2009; Muzzin et al., 2013; Straatman et al., 2014, 2016).
Similarly, observational results have also been presented by Borch et al. (2006) and Brammer et al. (2009) who used the \(U-V\) color-mass relation to separate red galaxies from blue galaxies at \(0.2\leq z\leq 1.0\) and \(0\leq z\leq 2.5\), respectively. Peng et al. (2010) used the \(U-B\) colour with redshift evolution extrapolated to \(z=1\) to split into red and blue galaxies. More recently, Powell et al. (2017) used the rest-frame \(U-R\) color at \(0.7\leq z\leq 1.3\) to distinguish between red- and blue-sequence galaxies.
For simplicity, we consider only two states for galaxies, "blue star-forming" and "red passive", based on a dividing rest-frame \(u\) - \(r\) color. Admittedly, this approach is somewhat simplistic, but it is in accordance with our aim of identifying the most basic features of the SFG population. Since the \(V\) and \(r\) bands are not equivalent, we therefore adhere to using only the Schawinski et al. (2014) \(u\) - \(r(M_{\star})=-0.24\) + \(0.25\times M_{\star}\) line to separate red, quiescent/passively evolving galaxies from blue star-forming (i.e. potentially including green valley) ones. Thus, the grey shaded area in each panel of Figure 3 represents the region in which we classify a source as an SFG. We indicate the corresponding percentage of SFGs (i.e., green and blue galaxies) in Figure 3 for each redshift bin, from the first to the last. Appendix A provides more discussions on our colour-mass selection.
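The resulting classification reduces to a single inequality in the colour-mass plane; a minimal sketch, assuming dust-corrected rest-frame \(u-r\) colours and \(\log_{10}\) stellar masses as inputs, is given below (the function name is ours).

```python
def is_star_forming(u_minus_r, log_mass):
    """Classify galaxies as (blue + green valley) star forming when they lie
    below the Schawinski et al. (2014) line u - r = -0.24 + 0.25 * log10(M*);
    everything above the line is treated as red/quiescent."""
    return u_minus_r < (-0.24 + 0.25 * log_mass)

# Example: sfg_mask = is_star_forming(cat["u_minus_r"], cat["log_mass"])
```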
Following the AGN diagnostics in subsection 3.2, and the separation of quiescent/passively evolving galaxies from candidate SFGs, the final sample employed for the subsequent analyses is as follows:
1. All Galaxies: the original 77,047 sources that satisfy equation 1.
2. SFGs: sources from the original 77,047 sample that are classified as SFGs based on the \(u-r\) galaxy color-mass diagnostics and that are not labeled as _optAGN_, _IRAGN_ or _XrayAGN_ in the LoTSS multi-wavelength catalogue. Removing sources with these flags, we obtain a subsample of 51,124 SFGs.
Table 2 presents a summary of the results of subsequent analysis of the average galaxy mass in each stellar mass range. For a given stellar mass range, we show the median stellar mass across the entire redshift range (i.e., \(0.1\leq z\leq 1.5\)) of this work, for the corresponding total and SFG populations. The lower and upper bounds represent the 16th/84th percentiles.
## 4 Analysis and Results
### Stacking Methodology
The direct detection of the radio point source population is complicated by source confusion. Confusion is the blending of faint sources within a telescope beam. Hence statistical techniques such as stacking, which are not strongly affected by confusion noise, can be a powerful tool for reaching below the noise. Stacking is a tool to average together data for a given set of objects. For an input sample of N galaxies, the background noise level in the stacked image should reduce to \(\sim 1/\sqrt{\rm N}\) of the noise measured in a single radio image. Stacking comes at the expense of knowledge of the individual galaxies, but with careful application of criteria when binning the galaxies, and with a large enough sample, it can reveal properties of galaxies below the noise and confusion levels. The technique has been used to great effect many times in the literature (see, Serjeant et al., 2004; Ivison et al., 2007; Bourne et al., 2011, for example).
We choose six bins with a stellar mass and redshift range of \(10^{8.5}<M_{\star}/M_{\odot}<10^{12.4}\) and \(0.1\leq z\leq 1.5\) respectively. Out of the 77,047 sources we select as all galaxies, 51,124 sources are SFGs. We stack the \(K\)-band mass selected positions from the LOFAR multi-wavelength catalogue of the ELAIS-N1 on the 610 MHz wide radio map (Ishwara-Chandra et al., 2020) of the ELAIS-N1. Stacking was done with the Python Astronomical Stacking Tool Array (PASTA) (Keller and Silk, 2018) program1 which measures the flux in a map from selected sources (usually at another wavelength) and then builds a distribution of map-extracted fluxes for the sample (White et al., 2007). We choose fixed bin sizes and non-overlapping (statistically independent) bins in stellar mass and redshift space. This allows for a statistically robust number of sources in each bin and allows us to achieve a high signal-to-noise ratio (SNR). Notice that the stellar mass (\(M_{\star}\)) bin size is \(\sim 0.5\) dex up to \(\log_{10}M_{\star}(M_{\odot})=11.0\), beyond which the bin size is increased to \(\sim 1.4\) dex in order to cover the full mass range. Conversely, the redshift bin size is \(\sim 0.25\) over the entire redshift range. The advantage of the stacking technique is the gain in the signal-to-noise ratio, as combining many sources reduces the random noise while maintaining the average level of the signal. Compared to mean stacking, median stacking analyses are less susceptible to contamination from radio AGN, which constitute a minority of the population at faint radio fluxes (Smolcic et al., 2017; Algera et al., 2020).
Footnote 1: [https://github.com/bwkeller/PASTA](https://github.com/bwkeller/PASTA)
Our stacking work is summarized as follows:
* An input list of coordinates is created for the galaxies to be stacked, taking into consideration the selection criteria, together with an input image in FITS format.
* PASTA reads the source list and FITS file with the number of pixels specified (i.e. \(30\times 30\) pixel cutouts for our 610 MHz image). The program proceeds to extract "stamps", square sections of the source image with a source centered within them, and generates 2-dimensional matrices of the median and mean output images. The detection threshold is improved by stacking images centered on the object coordinates.
\begin{table}
\begin{tabular}{c c c|c c} \hline \hline \(M_{\star}\) range & \(\rm{N_{ALL}}\) & \(\langle\log_{10}M_{\star}\rangle_{\rm ALL}\) & \(\rm{N_{SFG}}\) & \(\langle\log_{10}M_{\star}\rangle_{\rm SFG}\) \\ \hline \(8.5<M_{\star}<9.0\) & 1793 & \(8.6^{+0.17}_{-0.18}\) & 1314 & \(8.8^{+0.17}_{-0.18}\) \\ \(9.0<M_{\star}<9.5\) & 4919 & \(9.37^{+0.18}_{-0.18}\) & 4522 & \(9.36^{+0.18}_{-0.18}\) \\ \(9.5<M_{\star}<10.0\) & 12853 & \(9.84^{+0.27}_{-0.18}\) & 10812 & \(9.82^{+0.27}_{-0.18}\) \\ \(10.0<M_{\star}<10.5\) & 20580 & \(10.30^{+0.21}_{-0.21}\) & 14343 & \(10.28^{+0.11}_{-0.22}\) \\ \(10.5<M_{\star}<11.0\) & 24766 & \(10.76^{+0.20}_{-0.18}\) & 13085 & \(10.20^{+0.21}_{-0.21}\) \\ \(11.0<M_{\star}<12.4\) & 12536 & \(11.16^{+0.20}_{-0.21}\) & 6328 & \(11.15^{+0.21}_{-0.27}\) \\ \hline \end{tabular}
\end{table}
Table 2: Table showing the summary of the results of subsequent analysis of the average galaxy properties in each stellar mass range.
* The integrated and peak flux densities are computed by running PyBDSF source finder (Mohan & Rafferty, 2015) on the median stacked images. PyBDSF fits a 2D Gaussian to any significant emission in the center of the stack.
The median estimator is more robust to outliers than the mean, and we will demonstrate that the median is the most appropriate choice for our analysis. The median image provides a compelling visual impression of the statistical significance of the sample median compared to nearby off positions. The premise of median stacking a survey is that the radio emission is unresolved, and that the central pixel represents the flux density of the sources in the stack. White et al. (2007) performed detailed calculations that show that a median stacking analysis is superior to mean stacking, since it is robust to small numbers of bright sources, and it does not require any maximum allowed flux density cutoff prior to stacking. The median image also shows patterns, such as the sidelobes of the dirty beam, that must be present around real sources of any flux density in the image.
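For reference, the sketch below illustrates the core of such a median-stacking step using astropy, as a simplified stand-in for the PASTA workflow; the function name and the assumption that positions are supplied in degrees are ours.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astropy.nddata import Cutout2D
from astropy.wcs import WCS

def median_stack(image_fits, ra_deg, dec_deg, size_pix=30):
    """Median-stack fixed-size cutouts of a radio map at a list of positions."""
    with fits.open(image_fits) as hdul:
        data = np.squeeze(hdul[0].data)            # drop degenerate Stokes/freq axes
        wcs = WCS(hdul[0].header).celestial
    stamps = []
    for r, d in zip(ra_deg, dec_deg):
        pos = SkyCoord(r * u.deg, d * u.deg)
        try:
            cut = Cutout2D(data, pos, (size_pix, size_pix), wcs=wcs, mode="strict")
            stamps.append(cut.data)
        except Exception:
            continue                                # skip sources off the map edge
    return np.nanmedian(np.array(stamps), axis=0)
```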
Figure 4 shows the binning scheme in stellar mass and photometric redshift for the entire (left) and the SF (right) sample. The top number in each box is the total number of galaxies in each bin. The middle number is the total number of galaxies used in the radio stack; the bottom number shows the signal-to-noise (SNR) ratio achieved in the radio stack. Ideally, one could roughly estimate stellar mass completeness limits by visual inspection of Figure 4. Figure 5 presents the stacked images of total intensity for all galaxies. The columns indicate the median stacked 610 MHz total intensity radio images for total galaxies within the range \(z\in[0.1-0.3]\), \([0.3-0.5]\), \([0.5-0.7]\), \([0.7-0.9]\), \([0.9-1.1]\), \([1.1-1.5]\) for the \(K\)-band magnitude mass selected sample. The rows indicate mass range, \(M_{\bullet}\in[11.0-12.4]\), \([10.5-11.0]\), \([10.0-10.5]\), \([9.5-10.0]\), \([9.0-9.5]\), \([8.5-9.0]\) respectively, from top to bottom. All images have a size of \(\sim 36\times 36\) arcsec\({}^{2}\) respectively. The image-scale ranges between 1 and 100 \(\mu\)Jy beam\({}^{-1}\). The stacked images (GMRT, 610 MHz) of total intensity for star-forming galaxies is shown in Figure 6. In contrast to this, Figure 7 shows the mean stacked images for the same redshift and stellar mass bins for the total (top) and the SFG (bottom) populations,respectively. All images are notably noisier than their median equivalents, and bright sources away from the centre of the cut-out images have a much greater effect on the stacked images. Thus, the mean images are strongly biased by a few bright radio sources and as such not a good representation of the typical sources within each flux density bin with the noise level of the mean stacked images generally \(\sim\)1.5 times the noise of the median stacked images.
Figure 8 shows the stacked median axial ratio (angular size) \(B_{\rm maj}/B_{\rm min}\) as a function of median redshift for all galaxies (left) and star forming galaxies (right). The errors represent the difference between the maximum Gaussian fit to a source and the best fit Gaussian that encompasses the full source at the center of the median stacked image. The fitted angular size is overall closely consistent with the original beam size of the 610 MHz image. At the first redshift range for SFGs (i.e. \(z\in[0.1-0.3]\)), the size of the Gaussian fit to the median stacked image with \(M_{\star}\in[11.0-12.4]\) seems larger than the synthesized beam size, \(B_{\rm maj}/B_{\rm min}=1\) (see the horizontal solid green line in Figure 8). The rest of the mass bins are consistent with the beam when compared with the horizontal solid green line. Differences may occur for various reasons, for example, a Gaussian fit to a source convolved with a non-Gaussian point spread function can give rise to systematic errors. Errors in the positions of the input source catalog can lead to blurring of the stacked image. The same effect can occur if the radio emission is systematically offset from the IR emission, in some cases. Fitting of source sizes is a simple test one can run on the results of a stack. This can be used as a test of both the positional accuracy and of whether the stacked sources are indeed unresolved. Since our stacked image produces a source that is almost the same size as the beam, this confirms that the positional accuracy is sufficient, and that the stack is dominated by unresolved sources.
### Estimating the Radio Star Formation Rates (SFR, \(\Psi\))
Here, we calculate the radio luminosity (radio power) of the median stacked images and use it to estimate the radio based SFRs.
The radio spectrum can be assumed to follow a simple power law (S\({}_{\nu}\propto\nu^{\alpha}\)) resulting from the sum of the non-thermal synchrotron and thermal bremsstrahlung components; the power-law index is typically \(\alpha\approx-0.8\) for SFGs (Condon, 1992; Galvin et al., 2016). AGN-dominated sources may have steeper spectral indices (Ibar et al., 2009).
The observed stacked fluxes were converted to rest-frame (emitted) monochromatic luminosities using Equation 4, which contains a \(K\)-correction \(K(z)\):
\[L_{610}=4\pi\mathrm{d}_{L}^{2}S_{610}K(z)[1+z]^{-1} \tag{4}\]
Following the approach of Bourne et al. (2011), \(K(z)\), which accounts for the shift of the spectrum relative to the observed band, is given for a simple power-law spectrum and monochromatic flux by \(K(z)=[1+z]^{-\alpha}\), where \(\alpha=-0.8\).
Bell (2003) estimated the SFR, \(\Psi\), from the 1.4 GHz luminosity of galaxies, calibrated against the total infrared SFR for galaxies with \(L\leq L^{*}\) (defined as having an infrared luminosity \(L_{\rm IR}\sim 2\times 10^{10}L_{\odot}\)). We followed Garn et al. (2009), and converted this relationship to a 610 MHz equivalent:
\[\left(\frac{\Psi}{M_{\odot}\mathrm{yr}^{-1}}\right)=\,2.84\times 10^{-22}\left(\frac{L_{610}}{\mathrm{W\,Hz}^{-1}}\right) \tag{5}\]
Equation 5 holds for \(L_{610}>L_{\rm c}\), where \(L_{\rm c}=3.3\times 10^{21}\,\mathrm{W\,Hz}^{-1}\) is the luminosity at 610 MHz of a \(\sim L^{*}\) galaxy, with \(\Psi\simeq 1\,M_{\odot}\,\mathrm{yr}^{-1}\). For \(L_{610}\leq L_{\rm c}\), we can rewrite Equation 5 as:
\[\left(\frac{\Psi}{M_{\odot}\mathrm{yr}^{-1}}\right)=\,\frac{2.84\times 10^{-22}}{0.1+0.9(L_{610}/L_{\rm c})^{0.3}}\left(\frac{L_{610}}{\mathrm{W\,Hz}^{-1}}\right) \tag{6}\]
The left and right panels of Figure 9 show the distribution of the stacked total and SFG samples as a function of redshift, \(M_{\star}\), and stacked radio power (\(L_{610}\)). The x, y axes represent the \(z-M_{\star}\) plane, while the vertical axis shows \(L_{610}\), colour-coded by the derived SFR\({}_{\rm radio}\). The relationships given by Equations 5 and 6 are used to compute the stacked SFR\({}_{\rm radio}\), \(\Psi\).
Since the SFR is correlated with the stellar mass, a useful quantity to describe the SF regime of a galaxy is its specific SFR (sSFR), i.e. the SFR divided by the median stellar mass of the galaxies in the bin.
\[\mathrm{sSFR}\equiv\frac{\Psi}{M_{\star}} \tag{7}\]
To explore the specific star-formation rate, we used the measured radio SFR from the median radio stack divided by the median stellar mass. We follow the stellar mass and redshift bins described in subsection 4.1.
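Putting Equations 4-7 together, the sketch below converts a median stacked 610 MHz flux density into \(L_{610}\), a radio SFR and an sSFR for a bin; the function, its argument names and the assumption that the flux is supplied in Jy are ours, and the low-luminosity correction of Equation 6 is applied below \(L_{\rm c}\).

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in Section 1

def radio_sfr_610(flux_jy, z, log_mass_median, alpha=-0.8, L_c=3.3e21):
    """Stacked 610 MHz flux density (Jy) -> L_610 (W/Hz), SFR (M_sun/yr), sSFR (1/yr)."""
    d_L = cosmo.luminosity_distance(z).to(u.m).value
    S = np.asarray(flux_jy) * 1e-26                                  # Jy -> W m^-2 Hz^-1
    L610 = 4.0 * np.pi * d_L**2 * S * (1.0 + z) ** (-(1.0 + alpha))  # Eq. 4
    sfr = 2.84e-22 * L610                                            # Eq. 5 (L610 > L_c)
    sfr = np.where(L610 <= L_c,
                   2.84e-22 * L610 / (0.1 + 0.9 * (L610 / L_c) ** 0.3),  # Eq. 6
                   sfr)
    ssfr = sfr / 10.0 ** log_mass_median                             # Eq. 7
    return L610, sfr, ssfr
```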
### Separation of sSFR dependence
The MS correlation for SFGs reveals interesting mechanisms of the star formation history (Brinchmann et al., 2004; Salim et al.,
2007). The MS for SFGs has near-constant slope but shifts towards higher SFRs as the redshift increases (see e.g. Rodighiero et al., 2011; Johnston et al., 2015). We quantify the relationship between the sSFR and each of \(M_{\star}\) and \(z\), following Karim et al. (2011).
\[{\rm sSFR}(M_{\star},z)\propto{\rm sSFR}(M_{\star}|z)\,{\rm sSFR}(z|M_{\star})\ =\ M_{\star}^{\beta}(1+z)^{n} \tag{8}\]
We fit the stacked sSFR \(-\) log\({}_{10}\)\(M_{\star}\) relation with these two separate functions of \(M_{\star}\) and \(z\).
\[{\rm sSFR}(M_{\star}|z)=\ \alpha_{M}(z)M_{\star}^{\beta} \tag{9}\]
We refer to the index \(\beta\) also as a slope since the relation is commonly shown in log space.
\[{\rm sSFR}(z|M_{\star})={\rm c}_{z}(M_{\star})(1+z)^{n} \tag{10}\]
In subsequent sections, we examine the relationship between sSFR, \({\rm M_{\star}}\) and \(z\). We performed bootstrap linear regression fits to each sample2. The dashed lines in Figures 10 and 11 depict the best fit to the data in the mass-representative (\(\beta\)) and redshift (\(n\)) regimes. In our bootstrap linear regression, we do not account for uncertainties associated with the SFR calibration, the photometric redshift, and stellar mass estimates as the large number of objects stacked for each data point ensures that even the joint error budget is statistically reduced to a low level that would not substantially enhance our uncertainty ranges (see Karim et al., 2011). We resampled the dataset 1000 times to create new datasets with the same size as the original, and then fitted a linear regression model to each of the resampled datasets, as illustrated in the sketch after the footnote below.
Footnote 2: A resampling method used to estimate the variability of statistical parameters from a dataset which is repeatedly sampled with replacement (Lopes et al., 2019)
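A minimal sketch of such a bootstrap slope estimate is given below, assuming the stacked sSFRs and the corresponding bin centres are available as arrays; the function name and defaults are illustrative.

```python
import numpy as np

def bootstrap_powerlaw_slope(x, y, n_boot=1000, seed=0):
    """Bootstrap estimate of the slope of a power law y = a * x^slope by
    linear regression in log-log space, as used for Eqs. (8)-(10);
    x is stellar mass (or 1+z) and y is the stacked sSFR."""
    rng = np.random.default_rng(seed)
    log_x, log_y = np.log10(x), np.log10(y)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))          # resample with replacement
        slopes[i] = np.polyfit(log_x[idx], log_y[idx], 1)[0]
    return slopes.mean(), slopes.std()
```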
### Dependence on Stellar Mass
In Figure 10, we show the dependence of sSFR on stellar mass for all galaxies and star-forming galaxy samples. The mass evolution of the sSFRs is well described by a power law (\(M_{\star}|z\)) \(\propto\)\(M_{\star}^{\beta}\), as depicted by the solid dashed lines in Figure 10. We first consider the whole sample which we refer to as all galaxies and show the redshift-dependent radio-based sSFRs that are distributed in the logarithmic sSFR \(-\)\(M_{\star}\) plane. We utilise sources that are above the mass completeness limit in our fitting. At a given redshift, the sSFR declines with increasing stellar mass for all galaxies. For \(z\in[0.1-0.3]\), the value of the slope \(\beta_{\rm ALL}=-0.47\,\pm\,0.01\). For the redshift bin \(z\in[0.3-0.5]\), we measure the values of the slopes to be \(\beta_{\rm ALL}=-0.58\,\pm\,0.01\). At \(z\in[0.5-0.7]\), we measure the values of the slopes to be \(\beta_{\rm ALL}=-0.51\,\pm\,0.02\). For \(z\in[0.7-0.9]\), the value of the slope is \(\beta_{\rm ALL}=-0.41\,\pm\,0.02\). For \(z\in[0.9-1.1]\), we measure \(\beta_{\rm ALL}=-0.41\,\pm\,0.02\).
For the SFG population, we measure \(\beta_{\rm SFG}=-0.32\,\pm\,0.05\) and \(\beta_{\rm SFG}=-0.50\,\pm\,0.01\) for the first and second redshift bins, respectively. For the redshift bins \(z\in[0.5-0.7]\) and \(z\in[0.7-0.9]\) we measure \(\beta_{\rm SFG}=-0.42\,\pm\,0.02\) and \(\beta_{\rm SFG}=-0.41\,\pm\,0.01\), respectively, and \(\beta_{\rm SFG}=-0.47\,\pm\,0.03\) for the fifth redshift bin. The second panel of the plot likewise shows the general trend of sSFR decreasing with increasing stellar mass for the SFG population.
The 'mass gradient' for all galaxies and the SFG sample, i.e. \(\beta_{\rm ALL}\) and \(\beta_{\rm SFG}\), is negative in all cases. The dependence of the sSFR on stellar mass is steeper at high redshifts for both populations. If we ignore the highest redshift bin, for which the slope is poorly constrained, there is a consistent indication that the slope of the relation between sSFR and stellar mass becomes steeper with increasing redshift for
Figure 4: Binning scheme in stellar mass and photometric redshift for the entire (left) and the SF (right) sample. The top number in each box is the total number of galaxies in each bin. The middle number is the total number of galaxies used in the median radio stack; the bottom number shows the signal-to-noise (SNR) ratio achieved in the median radio stack. The gray and blue shading traces the mass completeness limits derived for all galaxies in subsection 2.3 (see Table 1)
both the total and SFG populations. The results of the individual fits yielding the parameter \(\beta\) for all and SF galaxies are presented in Table 3. Fits have only been applied if more than two data points remained above the mass limit, where the individual sample is regarded as mass representative.
We observe that the sSFR is only weakly dependent on stellar mass, with the sSFR decreasing as stellar mass increases, which is consistent with previous work. Our radio-derived SFRs provide a better match to the observed trends in sSFR versus stellar mass in the lowest mass bins, and also reproduce the low-redshift sSFRs seen in other wavebands.
### Dependence on redshift
Observational values for sSFR \(\propto(1+z)^{n}\) vary from \(n=2-5\) (see Salim et al., 2007; Karim et al., 2011; Popesso et al., 2019). Results from the non-stacked sample of Speagle et al. (2014) indicate that the sSFR evolves with redshift as \((1+z)^{2.5}\) (we provide a detailed comparison with sSFRs derived from other studies in subsection 5.1). By plotting the same data as a function of redshift rather than in stellar mass classes, an evolutionary trend is readily apparent. Figure 11 indicates how the sSFRs for our samples evolve with redshift. It is immediately evident that there is a dramatic increase in sSFR, by a factor of \(>100\), over the interval \(0.1\leq z\leq 1.5\). This is evident at all masses apart from the lowest mass bin, which has poor statistics and spans a limited redshift range. The redshift evolution of the sSFRs is well described by a power law \({\rm sSFR}(z|M_{\star})\propto(1+z)^{n}\), as depicted by the dashed lines in Figure 11 for a given mass bin. The simple power-law fit provides a good description of the data in most cases. The most massive galaxies have the lowest sSFR at all epochs, for both the total and SFG population. We do not show the fits for the mass ranges \(M_{\star}\in[8.5-9.0]\ \&\ [9.0-9.5]\) because of incompleteness in these bins. For \(M_{\star}\in[9.5-10.0]\), the value of the slope is \(n_{\rm ALL}=5.33\pm 1.51\), whereas \(n_{\rm SFG}=4.50\pm 1.36\). At \(M_{\star}\in[10.0-10.5]\), the slope increases to \(n_{\rm ALL}=5.69\pm 0.26\), with \(n_{\rm SFG}=3.70\pm 0.15\). For \(M_{\star}\in[10.5-11.0]\), we measure \(n_{\rm ALL}=4.51\pm 0.27\) and \(n_{\rm SFG}=2.71\pm 0.25\), respectively. In the last mass bin, \(M_{\star}\in[11.0-12.4]\), we measure slopes of \(n_{\rm ALL}=4.25\pm 0.07\) and \(n_{\rm SFG}=3.13\pm 0.34\), respectively. The
Figure 5: Stacked images (GMRT, 610 MHz) of total intensity for all galaxies. The columns indicate the median stacked 610 MHz total intensity radio images for total galaxies within the range \(z\in[0.1-0.3]\), \([0.3-0.5]\), \([0.5-0.7]\), \([0.7-0.9]\), \([0.9-1.1]\), \([1.1-1.5]\) for the K-band magnitude mass selected sample. All images have a size of \(\sim 36\times 36\) arcsec\({}^{2}\). The rows indicate mass range, \(M_{\star}\in[11.0-12.4]\), \([10.5-11.0]\), \([10.0-10.5]\), \([9.5-10.0]\), \([9.0-9.5]\), \([8.5-9.0]\) respectively, from top to bottom. All image-scale ranges between 1 and 100 \(\mu\)Jy beam\({}^{-1}\)
Figure 6: Stacked images (GMRT, 610 MHz) of total intensity for star-forming galaxies. The empty space in the last column (i.e \(z\in[1.1-1.5]\)) represents redshift range where no median stacked 610 MHz image was produced for the selected star-forming galaxy population. See Figure 5 for more details.
Figure 7: Mean stacked 610 MHz radio images for the same redshift and stellar mass bins for all galaxies (top) and the SFG (bottom) populations, respectively. See Figure 5 for more details.
redshift-evolution parameter \(n\) is slightly higher for all galaxies than for the SFG sample (i.e. \(n_{\rm ALL}>n_{\rm SFG}\)). This implies that, at a given stellar mass, the redshift evolution is stronger for the full sample than for the SFG sample. The results of the individual fits yielding the parameter \(n\) for all and SF galaxies are presented in Table 4. Fits have only been applied if more than two data points remained above the mass limit, where the individual sample is regarded as mass representative.
## 5 Discussion
Our results on the sSFR-mass relation steepening with redshift are in broad agreement with those based on far-infrared stacking experiments that found almost flat relations up to \(z\sim 1.5\).
On the whole, there is good agreement between our fits and recent work, mostly probing high-\(z\) observations. This indicates that our power-law fits as functions of redshift describe the data nearly as well as those in the literature. We are not able to definitively rule out a possible "plateauing" of the sSFR in the redshift range
Figure 8: Stacked median axial ratio \(B_{\rm maj}/B_{\rm min}\) as a function of median redshift for all galaxies (left) and star forming galaxies (right) represented as open black circles in each redshift bin. The horizontal solid green line represent the original axial ratio of the 610 MHz image, where \(B_{\rm maj}/B_{\rm min}=1\). The corresponding stellar mass bins (see Figures 5, 6 and 4) in each redshift range are shown as numbers (i.e. 1 - 6, from low to high stellar mass bins) in the middle of each open black circle. The sources shown here are from the median stacked images which show a clear detection at their center from which we obtain Gaussian fits using PyBDSF source finder. Notice that these sources coincide with having a median stacked flux density with SNR \(\geq 5.0\) and in most cases above the mass completeness limit
Figure 9: Left: Distribution of the total galaxies as a function of \(M_{\bullet}\), redshift and stacked radio power at 610 MHz (\(L_{610,{\rm MHz}}\)), colour coded by the derived stacked SFR\({}_{\rm radio}\). Right: Distribution of the star-forming driven sources as a function of \(M_{\bullet}\), redshift and stacked radio power at 610 MHz (\(L_{610,{\rm MHz}}\)), colour coded by the derived stacked SFR\({}_{\rm radio}\).
explored here. By inference, our results seem to favor a scenario where the sSFRs would continue to increase until at least \(z\sim 3\), as found by studies in the literature, if we were to probe a broader redshift range. The redshift dependence of the \({\rm sSFR}-M_{\star}\) relation is more uncertain (Ilbert et al., 2013; Schreiber et al., 2015; Pearson et al., 2018), and studying the redshift evolution of these different populations through stacking thus provides complementary insights into the host properties of these sources. In this section, we discuss our results and compare them with sSFRs derived from other studies, including those from radio stacking experiments.
### Comparison with \(\beta\) and \(n\) of sSFRs derived from previous studies
We compare our results to previous studies conducted at 1.4 GHz in which the authors considered more than one SFR indicator. In subsections 4.4 and 4.5 we found that the sSFR decreases with stellar mass (downsizing; see Figure 10) and increases with redshift (see Figure 11) for both the all-galaxy and SFG populations. Additionally, the values of \(\beta_{\rm ALL}\) and \(\beta_{\rm SFG}\) for the dependence on stellar mass are all negative, and they become steeper with increasing redshift. Radio-based measurements of the \({\rm sSFR}-M_{\star}\) relation have been studied previously in the literature.
Figure 11: Radio-stacking based measurement of the sSFR as a function of redshift at a given stellar mass for all galaxies (left) and star-forming galaxies (right). The stellar masses range over \(10^{8.5}<M_{\star}/M_{\odot}<10^{12.4}\). Two-parameter fits of the form \(\mathrm{c}\times(1+z)^{n}\) are applied to the open circles, which are representative samples of the underlying galaxy population. The error bars follow those of Figure 10.
Figure 10: Radio-stacking based measurement of the sSFR as a function of stellar mass at a given redshift for all galaxies (left) and star-forming galaxies (right). The redshift ranges over \(0.1\leq z\leq 1.5\). Dashed lines are two-parameter fits of the form \(\mathrm{c}\times(M_{\star}/10^{11}M_{\odot})^{\beta}\) to the mass-representative points depicted by open circles. Horizontal bars indicate the width of those bins, while the vertical error bars reflect the Poisson uncertainties using the prescription of Gehrels (1986). The linear regression lines obtained from each bootstrap replicate of the fit to each population as a function of mass are shown as black, grey, blue, red and brown lines following each redshift bin.
We compare our measurements with the mass-dependent slope estimates from Karim et al. (2011) and Zwart et al. (2014). The top panel of Figure 12 presents the comparison of the gradient \(\beta\) against stellar mass as a function of redshift for the total (left) and the star-forming galaxy population (right). These studies from the literature and our work share some methodological similarities (e.g., the use of mass- and \(K\)-selected samples and a radio-stacking approach) and should therefore be directly comparable. However, there are some technical differences in the exact implementation of the image stacking, as already discussed (see subsection 4.1). It is important to also point out that the calibration of the individual radio star formation rates and the binning of this work are different from these previous studies. Nevertheless, in terms of the evolution of the sSFR sequence, both studies show a reasonable agreement with our work. The differences that arise may be attributed to our study tracing 610 MHz rather than 1.4 GHz, at which these past studies were conducted.
Karim et al. (2011) concluded that the sSFR sequence itself tends to flatten toward lower masses and \(z>1.5\). They inferred this might be explained by an upper limiting threshold where average SF systems already reach levels of star formation that qualify them to double their mass within a dynamical time. Their data show that there is a tight correlation with power-law dependence, sSFR \(\propto M_{\star}^{\beta}\), between sSFR and stellar mass at all epochs. Excluding quiescent galaxies from their analysis, a shallow index \(\beta_{\rm SFG}\approx-0.4\) fits the correlation for star-forming sources. For their total population the sSFR-mass gradient \(\beta\) becomes steeper with \(\beta_{\rm ALL}\approx-0.67\). The sSFR-mass gradients \(\beta\) found by Zwart et al. (2014) become less steep with redshift (from \(\beta\approx-0.75\) to \(\beta=-0.25\) out to \(z\approx 2\)) for the full and elliptical samples, but show no dependence with redshift (\(\beta\approx-0.5\)) for the starburst and irregular galaxies for their stacked deep (17.5 \(\mu\)Jy) VLA radio observations.
Studies based on IR SFRs, such as Rodighiero et al. (2010), found that the sSFR-mass relation steepens with redshift for all galaxies, becoming almost flat at \(z<1.0\) and reaching a slope of \(n=-0.50^{+0.13}_{-0.16}\) at \(z\sim 2\). Moreover, they also show that the most massive galaxies have the lowest sSFRs at any redshift, further implying that they formed their stars earlier and more rapidly than their low-mass counterparts, which corresponds with our findings. Oliver et al. (2010), in their analysis of the sSFR activity of galaxies and its evolution near the peak of the cosmic far-infrared background at 70 and 160 \(\mu\)m, found a trend sSFR \(\propto M_{\rm star}^{\beta}\) with \(\beta\sim-0.38\). They found a stronger trend for early-type galaxies (\(\beta\sim-0.46\)) than late-type galaxies (\(\beta\sim-0.15\)).
The bottom panel of Figure 12 shows a comparison of the redshift-evolution parameter \(n\) of the sSFR as a function of stellar mass for all galaxies (left) and the SFG (right) population, derived from Figure 11. Although we observe that all measured sSFRs (i.e. total galaxies and SFG) increase with redshift, massive galaxies have the lowest sSFRs. The sSFRs span a smaller range at high redshift, with massive galaxies evolving faster compared to low-mass galaxies, decreasing their sSFR at earlier epochs. Our results are in broad agreement with those based on radio-stacking, which find almost flat relations up to \(z\sim 2\) (see Dunne et al., 2009; Pannella et al., 2009). Karim et al. (2011) noted that at redshift \(0.2<z<3\) both populations show a strong and mass-independent decrease in their sSFR toward the present epoch, where \(n\sim 4.3\) for all galaxies and \(n\sim 3.5\) for star-forming sources. Zwart et al. (2014) reported that the redshift evolution of the sSFR is much faster for their full sample than for their starburst sample. Oliver et al. (2010) found that the sSFR evolves as \((1+z)^{n}\) with \(n=4.4\pm 0.3\) for galaxies with \(10.5<\log_{10}M_{\star}/M_{\odot}<12.0\). For early-type galaxies, they found that the average evolution in this mass range is stronger (\(n\sim 5.7\)) but decreases towards higher mass.
Our SFG sample comprises sources that do not satisfy the quiescent-galaxy criterion, are not spectroscopically identified as AGN, do not satisfy the Donley et al. (2012) IR AGN criterion, and do not have an X-ray counterpart. This does not explicitly make our sample immune from AGN contamination. AGN are expected to reside in massive star-forming galaxies (see Kauffmann et al., 2003; Mullaney et al., 2012; Juneau et al., 2013; Rosario et al., 2013). As such, AGN contamination could be a major reason to doubt that 610 MHz radio emission is a reliable star formation tracer. Padovani (2017) emphasize that for
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{All Galaxies} & \multicolumn{2}{c}{Star-forming Galaxies} \\ \(\Delta\log(M_{\star}[M_{\odot}])\) & \(\log(C_{\rm z,All}\left[1/{\rm Gyr}\right])\) & \(n_{\rm ALL}\) & \(\log(C_{\rm z,SFG}\left[1/{\rm Gyr}\right])\) & \(n_{\rm SFG}\) \\ \hline \(8.5-9.0\) & – & – & – & – \\ \(9.0-9.5\) & – & – & – & – \\ \(9.5-10.0\) & -1.21 \(\pm 0.15\) & \(5.33\pm 1.51\) & -1.06 \(\pm 0.12\) & \(4.50\pm 1.36\) \\ \(10.0-10.5\) & -1.50 \(\pm 0.03\) & \(5.69\pm 0.26\) & -1.06 \(\pm 0.04\) & \(3.70\pm 0.15\) \\ \(10.5-11.0\) & -1.63 \(\pm 0.04\) & \(4.51\pm 0.27\) & -1.09 \(\pm 0.05\) & \(2.71\pm 0.25\) \\ \(11.0-12.4\) & -1.81 \(\pm 0.01\) & \(4.25\pm 0.07\) & -1.44 \(\pm 0.09\) & \(3.13\pm 0.34\) \\ \hline \(\langle n\rangle=\) & 4.94 \(\pm\) 0.53 & \(\langle n\rangle=\) & 3.51 \(\pm\) 0.52 \\ \hline \end{tabular}
\end{table}
Table 4: Table summarizing the two-parameter fits to the redshift evolution of the sSFR. We applied a power-law fit of the form \(\mathrm{c}\times(1+z)^{n}\) (Equation 10) to the radio-stacking-based sSFRs as a function of redshift within any mass bin.
\begin{table}
\begin{tabular}{c c|c|c|c} \hline \hline & \multicolumn{2}{c}{All Galaxies} & \multicolumn{2}{c}{Star-forming Galaxies} \\ \(\Delta z\) & \(\log(C_{\rm z,All}\left[1/{\rm Gyr}\right])\) & \(\beta_{\rm ALL}\) & \(\log(C_{\rm z,SFG}\left[1/{\rm Gyr}\right])\) & \(\beta_{\rm SFG}\) \\ \hline \(0.1-0.3\) & -1.34 \(\pm 0.10\) & -0.47 \(\pm 0.01\) & -1.08 \(\pm 0.53\) & -0.32 \(\pm 0.05\) \\ \(0.3-0.5\) & -1.07 \(\pm 0.06\) & -0.58 \(\pm 0.01\) & -0.77 \(\pm 0.09\) & -0.50 \(\pm 0.01\) \\ \(0.5-0.7\) & -0.83 \(\pm 0.11\) & -0.51 \(\pm 0.02\) & -0.66 \(\pm 0.11\) & -0.42 \(\pm 0.02\) \\ \(0.7-0.9\) & -0.57 \(\pm 0.09\) & -0.45 \(\pm 0.01\) & -0.50 \(\pm 0.05\) & -0.41 \(\pm 0.01\) \\ \(0.9-1.1\) & -0.47 \(\pm 0.12\) & -0.41 \(\pm 0.02\) & -0.45 \(\pm 0.07\) & -0.47 \(\pm 0.03\) \\ \(1.1-1.5\) & – & – & – & – \\ \hline & \multicolumn{2}{c}{\(\langle\beta\rangle=-0.49\pm 0.01\)} & \multicolumn{2}{c}{\(\langle\beta\rangle=-0.42\pm 0.02\)} \\ \hline \end{tabular}
\end{table}
Table 3: Table summarizing the two-parameter fits to the mass dependence of the sSFR. We applied a power-law fit of the form \(\mathrm{c}\times(M_{\star}/10^{11}M_{\odot})^{\beta}\) (Equation 9, see also Karim et al. (2011)) to the radio-stacking-based sSFRs as a function of mass within any redshift slice. All the slopes have been computed in a mass-complete range.
flux density \(\lesssim 1\) mJy, the faint radio sky is populated by both non-jetted AGN and a quickly decreasing fraction of jetted AGN (see also Padovani, 2016). Studies by Daddi et al. (2007), conducted at mid- and far-IR to submillimeter, radio, and rest-frame UV wavelengths, measured the contamination to SFRs from X-ray-emitting AGN. They used radio stacking to investigate trends for radio-undetected sources and found that the \(L_{\rm IR}\) estimated from 24 \(\mu\)m exceeds, on average by an order of magnitude, the same quantity derived from the radio. This was attributed to additional radio emission from an AGN, as suggested also by Donley et al. (2007), mostly in low-redshift galaxies. Ji et al. (2022) highlighted the importance of AGN selection effects on the distributions of host galaxy properties. They combined a study of X-ray and IR AGN at \(z\approx 2\) and compared the star formation and morphological properties of AGN and non-AGN host galaxies. Their studies revealed that non-AGN SFGs on the main sequence and X-ray AGN have similar median star formation properties. The classification of sources as either SFG or AGN appears to be a more complex problem and, in reality, not all sources will give unambiguous results over all criteria. Hence, we cannot rule out a contribution from AGN contamination to the radio emission in the selected SFGs. Studies by Ito et al. (2022) reveal that the frequency of AGN hosted by transitional galaxies, i.e. those evolving from SFGs to quiescent galaxies, depends significantly on how the AGN are selected.
In summary, our measurements of both the sSFR-mass and sSFR-redshift evolution are largely consistent between the VIDEO (Zwart et al., 2014) and COSMOS (Karim et al., 2011) data sets and our 610 MHz GMRT data set of ELAIS-N1, although the sSFR-redshift evolution parameter, \(n\), that we measure is slightly higher in our data. We also measure a slightly steeper mass gradient, \(\beta\).
### Comparison of our MS for SFGs to previous studies
The UV, IR and radio wavelengths have been used in the literature to characterize the star formation properties of different classes of sources, by investigating their star formation rates. We provide in Figure 13 a comparison of our radio-stacking-based measurement of the sSFR evolution with other works in the literature. We compare the evolution of the sSFR derived for five different redshift bins to the MS trends observed by other studies in the literature. These measurements from previous studies were conducted in the _SFR-M\({}_{\star}\)_ plane. To illustrate the scientific value of our stacked data, we convert to the _sSFR-M\({}_{\star}\)_ plane in order to ease comparison. The solid grey vertical lines in each panel represent the mass completeness limit, \(M_{\rm lim}\), above which we perform the fitting in subsections 4.4 and 4.5. Lee et al. (2015) used a rest-frame color-color diagram, \((NUV-r)\) versus \((r-K)\), to study the MS in the COSMOS field at \(0.3<z<1.3\). Schreiber et al. (2015) conducted a stacking analysis of \(UVJ\)-selected galaxies in the deep Herschel PACS maps of the CANDELS fields. They demonstrated that galaxies from \(z=4\) to 0 of all stellar masses follow the MS for SFGs. Tomczak et al. (2016) performed a stacking analysis of \(UVJ\)-selected galaxies, combining the mean IR luminosity with the mean \(NUV\) luminosity to derive SFRs. We compare their fits, solid green curves, in a similar redshift range to our stacked points. Pearson et al. (2018) selected SFGs following the \(UVJ\) selection as in Whitaker et al. (2014) and traced the MS over \(0.2\leq z<6.0\) and \(10^{9.0}<M_{\star}/M_{\odot}<10^{11.0}\). Their simple two-parameter power-law fits are shown as solid violet lines in each panel of Figure 13. Thorne et al. (2021) used multiwavelength photometry from the _Deep Extragalactic Visible Legacy Survey_ (DEVILS, Davies et al., 2018) and measured stellar masses and SFRs for galaxies in the COSMOS field, mapping the evolution of the _SFR-M\({}_{\star}\)_ relation over the redshift range \(0<z<4.25\). Their fits, obtained by adapting the parameterisation from Lee et al. (2015) and adding a slope to freely model the SFR at high stellar masses, are shown as solid brown curves. Cooke et al. (2023) investigated the _SFR-M\({}_{\star}\)_ relationship of SFGs in the COSMOS field over \(0<z<3.5\). The fitted MS curves measured from their construction of _FUV-FIR_ SEDs of a stellar-mass-selected sample are shown in the individual panels of Figure 13 as dashed red curves. We adopt the Chabrier (2003) initial mass function (IMF) since all but the Schreiber et al. (2015) data points used this IMF. In the case of Schreiber et al. (2015), who applied a Salpeter (1955) IMF, we multiplied the stellar masses by a constant factor of 0.62 (\(M_{\star,\rm Chabrier}=0.62\,M_{\star,\rm Salpeter}\)) to convert to Chabrier. Deep and wide-area radio surveys, such as our 610 MHz data, are powerful tools to study a range of source populations. These comparisons demonstrate that the measurements from the literature are consistent with our derived radio-stacked measurements for SFGs. Similar to the IR, the radio emission at 610 MHz is an equally good tracer of the SFR in SFGs.
## 6 Conclusions and Future Work
We combined deep multi-wavelength optical and infrared observations from the LoTSS deep-fields survey with deep 610 MHz GMRT observations to conduct a stacking analysis of star-forming galaxies at \(0.1\leq z\leq 1.5\). The depth of our 610 MHz data represents a potentially very useful tool to address the role of SFGs in galaxy evolution. We have stacked, below the \(\sim\)40 \(\mu\)Jy sensitivity of the 610 MHz GMRT radio observations, at the positions of \(K\)-selected sources in the ELAIS-N1 field (for \(K\)-band \(<22.7\), sensitive to \(0.1\leq z\leq 1.5\)). We removed quiescent galaxies and sources suspected of hosting active AGN from all samples based on optical, X-ray and IR indicators.
Figure 12: Top: Comparison of gradient \(\beta\) against stellar mass as a function of redshift for the total (left) and the star-forming galaxy population (right). The open red squares represents measurements from Karim et al. (2011), whereas the open blue squares represent measurements from Zwart et al. (2014). Bottom: Comparison of redshift evolution parameter of \(n\) sSFR as a function of stellar mass for all galaxies (left panel) and the SFG population (right panel).
Using a median image-stacking technique, which is best applied in the radio regime where the angular resolution is high, we have measured stellar-mass-dependent average star formation rates in the redshift range \(0.1\leq z\leq 1.5\). Our principal findings can be summarized as follows:
* \(r\) color, optical spectroscopy, X-ray information, and IR colours to separate quiescent galaxies and AGN-driven sources from SFGs of redshift and mass-selected galaxies.
* We used median single-pixel stacking, converting the stacked radio fluxes to SFRs. We applied the Bell (2003) relationship between radio luminosity and SFR, calibrated on local galaxies, to high-redshift, high-SFR galaxies, and for the first time studied the relationship between radio-stacked sSFR, stellar mass and redshift using deep 610 MHz data (a minimal sketch of this calibration step is given after this list).
* We subdivided our sample into stellar-mass and redshift bins and fit the sSFRs as a separable function of stellar mass and redshift in each bin. We found that sSFR falls with stellar mass for both our full and SFG samples. Hence the 'downsizing' scenario is supported by our 610 MHz data because we measure \(\beta<0\), implying that galaxies tend to form their stars more actively at higher redshifts.
* We report an average of mass slope \(\langle\beta_{\rm All}\rangle=-0.49\pm 0.01\) for all galaxies and \(\langle\beta_{\rm SFG}\rangle=-0.42\pm 0.02\) for the SFG population.
* We report a strong increase of the sSFR with redshift, for a given stellar mass, that is best parameterized by a power law \(\propto(1+z)^{4.94}\) for all galaxies. The SFG population is best parameterized by a power law \(\propto(1+z)^{3.51}\).
* The sSFR appears to flatten at \(z>1.0\) for \(M_{\star}>10^{10.5}M_{\odot}\). The sSFR \(-M_{\star}\) relation is steeper at low masses than at high masses (i.e. a flattening is present). Furthermore, the most massive galaxies in both the full sample and the SFG sample consistently exhibit the lowest sSFRs at all redshifts.
* We compare our stacked sSFR estimates to previous measurements in the _sSFR-\(M_{\star}\)_ plane, and the evolution of the MS. We find good agreement with these previous measurements. This result opens the possibility of using the radio bands at low frequency to estimate the SFR even in the hosts of quiescent galaxies and bright AGN.
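As referenced in the stacking bullet above, the stacked radio fluxes are converted to SFRs through the Bell (2003) calibration. A minimal sketch of that step is given below; the numerical coefficients are those commonly quoted for the Bell (2003) relation, while the 610 MHz-to-1.4 GHz scaling, the adopted spectral index and the example luminosity are illustrative assumptions rather than the exact values used in this work.

```python
import numpy as np

L_C = 6.4e21  # W/Hz, characteristic 1.4 GHz luminosity in the Bell (2003) calibration

def sfr_bell2003(L_1p4GHz):
    """SFR in Msun/yr from the 1.4 GHz luminosity in W/Hz (Bell 2003)."""
    L = np.atleast_1d(np.asarray(L_1p4GHz, dtype=float))
    sfr = 5.52e-22 * L
    faint = L <= L_C
    sfr[faint] = 5.52e-22 * L[faint] / (0.1 + 0.9 * (L[faint] / L_C) ** 0.3)
    return sfr

def l610_to_l1400(L_610MHz, alpha=0.8):
    """Scale a 610 MHz luminosity to 1.4 GHz assuming S_nu ~ nu^-alpha (assumed index)."""
    return L_610MHz * (1400.0 / 610.0) ** (-alpha)

# Illustrative stacked 610 MHz luminosity (hypothetical value)
L610 = 3.0e22  # W/Hz
print(sfr_bell2003(l610_to_l1400(L610)))  # stacked SFR in Msun/yr
```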
In view of the wealth of multi-wavelength information provided by the LoTSS catalogue, there still exist significant opportunities to expand this work. A more comprehensive science analysis through stacking will include:
* Surveys at low frequencies, where extensive surveys exist, present different and complementary views of radio sources to those of high-frequency surveys. We aim to further compare our findings at 610 MHz with results from LOFAR 150 MHz.
* To undertake the exploitation of the radio luminosity functions (RLFs) of these distinct galaxy populations measured above and below the detection threshold of these surveys, using a Bayesian model-fitting technique. Extending this technique to study the cosmic star formation rate density (SFRD) at high redshifts.
Future radio surveys will be dominated by galaxies substantially fainter than those in this current sample. The prospects for studying the faint radio sky are very bright, as we are being rapidly flooded with survey data by SKA pathfinders. In conjunction with other multi-wavelength facilities, such as Euclid (Amendola et al., 2018) and the Vera C. Rubin Legacy Survey of Space and Time (LSST; Ivezic et al., 2019), these projects will survey the sky vastly faster than is possible with existing radio telescopes.
## Acknowledgements
We would like to thank the anonymous referee for their careful comments which led to a highly improved paper. This research was supported by the Korea Astronomy and Space Science Institute under the R&D program (Project No. 2022186804) supervised by the Ministry of Science and ICT. JMS acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), 2019-04848. CHIC acknowledges the support of the Department of Atomic Energy, Government of India, under the project 12-R&D-TFR-5.02-0700. EFO would like to acknowledge the hospitality of the Inter-University Institute for Data Intensive Astronomy (IDIA) which is a partnership of the University of Cape Town, the University of Pretoria, the University of the Western Cape and the South
Figure 13: Comparison of the radio-stacked based measurement of sSFR for SFGs to the MS trends observed by Lee et al. (2015), Schreiber et al. (2015), Tomczak et al. (2016), Pearson et al. (2018), Thorne et al. (2021), Cooke et al. (2023), shown in each panel as dash blue curves, solid grey curves, solid violet lines, solid green curves, solid brown curves, and dash red curves respectively. The solid grey vertical lines in each panel represents the mass completeness limit, \(M_{\rm lim}\).
African Radio Astronomy Observatory. MV acknowledges financial support from the Inter-University Institute for Data Intensive Astronomy (IDIA), a partnership of the University of Cape Town, the University of Pretoria, the University of the Western Cape and the South African Radio Astronomy Observatory, and from the South African Department of Science and Innovation's National Research Foundation under the ISARP RADIOSKY2020 Joint Research Scheme (DSI-NRF Grant Number 113121) and the CSUR HIPPO Project (DSI-NRF Grant Number 1212910). We acknowledge the use of the lifit cloud computing facility-[https://www.lifit.ac.za](https://www.lifit.ac.za), partnership between the University of Cape Town, the University of the Western Cape, the University of Stellenbosch, Sol Plaatje University, the Cape Peninsula University of Technology and the South African Radio Astronomy Observatory. The lifit facility is supported by contributions from the Inter-University Institute for Data Intensive Astronomy (IDIA - a partnership between the University of Cape Town, the University of Pretoria, the University of the Western Cape and the South African Radio Astronomy Observatory), the Computational Biology division at UCT and the Data Intensive Research Initiative of South Africa (DIRISA). We thank Ben Keller for sharing his PASTA stacking code with us before its public release and for his helpful advice on the installation of the code on the lifitu cloud computing facility at the Inter-University Institute for Data Intensive Astronomy (IDIA).
## Data Availability
The derived data generated in this research will be shared upon reasonable request to the corresponding author. The LOFAR science-ready multi-wavelength data is available from [https://lofar-surveys.org/deepfields_public_en1.html](https://lofar-surveys.org/deepfields_public_en1.html).
## Software
This work relies on the Python programming language ([https://www.python.org/](https://www.python.org/)). The Python Astronomical Stacking Tool Array (PASTA) program, developed at the University of Calgary by Ben Keller and Jeroen Stil, is available at [https://github.com/bwkeller/PASTA](https://github.com/bwkeller/PASTA). We used astropy ([https://www.astropy.org/](https://www.astropy.org/); Astropy Collaboration et al. 2013, 2018), numpy ([https://numpy.org/](https://numpy.org/)), and matplotlib ([https://matplotlib.org/](https://matplotlib.org/)).
|
2301.09920 | Cancellation effects as a fingerprint of quantum collapse models at
atomic scale | In this work the spontaneous electromagnetic radiation from atomic systems,
induced by dynamical wave-function collapse, is investigated in the X-rays
domain. Strong departures are evidenced with respect to the simple cases
considered until now in the literature, in which the emission is either
perfectly coherent (protons in the same nuclei) or incoherent (electrons). In
this low-energy regime the spontaneous radiation rate strongly depends on the
atomic species under investigation and, for the first time, is found to depend
on the specific collapse model. | Kristian Piscicchia, Sandro Donadi, Simone Manti, Angelo Bassi, Maaneli Derakhshani, Lajos Diosi, Catalina Curceanu | 2023-01-24T11:06:10Z | http://arxiv.org/abs/2301.09920v2 | Surprising results of a refined dynamical collapse spontaneous radiation study: CSL and gravity related collapse have distinctive features at atomic orbits wavelength scale
###### Abstract
The experimental search of spontaneous radiation signal in the \(\gamma\)-Rays range produced strong bounds on the models of dynamical wave function collapse, in particular on the Continuous Spontaneous Localization and on the Diosi-Penrose. Ongoing and future experiments are moving the investigation to the X-Rays domain, also motivated by the introduction of non-Markovian modifications of the original theories, which require a scan of the spontaneous emission phenomenon for decreasing energies. In this work the spontaneous radiation rate, for an atomic system, is generalized to contemplate photons' wavelengths of the order of the atomic orbits size, i.e. photons' energies in the X-Rays range. The simple high-energy limit of the rate undergoes a strong correction, due to a complex interplay among the photon wavelength, the distances among the emitting particles, and the characteristic correlation lengths of the models. Moreover the spontaneous radiation rate energy distribution is found to depend on the specific collapse mechanism, thus opening a new experimental perspective to discriminate among the theories.
## I Introduction
The experimental investigation of the spontaneous radiation emission, induced by the process of dynamical wave function collapse, was performed for both the Continuous Spontaneous Localization (CSL) [1] and the Diosi-Penrose (DP) [2] models in the energy domain of the \(\gamma\)-Rays, by comparing the radiation spectrum measured by high-purity germanium crystals with the spontaneous emission rate predicted by the models for the atomic systems which constitute the experimental setup. The obtained strong bounds, combined with constraints provided by other experimental tests and theoretical considerations, led to the consideration of refined dynamical reduction models embedding dissipative and non-Markovian effects, in order to counteract the runaway energy increase. In particular, non-Markovian models require the introduction of
a cutoff frequency in the noise spectrum. For this reason, a systematic scan of the spontaneous radiation phenomenon as a function of the decreasing energy is mandatory. The search for spontaneous radiation emitted by Germanium crystals was performed in the X-Rays domain in [3] (for \(E\in(15-50)\) keV), and more recently in [4] (for \(E\in(19-100)\) keV).
In [3] a formula for the expected spontaneous radiation rate from quasi-free electrons was applied; the expected radiation depends on the photon energy as \(1/E\) and is proportional to the number of quasi-free electrons [5]. This formula is not suitable to describe the more complex phenomenology of the spontaneous radiation emitted by the whole atomic system, as we discuss below. More refined calculations are presented in [1; 2], where the CSL and the DP rates are calculated for an atomic system in the limit in which the spontaneous photon wavelength \(\lambda_{\gamma}\) is intermediate between the nuclear dimension and the mean radius of the lowest-lying atomic orbit. The rate turns out to be proportional, for both CSL and DP, to \(1/E\cdot(N_{p}^{2}+N_{e})\), where \(N_{p}\) and \(N_{e}\) are, respectively, the number of protons and electrons of the atom under study. In this regime different collapse models share the same expected shape for the energy distribution of the spontaneous emission rate, the scaling factor being proportional to combinations of constants of nature with the characteristic parameters of the models, which are \(\lambda\) and \(r_{C}\) (strength and correlation length of the collapse-inducing noise) for the CSL and the correlation length \(R_{0}\) for the DP (the role of the strength being played by the gravitational constant \(G\) in the DP model). The latter theoretical rates were also applied in [4], where the X-Rays spectrum measured by Germanium crystals is analyzed in the energy range (19-100) keV, in which \(\lambda_{\gamma}\) is comparable with the mean radii of the atomic orbits of Germanium.
Given the importance of moving the search for the spontaneous radiation signal from the high-energy domain of the \(\gamma\)-Rays to the X-Rays, we derive in this work the rate formulas in this range. Preliminary analyses suggest that the semiclassical approach we will use in this work is valid above energies of the order of 1 keV, while for lower energies a fully quantum mechanical analysis is required, see for example the analysis of the hydrogen atom in [6]. We will see that, contrary to the cases considered until now in the literature, where the emission is either perfectly coherent (protons in the same nuclei) or incoherent (electrons), in this regime an intermediate behaviour arises.
In Section II the radiation emission rate predicted for the CSL model in [1] is generalized to also describe spontaneous emission in the regime \(\lambda_{\gamma}\sim\rho_{o}\), with \(\rho_{o}\) a typical atomic orbit mean radius, for both white and non-Markovian scenarios. The general rate is found to exhibit a non-trivial energy dependence, which is strongly influenced by the interplay among \(\lambda_{\gamma}\), \(\rho_{o}\) and \(r_{C}\). When \(\lambda_{\gamma}\) becomes comparable with the atomic size, the balance between the coherent - i.e. quadratic - emission by the electrons and the cancellation between the proton and electron contributions gives rise to a strong energy dependence, _departing_ from the \(1/E\) behaviour. This is determined by the specific atomic structure, i.e. a different spontaneous emission rate is expected
for different atomic number materials.
In Section III the analogous formulas for the induced spontaneous radiation rates for the Markovian and non-Markovian DP model are derived and the specific dependence on \(\lambda_{\gamma}\), \(\rho_{o}\) and \(R_{0}\) is obtained.
As pointed out in Section IV, the most intriguing result of this analysis consists in the prediction of a peculiar energy distribution of the spontaneous radiation rate, depending on the specific model of wave function collapse. This finding opens new scenarios in the experimental investigation of the spontaneous radiation, a specifically designed measurement, sensitive to this signature of the collapse, would be able to recognize the most probable pattern of dynamical wave function reduction.
## II Spontaneous emission rate, general expression for white and coloured CSL
The rate of the spontaneous radiation emitted by an atomic system, in the context of the Markovian CSL model, was derived in [1]:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{CSL}=\frac{\hbar\,\lambda}{6\,\pi^{2}\, \epsilon_{0}\,c^{3}\,m_{0}^{2}\,E}\sum_{i,j}\frac{q_{i}\,q_{j}}{m_{i}\,m_{j}} \cdot f_{ij}\cdot\frac{\sin(b_{ij})}{b_{ij}}, \tag{1}\]
where \(b_{ij}=2\pi|\mathbf{r}_{i}-\mathbf{r}_{j}|/\lambda_{\gamma}\), with \(\lambda_{\gamma}\) the wavelength of the spontaneously emitted photon and \(q_{j}\) and \(m_{j}\) represent, respectively, the charge and the mass of the \(j\)-th particle. \(\mathbf{r}_{j}\) denotes the position of the j-th particle, which in general changes in time, however what matters in the following calculation are the relative distances between the particles, hence \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\) will be approximated with the mean orbital radii or the mean distances among the atomic electrons.
The term \(f_{ij}\) depends on the particle mass density \(\mu\) and encodes the balance between the emitters' distances and the correlation length of the model \(r_{C}\). It is shown in [1] that \(f_{ij}\) reduces to
\[f_{ij}\simeq\frac{3}{2}\frac{m_{i}m_{j}}{r_{C}^{2}} \tag{2}\]
when the particles radii are much smaller than the correlation length and \(r_{C}\gg|\mathbf{r}_{i}-\mathbf{r}_{j}|\). The first condition holds in the regime that we are analyzing, since we are interested in the spontaneous radiation emitted by protons and electrons in an atom, and \(r_{C}\) is constrained to be greater than several Angstroms by combining theoretical bounds with the limits obtained from the study of the expansion of a Bose-Einstein condensate (see e.g. [1; 7]). Concerning the second condition, the above-mentioned limits from cold atom experiments (see [7; 8]) imply that \(r_{C}\gtrsim|\mathbf{r}_{i}-\mathbf{r}_{j}|\), where \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\) indicates electron-electron or electron-proton distances. This constraint was shown to be weakly dependent on the cutoff frequency introduced by a non-Markovian generalization of the collapse model. On the other side, stronger constraints \(r_{C}\gg|\mathbf{r}_{i}-\mathbf{r}_{j}|\) are implied by
spontaneous radiation search experiments ([1; 3; 4]), but they are only valid under the white noise assumption. Indeed, as also shown below, the spontaneous radiation exhibits a stronger dependence on the cutoff, which is to be considered when analyzing the non-Markovian formulations. For this reason the general expression for \(f_{ij}\) is considered here [1]:
\[f_{ij}=\sum_{k=x,y,z}\int d^{3}s\int d^{3}s^{\prime}\,e^{-\frac{(\mathbf{r}_{i} -\mathbf{r}_{j}+\mathbf{s}^{\prime}-\mathbf{s})^{2}}{4r_{C}^{2}}}\,\frac{ \partial\mu_{i}(\mathbf{s})}{\partial s^{k}}\frac{\partial\mu_{j}(\mathbf{s}^ {\prime})}{\partial s^{\prime k}}=\]
\[=\int d^{3}s\int d^{3}s^{\prime}\,\mu_{i}(\mathbf{s})\mu_{j}(\mathbf{s}^{ \prime})\left\{e^{-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j}+\mathbf{s}^{\prime}- \mathbf{s})^{2}}{4r_{C}^{2}}}\,\frac{1}{2r_{C}^{2}}\left[3-\frac{(\mathbf{r}_ {i}-\mathbf{r}_{j}+\mathbf{s}^{\prime}-\mathbf{s})^{2}}{2r_{C}^{2}}\right] \right\}. \tag{3}\]
Assuming point-like mass densities
\[\mu_{i}(\mathbf{r})=m_{i}\,\delta(\mathbf{r}) \tag{4}\]
we have
\[f_{ij}=\frac{m_{i}\,m_{j}}{2r_{C}^{2}}\,e^{-\frac{(\mathbf{r}_{i}-\mathbf{r}_ {j})^{2}}{4r_{C}^{2}}}\left(3-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}}{2r_{ C}^{2}}\right), \tag{5}\]
which reduces to Eq. (2) in the limit \(r_{C}\gg|\mathbf{r}_{i}-\mathbf{r}_{j}|\).
As mentioned, strong limits on the characteristic parameters of the CSL and DP models were imposed by means of experimental investigations of the spontaneous \(\gamma\)-Rays radiation emission [1; 2] in the range \((1-3.8)\) MeV. In this energy interval \(\lambda_{\gamma}\) is intermediate between the nuclear and atomic dimensions and Eq. (1) reduces to the simple expression:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{CSL}=\frac{\hbar\,e^{2}\,\lambda}{4\, \pi^{2}\,\epsilon_{0}\,c^{3}\,r_{C}^{2}\,m_{0}^{2}\,E}\left(N_{p}^{2}+N_{e} \right), \tag{6}\]
where \(N_{p}\) and \(N_{e}\) are the numbers of protons and electrons in the atom under consideration. Much more complex is the situation when \(\lambda_{\gamma}\) is of the order of the atomic orbit radii, as is the case studied in [3; 4]. If the correlation length \(r_{C}\) exceeds the distance among the emitters, then the stochastic field "vibrates" them coherently, and if \(\lambda_{\gamma}\) is also greater than the emitters' distances then they emit coherently, and the contribution from oppositely charged particles cancels. Intermediate regimes for \(\lambda_{\gamma}\) and \(r_{C}\), in comparison with \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\), give rise to a complex pattern, in which the shape of the expected spontaneous radiation spectrum exhibits a non-trivial energy dependence, which is influenced by the atomic structure.
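As a simple worked example of the high-energy limit, for a Germanium atom (\(N_{p}=N_{e}=32\)) the factor in Eq. (6) is \(N_{p}^{2}+N_{e}=1024+32=1056\), i.e. the coherent proton contribution exceeds the incoherent electron contribution by more than an order of magnitude.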
In more detail, let us rewrite Eq. (1), using the general expression for \(f_{ij}\) of Eq. (5), by considering the separate contributions of the protons (\(i_{p},j_{p}\)), of the electrons (\(i_{e},j_{e}\)) and of the combined electron-proton emission (\(i_{p},j_{e}\)) and (\(i_{e},j_{p}\)):
\[\frac{d\Gamma}{dE}\bigg{|}_{t}^{CSL}=\frac{\hbar\,\lambda}{12\,\pi^{2}\,\epsilon_{ 0}\,c^{3}\,m_{0}^{2}\,r_{C}^{2}\,E}.\]
\[\left[\sum_{ip,jp}q_{ip}\,q_{jp}\cdot\frac{\sin(2\pi|{\bf r}_{ip}-{\bf r}_{jp}|/\lambda_{\gamma})}{2\pi|{\bf r}_{ip}-{\bf r}_{jp}|/\lambda_{\gamma}}\,e^{-\frac{({\bf r}_{ip}-{\bf r}_{jp})^{2}}{4r_{C}^{2}}}\left(3-\frac{({\bf r}_{ip}-{\bf r}_{jp})^{2}}{2r_{C}^{2}}\right)\,+\right.\]
\[+\sum_{ip,je}q_{ip}\,q_{je}\cdot\frac{\sin(2\pi|{\bf r}_{ip}-{\bf r}_{je}|/ \lambda_{\gamma})}{2\pi|{\bf r}_{ip}-{\bf r}_{je}|/\lambda_{\gamma}}\,e^{- \frac{({\bf r}_{ip}-{\bf r}_{je})^{2}}{4r_{C}^{2}}}\left(3-\frac{({\bf r}_{ip} -{\bf r}_{je})^{2}}{2r_{C}^{2}}\right)\,+\]
\[+\left.\sum_{ie,jp}q_{ie}\,q_{jp}\cdot\frac{\sin(2\pi|{\bf r}_{ie}-{\bf r}_{jp }|/\lambda_{\gamma})}{2\pi|{\bf r}_{ie}-{\bf r}_{jp}|/\lambda_{\gamma}}\,e^{- \frac{({\bf r}_{ie}-{\bf r}_{jp})^{2}}{4r_{C}^{2}}}\left(3-\frac{({\bf r}_{ ie}-{\bf r}_{jp})^{2}}{2r_{C}^{2}}\right)\,+\right.\]
\[+\left.\sum_{ie,je}q_{ie}\,q_{je}\cdot\frac{\sin(2\pi|{\bf r}_{ie}-{\bf r}_{je }|/\lambda_{\gamma})}{2\pi|{\bf r}_{ie}-{\bf r}_{je}|/\lambda_{\gamma}}\,e^{- \frac{({\bf r}_{ie}-{\bf r}_{je})^{2}}{4r_{C}^{2}}}\left(3-\frac{({\bf r}_{ ie}-{\bf r}_{je})^{2}}{2r_{C}^{2}}\right)\right] \tag{7}\]
Let us analyze the three contributions separately:
1. The first sum in Eq. (7) is performed on the protons in the nucleus, in the regime that we are analyzing we have \(|{\bf r}_{ip}-{\bf r}_{jp}|/\lambda_{\gamma}\ll 1\) and \(({\bf r}_{ip}-{\bf r}_{jp})^{2}/r_{C}^{2}\ll 1\) hence the sum reduces to: \[3\sum_{ip,jp}q_{ip}\,q_{jp}=3\,\left(\sum_{ip}q_{ip}\right)^{2}=3\,e^{2}\,N_{p }^{2}.\] (8)
2. The arguments of the second and third sums are symmetric under the exchange \(i\to j\). For any electron \(j_{e}\) belonging to the \(o\)-th orbit of the atom we approximate: \[|{\bf r}_{ip}-{\bf r}_{je}|\sim\rho_{o}\,\,,\,\,\forall\,ip\,,\] (9) \(\rho_{o}\) represents the mean radius of the orbit hosting the \(je\)-th electron. Hence, in terms of the energy of the spontaneously emitted photon, the second and third sum yield: \[-2\,e^{2}\,N_{p}\,\sum_{o}N_{o}\,e^{-\frac{\rho_{o}^{2}}{4r_{C}^{2}}}\,\left(3 -\frac{\rho_{o}^{2}}{2r_{C}^{2}}\right)\,\frac{\sin\left(\frac{\rho_{o}\,E}{ \hbar\,c}\right)}{\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)}.\] (10)
3. Concerning the last sum, if \(ie=je\) then \[3\sum_{ie,je,\,ie=je}q_{ie}q_{je}=3\,\sum_{ie}q_{ie}^{2}=3\,e^{2}\,N_{e}.\] (11) For \(i_{e},j_{e}\) belonging to the same orbit, with \(ie\neq je\), we assume \(|{\bf r}_{ie}-{\bf r}_{je}|\sim 2\rho_{o}\). More refined calculations, involving the average distance between two electrons, could be obtained from the two-particle wave-function [9], but variations from the adopted approximation only play a role at energies lower than 1 keV and are negligible in the energy range under study. Considering that the total number of pairs in the \(o\)-th orbit is \(N_{o}\,(N_{o}-1)\), we find: \[e^{2}\,\sum_{o}N_{o}\,(N_{o}-1)\,e^{-\frac{\rho_{o}^{2}}{r_{C}^{2}}}\,\left(3-\frac{2\rho_{o}^{2}}{r_{C}^{2}}\right)\,\frac{\sin\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)\cdot\cos\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)}{\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)}\,.\] (12) For \(i_{e},j_{e}\) belonging to different orbits \(o\) and \(o^{\prime}\) we take \(|{\bf r}_{ie}-{\bf r}_{je}|\sim|\rho_{o}-\rho_{o^{\prime}}|\). The corresponding contribution to the sum is: \[2e^{2}\,\sum_{o\,o^{\prime}\,\text{pairs}}N_{o}\,N_{o^{\prime}}\,\frac{\sin\left[\frac{|\rho_{o}-\rho_{o^{\prime}}|\,E}{\hbar\,c}\right]}{\left[\frac{|\rho_{o}-\rho_{o^{\prime}}|\,E}{\hbar\,c}\right]}e^{-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{4r_{C}^{2}}}\,\left(3-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{2r_{C}^{2}}\right).\] (13)
By substituting Eqs. (8)-(13) in Eq. (7) the general expression for the spontaneous emission rate, due to an atomic system, for a Markovian CSL is:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{CSL}=\frac{\hbar\,e^{2}\,\lambda}{12\, \pi^{2}\,\epsilon_{0}\,c^{3}\,m_{0}^{2}\,r_{C}^{2}\,E}.\]
\[\cdot\left\{3\,N_{p}^{2}+3\,N_{e}+2\sum_{o\,o^{\prime}\,\text{pairs}}N_{o}\,N_{o^{\prime}}\,\frac{\sin\left[\frac{|\rho_{o}-\rho_{o^{\prime}}|\,E}{\hbar\,c}\right]}{\left[\frac{|\rho_{o}-\rho_{o^{\prime}}|\,E}{\hbar\,c}\right]}e^{-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{4r_{C}^{2}}}\,\left(3-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{2r_{C}^{2}}\right)+\right.\]
\[\left.+\sum_{o}N_{o}\,\frac{\sin\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)}{\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)}\cdot\left[(N_{o}-1)\,e^{-\frac{\rho_{o}^{2}}{r_{C}^{2}}}\,\left(3-\frac{2\rho_{o}^{2}}{r_{C}^{2}}\right)\,\cos\left(\frac{\rho_{o}\,E}{\hbar\,c}\right)-\right.\right.\]
\[\left.\left.-2\,N_{p}\,e^{-\frac{\rho_{o}^{2}}{4r_{C}^{2}}}\,\left(3-\frac{ \rho_{o}^{2}}{2r_{C}^{2}}\right)\right]\right\}. \tag{14}\]
Eq. (14) has to be interpreted in terms of the interplay among the sizes of the emitters' distances, with respect to the photon wave-length \(\lambda_{\gamma}\) (corresponding to the energy range under analysis) and the model correlation length \(r_{C}\). As an example let us keep \(r_{C}\) much greater than the typical atomic sizes (as found in [1]), when \(\lambda_{\gamma}\) becomes of the order of the mean orbit radii of the atom under study, then the electrons of the
corresponding orbits start to emit coherently, i.e. proportionally to the square of their number. Nevertheless the corresponding increase in the expected spontaneous emission rate is counteracted by the _cancellation_, among oppositely charged particles whose distance is exceeded by \(\lambda_{\gamma}\). In the limit in which \(\lambda_{\gamma}\) is also much bigger than the atomic size Eq. (14) reduces to:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{CSL}=\frac{\hbar\,e^{2}\,\lambda}{4\,\pi^{2}\,\epsilon_{0}\,c^{3}\,m_{0}^{2}\,r_{C}^{2}\,E}\left[N_{p}^{2}-2\cdot N_{p}\,N_{e}+N_{e}^{2}\right], \tag{15}\]
which vanishes for neutral atoms.
A concrete example for future analyses is shown in Fig. 1 (top), in the energy interval \((1-200)\) keV. The general spontaneous emission rate of Eq. (14) (red line) is compared with the simple \(1/E\) dependence of Eq. (6) (blue), which is only valid in the high-energy domain. The distributions are normalized to the common constant pre-factors to evidence differences in shape. The rate of Eq. (14) is calculated for a Germanium atom, the target which is exploited in [1; 4]. The mean radii of the Germanium orbits are obtained from an all-electron DFT [10] calculation for an isolated Germanium atom, performed with the DFT code GPAW [11]. The general rate is plotted for a prior value of the correlation length \(r_{C}=1.15\cdot 10^{-8}\) m [1]; this value is obtained by applying Eq. (6) to a \(\gamma\)-Rays survey in the range \((1-3.8)\) MeV, where Eq. (6) is an excellent approximation. As expected, the simple (blue) and the general rates converge at high energies (above 200 keV), where \(\lambda_{\gamma}\) becomes sizably smaller than the lowest atomic orbit radius. Since the prior on \(r_{C}\) is much greater than the size of the Germanium atom, the X-Rays regime is characterized by a balance between the electrons' and protons' coherent emission and the cancellation of their contributions. The limit from [4] should then be re-derived based on this complex low-energy pattern.
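A minimal numerical sketch of how the shape of Eq. (14) can be evaluated is reported below. The orbital mean radii listed are illustrative placeholders (the values actually used in this work come from the GPAW all-electron DFT calculation quoted above); the occupancies follow the standard shell structure of Germanium, the common constant pre-factor is dropped, and the non-Markovian case of Eq. (18) can be obtained by multiplying the result by \(E_{c}^{2}/(E_{c}^{2}+E^{2})\).

```python
import numpy as np

HBARC = 1973.27                 # hbar*c in eV*Angstrom
R_C = 1.15e-8 * 1e10            # correlation-length prior (1.15e-8 m) in Angstrom

# Illustrative (placeholder) mean orbital radii in Angstrom and shell occupancies for Ge;
# these are NOT the DFT values adopted in this work.
rho = np.array([0.02, 0.09, 0.3, 1.2])
N_o = np.array([2, 8, 18, 4])   # N_e = 32
N_p = 32

def sinc(x):
    return np.sinc(x / np.pi)   # sin(x)/x with the correct limit at x = 0

def g(d):
    """Gaussian smearing factor of Eq. (5), without the masses."""
    return np.exp(-d**2 / (4 * R_C**2)) * (3 - d**2 / (2 * R_C**2))

def shape_factor(E_eV):
    """Energy-dependent term in curly brackets of Eq. (14), in arbitrary units."""
    k = E_eV / HBARC            # photon wavenumber in 1/Angstrom
    term = 3 * N_p**2 + 3 * N_o.sum()
    for i in range(len(rho)):                      # electron pairs in different orbits
        for j in range(i + 1, len(rho)):
            d = abs(rho[i] - rho[j])
            term = term + 2 * N_o[i] * N_o[j] * sinc(k * d) * g(d)
    for i in range(len(rho)):                      # same-orbit pairs and e-p cross terms
        term = term + N_o[i] * sinc(k * rho[i]) * (
            (N_o[i] - 1) * np.exp(-rho[i]**2 / R_C**2)
            * (3 - 2 * rho[i]**2 / R_C**2) * np.cos(k * rho[i])
            - 2 * N_p * g(rho[i])
        )
    return term

E = np.linspace(1e3, 2e5, 400)                     # 1 keV - 200 keV
general_shape = shape_factor(E) / E                # Eq. (14), up to the pre-factor
simple_shape = (N_p**2 + N_o.sum()) / E            # high-energy limit, Eq. (6)
```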
The general expression for the spontaneous emission rate, Eq. (14), encodes the phenomenology for future investigations of the spontaneous radiation at low energies (X-Rays). Comparison of the theoretical expectation with the measured spectra requires a recursive analysis: in the first step a suitable prior has to be assumed for \(r_{C}\) (e.g. the limit obtained in [1], for which the adopted theoretical rate is accurate in the analyzed energy range); an updated value for \(r_{C}\) is then obtained, which serves as input for the new prior; the analysis is then iterated until the \(r_{C}\) values converge within the experimental sensitivity.
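The iterative scheme just described can be summarized by the short sketch below, where `fit_rc_from_spectrum` is a hypothetical placeholder for the full statistical comparison of the measured spectrum with Eq. (14), passed in as a callable.

```python
def iterate_rc(fit_rc_from_spectrum, spectrum, rc_prior, rel_tol=0.01, max_iter=20):
    """Iterate the r_C extraction until prior and updated values agree within rel_tol.

    fit_rc_from_spectrum(spectrum, rc_prior) must return the updated r_C obtained by
    comparing the measured spectrum with the general rate of Eq. (14).
    """
    rc = rc_prior
    for _ in range(max_iter):
        rc_new = fit_rc_from_spectrum(spectrum, rc)
        if abs(rc_new - rc) / rc < rel_tol:        # converged within sensitivity
            return rc_new
        rc = rc_new
    return rc
```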
Figure 1: The top panel of the Figure shows in blue the simple \(1/E\) dependence Eq. (6), for the spontaneous radiation rate of a CSL model, which is only valid in the high-energy domain, compared with the general rate Eq. (14) (red) for a prior value of the correlation length \(r_{C}=1.15\cdot 10^{-8}\) m. The distributions are calculated for a Germanium atom and normalized to the common constant pre-factors. The bottom panel of the Figure shows in blue the simple \(1/E\) dependence for the spontaneous radiation rate of the DP model (Eq. (39)), which is only valid in the high-energy domain [2], compared with the general formula Eq. (38), assuming as prior for \(R_{0}\) the value \(R_{0}=0.54\) Å(red line). The distributions are calculated for Germanium and normalized to the common constant pre-factors.
To conclude this section, let us derive the general formula for the spontaneous emission rate when the noise correlation function of the CSL is not white in time. The generalization of Eq. (1) to the non-Markovian case requires multiplying the right-hand side by the Fourier transform of the noise correlation function (see e.g. [12; 13; 6; 14; 15] and the derivation of the colored DP model emission rate in Section III). For an exponentially decaying noise correlation function, characterized by a correlation time \(\Omega^{-1}\):
\[f(t-s)=\frac{\Omega}{2}\,e^{-\Omega|t-s|} \tag{16}\]
the rate becomes
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{cCSL}=\frac{\hbar\,\lambda}{12\,\pi^{2}\,\epsilon_{0}\,c^{3}\,r_{C}^{2}\,m_{0}^{2}\,E}\sum_{i,j}q_{i}\,q_{j}\frac{\sin(b_{ij})}{b_{ij}}\,e^{-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}}{4r_{C}^{2}}}\left(3-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}}{2r_{C}^{2}}\right)\times\frac{E_{c}^{2}}{E_{c}^{2}+E^{2}}. \tag{17}\]
In Eq. (17) \(E_{c}=\hbar\Omega\), and the label cCSL denotes results for a colored (non-Markovian) CSL model. The general expression for the spontaneous emission rate of the cCSL is:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{cCSL}=\left.\frac{d\Gamma}{dE}\right|_{t} ^{CSL}\times\frac{E_{c}^{2}}{E_{c}^{2}+E^{2}}. \tag{18}\]
where \(\left.\frac{d\Gamma}{dE}\right|_{t}^{CSL}\) is given in Eq. (14).
## III DP spontaneous emission rate, general expression
By analogy with the proof outlined in [1] the starting point of the semi-classical calculation of the emission rate from the charged particles of the atomic system, for a coloured DP model, is the classical expression of the total emitted power:
\[P(t)=R_{sp}^{2}\int d\Omega\,S(R_{sp}\hat{n},t) \tag{19}\]
with \(S\) being the Poynting vector at time \(t\), and the integration is performed on a spherical surface of radius \(R_{sp}\). This turns out to be
\[P(t)=\frac{1}{64\,\pi^{4}\,\epsilon_{0}\,c^{3}}\int_{-\infty}^{+\infty}d\omega \int_{-\infty}^{+\infty}d\nu\,e^{i(\omega+\nu)(t-R_{sp}/c)}\,\sum_{i,j}q_{i}q_ {j}\,\mathbb{E}[J_{ij}(\omega,\nu)] \tag{20}\]
with
\[J_{ij}(\omega,\nu)=4\pi\,\ddot{\mathbf{r}}_{i}(\omega)\cdot\ddot{\mathbf{r}}_ {j}(\nu)\frac{(b^{2}-1)\sin(b)+b\cos(b)}{b^{3}}-\]
\[-4\pi\,\ddot{r}_{i}^{z}(\omega)\ddot{r}_{j}^{z}(\nu)\frac{(b^{2}-3)\sin(b)+3b \cos(b)}{b^{3}}, \tag{21}\]
where \(b=|\omega\mathbf{r}_{i}+\nu\mathbf{r}_{j}|/c\).
In the case of the DP model, the acceleration induced by the collapse is:
\[\ddot{\mathbf{r}}_{i}(t)=-\frac{1}{m_{i}}\int d\mathbf{r}\,\left[\nabla_{\mathbf{r}_{i}(t)} \mu_{i}(\mathbf{r}-\mathbf{r}_{i}(t))\right]\phi(\mathbf{r},t) \tag{22}\]
where
\[\mu_{i}(\mathbf{r})=m_{i}g_{i}(\mathbf{r},R_{0}), \tag{23}\]
with \(g_{i}\) a function describing the mass density of size \(R_{0}\) of the \(i\)-th particle that we will specify below (see Eq. (34)) and \(\phi(\mathbf{r},t)\) a Gaussian noise field with zero average and correlation:
\[\mathbb{E}\left[\phi(\mathbf{r},t)\phi(\mathbf{r}^{\prime},t^{\prime}) \right]=G\hbar\frac{\delta(t-t^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}. \tag{24}\]
A delta correlation in time implies a Markovian dynamics. Similarly to the analysis done for the CSL model, non-Markovianity can be introduced by replacing this correlation function with one describing an exponentially decaying correlation in time, characterized by a cutoff frequency \(\Omega\), i.e.:
\[\mathbb{E}\left[\phi(\mathbf{r},t)\phi(\mathbf{r}^{\prime},t^{\prime}) \right]=\frac{G\hbar}{|\mathbf{r}-\mathbf{r}^{\prime}|}\frac{\Omega}{2}e^{-\Omega|t-t ^{\prime}|}. \tag{25}\]
Following the derivation in [1], the Fourier transform of the \(k\)-th component of the acceleration takes the form:
\[\ddot{r}_{i}^{k}(\omega) =-\int dte^{-i\omega t}\int d\mathbf{r}\,\left[\frac{\partial}{ \partial r_{i}^{k}}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0})\right]\phi(\mathbf{r},t)\] \[=\int d\mathbf{s}\,\left[\frac{\partial}{\partial s^{k}}g_{i}(\mathbf{s},R_{0})\right]\tilde{\phi}(\mathbf{s}+\mathbf{r}_{i},\omega) \tag{26}\]
where in the second line we performed the change of variables \(\mathbf{s}=\mathbf{r}-\mathbf{r}_{i}\) and introduced \(\tilde{\phi}(\mathbf{r},\omega):=\int dte^{-i\omega t}\phi(\mathbf{r},t)\), with zero average and correlation
\[\mathbb{E}\left[\tilde{\phi}(\mathbf{r},\omega)\tilde{\phi}(\mathbf{r}^{ \prime},\nu)\right]=2\pi G\hbar\left(\frac{\Omega^{2}}{\omega^{2}+\Omega^{2}} \right)\frac{\delta(\nu+\omega)}{|\mathbf{r}-\mathbf{r}^{\prime}|}.\]
We can now focus on the average of the scalar product of two accelerations, necessary for computing \(J_{ij}(\omega,\nu)\) in Eq. (21):
\[\mathbb{E}\left[\ddot{\mathbf{r}}_{i}(\omega)\cdot\ddot{\mathbf{r}}_{j}(\nu) \right]=2\pi\hbar G\delta(\omega+\nu)\frac{\Omega^{2}}{\Omega^{2}+\omega^{2}} f_{ij} \tag{27}\]
where \(f_{ij}=\sum_{k=x,y,z}f_{ij}^{k}\) with
\[f_{ij}^{k}=\int d^{3}s\int d^{3}s^{\prime}\frac{1}{|\mathbf{r}_{i}-\mathbf{r} _{j}+\mathbf{s}-\mathbf{s}^{\prime}|}\frac{\partial g_{i}(\mathbf{s},R_{0})}{ \partial s^{k}}\frac{\partial g_{j}(\mathbf{s}^{\prime},R_{0})}{\partial s^{ \prime k}}. \tag{28}\]
By assuming \(f_{ij}^{z}=f_{ij}/3\), which holds for spherically symmetric mass distributions, we have:
\[\mathbb{E}\left[J_{ij}(\omega,\nu)\right]=8\pi^{2}\hbar G\delta(\omega+\nu)\frac {\Omega^{2}}{\Omega^{2}+\omega^{2}}f_{ij}\frac{2}{3}\frac{\sin(b)}{b} \tag{29}\]
and
\[P(t)=\frac{G\hbar}{12\pi^{2}\epsilon_{0}c^{3}}\int_{-\infty}^{+\infty}d\omega \ \sum_{i,j}q_{i}\,q_{j}f_{ij}\frac{\sin(b)}{b}\frac{\Omega^{2}}{\Omega^{2}+\omega ^{2}}. \tag{30}\]
Since
\[P(t)=\int_{0}^{+\infty}d\omega\,\hbar\,\omega\frac{d\Gamma_{t}}{d\omega}, \tag{31}\]
the emission rate is given by:
\[\frac{d\Gamma}{dE}\bigg{|}_{t}^{cDP}=\frac{G}{6\pi^{2}\epsilon_{0}c^{3}\omega }\sum_{i,j}q_{i}\,q_{j}f_{ij}\frac{\sin(b)}{b}\frac{\Omega^{2}}{\Omega^{2}+ \omega^{2}}. \tag{32}\]
In analogy with the expected emission rate for the CSL model, the interplay between the particles' mean distance and the wavelength of the spontaneously emitted photon is contained in the terms \(\sin(b)/b\). The dependence on the particles' distances relative to the correlation length \(R_{0}\) of the model is instead specified by the terms \(f_{ij}\), as shown in what follows. By substituting \(\mathbf{s}=\mathbf{r}-\mathbf{r}_{i}\) and \(\mathbf{s}^{\prime}=\mathbf{r}^{\prime}-\mathbf{r}_{j}\) in Eq. (28) and summing over \(k=x,y,z\), we have
\[f_{ij} =\sum_{k=x,y,z}\int d\mathbf{r}\int d\mathbf{r}^{\prime}\left[\frac{ \partial}{\partial r^{k}}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0})\right]\left[\frac{ \partial}{\partial r^{\prime k}}g_{j}(\mathbf{r}^{\prime}-\mathbf{r}_{j},R_{0}) \right]\frac{1}{|\mathbf{r}-\mathbf{r}^{\prime}|}=\] \[=\int d\mathbf{r}\int d\mathbf{r}^{\prime}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0}) g_{j}(\mathbf{r}^{\prime}-\mathbf{r}_{j},R_{0})\sum_{k=x,y,z}\left[\frac{\partial}{ \partial r^{\prime k}}\frac{\partial}{\partial r^{k}}\frac{1}{|\mathbf{r}-\mathbf{r}^ {\prime}|}\right]=\] \[=\int d\mathbf{r}\int d\mathbf{r}^{\prime}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0}) g_{j}(\mathbf{r}^{\prime}-\mathbf{r}_{j},R_{0})\left[-\nabla_{r}^{2}\frac{1}{|\mathbf{r}-\mathbf{r}^ {\prime}|}\right]=\] \[=\int d\mathbf{r}\int d\mathbf{r}^{\prime}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0}) g_{j}(\mathbf{r}^{\prime}-\mathbf{r}_{j},R_{0})\left[4\pi\delta(\mathbf{r}-\mathbf{r}^{\prime}) \right]=\] \[=4\pi\int d\mathbf{r}g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0})g_{j}(\mathbf{r}-\mathbf{ r}_{j},R_{0}). \tag{33}\]
According to Eq. (33), regardless of the wavelength of the spontaneously emitted photon, whenever the mass density profiles are narrow with respect to \(R_{0}\) and the distance between the \(i\)-th and \(j\)-th emitters is large compared to \(R_{0}\), the corresponding contribution to the spontaneous emission rate vanishes. More quantitatively, let us assume, following [2; 16], Gaussian mass density profiles:
\[g_{i}(\mathbf{r}-\mathbf{r}_{i},R_{0})=\frac{1}{(2\pi R_{0}^{2})^{3/2}}e^{-\frac{(\mathbf{r}- \mathbf{r}_{i})^{2}}{2R_{0}^{2}}}; \tag{34}\]
Then simple steps lead to
\[f_{ij}=\frac{1}{2\pi^{1/2}R_{0}^{3}}e^{-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}}{4R_{0 }^{2}}}. \tag{35}\]
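Although it is not part of the derivation above, Eq. (35) is easy to verify numerically: up to the factor \(4\pi\), Eq. (33) is the overlap of two normalized Gaussians, so a Monte-Carlo average of one profile over samples drawn from the other reproduces the closed form. The following minimal Python sketch assumes arbitrary illustrative values for \(R_{0}\) and the particle separation.

```python
import numpy as np

def gaussian_profile(r, R0):
    """Normalized Gaussian mass-density profile of Eq. (34), evaluated at 3-vectors r."""
    return np.exp(-np.sum(r**2, axis=-1) / (2.0 * R0**2)) / (2.0 * np.pi * R0**2) ** 1.5

def f_ij_closed_form(d, R0):
    """Closed-form overlap integral, Eq. (35), for particle separation d."""
    return np.exp(-d**2 / (4.0 * R0**2)) / (2.0 * np.sqrt(np.pi) * R0**3)

def f_ij_monte_carlo(r_i, r_j, R0, n_samples=1_000_000, seed=0):
    """Estimate f_ij = 4*pi * int g_i(r - r_i) g_j(r - r_j) d^3r by sampling r from g_i."""
    rng = np.random.default_rng(seed)
    r = r_i + R0 * rng.standard_normal((n_samples, 3))
    return 4.0 * np.pi * gaussian_profile(r - r_j, R0).mean()

R0 = 0.5                                   # correlation length (arbitrary illustrative units)
r_i, r_j = np.zeros(3), np.array([0.3, 0.0, 0.0])
d = np.linalg.norm(r_i - r_j)
print(f"Monte Carlo estimate: {f_ij_monte_carlo(r_i, r_j, R0):.4f}")
print(f"Eq. (35):             {f_ij_closed_form(d, R0):.4f}")
```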
By substituting Eq. (35) in Eq. (32) the general expression for the spontaneous emission rate, from a system of charged particles, in the context of the cDP model is:
\[\frac{d\Gamma}{dE}\bigg{|}_{t}^{cDP}=\frac{G}{12\pi^{5/2}\epsilon_{0}c^{3}R_{0 }^{3}E}\sum_{i,j}q_{i}\,q_{j}\,e^{-\frac{(\mathbf{r}_{i}-\mathbf{r}_{j})^{2}}{4R_{0}^{ 2}}}\,\frac{\sin\left(\frac{|\mathbf{r}_{i}-\mathbf{r}_{j}|\,E}{h\,c}\right)}{\left( \frac{|\mathbf{r}_{i}-\mathbf{r}_{j}|\,E}{h\,c}\right)}\frac{E_{c}^{2}}{E_{c}^{2}+E^ {2}}. \tag{36}\]
where \(E_{c}=\hbar\Omega\) represents the cutoff energy corresponding to the frequency \(\Omega\). It is again interesting to specify the expression for the rate emitted by an atom, as a function of the mean radii of the atomic orbits. By repeating the steps (1)-(3) which lead from Eq. (7) to (14), it is easy to obtain:
\[\frac{d\Gamma}{dE}\bigg{|}_{t}^{cDP}=\frac{Ge^{2}}{12\pi^{5/2}\epsilon_{0}c^{3}R_{0}^{3}E}\left\{N_{p}^{2}+N_{e}+2\sum_{oo^{\prime}\,\text{pairs}}N_{o}\,N_{o^{\prime}}\frac{\sin\frac{|\rho_{o}-\rho_{o^{\prime}}|E}{\hbar c}}{\frac{|\rho_{o}-\rho_{o^{\prime}}|E}{\hbar c}}\,e^{-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{4R_{0}^{2}}}+\sum_{o}N_{o}\frac{\sin\frac{\rho_{o}E}{\hbar c}}{\frac{\rho_{o}E}{\hbar c}}\bigg{[}(N_{o}-1)\,e^{-\frac{\rho_{o}^{2}}{R_{0}^{2}}}\cos\frac{\rho_{o}E}{\hbar c}-2\,N_{p}\,e^{-\frac{\rho_{o}^{2}}{4R_{0}^{2}}}\bigg{]}\right\}\times\frac{E_{c}^{2}}{E_{c}^{2}+E^{2}} \tag{37}\]
The corresponding general formula for the spontaneous radiation emission rate, in the context of a Markovian DP, it follows from Eq. (37) in the limit \(E_{c}\rightarrow\infty\):
\[\frac{d\Gamma}{dE}\bigg{|}_{t}^{DP}=\frac{Ge^{2}}{12\pi^{5/2}\epsilon_{0}c^{3}R_{0}^{3}E}\left\{N_{p}^{2}+N_{e}+2\sum_{oo^{\prime}\,\text{pairs}}N_{o}\,N_{o^{\prime}}\frac{\sin\frac{|\rho_{o}-\rho_{o^{\prime}}|E}{\hbar c}}{\frac{|\rho_{o}-\rho_{o^{\prime}}|E}{\hbar c}}\,e^{-\frac{(\rho_{o}-\rho_{o^{\prime}})^{2}}{4R_{0}^{2}}}+\sum_{o}N_{o}\frac{\sin\frac{\rho_{o}E}{\hbar c}}{\frac{\rho_{o}E}{\hbar c}}\bigg{[}(N_{o}-1)\,e^{-\frac{\rho_{o}^{2}}{R_{0}^{2}}}\cos\frac{\rho_{o}E}{\hbar c}-2\,N_{p}\,e^{-\frac{\rho_{o}^{2}}{4R_{0}^{2}}}\bigg{]}\right\} \tag{38}\]
Analogously to the CSL scenario, the simple energy dependence of the rate, which was derived in [2], is recovered in the limit in which \(\lambda_{\gamma}\) is much smaller than the atomic mean radii. In this case:
\[\left.\frac{d\Gamma}{dE}\right|_{t}^{DP}=\frac{Ge^{2}}{12\pi^{5/2}\epsilon_{0} c^{3}R_{0}^{3}E}\left\{\,N_{p}^{2}+N_{e}\right\}. \tag{39}\]
This simple energy dependence is applicable to the analysis described in [2], where the energy range \((1-3.8)\) MeV was investigated. The same rate was also exploited in [4], in which, due to the much lower energy of the observed photons (the range \((19-100)\) keV is analyzed), the general formula in Eq. (38) should be applied. Note that Eq. (39) differs from the result presented in [2] by a factor \(8\pi\). This is because in [2] we adopted the convention introduced in [17], while here we referred to the original model introduced by Diosi [18], in which the factor \(8\pi\) was not present.
The simple \(1/E\) expression for the white DP rate is shown in the bottom panel of Figure 1 as a blue curve and compared with the general formula of Eq. (38), assuming as a prior for \(R_{0}\) the value \(R_{0}=0.54\) Å (red curve) from [2]. The distributions are calculated for a Germanium atom and normalized by the common constant pre-factors to highlight differences in shape.
## IV Conclusions
To conclude, a comparison between the top and bottom panels of Figure 1 unveils the most interesting consequence of the cancellation effect. In the low-energy regime, for correlation lengths of the models of the order of, or larger than, the atomic orbit radii, the shapes predicted for the spontaneous emission rate distributions of the CSL and DP models strongly differ. This is due both to the different mathematical structure of the \(f_{ij}\) terms and to the different values of the correlation lengths \(r_{C}\) and \(R_{0}\) of the two models. In the simple scenario in which \(\lambda_{\gamma}\) is much smaller than the atomic size, any difference is washed out and the shapes of the spontaneous radiation rates of the two models differ only by a scaling factor.
An experimental investigation of the spontaneous radiation emitted by the atoms, in the energy range going from a few to tens of keV, exploiting the phenomenological analysis scheme outlined in Section II and based on the predicted rates in Eqs. (14) and (38), may be able to disentangle which model better describes the data. If the measurement is sensitive to the signal of collapse, it should also be able to determine the model which fits the data with higher probability. Moreover, the experimental sensitivity may be improved by performing a survey on atomic targets of different atomic numbers, exploiting the peculiar impact of the atomic structure on the cancellation phenomenon.
The spontaneous emission rates are consistently formulated under the non-Markovian assumption (Eqs. (18) and (37)) for future experimental surveys.
###### Acknowledgements.
This publication was made possible through the support of Grant 62099 from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. We acknowledge support from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation (Grants No. FQXi-RFP-CPW-2008 and FQXi-MGA-2102), and from the H2020 FET TEQ (Grant No. 766900). We thank: the INFN Institute, for supporting the research presented in this article and, in particular, the Gran Sasso underground laboratory of INFN, INFN-LNGS, and its Director, Ezio Previtali, the LNGS staff, and the Low Radioactivity laboratory for the experimental activities dedicated to the search for spontaneous radiation. We thank the Austrian Science Foundation (FWF) which supports the VIP2 project with the grants P25529-N20, project P 30635-N36 and W1252-N27 (doctoral college particles and interactions). K.P. acknowledges support from the Centro Ricerche Enrico Fermi - Museo Storico della Fisica e Centro Studi e Ricerche "Enrico Fermi" (Open Problems in Quantum Mechanics project). AB acknowledges financial support from the EIC Pathfinder project QuCoM (GA no. 101046973).
|
2306.14068 | Asymmetric rectified electric and concentration fields in multicomponent
electrolytes with surface reactions | Recent studies have utilized AC fields and electrochemical reactions in
multicomponent electrolyte solutions to control colloidal assembly. However,
theoretical investigations have thus far been limited to binary electrolytes
and have overlooked the impact of electrochemical reactions. In this study, we
address these limitations by analyzing a system with multicomponent
electrolytes, while also relaxing the assumption of ideally blocking electrodes
to capture the effect of surface electrochemical reactions. Through a regular
perturbation analysis in the low-applied-potential regime, we solve the
Poisson-Nernst-Planck equations and obtain effective equations for electrical
potential and ion concentrations. By employing a combination of numerical and
analytical calculations, our analysis reveals a significant finding:
electrochemical reactions alone can generate asymmetric rectified electric
fields (AREFs), i.e., time-averaged, long-range electric fields, even when the
diffusivities of the ionic species are equal. This finding expands our
understanding beyond the conventional notion that AREFs arise solely from
diffusivity contrast. Furthermore, we demonstrate that AREFs induced by
electrochemical reactions can be stronger than those resulting from asymmetric
diffusivities. Additionally, we report the emergence of asymmetric rectified
concentration fields (ARCFs), i.e., time-averaged long-range concentration
fields, which supports the electrodiffusiophoresis mechanism of colloidal
assembly observed in experiments. We also derive analytical expressions for
AREFs and ARCFs, emphasizing the role of imbalances in ionic strength and
charge densities, respectively, as the driving forces behind their formation.
The results presented in this article advance the field of colloidal assembly
and also have implications for improved understanding of electrolyte transport
in electrochemical devices. | Nathan Jarvey, Filipe Henrique, Ankur Gupta | 2023-06-24T22:23:29Z | http://arxiv.org/abs/2306.14068v4 | Asymmetric rectified electric and concentration fields in multicomponent electrolytes with surface reactions\({}^{\dagger}\)
###### Abstract
Recent experimental studies have utilized AC electric fields and electrochemical reactions in multicomponent electrolyte solutions to control colloidal assembly. However, theoretical investigations have thus far been limited to binary electrolytes and have overlooked the impact of electrochemical reactions. In this study, we address these limitations by analyzing a system with multicomponent electrolytes, while also relaxing the assumption of ideally blocking electrodes to capture the effect of surface electrochemical reactions. Through a regular perturbation analysis in the low-applied-potential regime, we solve the Poisson-Nernst-Planck equations and obtain effective equations for electrical potential and ion concentrations. By employing a combination of numerical and analytical calculations, our analysis reveals a significant finding: electrochemical reactions alone can generate asymmetric rectified electric fields (AREFs), i.e., time-averaged, long-range electric fields, even when the diffusivities of the ionic species are equal. This finding expands our understanding beyond the conventional notion that AREFs arise solely from diffusivity contrast. Furthermore, we demonstrate that AREFs induced by electrochemical reactions can be stronger than those resulting from asymmetric diffusivities. Additionally, we report the emergence of asymmetric rectified concentration fields (ARCFs), i.e., time-averaged long-range concentration fields, which supports the electrodiffusiophoresis mechanism of colloidal assembly observed in experiments. We also derive analytical expressions for AREFs and ARCFs, emphasizing the role of imbalances in ionic strength and charge density, respectively, as the driving forces behind their formation. The results presented in this article advance the field of colloidal assembly and also have implications for improved understanding of electrolyte transport in electrochemical devices.
## I Introduction
Colloidal particles immersed in an electrolyte aggregate along a plane near an electrode when an AC field is applied [1; 2; 3; 4; 5; 6; 7; 8] due to attractive electrohydrodynamic flows between the particles [2; 9]. Intriguingly, Woehl et al. [10] and Bukosky and Ristenpart [11] reported that the planar height at which colloids aggregate exhibits a bifurcation that depends on the electrolyte type and the frequency of the AC field. This bifurcation was particularly surprising because the particles levitate several diameters away from the electrode [10].
Since this discovery, Hashemi et al. [12] demonstrated through direct numerical simulations of the Poisson-Nernst-Planck (PNP) equations for a binary electrolyte that diffusivity contrast between anions and cations induces a long-range steady electric field, also referred to as an asymmetric rectified electric field (AREF). The strength of the AREF is dependent on the diffusivity contrast and frequency [13], which consequently determines the electrophoretic force on the particle and the bifurcation height. This mechanism was experimentally validated in a study by Bukosky et al. [14]. Hashemi et al. [15] have also argued that AREFs can directly impact or even dominate flows from mechanisms such as induced-charged electrophoresis.
While AREFs are able to recapitulate experimental observations, their direct numerical simulation requires high-order adaptive meshing [12; 16], which poses its own challenge. To this end, for a binary electrolyte, Hashemi et al. [17] performed a regular perturbation expansion on the PNP equations in the low-applied-potential limit and showed that AREFs appear at the second order in applied potential. Balu and Khair [18], in contrast, performed a singular perturbation expansion in the thin-double-layer limit and demonstrated that AREFs are recovered at the second order in the ratio between double layer and cell length. While both of these studies have furthered our understanding of AREFs, they are limited to binary electrolytes. Moreover, the aforementioned analyses on AREFs rely on the ideally blocking electrode approximation, which can be a limiting factor [19; 20; 21]. Wang et al. [20] note that _"the AREF theory assumes no flux for all ions at the electrodes; essentially, it does not account for Faradaic reactions (electrochemistry), which will take place at frequencies below 1 kHz in water"_. In their work, in addition to the AC electric field, Wang et al. [20] employed water splitting reactions and experimentally demonstrated that colloidal aggregation occurs at the location where the pH of the solution is at its maximum. A similar influence of pH on colloidal aggregation was also reported by Rath et al. [22], where the authors employed the electroreduction of para-benzoquinone while also including a variable steady component of the applied potential.
Given the increasing interest in utilizing electrochemical reactions for manipulating colloidal assembly, we generalize the regular perturbation analysis of Hashemi et al. [15] in the small potential limit to multicomponent electrolytes while also relaxing the ideally blocking electrode assumption. We find that AREFs can also be induced solely through
electrochemical reactions, even for symmetric diffusivities, and can be stronger than AREFs created by diffusivity contrast alone. This demonstrates that AREFs may be present in a wider parameter space than previously anticipated. In addition to AREFs, we report the formation of asymmetric rectified concentration fields (ARCFs). While AREFs induce an electrophoretic force, ARCFs induce a diffusiophoretic (or osmotic, depending on the definition) force [23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. We discover that ARCFs are primarily observed in systems with diffusivity contrast and that electrochemical reactions do not produce ARCFs, but can enhance ARCFs caused by diffusivity asymmetry.
The simultaneous inclusion of AREFs and ARCFs could rationalize recent experimental findings. For instance, the colloidal aggregation reported in Wang et al. [20] closely resembles the diffusiophoretic focusing for an acid-base reaction reported in Shi et al. [33] and Banerjee et al. [27], but as their system also includes an imposed electric field, the authors invoked the phenomena of electrodiffusiophoresis. Our findings also provide mechanistic insights into the formation of AREFs and ARCFs. Specifically, we highlight that the imbalances in ionic strength and charge density lead to AREFs and ARCFs, respectively. We also provide convenient analytical expressions for AREFs and ARCFs, which although valid only in the limiting case of small applied potentials and thin electrical double layers, provide a good starting point for estimating their spatial variations.
We provide a simplified model for ease of understanding in Section II. We outline the problem formulation in section III. We perform a regular perturbation in the low-applied-potential limit and derive effective equations for AREFs and ARCFs; see sections IV and V. Next, in section VI, we validate our numerical results with analytical calculations and report the dependencies of AREFs on various parameters. We briefly discuss the factors which control the strength of ARCFs. Finally, we describe the limitations of our analysis, outline potential future research directions, and discuss the implications of our findings on colloidal assembly and electrochemical devices.
## II Toy model
Before delving into the details of the electrokinetic equations, inspired by Hashemi et al. [12; 15], we propose a toy model that is able to capture the effect of surface reactions. At the outset, we would like to clarify that the toy model described here ignores several complexities that are present in a real system. However, it provides a convenient choice to grasp the dominant physics within the system.

Figure 1: **Schematic of the toy model**. (a) A cation and an anion move in the \(z\)-direction as a response to an electric field \(E\). The cation also moves in response to a surface redox flux \(N\). Both \(E\) and \(N\) are sinusoidal in time. \(N\) is proportional to \(E\). The velocity induced by the electric field is dependent on the charge and the diffusivity of the ion. The velocity induced by surface reactions on the cation is proportional to the strength of the surface reaction. (b) The position of the cation (\(z_{+}\)) and anion (\(z_{-}\)) as a function of time \(t\). When the diffusivities of each ion are equal and there are no reactions, the movement of ions has equal amplitude and thus no AREF is formed. If the diffusivities are unequal or surface reactions are present, the amplitudes are no longer equal and an AREF is formed.
We consider a system with a cation and an anion. Both ions can move in the \(z\)-direction due to an applied sinusoidal electric field \(E=E_{0}\cos(\omega t)\), where \(E_{0}\) is the amplitude, \(\omega\) is the frequency, and \(t\) is time. In addition to the electric field, the cation also moves in the \(z\)-direction due to the presence of a redox flux field, denoted here as \(N=N_{0}\cos(\omega t)=g_{1}E_{0}\cos(\omega t)\), where \(g_{1}\) is a constant. This assumes that the redox flux is proportional to the electric field, which is valid for small amplitude oscillations [6]. Note that the redox flux is not consuming/producing the cation but is rather inducing a velocity on the cation; see Fig. 1. The two ions have valences of \(+1\) and \(-1\) and their locations are denoted \(z_{+}\) and \(z_{-}\), respectively. It is assumed that both ions are at the location \(z=0\) at \(t=0\). The cation and anion have diffusivities of \(D_{+}\) and \(D_{-}\), respectively.
The applied electric field is known to create an electromigrative flux, and the induced velocities for the cation and anion are given by \(\pm\frac{D_{\pm}e}{k_{B}T}E_{0}\cos(\omega t)\), where \(e\) is the charge on an electron, \(k_{B}\) is Boltzmann's constant, and \(T\) is the temperature. The velocity induced on the cation due to the reactive flux is \(\frac{g_{1}}{C}E_{0}\cos(\omega t)\) (obtained by equating the convective flux to the redox flux), where \(C\) is the concentration scale of the cations. This implies that one can write
\[\frac{dz_{+}}{dt}=\left(\frac{D_{+}e}{k_{B}T}+\frac{g_{1}}{C} \right)E_{0}\cos(\omega t), \tag{1a}\] \[\frac{dz_{-}}{dt}=-\frac{D_{-}e}{k_{B}T}E_{0}\cos(\omega t), \tag{1b}\]
which upon integration yield \(z_{\pm}=\pm m_{\pm}\frac{E_{0}}{\omega}\sin(\omega t)\), where \(m_{+}=\left(\frac{D_{+}e}{k_{B}T}+\frac{g_{1}}{C}\right)\) and \(m_{-}=\frac{D_{-}e}{k_{B}T}\). The net electric field \(E_{\text{induced}}\) induced by the ions at a location \(z\) can be calculated by applying Coulomb's law, assuming the ions are point charges, similar to Hashemi et al. [12]. For \(|z|\gg z_{\pm}\) and time averaging, it is straightforward to obtain
\[\left\langle E_{\text{induced}}\right\rangle\propto E_{0}^{2}\left(m_{+}^{2}-m _{-}^{2}\right), \tag{1c}\]
where \(\left\langle\right\rangle\) represents time averaging and where we have ignored terms of higher order than \(\left(\frac{z_{\pm}}{z}\right)^{2}\).
Clearly, as per Eq. (1c), if \(g_{1}=0\) and \(D_{+}=D_{-}\), the induced electric field vanishes. If \(g_{1}=0\) and \(D_{+}\neq D_{-}\), the induced electric field is an AREF and falls under the scenario described by Hashemi et al. [12; 15]. However, even if \(D_{+}=D_{-}\), an AREF is also possible when \(g_{1}\neq 0\); see Fig. 1. This is the key finding that is explored in this paper, as a surface redox flux can also produce AREFs without the requirement of asymmetric diffusivities by enhancing the effective mobility of one ion. We reiterate that the toy model has its limitations, as it is not able to capture the subtle features that we discuss in the remainder of this manuscript.
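A minimal numerical illustration of Eq. (1c) is given below (plain Python, dimensionless, with illustrative parameter values that are not taken from any experiment); it evaluates the effective mobilities \(m_{\pm}\) for three cases and confirms that the induced field vanishes only when the diffusivities are equal and \(g_{1}=0\).

```python
# Dimensionless sketch of the toy model, Eq. (1); all parameter values are illustrative only.
def mobilities(D_plus, D_minus, g1, C=1.0, e_over_kBT=1.0):
    """Effective mobilities m_+ and m_- entering Eq. (1c)."""
    m_plus = D_plus * e_over_kBT + g1 / C
    m_minus = D_minus * e_over_kBT
    return m_plus, m_minus

cases = {
    "equal D, no reaction  ": mobilities(D_plus=1.0, D_minus=1.0, g1=0.0),
    "unequal D, no reaction": mobilities(D_plus=2.0, D_minus=1.0, g1=0.0),
    "equal D, with reaction": mobilities(D_plus=1.0, D_minus=1.0, g1=0.5),
}

for label, (mp, mm) in cases.items():
    # The time-averaged induced field scales as E0^2 (m_+^2 - m_-^2), Eq. (1c); it vanishes
    # only when the diffusivities are equal and no surface reaction is present.
    print(f"{label}: m_+^2 - m_-^2 = {mp**2 - mm**2:+.2f}")
```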
## III Problem setup
### Dimensional Problem
We study a one-dimensional electrochemical system with an arbitrary number of ions and two electrodes separated by a distance \(2L\); see Fig. 2. An AC field of frequency \(\Omega\) is applied to the electrochemical cell. \(X=0\) is at the centerline of the cell, \(\ell\) is the length of the concentration boundary layer (conc. BL) and \(\lambda\) is a measure of the length of the electrical double layer (EDL). Here, we investigate the formation of AREFs and ARCFs in the presence of surface reactions without imposing any restrictions on ionic diffusivities.
We seek to describe the spatial and temporal variations of ionic concentrations and potential in the system to subsequently determine the AREF and ARCF. Therefore, we write Poisson's equation [34; 35; 36]
\[-\varepsilon\frac{\partial^{2}\Phi}{\partial X^{2}}=Q_{e}, \tag{2a}\]
with \(\varepsilon\) being the electrical permittivity of the solvent, \(\Phi\) being the potential, \(X\) being the spatial coordinate, and \(Q_{e}\) being the volumetric charge density. \(Q_{e}=\sum_{i}ez_{i}C_{i}\), where \(e\) is the charge on an electron and \(z_{i}\) and \(C_{i}\) are the valence and concentration of the \(i^{\text{th}}\) ion, respectively.
Ion transport is modeled using the Nernst-Planck equations [34; 35; 36], i.e.,
\[\frac{\partial C_{i}}{\partial\tau}+\frac{\partial N_{i}}{\partial X}=0, \tag{2b}\]
with \(N_{i}\) as the flux of ion \(i\), given by
\[N_{i}=-\mathcal{D}_{i}\frac{\partial C_{i}}{\partial X}-\frac{\mathcal{D}_{i}z_{ i}eC_{i}}{k_{B}T}\frac{\partial\Phi}{\partial X}, \tag{2c}\]
where \(\tau\) is time, \(\mathcal{D}_{i}\) is the diffusivity of the \(i^{\rm th}\) ion, \(k_{B}\) is Boltzmann's constant, and \(T\) is temperature. We note that Eq. (2b) ignores any volumetric reactions. The charge flux (or current per unit area) is evaluated as \(J=\sum_{i}z_{i}eN_{i}\).
Eqs. (2) are subjected to the following boundary and initial conditions. The sinusoidal potential boundary conditions are given as
\[\Phi(\pm L,\tau)=\pm\Phi_{D}\sin{(\Omega\tau)}. \tag{3a}\]
The above equation ignores any native zeta potential of the electrodes, similar to Hashemi et al. [17]. We consider a surface reactive flux condition (i.e., non-ideally blocking) at the two electrodes
\[N_{i}(\pm L,\tau)=N_{i0}\sin{(\Omega\tau)}, \tag{3b}\]
We note that the flux amplitude \(N_{i0}\) may not be identical at the two electrodes [37], but is assumed to be the same and time-independent for simplicity. Typically, \(N_{i0}\) is dependent on applied potential and ion concentrations. We would like to clarify that the applied flux is dependent on frequency and has the same sinusoidal dependency as the potential, i.e., it is assumed that the potential and fluxes are in phase. The dependency of the amplitude \(N_{i0}\) on \(\Phi_{D}\) is discussed in Sections IV and VI.4. We also define the current amplitude \(J_{0}=\sum_{i}z_{i}eN_{i0}\); see Fig. 2.
At \(\tau=0\), the electrical potential is
\[\Phi(X,0)=0, \tag{3c}\]
and the concentrations are given by
\[C_{i}(X,0)=C_{i0}. \tag{3d}\]
The initial concentrations are required to maintain electroneutrality, i.e., \(\sum_{i}z_{i}C_{i0}=0\).
Figure 2: **Schematic of the model problem.** a) We consider a cell of length \(2L\) with an arbitrary number of ions. \(X\) is the spatial coordinate and \(J_{0}\sin{(\Omega\tau)}\) is the charge flux due to surface reactions. b) Zoomed-in schematic of the dashed box. The cell consists of three spatial regions: the electrical double layer (EDL), a concentration boundary layer (conc. BL), and the bulk. The thickness of the EDL is denoted by \(\lambda\) and the thickness of the conc. BL is denoted by \(\ell\). \(\mathcal{D}_{i}\) is the diffusivity of the \(i^{\rm th}\) ion, \(\mathcal{D}\) is a
characteristic diffusivity corresponding to the conc. BL length, and \(\Omega\) is the frequency of the applied field. We show that \(\mathcal{D}_{i}\) asymmetry and surface reactions can cause AREFs, and both attributes do so due to an imbalance in ionic strength. We also show that \(\mathcal{D}_{i}\) asymmetry can create an ARCF due to an imbalance in charge, and surface reactions can further enhance them.
### Dimensionless Problem
We write dimensionless concentration \(c_{i}=\frac{C_{i}}{C^{*}}\), diffusivity \(D_{i}=\frac{\mathcal{D}_{i}}{\mathcal{D}^{*}}\), time \(t=\frac{\tau\mathcal{D}^{*}}{L^{2}}\), potential \(\phi=\frac{e\Phi}{k_{B}T}\), spatial coordinate \(x=\frac{X}{L}\), charge density \(\rho_{e}=\frac{Q_{e}}{eC^{*}}\), charge flux \(j=\frac{JL}{e\mathcal{D}^{*}C^{*}}\), species fluxes \(n_{i}=\frac{N_{i}L}{\mathcal{D}^{*}C^{*}}\), and frequency \(\omega=\frac{\Omega L^{2}}{\mathcal{D}^{*}}\), where \(C^{*}\) is a reference concentration and \(\mathcal{D}^{*}\) is a reference diffusivity. We also define \(\lambda=\sqrt{\frac{\varepsilon k_{B}T}{e^{2}C^{*}}}\) as a representative measure of double-layer thickness and \(\kappa=\frac{L}{\lambda}\). We would like to clarify that \(\lambda\) is not the true Debye length, as it is based on a reference value \(C^{*}\) and not the ionic strength. We make this choice on purpose to decouple the effects of ionic valences and \(\kappa\). As we will discuss later, the true Debye length is given by a combination of \(\lambda\) and ionic strength.
In dimensionless variables, Eqs. (2) take the form
\[-\frac{\partial^{2}\phi}{\partial x^{2}}=\kappa^{2}\rho_{e}. \tag{4a}\] \[\frac{\partial c_{i}}{\partial t}-D_{i}\frac{\partial^{2}c_{i}}{\partial x ^{2}}-D_{i}z_{i}\frac{\partial}{\partial x}\left(c_{i}\frac{\partial\phi}{ \partial x}\right)=0, \tag{4b}\]
and Eqs. (3) become
\[\phi(\pm 1,t)=\pm\phi_{D}\sin{(\omega t)}, \tag{5a}\] \[n_{i}(\pm 1,t)=n_{i0}\sin{(\omega t)},\] (5b) \[\phi(x,0)=0,\] (5c) \[c_{i}(x,0)=c_{i0}, \tag{5d}\]
where \(\phi_{D}=\frac{e\Phi_{D}}{k_{B}T}\), \(n_{i}=-D_{i}\frac{\partial c_{i}}{\partial x}-D_{i}z_{i}c_{i}\frac{\partial \phi}{\partial x}\), \(n_{i0}=\frac{N_{i0}L}{D^{*}C^{*}}\), \(j_{0}=\frac{J_{0}L}{eD^{*}C^{*}}\) and \(c_{i0}=\frac{C_{i0}}{C^{*}}\), while we require \(\sum_{i}z_{i}c_{i0}=0\) to maintain electroneutrality.
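Although no specific dimensional values are prescribed here, the following sketch (with assumed, illustrative inputs: a 1 mM aqueous solution, \(L=50\,\mu\mathrm{m}\), a 1 kHz drive, and a 5 mV amplitude) shows how the dimensionless groups \(\kappa\), \(\omega\), and \(\phi_{D}\) follow from the definitions above.

```python
import numpy as np

# Illustrative dimensional inputs (assumed values, not parameters used in this work).
eps = 78.4 * 8.854e-12        # permittivity of water (F/m)
kB, T, e = 1.381e-23, 298.0, 1.602e-19
C_star = 6.022e23             # reference concentration: 1 mol/m^3 (1 mM) as a number density (1/m^3)
D_star = 1.0e-9               # reference diffusivity (m^2/s)
L = 50e-6                     # electrode half-spacing (m)
Omega = 2 * np.pi * 1.0e3     # applied angular frequency (rad/s)
Phi_D = 5e-3                  # applied potential amplitude (V)

lam = np.sqrt(eps * kB * T / (e**2 * C_star))   # representative double-layer thickness (m)
kappa = L / lam
omega = Omega * L**2 / D_star
phi_D = e * Phi_D / (kB * T)

print(f"lambda = {lam*1e9:.1f} nm, kappa = {kappa:.0f}, omega = {omega:.0f}, phi_D = {phi_D:.2f}")
```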
## IV Asymptotic Solution for Small Applied Potentials
In this work, we employ a regular perturbation expansion in the low-applied-potential limit, i.e. \(\phi_{D}\ll 1\). While this limit is not directly observed in experiments (which generally tend to operate in moderate to large potential limits), it is able to capture the essential physics of the electrokinetic problem, albeit qualitatively [7; 11; 14]. This limit is also a common choice for theoretical developments [17; 38; 39; 40; 41; 42; 43].
The perturbation expansions in powers of \(\phi_{D}\) are \(\phi=\phi^{(0)}+\phi_{D}\phi^{(1)}+\phi_{D}^{2}\phi^{(2)}+O(\phi_{D}^{3})\), \(c_{i}=c_{i}^{(0)}+\phi_{D}c_{i}^{(1)}+\phi_{D}^{2}c_{i}^{(2)}+O(\phi_{D}^{3})\), \(n_{i}=n_{i}^{(0)}+\phi_{D}n_{i}^{(1)}+\phi_{D}^{2}n_{i}^{(2)}+O(\phi_{D}^{3})\), \(j=j^{(0)}+\phi_{D}j^{(1)}+\phi_{D}^{2}j^{(2)}+O(\phi_{D}^{3})\), and \(\rho_{e}=\rho_{e}^{(0)}+\phi_{D}\rho_{e}^{(1)}+\phi_{D}^{2}\rho_{e}^{(2)}+O( \phi_{D}^{3})\), where the superscripts (0), (1), and (2) refer to the leading-, first-, and second-order terms, respectively.
In the small potential limit, we assume that the equilibrium cell potential is 0 and invoke the linearized Butler-Volmer kinetic equation to write \(n_{i0}=\phi_{D}n_{i0}^{(1)}\). Thus, neglecting the equilibrium potential implies that the applied potential is the overpotential and the fluxes (and consequently the current) are proportional to the overpotential. This relationship was systematically derived by Prieve et al. [6]. We acknowledge that this assumption ignores the impact of equilibrium cell potential [35; 36] and also neglects higher-order effects. These effects can become important in experimental systems [20; 21; 22] where the voltage amplitude or the steady bias in the applied potential impacts the reaction rate, boundary conditions, and leading order solution. Therefore, the analysis presented here will need to be adjusted to incorporate these effects. We outline the modifications required to incorporate these effects in section VI.4.
### Leading Order
The leading-order limit refers to the condition of no applied potential, and thus the leading-order solutions are set by the initial conditions
\[\phi^{(0)}(x,t)=0, \tag{6a}\] \[c_{i}^{(0)}(x,t)=c_{i0}. \tag{6b}\]
One can also verify that the solution above satisfies the governing equations and boundary conditions at the leading order.
### First Order
At the first order, Eq. (4a) takes the form
\[\frac{\partial^{2}\phi^{(1)}}{\partial x^{2}}=-\kappa^{2}\rho_{e}^{(1)}. \tag{7a}\]
For the \(i^{\text{th}}\) ion, Eq. (4b) reduces to
\[\frac{1}{D_{i}}\frac{\partial c_{i}^{(1)}}{\partial t}=\frac{\partial^{2}c_{i}^{(1)}}{\partial x^{2}}+z_{i}c_{i0}\frac{\partial^{2}\phi^{(1)}}{\partial x^{2}}. \tag{7b}\]
In order to separate temporal and spatial variables, we consider solutions to Eqs. (7a) and (7b) of the forms \(\phi^{(1)}(x,t)=\text{Im}\left[e^{i\omega t}\hat{\phi}^{(1)}(x)\right]\) and \(c_{i}^{(1)}(x,t)=\text{Im}\left[e^{i\omega t}\hat{c}_{i}^{(1)}(x)\right]\). The governing equations with only spatial dependency become
\[\frac{d^{2}\hat{\phi}^{(1)}}{dx^{2}}=-\kappa^{2}\hat{\rho}_{e}^{(1)}, \tag{8a}\]
\[\frac{i\omega\hat{c}_{i}^{(1)}}{D_{i}}=\frac{d^{2}\hat{c}_{i}^{(1)}}{dx^{2}}+z_{i}c_{i0}\frac{d^{2}\hat{\phi}^{(1)}}{dx^{2}}. \tag{8b}\]
Following a similar process for the boundary conditions listed in Eqs. (5), we find the boundary conditions for Eqs. (8) are
\[\hat{\phi}^{(1)}\bigg{|}_{x=\pm 1}=\pm 1, \tag{9a}\]
\[-\left(\frac{d\hat{c}_{i}^{(1)}}{dx}+z_{i}c_{i0}\frac{d\hat{\phi}^{(1)}}{dx}\right)\bigg{|}_{x=\pm 1}=\frac{n_{i0}^{(1)}}{D_{i}}. \tag{9b}\]
Eqs. (8) and (9) enable the determination of \(\hat{c}_{i}^{(1)}\) and \(\hat{\phi}^{(1)}\). Since the variables are periodic at this order, i.e., the average of \(e^{i\omega t}\) is 0, neither an AREF nor an ARCF is observed. Thus, we examine the second order.
From the results of Eqs. (8), we define the salt concentration (or salt) \(\hat{s}^{(1)}=\sum_{i}\hat{c}_{i}^{(1)}\), charge density \(\hat{\rho}_{e}^{(1)}=\sum_{i}z_{i}\hat{c}_{i}^{(1)}\), ionic strength \(\hat{I}^{(1)}=\sum_{i}z_{i}^{2}\hat{c}_{i}^{(1)}\), and electric field \(\hat{E}^{(1)}=-\frac{d\hat{\phi}^{(1)}}{dx}\). These variables are employed at the second order.
### Second Order
We time average the governing equations at the second order over one period of the applied potential such that Eq. (4a) reads
\[\frac{d^{2}\left\langle\phi^{(2)}\right\rangle}{dx^{2}}=-\kappa^{2}\left\langle\rho_{e}^{(2)}\right\rangle. \tag{10a}\]
We add Eqs. (4b) for all ions and time average to get
\[\frac{d^{2}\left\langle s^{(2)}\right\rangle}{dx^{2}}-\frac{1}{2}\frac{d}{dx}\text{Re}\left(\hat{\rho}_{e}^{(1)}\bar{E}^{(1)}\right)=0. \tag{10b}\]
We multiply Eq. (4b) by \(z_{i}\), time average, and sum the equations to obtain
\[\frac{d^{2}\left\langle\rho_{e}^{(2)}\right\rangle}{dx^{2}}+I_{0}\frac{d^{2}\left\langle\phi^{(2)}\right\rangle}{dx^{2}}-\frac{1}{2}\frac{d}{dx}\text{Re}\left(\hat{I}^{(1)}\bar{E}^{(1)}\right)=0. \tag{10c}\]
Variables with bar are complex conjugates of variables with hat, and \(\left\langle\,\right\rangle\) corresponds to time-averaged variables. Note that Eqs. (10) are sufficient to determine the presence and forms of the AREF and ARCF. Further, we determine that diffusivity has no explicit effect on the second-order time-averaged results, though \(D_{i}\) indirectly influences the first-order variables.
The boundary conditions given in Eqs. (5) at the second order become
\[\left.\left(\frac{d\left\langle s^{(2)}\right\rangle}{dx}-\frac{1}{2} \mathrm{Re}\left(\hat{\rho}_{e}^{(1)}\bar{E}^{(1)}\right)\right)\right|_{x=\pm 1} =0, \tag{11a}\] \[\left.\left(\frac{d\left\langle\rho_{e}^{(2)}\right\rangle}{dx}+ I^{(0)}\frac{d\left\langle\phi^{(2)}\right\rangle}{dx}-\frac{1}{2}\mathrm{Re} \left(\hat{I}^{(1)}\bar{E}^{(1)}\right)\right)\right|_{x=\pm 1} =0,\] (11b) \[\left.\left\langle\phi^{(2)}\right\rangle\right|_{x=\pm 1} =0. \tag{11c}\]
We integrate Eqs. (10b) and (10c) with boundary conditions in Eqs. (11a) and (11b) to write
\[\frac{d\left\langle s^{(2)}\right\rangle}{dx}-\frac{1}{2}\mathrm{ Re}\left(\hat{\rho}_{e}^{(1)}\bar{E}^{(1)}\right) =0, \tag{12a}\] \[-\frac{1}{\kappa^{2}}\frac{d^{3}\left\langle\phi^{(2)}\right\rangle}{ dx^{3}}+I_{0}\frac{d\left\langle\phi^{(2)}\right\rangle}{dx}-\frac{1}{2} \mathrm{Re}\left(\hat{I}^{(1)}\bar{E}^{(1)}\right) =0, \tag{12b}\]
where we have also utilized Eq. (10a). Note that we define the ARCF as the salt gradient \(\frac{d\left\langle s^{(2)}\right\rangle}{dx}\). To integrate \(\left\langle s^{(2)}\right\rangle\) using Eq. (12a), since the flux of salt is zero at both boundaries at the second order, \(\int_{-1}^{1}\left\langle s^{(2)}\right\rangle dx=0\) can be used as a boundary condition.
Similarly, Eq. (12b) is a third-order equation in \(\left\langle\phi^{(2)}\right\rangle\), but we only have two boundary conditions in Eq. (11c). To find the third boundary condition [37], we note that since the flux of charge is zero at both boundaries at the second order, \(\int_{-1}^{1}\left\langle\rho_{e}^{(2)}\right\rangle dx=0\). By substituting Eq. (10a) in the aforementioned condition, we obtain
\[\left.\frac{d\left\langle\phi^{(2)}\right\rangle}{dx}\right|_{x=1}-\left.\frac {d\left\langle\phi^{(2)}\right\rangle}{dx}\right|_{x=-1}=0. \tag{13}\]
Eq. (12b) can thus be solved with Eqs. (11c) and (13) as boundary conditions.
Experimentally [8; 10; 11; 14; 20; 22], the relevant limit is thin EDLs, or \(\kappa\gg 1\). In this limit, a singular perturbation is required where solutions are divided into the EDL regions and the region outside the EDLs [18; 37; 44]. However, since AREFs occur outside the EDLs, one can simplify Eq. (12b) in the limit \(\kappa\gg 1\) to directly write
\[\left\langle E^{(2)}\right\rangle=-\frac{1}{2I^{(0)}}\mathrm{Re}\left(\hat{I} ^{(1)}\bar{E}^{(1)}\right), \tag{14}\]
where \(\left\langle E^{(2)}\right\rangle=-\frac{d\left\langle\phi^{(2)}\right\rangle }{dx}\). We note that Eq. (14) is valid only outside the EDL regions, and thus directly predicts the AREF. Eq. (14) highlights that the presence of an AREF is only dependent on the first-order ionic strength and the first-order electric field. Even further, it is known that as \(\bar{E}^{(1)}\neq 0\) in the concentration boundary layer due to the asymmetric boundary conditions (see Eq. (9) and the discussion in section VI.1.1), \(\left\langle E^{(2)}\right\rangle\) will be nonzero when \(\hat{I}^{(1)}\neq 0\). Physically, this implies that an imbalance in ionic strength outside the EDL regions creates an AREF. This is a crucial physical insight that our analysis reveals. We would like to emphasize that this requirement is true for an arbitrary number of ions without any restriction on valences and diffusivities, albeit within the limits of thin double layers and small potentials.
We provide a brief physical explanation regarding the requirement of \(\hat{I}^{(1)}\neq 0\) to create an AREF. Physically, EDL charging and/or redox reactions produce electric currents which subsequently induce a net electric field in the regions outside the EDLs. Still, electroneutrality is required to hold in the regions excluding the EDLs. Therefore, outside the EDLs, the only possible charge flux is an electromigrative flux, which has a magnitude that is directly dependent on the local ionic strength. This induced imbalance in ionic strength produces an asymmetry in the local conductivity of the electrolyte, which results in a time-averaged charge flux. An AREF forms to balance this charge flux.
Eq. (12a) shows that there can be steady salt gradients at the second order. Since salt gradients give rise to diffusiophoresis [23; 24; 25; 27; 28; 29; 32], the salt gradient or ARCF can simply be evaluated by utilizing Eq. (12a). We emphasize that ARCFs are also a phenomenon that occurs outside EDLs and are induced by an imbalance in the first-order charge density; see Eq. (12a). At this point, we note that diffusiophoretic mobility may vary for different combinations of ions [28], and calculating a steady concentration field for each ion may be important in some scenarios. These calculations are straightforward, but we do not detail them for brevity.
We summarize the procedure for estimating AREFs and ARCFs in the form of a flowchart in Fig. 3. Ion valences, diffusivities, charge fluxes due to surface reactions, relative double-layer thickness, and frequency are the required inputs to the first-order solution. At the first order, the Poisson equation (15a) and the Nernst-Planck (NP) equation in charge (15b) are solved together to obtain charge density and potential. They are subsequently used as inputs to determine the first-order ionic strength. Once the first-order charge density, potential (and by extension electric field), and ionic strength are determined, the AREF is evaluated using Eq. (14). A nonzero \(\hat{I}^{(1)}\) outside of the EDLs indicates a nonzero \(\left\langle E^{(2)}\right\rangle\), and thus indicates the presence of an AREF. Similarly, first-order charge density and potential are used to determine the ARCF using Eq. (12a). A nonzero \(\hat{\rho}_{e}^{(1)}\) outside the EDLs is a requirement for an ARCF.
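The final step of this procedure can be written as a short helper; the sketch below (plain NumPy; the function name and interface are ours, not the authors') maps first-order amplitudes on a spatial grid to the AREF of Eq. (14) and the ARCF of Eq. (12a).

```python
import numpy as np

def aref_and_arcf(I1, rho1, E1, I0):
    """Time-averaged AREF and ARCF outside the EDLs from first-order amplitudes.

    I1, rho1, E1 : complex arrays of hat{I}^(1), hat{rho}_e^(1), hat{E}^(1) on a grid
    I0           : leading-order ionic strength, sum_i z_i^2 c_i0
    """
    aref = -np.real(I1 * np.conj(E1)) / (2.0 * I0)   # <E^(2)>, Eq. (14)
    arcf = 0.5 * np.real(rho1 * np.conj(E1))         # d<s^(2)>/dx, Eq. (12a)
    return aref, arcf
```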
### Numerical Solution Procedure
We first solve Eqs. (8) with boundary conditions in Eqs. (9) using the _bvp4c_ functionality in MATLAB. \(D_{i}\), \(z_{i}\), \(c_{i0}\), \(n_{i0}^{(1)}\), \(\kappa\) and \(\omega\) are used as inputs to our code. _bvp4c_ deploys an adaptive grid meshing in the spatial dimension. We benchmark these numerical results with the results from Hashemi et al. [17] for a binary electrolyte with no reactions and asymmetric diffusivities.
Next, we also use _bvp4c_ to solve Eq. (12b) with boundary conditions set by Eqs. (11c) and (13). These equations require the additional input of \(\hat{I}^{(1)}\bar{E}^{(1)}\), which we calculate from the first-order equations. Similarly, we solve Eq. (12a) with the salt conservation boundary condition. This calculation requires the input of \(\hat{\rho}_{e}^{(1)}\bar{E}^{(1)}\), which we obtain from the first-order equations. We benchmark our results with the binary electrolyte results from Hashemi et al. [17]. Comparisons between numerical and analytical results are shown in Figs. 4 and 6, and our adaptation of the analytics from Hashemi et al. [17] is given in the Appendix.
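As a rough open-source analogue of this procedure, the first-order problem of Eqs. (8) and (9) can also be sketched with SciPy's _solve_bvp_ by splitting the complex amplitudes into real and imaginary parts. The example below treats a monovalent binary electrolyte; the parameter values and the apportioning of \(j_{0}^{(1)}\) between the two ions are illustrative assumptions, and larger \(\kappa\) may require a finer initial mesh or a looser tolerance.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters for a monovalent binary electrolyte; the split of j_0^(1)
# between the two ions is an arbitrary assumption made for this sketch.
z = np.array([1.0, -1.0])        # valences
c0 = np.array([1.0, 1.0])        # leading-order concentrations (electroneutral)
D = np.array([1.0, 1.0])         # dimensionless diffusivities
n0 = np.array([-0.25, 0.25])     # reactive flux amplitudes n_{i0}^(1), so j_0^(1) = -0.5
kappa, omega = 50.0, 100.0

def odes(x, y):
    # Complex state w = (phi, phi', c_1, c_1', c_2, c_2'), stored as [Re(w); Im(w)].
    w = y[:6] + 1j * y[6:]
    phi, dphi, c1, dc1, c2, dc2 = w
    d2phi = -kappa**2 * (z[0] * c1 + z[1] * c2)              # Eq. (8a)
    d2c1 = 1j * omega * c1 / D[0] - z[0] * c0[0] * d2phi     # Eq. (8b), ion 1
    d2c2 = 1j * omega * c2 / D[1] - z[1] * c0[1] * d2phi     # Eq. (8b), ion 2
    dw = np.array([dphi, d2phi, dc1, d2c1, dc2, d2c2])
    return np.concatenate([dw.real, dw.imag])

def bcs(ya, yb):
    wa, wb = ya[:6] + 1j * ya[6:], yb[:6] + 1j * yb[6:]
    res = np.array([
        wa[0] + 1.0,                                          # phi(-1) = -1, Eq. (9a)
        wb[0] - 1.0,                                          # phi(+1) = +1
        -(wa[3] + z[0] * c0[0] * wa[1]) - n0[0] / D[0],       # Eq. (9b), ion 1 at x = -1
        -(wb[3] + z[0] * c0[0] * wb[1]) - n0[0] / D[0],       # Eq. (9b), ion 1 at x = +1
        -(wa[5] + z[1] * c0[1] * wa[1]) - n0[1] / D[1],       # Eq. (9b), ion 2 at x = -1
        -(wb[5] + z[1] * c0[1] * wb[1]) - n0[1] / D[1],       # Eq. (9b), ion 2 at x = +1
    ])
    return np.concatenate([res.real, res.imag])

# Initial mesh refined near both electrodes to resolve the thin double layers.
x = np.unique(np.concatenate([np.linspace(-1.0, 1.0, 201),
                              -1.0 + np.geomspace(1e-4, 0.2, 60),
                               1.0 - np.geomspace(1e-4, 0.2, 60)]))
sol = solve_bvp(odes, bcs, x, np.zeros((12, x.size)), tol=1e-4, max_nodes=100000)

# Thin-EDL AREF from Eq. (14), evaluated away from the electrodes.
E1 = -(sol.y[1] + 1j * sol.y[7])
I1 = z[0]**2 * (sol.y[2] + 1j * sol.y[8]) + z[1]**2 * (sol.y[4] + 1j * sol.y[10])
I0 = np.sum(z**2 * c0)
aref = -np.real(I1 * np.conj(E1)) / (2 * I0)
bulk = np.abs(sol.x) < 0.9
print("converged:", sol.status == 0, "| max |<E^(2)>| outside EDLs:", np.abs(aref[bulk]).max())
```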
Figure 3: Methodology for calculating the AREF and ARCF analytically in low-potential and thin-double-layer limits for multi-ion electrolytes. The key finding is that a nonzero first-order ionic strength is a sufficient criterion to observe an AREF and a nonzero first-order charge density outside EDLs is a sufficient condition to observe an ARCF.
## V Analytical solution for symmetric diffusivities and thin double layers
To make analytical progress, we assume equal diffusivities of all ions, i.e., \(D_{i}=1\), and the thin-double-layer limit, i.e., \(\kappa\gg 1\). Eq. (4b) at first order in \(\phi_{D}\) for the variables \(\hat{\rho}_{e}^{(1)}\), \(\hat{I}^{(1)}\), and \(\hat{\phi}^{(1)}\) becomes
\[-\frac{d^{2}\hat{\phi}^{(1)}}{dx^{2}}=\kappa^{2}\hat{\rho}_{e}^{( 1)}, \tag{15a}\] \[i\omega\hat{\rho}_{e}^{(1)}=\frac{d^{2}\hat{\rho}_{e}^{(1)}}{dx^ {2}}+I^{(0)}\frac{d^{2}\hat{\phi}^{(1)}}{dx^{2}},\] (15b) \[i\omega\hat{I}^{(1)}=\frac{d^{2}\hat{I}^{(1)}}{dx^{2}}+\sum_{i}z_ {i}^{3}c_{i0}\frac{d^{2}\hat{\phi}^{(1)}}{dx^{2}}, \tag{15c}\]
The corresponding boundary conditions for these equations are given by
\[\hat{\phi}^{(1)}\bigg{|}_{x=\pm 1}=\pm 1, \tag{16a}\] \[-\left(\frac{d\hat{\rho}_{e}^{(1)}}{dx}+I^{(0)}\frac{d\hat{\phi} ^{(1)}}{dx}\right)\bigg{|}_{x=\pm 1}=\sum_{i}z_{i}n_{i0}^{(1)},\] (16b) \[-\left(\frac{d\hat{I}^{(1)}}{dx}+\sum_{i}z_{i}^{3}c_{i0}\frac{d \hat{\phi}^{(1)}}{dx}\right)\bigg{|}_{x=\pm 1}=\sum_{i}z_{i}^{2}n_{i0}^{(1)}. \tag{16c}\]
We require asymmetric solutions to these equations since \(\hat{\phi}^{(1)}\) has asymmetric boundary conditions. We begin by finding an expression for \(\hat{\rho}_{e}^{(1)}\). As a shorthand notation, let \(\lambda_{1}=\sqrt{i\omega}\) and \(\lambda_{2}=\sqrt{I^{(0)}\kappa^{2}+i\omega}\). Then, by substituting Eq. (15a) into Eq. (15b) and integrating, we determine
\[\hat{\rho}_{e}^{(1)}=B\sinh{(\lambda_{2}x)}, \tag{17}\]
where \(B\) is an unknown constant. Eq. (17) is utilized in Eq. (15a) and integrated twice to find
\[\hat{\phi}^{(1)}=x+B\frac{\kappa^{2}}{\lambda_{2}^{2}}\left[x\sinh{(\lambda_{2})}-\sinh{(\lambda_{2}x)}\right], \tag{18a}\]
where we have already employed the boundary conditions in Eq. (16a). With functional forms for both \(\hat{\phi}^{(1)}\) and \(\hat{\rho}_{e}^{(1)}\), we use Eq. (16b) to show
\[B=-\frac{\lambda_{2}^{2}\left[j_{0}^{(1)}+I^{(0)}\right]}{I^{(0)}\kappa^{2}\sinh{(\lambda_{2})}+\lambda_{1}^{2}\lambda_{2}\cosh{(\lambda_{2})}}, \tag{18b}\]
where \(j_{0}^{(1)}=\sum_{i}z_{i}n_{i0}^{(1)}\). With the full forms of \(\hat{\phi}^{(1)}\) and \(\hat{\rho}_{e}^{(1)}\) determined, we solve for \(\hat{I}^{(1)}\). Based upon Eq. (15c), we observe that we will require a particular and a homogeneous solution to determine \(\hat{I}^{(1)}\) due to the inhomogeneity brought about by the electromigrative term. Rewriting Eq. (15c) in the form of an operator on the left side, we show
\[\left(\frac{d^{2}}{dx^{2}}-\lambda_{1}^{2}\right)\hat{I}^{(1)}=\sum_{i}z_{i}^{3}c_{i0}\kappa^{2}\hat{\rho}_{e}^{(1)}. \tag{19}\]
Note that from Eqs. (15a) and (15b), \(\left(\frac{d^{2}}{dx^{2}}-\lambda_{2}^{2}\right)\hat{\rho}_{e}^{(1)}=0\). This means that we can apply the operator \(\left(\frac{d^{2}}{dx^{2}}-\lambda_{2}^{2}\right)\) to both sides of Eq. (19) to reach a homogeneous equation. Taking only the asymmetric solutions for \(\hat{I}^{(1)}\), we arrive at
\[\hat{I}^{(1)}=F\sinh{(\lambda_{1}x)}+G\sinh{(\lambda_{2}x)}, \tag{20a}\]
where \(F\) and \(G\) are constants that need to be determined. We take \(G\) such that it cancels out the inhomogeneity in Eq. (19), which results in
\[G=\frac{\sum_{i}z_{i}^{3}c_{i0}}{I^{(0)}}B. \tag{20b}\]
Next, we apply the boundary conditions in Eq. (16c) and determine
\[F=-\frac{\sum_{i}z_{i}^{2}n_{i0}^{(1)}+\left(\sum_{i}z_{i}^{3}c_{i0}\right)j_{0}^ {(1)}/I^{(0)}}{\lambda_{1}\cosh\lambda_{1}}. \tag{20c}\]
With \(\hat{\rho}_{e}^{(1)}\) and \(\hat{I}^{(1)}\) fully determined, we invoke Eq. (14) to obtain an expression for the AREF. We also invoke Eq. (12a) to calculate the ARCF.
The analytical expressions obtained shed light on the physics of the EDLs and concentration boundary layers. In the experimentally relevant limit of thin double layers relative to the length of the concentration boundary layer, i.e., \(\omega/(I^{(0)}\kappa^{2})\ll 1\), the charge density obtained from Eq. (17) is entirely screened over the EDL regions. However, there are salt dynamics introduced by the surface reactions, resulting in an ionic strength imbalance over the concentration BL regions, as seen from the homogeneous solution in Eq. (20a). AREFs, therefore, result from the simultaneous presence of first-order electric field and ionic strength. Local maxima of the components of ionic strength are found over the concentration boundary layer dimensionless length scale \(\omega^{-1/2}\); see Fig. 2(b). Simultaneously, surface reactions, an AC field, or both effects can produce a residual electric field in the concentration boundary layers. In fact, it can be shown by integrating Eqs. (15a) and (15b) over the EDLs that in order for the combination of diffusive and electromigrative fluxes to match the surface charge flux and the rate of change of accumulated charge in the EDLs, there must be a homogeneous residual electric field in the concentration BLs. As seen from Eq. (18a), this residual field is given by \(\hat{E}^{(1)}=-1-\frac{B\kappa^{2}\sinh(\lambda_{2})}{\lambda_{2}^{2}}\neq 0\), which leads to the conclusion that only a nonzero \(\hat{I}^{(1)}\) is requisite to lead to an AREF. Additionally, we note that the charge density is fully screened over the EDLs in this scenario, meaning no ARCF will develop (see also section VI.2).
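The closed-form results above can be collected into a few lines of code; the sketch below (our own helper, assuming equal diffusivities \(D_{i}=1\) and illustrative parameter values) evaluates the first-order fields of Eqs. (17)-(20) and then the AREF and ARCF through Eqs. (14) and (12a) outside the EDLs.

```python
import numpy as np

def analytic_aref_arcf(x, zs, c0s, n0s, kappa, omega):
    """First-order fields from Eqs. (17)-(20) (equal diffusivities, D_i = 1) and the
    resulting AREF (Eq. (14)) and ARCF (Eq. (12a)); valid outside the EDLs."""
    zs, c0s, n0s = map(np.asarray, (zs, c0s, n0s))
    I0 = np.sum(zs**2 * c0s)                   # leading-order ionic strength
    j0 = np.sum(zs * n0s)                      # first-order charge flux amplitude
    lam1 = np.sqrt(1j * omega)
    lam2 = np.sqrt(I0 * kappa**2 + 1j * omega)

    B = -lam2**2 * (j0 + I0) / (I0 * kappa**2 * np.sinh(lam2)
                                + lam1**2 * lam2 * np.cosh(lam2))          # Eq. (18b)
    rho1 = B * np.sinh(lam2 * x)                                           # Eq. (17)
    E1 = -1.0 - B * kappa**2 / lam2**2 * (np.sinh(lam2)
                                          - lam2 * np.cosh(lam2 * x))      # from Eq. (18a)
    G = np.sum(zs**3 * c0s) / I0 * B                                       # Eq. (20b)
    F = -(np.sum(zs**2 * n0s) + np.sum(zs**3 * c0s) * j0 / I0) / (lam1 * np.cosh(lam1))  # Eq. (20c)
    I1 = F * np.sinh(lam1 * x) + G * np.sinh(lam2 * x)                     # Eq. (20a)

    aref = -np.real(I1 * np.conj(E1)) / (2 * I0)   # <E^(2)>, Eq. (14)
    arcf = 0.5 * np.real(rho1 * np.conj(E1))       # d<s^(2)>/dx, Eq. (12a)
    return aref, arcf

# Illustrative evaluation: 1:-1 binary electrolyte, only the cation reacts (j_0^(1) = -0.5).
x = np.linspace(-0.9, 0.9, 1001)   # exclude the EDL regions near x = +/- 1
aref, arcf = analytic_aref_arcf(x, zs=[1, -1], c0s=[1, 1], n0s=[-0.5, 0.0],
                                kappa=100.0, omega=100.0)
print(f"max |AREF| = {np.abs(aref).max():.3e}, max |ARCF| = {np.abs(arcf).max():.3e}")
```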
## VI Results and Discussion
### Asymmetric Rectified Electric Fields (AREFs)
#### vi.1.1 Binary Electrolyte
We first analyze the formation of an AREF due to surface reactions and compare it with the previously known requirement of diffusivity asymmetry to produce AREFs [17; 12; 18]. We focus on the limit of \(\kappa\gg 1\) such that Eq. (14) is valid. We begin by comparing the two mechanisms for a monovalent binary electrolyte. Fig. 4 displays a comparison of numerical (orbs) and analytical (solid lines) results for the two mechanisms of AREF formation, i.e., diffusivity asymmetry (blue) and surface reactions (pink). Numerical results are calculated as per the procedure outlined in section IV.4. Analytical results are determined by the approach described in section V. Results by Hashemi et al. [17] are employed for the diffusivity asymmetry case to obtain first-order results which are subsequently utilized in Eq. (14) to obtain the AREF; see the Appendix for details. The surface reaction case is presented for \(j_{0}^{(1)}=-0.5\) and \(D_{1}=D_{2}=1\), while the diffusivity asymmetry case is presented for \(j_{0}^{(1)}=0\), \(D_{1}=2\), and \(D_{2}=1\). We ignore the spatial regions located between \(x=-1\) and \(x=-0.9\) and between \(x=0.9\) and \(x=1\) to focus on the regions outside the EDLs. As evident from Fig. 4, we obtain excellent quantitative agreement between analytical and numerical results in all scenarios and for either mechanism.
First, we note that an AREF is present even with symmetric diffusivities due to the presence of surface reactions, indicating a wider parameter space that can give rise to AREFs than was previously anticipated. We observe that both the shape and the magnitude of the AREF are different for the two formation mechanisms employed; see Fig. 4(a). Specifically, for the parameters chosen, reaction-driven AREFs display a maximum near the left electrode, while AREFs produced by diffusivity asymmetry possess a maximum near the right electrode. The magnitude of the AREF maxima with surface reactions is greater than that of the AREF maxima with asymmetric diffusivities by roughly a factor of 5.
We discuss the dependency of the maximum value of the AREF, denoted here by \(\text{AREF}_{\text{max}}\), with \(\kappa\) in Fig. 4(b). For both mechanisms, \(\text{AREF}_{\text{max}}\) decreases in magnitude as \(\kappa\) increases. However, the decrease observed with surface reactions is significantly lower than the decrease observed with diffusivity asymmetry; we find that AREFs with diffusivity asymmetry decay as \(\kappa^{-2}\), consistent with Balu and Khair [18]. Physically, AREF formation due to diffusivity asymmetry is driven by the currents arising from the EDLs. An increase in \(\kappa\) reduces the volume of charge (and the current) in the EDLs. Therefore, \(\text{AREF}_{\text{max}}\) also decreases. On the other hand, the current produced by surface reactions is not directly related to \(\kappa\), leading to a weaker dependence of \(\text{AREF}_{\text{max}}\) on \(\kappa\). The variation of \(\text{AREF}_{\text{max}}\) with \(\omega\) is given in Fig. 4(c). The surface reactions \(\text{AREF}_{\text{max}}\) is insensitive to the change in \(\omega\), while the
diffusivity asymmetry \(\text{AREF}_{\text{max}}\) increases with an increase in \(\omega\). For \(\omega=100\), \(\text{AREF}_{\text{max}}\) with reactions is still greater than with \(D_{i}\) asymmetry by roughly a factor of 5.
The results outlined in Fig. 4 (b,c) demonstrate that if surface reactions are present, AREFs could be stronger than the AREFs created by diffusivity asymmetry alone, at least for the parameters chosen. To better understand the dependencies of \(\text{AREF}_{\text{max}}\) on \(\kappa\) and \(\omega\), we employ numerical results to expand our parameter sweep to \(\kappa=10^{2}-10^{4}\) and \(\omega=10-10^{3}\) in Fig. 5. Fig. 5(a) shows the results for AREFs driven by surface reactions. The largest values of \(\text{AREF}_{\text{max}}\) occur around \(\kappa=100\) and \(\omega=100\). Additionally, the different contours shown are all within an order of magnitude of one another, indicating that \(\text{AREF}_{\text{max}}\) is weakly dependent on \(\kappa\) and \(\omega\). This weak dependency for a constant reactive charge flux can be understood through the dependency of first-order ionic strength. A nonzero \(\hat{I}^{(1)}\) with surface reactions is impacted strongly by the boundary conditions \(n_{i0}^{(1)}\) and weakly due to the effects of \(\kappa\) and \(\omega\), as shown analytically in Eqs. (18b) and (20c). Furthermore, our analysis shows that \(\bar{E}^{(1)}\) also has a strong dependency on the surface reactive flux and a weak dependency on the frequency of the applied field. These coupled dependencies directly lead to the non-monotonic behavior observed in Fig. 5(a), and also explain why \(\text{AREF}_{\text{max}}\) is a strong effect.
Figure 4: **Analysis of the AREF for a binary electrolyte.** The AREF values are within an order of magnitude of one another for a wide range of \(\kappa\) and \(\omega\) values.
In contrast, Fig. 5(b) shows the contours of \(\text{AREF}_{\text{max}}\) with \(\kappa\) and \(\omega\) values for AREFs caused by diffusivity asymmetry. Here, a monotonic increase in \(\text{AREF}_{\text{max}}\) with an increase in \(\omega\) and a decrease in \(\kappa\) are observed. Unlike the surface reactions case, the differences in \(\text{AREF}_{\text{max}}\) are on the scale of several orders of magnitude between different contours. This indicates a strong dependency of the AREF on both \(\kappa\) and \(\omega\).
The results outlined in Figs. 4 and 5 emphasize that the behavior of AREFs with surface reactions is different from the behavior of AREFs with asymmetric diffusivities alone. Specifically, we find that the magnitude of an AREF tends to be larger with surface reactions for the range of parameters explored. In fact, the maximum values are also insensitive to \(\kappa\) and \(\omega\), meaning surface reactions are an important mechanism to tune AREFs. Next, we discuss AREFs in the presence of more than two ions.
Figure 5: **Contour plot of the maximum AREF for binary electrolyte for (a) surface reactions only and (b) diffusivity asymmetry only. Panel (a) is simulated with \(j_{0}^{(1)}=-0.5\), \(D_{1}=1\), and \(D_{2}=1\), while panel (b) is simulated with \(D_{1}=2\), \(j_{0}^{(1)}=0\), and \(D_{2}=1\). Both cases have two ions with valences of \(z_{1}=1\) and \(z_{2}=-1\), respectively. Variation of \(\text{AREF}_{\text{max}}\) for the surface reactions case alone is less sensitive to changes in \(\kappa\) and \(\omega\) than the diffusivity asymmetry case alone.**
Figure 6: **Comparison of data for analytical calculations and simulations for an electrolyte solution with three ions for (a) the maximum value of AREF, i.e., \(\text{AREF}_{\text{max}}\) and (b) the location of the maximum AREF near left electrode, i.e., \(x_{c}\). In both panels, \(j_{0}^{(1)}=-0.01\) to \(-\,0.85\) (orange), \(\kappa=50\) to \(400\) (blue), \(\omega=10\) to \(100\) (pink), and both \(z_{1}=-3\) to \(3\) and \(z_{2}=-2,-3\) (green). When not varied, \(\kappa=100\), \(\omega=100\), \(j_{0}^{(1)}=-0.5\), \(z_{1}=1\), \(z_{2}=-1\), and \(z_{3}=1\). In all cases, we find that the results from analytics and numerical simulations collapse onto the diagonal, indicating strong agreement between the results of the two methods.**
#### vi.1.2 Three Ions
To move beyond the comparisons outlined in the prior subsection, we investigate the parameter dependencies of \(\text{AREF}_{\text{max}}\) and its location, denoted here by \(x_{c}\), for a solution with three ions. We note that \(x_{c}\) is a crucial physical parameter that has been previously observed in experiments [7; 8; 11; 14; 20; 22].
First, we compare the values of \(\text{AREF}_{\text{max}}\) obtained from both numerical simulations and analytical calculations for a three-ion cell; see Fig. 6(a). To focus on the effect of surface reactions, we keep diffusivities constant, or \(D_{1}=D_{2}=D_{3}=1\). We perform a comprehensive sweep of parameters including \(\omega\), \(\kappa\), \(z_{i}\), and \(j_{0}^{(1)}\). Excellent quantitative agreement is observed between simulations and analytical solutions obtained over the entire space of parameters. Since the dependency of \(\text{AREF}_{\text{max}}\) on \(\kappa\) and \(\omega\) was discussed previously, we focus this discussion on the dependency of \(\text{AREF}_{\text{max}}\) on the other listed parameters. We find that \(\text{AREF}_{\text{max}}\) increases with an increase in the magnitude of \(j_{0}^{(1)}\). This trend is expected since increasing the magnitude of \(j_{0}^{(1)}\) leads to a larger \(\hat{I}^{(1)}\), which dictates the strength of the AREF. The changes in valence, in contrast, result in a non-monotonic behavior. The non-monotonic behavior of \(z_{i}\) can be explained from the trend of \(\kappa\). We recall that \(\kappa^{-1}\) is not the true Debye length, but instead a measure of Debye length in the system. This choice was made out of mathematical convenience; see section III.2. As such, the green arrow in Fig. 6(a) starts at \(z_{1}=-3\) and ends at \(z_{1}=3\). Therefore, the true Debye length first increases and then decreases. In effect, the trend should follow a decrease in \(\kappa\) (i.e., opposite to the blue arrow) and then an increase in \(\kappa\) (i.e., in the direction of the blue arrow). This is consistent with the observed behavior of change in ionic valence.
We now focus on \(x_{c}\) closest to the left electrode; see Fig. 6(b). We observe strong quantitative agreement between simulations and analytical calculations. An increase in \(j_{0}^{(1)}\) magnitude moves the location of the maximum further from the electrode. In contrast, an increase in \(\omega\) moves the location of the maximum closer to the electrode. Surprisingly, the location of the maximum exhibits a non-monotonic dependence on \(\kappa\). To probe this non-monotonic behavior, Fig. 7 shows the location of the maximum as a function of \(\kappa\) for six different values of \(j_{0}^{(1)}\) ranging from -0.005 to -0.75. For small \(j_{0}^{(1)}\) values, we observe that the location moves towards the electrode with an increase in \(\kappa\). However, as \(j_{0}^{(1)}\) grows, this behavior becomes increasingly non-monotonic, suggesting that competing effects are present in the system. In fact, when \(\kappa\) values are small, the different curves of \(j_{0}^{(1)}\) appear to converge. We attribute the non-monotonic behavior to these competing effects: \(\kappa\) drives \(x_{c}\) towards the electrode, whereas \(j_{0}^{(1)}\) drives \(x_{c}\) away from it.
The results described in Fig. 6 demonstrate that the analytical procedure outlined in this manuscript is able to uncover the complex dependencies of AREFs on system parameters even for more than two ions. However, our analytical methodology is limited to the scenario of equal diffusivities. Asymmetric diffusivities can be included in the analytical formulation, but the resulting expressions obscure the relevant physics. As such, we solve the case with both asymmetric diffusivities and surface reactions numerically for convenience. With this in mind, we next focus on a five-ion case with both surface reactions and diffusivity asymmetries.
#### iv.1.3 Five-Ion Case
We investigate a five-ion problem (Na\({}^{+}\), H\({}^{+}\), Ca\({}^{2+}\), Al\({}^{3+}\), and Cl\({}^{-}\), where the diffusivity values for each ion are taken from the literature [45]) such that only one of the cations is reactive; a schematic of the problem is provided in Fig. 8(a). We emphasize that the setup described here is hypothetical and these surface reactions may not necessarily be observable in a real electrochemical cell. The intent of this exercise is to study the simultaneous impact of diffusivity asymmetry and surface reactions, such as the ones present in experiments [20, 21, 22].
We focus on a constant charge flux magnitude due to surface reactions \(j_{0}^{(1)}\). The resulting AREFs given in Fig. 8(b) show the cases where each of the cations is reactive with \(j_{0}^{(1)}=-0.5\). We note that the maximum AREF values are smaller than in the previously discussed scenarios due to the inverse relationship between AREF\({}_{\rm max}\) and \(I^{(0)}\); see Eq. (14).
We find that the peak AREF location and magnitude are only weakly dependent upon which ion is reacting. Even though the diffusivity of the H\({}^{+}\) ion is larger than those of the other reacting ions by approximately one order of magnitude, no appreciable change in the AREF is observed. This indicates that if ionic diffusivities are on the same order of magnitude and \(j_{0}^{(1)}=O(1)\), estimating AREFs by assuming equal diffusivities could serve as a good starting point. This underscores the utility of the analytical procedure outlined in section V.
### Asymmetric Rectified Concentration Fields (ARCFs)
We investigate the formation of ARCFs in a binary electrolyte. We employ Eq. (12a) to estimate the ARCF based on first-order results from (i) numerical calculations described in section IV.4 and (ii) analytical calculations described in section V and the Appendix.
We note that ARCFs form when \(\hat{\rho}_{e}^{(1)}\neq 0\) beyond the EDLs. To this end, we find that surface reactions alone are unable to produce ARCFs in the limit \(\sqrt{I^{(0)}}\kappa\gg\sqrt{\omega}\), which is valid in experiments [14, 20, 22, 10]. In this limit, for symmetric diffusivities, Eq. (17) reduces to \(\hat{\rho}_{e}^{(1)}\approx-\left(j_{0}^{(1)}+I^{(0)}\right)\frac{\sinh\left( \sqrt{I^{(0)}}\kappa x\right)}{\sinh\left(\sqrt{I^{(0)}}\kappa\right)}\), which suggests that the charge is only accumulated in the EDLs and ARCFs do not form. We note that Bazant et al. [46] demonstrated that a region outside the EDLs could accumulate charge for large potentials even for symmetric diffusivities, but this effect cannot be captured in our analysis and could be a potential avenue for future research.
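To make this concrete, the short numerical sketch below evaluates the limiting expression above for illustrative parameter values (here \(I^{(0)}=1\), \(j_{0}^{(1)}=-0.5\), and \(\kappa=100\), chosen only for demonstration); the perturbed charge is seen to decay within a distance of order \(1/\kappa\) of the electrodes, consistent with the absence of a long-range ARCF from surface reactions alone.

```python
import numpy as np

# Illustrative parameters (not taken from a specific figure): I0 = I^(0), j0 = j_0^(1).
I0, j0, kappa = 1.0, -0.5, 100.0

x = np.linspace(-1.0, 1.0, 2001)
rho1 = -(j0 + I0) * np.sinh(np.sqrt(I0) * kappa * x) / np.sinh(np.sqrt(I0) * kappa)

# The perturbed charge is negligible outside the EDLs (|x| < 0.9 here)
# compared to its value at the electrodes.
print(np.abs(rho1[np.abs(x) < 0.9]).max())
print(np.abs(rho1).max())
```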
Now, we shift our focus to the case of asymmetric diffusivities only. We show that \(\hat{\rho}_{e}^{(1)}\neq 0\) outside EDLs for \(D_{1}\neq D_{2}\); see Fig. 9(a). As expected, the magnitude of \(\hat{\rho}_{e}^{(1)}\) increases with an increase in \(D_{1}\). This charge imbalance
occurs because the equations for the charge and salt are coupled for asymmetric diffusivities [38; 15]; also see the Appendix. The induced ARCFs due to these charge imbalances are shown in Fig. 9(a). The ARCF is also a long-range steady field, indicating that it could be important for relatively large distances away from the electrodes. Interestingly, the spatial dependency of the ARCF remains identical even if the values of \(D_{1}\) and \(D_{2}\) are interchanged (results not shown), in contrast to AREFs [18; 15; 12]. We note that the magnitude of the ARCF is smaller compared to the AREF; see Fig. 4. While this might suggest that the role of the ARCF is minor, we note that the shape of the ARCF is different than the AREF and could thus influence the regions where the AREF is smaller. We also anticipate that ARCFs will become stronger in the nonlinear applied potential limit [46]. Furthermore, the relative importance of AREFs and ARCFs will depend on the interactions between the ionic species and particles [24]. This is particularly important for the phenomena of electrodiffusiophoresis, where both electrophoresis and diffusiophoresis are present [20; 21]. We emphasize that while surface reactions cannot induce ARCFs on their own, they can enhance ARCFs caused by diffusivity asymmetry; see Fig. 9(b), which shows the linear dependency of \(\text{ARCF}_{\text{max}}\) on \(j_{0}^{(1)}\).
### Validity of the proposed framework
Since our work assumes \(\phi_{D}\ll 1\), the results are most accurate up to an applied potential of \(\pm 25\) mV. However, as shown by Balu and Khair [18], the peak value of AREFs is relatively linear up to \(\phi_{D}=10\). This means that the results shown here may be extrapolated up to applied potentials of \(\pm 250\) mV with a low to moderate loss in accuracy. With even larger voltages, this framework is only suitable for a qualitative analysis. The surface reactive flux boundary conditions used in this work break down when the reaction rate is no longer linear with potential; the potential value at which they break down will depend on the transfer coefficients and the Stern layer thickness [47], and we thus refrain from making a quantitative estimate of this limit.
### Limitations of the proposed framework
The proposed framework demonstrates that electrochemical reactions can produce AREFs and that ARCFs are also present in the system. Further, our work elucidates that imbalances in ionic strength and charge density outside the EDLs produce AREFs and ARCFs, respectively. Nonetheless, our work has two primary limitations.
The first limitation of this work is that it assumes a small applied potential. In experiments, the voltage applied is significantly larger [10; 14; 20; 21], and thus the nonlinear PNP equations would need to be solved. In this scenario, typically, the thin EDL limit is invoked and a singular perturbation expansion is performed [18; 37; 44; 46; 48]. The analysis for a binary electrolyte with asymmetric diffusivities and no electrochemical reactions has been investigated by Balu and Khair [18]. Based on the trends observed in Fig. 4(b), we anticipate that the AREFs due to chemical reactions will appear at leading- and first-order expansion terms in the singular perturbation expansion, unlike asymmetric diffusivities where they are observed at second order [18], though a complete analysis is required to be certain. For a system with electrochemical reactions and symmetric diffusivities, the framework proposed in our prior work [37] can be extended.

Figure 9: Presence of ARCFs. (a) ARCF vs \(x\) for diffusivity asymmetry for \(D_{1}=2\) (blue), \(D_{1}=3\) (pink), \(D_{1}=4\) (black), and \(D_{1}=8\) (orange). \(D_{2}=1\) in all cases. (b) \(\text{ARCF}_{\text{max}}\) vs \(-j_{0}^{(1)}\) with both diffusivity asymmetry and surface reactions (only cation reactive). \(D_{1}=1\) and \(D_{2}=4\). In all cases, analytical results are solid lines and numerical simulations are orbs. \(\kappa=100\), \(\omega=1000\), \(z_{1}=1\), and \(z_{2}=-1\).
The second limitation of our work is that it ignores the effect of equilibrium cell potential. Clearly, the surface reactions become significant beyond a certain cell voltage. To capture such an effect, Frumkin-Butler-Volmer kinetics could be invoked [47; 49]. In this approach, forward and reverse reactions are written in terms of overpotential, defined as the difference between the applied potential and the equilibrium cell potential. Additionally, a Stern layer needs to be accounted for in the system to write the Frumkin-Butler-Volmer kinetic equation. There are two primary differences from the approach described in this paper. First, the leading order solution requires the formation of equilibrium double layers. Second, the boundary conditions for potential will now be written as a linear potential drop across each Stern layer. We note that our prior work [37] provides a methodology to capture these effects, while also simultaneously capturing the effect of EDLs and the region outside EDLs, albeit in the limit of symmetric diffusivities. To capture these effects for asymmetric diffusivities, the frameworks laid out by Balu and Khair [18] and Jarvey et al. [37] would need to be combined, but the analysis is likely to be limited to a binary electrolyte. For multicomponent electrolytes, numerical calculations would be required.
We emphasize that while the opportunities outlined above will improve the accuracy of the current results, qualitatively, the small potential calculations are able to capture the essential physics of the system.
## VII Conclusion and Outlook
In this article, we performed a regular perturbation expansion in the small-applied-potential limit on the Poisson-Nernst-Planck equations for multicomponent electrolytes for arbitrary diffusivities and valences, while also including the effect of electrochemical surface reactions. Our results highlight that surface electrochemical reactions are an additional mechanism for AREF formation. We show that an imbalance in ionic strength is a prerequisite for a nonzero AREF. Further, we find that ARCFs may also be present in electrochemical cells and could induce a diffusiophoretic force on the particles. We show that analytical expressions for AREFs and ARCFs can be obtained by further invoking the thin-double-layer limit. AREFs caused by surface reactions are less sensitive to parameters than AREFs from diffusivity contrast. Lastly, we find that ARCFs appear primarily due to diffusivity contrast, though electrochemical reactions can enhance them.
Our contributions are directly applicable for the directed assembly of colloids using electric fields, including ellipsoids [50], colloidal dumbbells [51], colloidal dimers [52], dicolloids [53], Janus particles [54], patchy anisotropic microparticles, [55] and chiral clusters [56], among others. This assembly method holds great promise in creating materials with superior optical and electrical properties [55; 56; 57; 52], such as photonic crystals, microactuators, and colloidal robots. The work presented here can help estimate the assembly of colloids in an electrochemical cell, where reactions provide an additional knob to tune colloidal assembly [19; 20; 21; 22].
We highlight that the implications of our findings extend beyond the realm of colloids. Specifically, multicomponent electrolytes and surface reactions are often used in batteries [58; 59], Faradaic desalination [60; 61], carbon dioxide reduction [62], hybrid capacitors [63], and reversible metal electrodeposition windows [64]. While multicomponent electrolytes and surface electrochemical reactions are prevalent in experimental literature, theoretical understanding of ion transport in these systems still remains elusive. In our previous work [37], we analyzed the coupled effects of electrical double layers and surface electrochemical reactions on ion transport in multicomponent electrolytes for a DC potential. This work furthers the literature on AC fields, which have important implications in electrochemical impedance spectroscopy [44] and transport in porous materials [38; 39; 40; 65]. The results outlined here are also a potential avenue for improving upon the modified Donnan potential approach [47; 60] for modeling Faradaic capacitive deionization [66].
## VIII Acknowledgements
A.G. thanks the National Science Foundation (CBET - 2238412) CAREER award for financial support. Acknowledgement is made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research. N. J. thanks the ARCS Foundation Scholarship and GAANN fellowship in Soft Materials for financial support. F. H. acknowledges the Ryland Family Graduate Fellowship for financial assistance.
## Appendix: Adapting results from Hashemi et al. [17]
At first order in \(\phi_{D}\), we directly apply the analytical results of Hashemi et al. [17]. Note that their derivation is only valid for binary electrolytes, and as such is used exclusively to compare with results for binary electrolytes. To match the formulation used in their work, we define
\[\kappa_{h}=\kappa\sqrt{z_{2}^{2}z_{1}-z_{1}^{2}z_{2}}, \tag{21a}\]
\[\omega_{h}=\frac{\omega}{\sqrt{2}\kappa^{2}}, \tag{21b}\]
\[\beta=\frac{D_{1}-D_{2}}{D_{1}+D_{2}}, \tag{21c}\]
\[\gamma=\frac{z_{1}+z_{2}}{z_{1}-z_{2}}, \tag{21d}\]
where ion 1 is the cation and ion 2 is the anion. We then write the additional parameters in their analytical solutions such that
\[\Delta=1-4\beta\omega_{h}\left(i\gamma+\beta\omega_{h}\right), \tag{22a}\]
\[s=2i\beta\omega_{h}+\sqrt{\Delta}, \tag{22b}\]
\[z=0.5\left(z_{1}-z_{2}\right), \tag{22c}\]
\[\lambda_{+}=\sqrt{\frac{1+2i\omega_{h}+\sqrt{\Delta}}{2}}, \tag{22d}\]
\[\lambda_{-}=\sqrt{\frac{1+2i\omega_{h}-\sqrt{\Delta}}{2}}. \tag{22e}\]
Continuing to build toward the forms of the first-order solutions that the authors arrive at, we write
\[\Gamma=s^{2}+2\gamma+1-\frac{1}{2\kappa_{h}}\left(\frac{(\gamma+1)(s-1)^{2}(\lambda_{-}\kappa_{h}-\tanh\lambda_{-}\kappa_{h})}{\lambda_{-}^{3}}-\frac{(\gamma-1)(s+1)^{2}(\lambda_{+}\kappa_{h}-\tanh\lambda_{+}\kappa_{h})}{\lambda_{+}^{3}}\right), \tag{23a}\]
\[A=\frac{s-1}{\lambda_{-}\kappa_{h}\cosh\left(\lambda_{-}\kappa_{h}\right)\Gamma}, \tag{23b}\]
\[\mathcal{B}=\frac{s+1}{\lambda_{+}\kappa_{h}\cosh\left(\lambda_{+}\kappa_{h}\right)\Gamma}, \tag{23c}\]
\[C=\kappa_{h}^{-1}\left(-1+\frac{A(1+\gamma)(s-1)\sinh\lambda_{-}\kappa_{h}}{2\lambda_{-}^{2}}+\frac{\mathcal{B}(1-\gamma)(s+1)\sinh\lambda_{+}\kappa_{h}}{2\lambda_{+}^{2}}\right). \tag{23d}\]
Finally, we write the first order variables
\[\hat{c}_{1}^{(1)}=-\left(A(-\gamma+s)\sinh\left(\lambda_{-}x\kappa_{h}\right)+\mathcal{B}(1+\gamma)\sinh\left(\lambda_{+}x\kappa_{h}\right)\right), \tag{24a}\]
\[\hat{c}_{2}^{(1)}=-\left(A(1+\gamma)\sinh\left(\lambda_{-}x\kappa_{h}\right)-\mathcal{B}(-\gamma+s)\sinh\left(\lambda_{+}x\kappa_{h}\right)\right), \tag{24b}\]
\[\hat{\phi}^{(1)}=-z^{-1}\left(Cx\kappa_{h}-\frac{A(1+\gamma)(s-1)\sinh\left(\lambda_{-}x\kappa\right)}{2\lambda_{-}^{2}}-\frac{\mathcal{B}(1-\gamma)(s+1)\sinh\left(\lambda_{+}x\kappa\right)}{2\lambda_{+}^{2}}\right), \tag{24c}\]
\[\hat{E}^{(1)}=-z^{-1}\left(Cx\kappa_{h}-\lambda_{-}\kappa_{h}\frac{A(1+\gamma)(s-1)\cosh\left(\lambda_{-}x\kappa\right)}{2\lambda_{-}^{2}}-\lambda_{+}\kappa_{h}\frac{\mathcal{B}(1-\gamma)(s+1)\sinh\left(\lambda_{+}x\kappa\right)}{2\lambda_{+}^{2}}\right). \tag{24d}\]
To determine the AREF and ARCF from these first-order results, we employ Eqs. (14) and (12a). Note that the remainder of the analytics in section V does not hold for asymmetric diffusivities, but Eqs. (14) and (12a) remain valid in that case.
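As an illustration of how these expressions can be evaluated in practice, the NumPy sketch below computes Eqs. (21)-(24b) for a binary electrolyte; the parameter values mirror those of Fig. 9(a) (\(D_{1}=2\), \(D_{2}=1\), \(\kappa=100\), \(\omega=1000\), \(z_{1}=1\), \(z_{2}=-1\)) but are otherwise only illustrative, and the sketch stops at the first-order concentrations since Eqs. (14) and (12a) are not reproduced in this appendix.

```python
import numpy as np

# Illustrative parameters mirroring Fig. 9(a).
kappa, omega = 100.0, 1000.0
z1, z2 = 1.0, -1.0          # cation and anion valences
D1, D2 = 2.0, 1.0           # asymmetric diffusivities

# Eq. (21): rescaled parameters.
kappa_h = kappa * np.sqrt(z2**2 * z1 - z1**2 * z2)
omega_h = omega / (np.sqrt(2.0) * kappa**2)
beta = (D1 - D2) / (D1 + D2)
gamma = (z1 + z2) / (z1 - z2)

# Eq. (22): auxiliary (generally complex) quantities.
Delta = 1 - 4 * beta * omega_h * (1j * gamma + beta * omega_h)
s = 2j * beta * omega_h + np.sqrt(Delta)
lam_p = np.sqrt((1 + 2j * omega_h + np.sqrt(Delta)) / 2)
lam_m = np.sqrt((1 + 2j * omega_h - np.sqrt(Delta)) / 2)

# Eq. (23): Gamma and the constants A and B (C is omitted; it only enters the potential).
Gamma = (s**2 + 2 * gamma + 1
         - (1 / (2 * kappa_h)) * (
             (gamma + 1) * (s - 1)**2 * (lam_m * kappa_h - np.tanh(lam_m * kappa_h)) / lam_m**3
             - (gamma - 1) * (s + 1)**2 * (lam_p * kappa_h - np.tanh(lam_p * kappa_h)) / lam_p**3))
A = (s - 1) / (lam_m * kappa_h * np.cosh(lam_m * kappa_h) * Gamma)
B = (s + 1) / (lam_p * kappa_h * np.cosh(lam_p * kappa_h) * Gamma)

# Eqs. (24a)-(24b): first-order perturbations of the ionic concentrations.
x = np.linspace(-1.0, 1.0, 2001)
c1_1 = -(A * (-gamma + s) * np.sinh(lam_m * x * kappa_h)
         + B * (1 + gamma) * np.sinh(lam_p * x * kappa_h))
c2_1 = -(A * (1 + gamma) * np.sinh(lam_m * x * kappa_h)
         - B * (-gamma + s) * np.sinh(lam_p * x * kappa_h))
```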
|
2310.00923 | Lightweight Regression Model with Prediction Interval Estimation for
Computer Vision-based Winter Road Surface Condition Monitoring | Winter conditions pose several challenges for automated driving applications.
A key challenge during winter is accurate assessment of road surface condition,
as its impact on friction is a critical parameter for safely and reliably
controlling a vehicle. This paper proposes a deep learning regression model,
SIWNet, capable of estimating road surface friction properties from camera
images. SIWNet extends state of the art by including an uncertainty estimation
mechanism in the architecture. This is achieved by including an additional head
in the network, which estimates a prediction interval. The prediction interval
head is trained with a maximum likelihood loss function. The model was trained
and tested with the SeeingThroughFog dataset, which features corresponding road
friction sensor readings and images from an instrumented vehicle. Acquired
results highlight the functionality of the prediction interval estimation of
SIWNet, while the network also achieved similar point estimate accuracy as the
previous state of the art. Furthermore, the SIWNet architecture is several
times more lightweight than the previously applied state-of-the-art model,
resulting in more practical and efficient deployment. | Risto Ojala, Alvari Seppänen | 2023-10-02T06:33:06Z | http://arxiv.org/abs/2310.00923v2 | # Enhanced Winter Road Surface Condition Monitoring with Computer Vision
###### Abstract
Winter conditions pose several challenges for automated driving applications. A key challenge during winter is accurate assessment of road surface condition, as its impact on friction is a critical parameter for safely and reliably controlling a vehicle. This paper proposes a deep learning regression model, SIWNet, capable of estimating road surface friction properties from camera images. SIWNet extends state of the art by including an uncertainty estimation mechanism in the architecture. This is achieved by including an additional head in the network, which estimates a prediction interval. The prediction interval head is trained with a maximum likelihood loss function. The model was trained and tested with the SeeingThroughFog dataset, which features corresponding road friction sensor readings and images from an instrumented vehicle. Acquired results highlight the functionality of the prediction interval estimation of SIWNet, while the network also achieved similar point estimate accuracy as the previous state of the art. Furthermore, the SIWNet architecture is several times more lightweight than the previously applied state-of-the-art model, resulting in more practical and efficient deployment.
Computer vision, convolutional neural networks, intelligent vehicles, vehicle safety
## I Introduction
Friction between the road and vehicle tyres plays a key role in defining how a vehicle should be controlled and manoeuvred in winter conditions. The impacts of friction on driving safety are substantial, directly affecting factors such as braking distance and slip angle. The friction between a vehicle tyre and the road depends on both the tyre and road surface properties. Whereas the tyre properties remain mostly static, the road surface properties can greatly vary, especially during winter in countries with below-freezing temperatures. It has been explicitly noted in the literature that accident rates are strongly affected by road surface condition [1]. In winter conditions, the road surface friction properties are mainly dependent on the amount of snow, ice, and water on the road. Quantifying these road friction properties is essential for modelling, estimating, and predicting the friction between the tyre and the road. Consequently, vehicle control needs to adapt to the prevailing road surface conditions to ensure safe operation. As the availability and development of automated driving features are on the rise, the estimation methods for road surface friction properties have an increasingly important role. Automated driving solutions must be capable of adapting to different friction conditions, modifying their control based on the environment.
Road surface condition can be analysed with several on-board methods [2, 3]. A common approach has been to utilise wheel dynamics information for the estimation task. However, these methods generally have difficulties quantifying the road surface friction properties before severe slippage has occurred. Special optical sensors have also been developed for analysing road surface condition, yet these are generally too expensive for consumer vehicle applications. Recently, computer vision solutions with typical visible light cameras have been a popular approach for the task. Computer vision approaches offer the convenience of utilising the existing windshield camera equipment. Additionally, they have the potential of assessing the road surface condition in front of the vehicle, which enables predictive actions.
This paper expands computer vision-based road surface condition monitoring in winter conditions by presenting a deep learning model, SIWNet, for the task. The regression model is developed to predict a scalar estimate for the road surface friction properties, summarising the effects of visible snow, ice, and water on tyre-road friction. Hence the name of the model, SIWNet (Snow, Ice, Water Network). Additionally, the model estimates the uncertainty of the prediction by providing a prediction interval with a multi-head architecture. Such a feature has not been previously proposed for regression-based road surface condition monitoring models. Furthermore, SIWNet has been designed with practical on-board deployment in mind, the model being several times smaller than the previous state of the art [4], yet achieving similar accuracy. Similarly to [4], SIWNet is trained in an automated manner by matching images and corresponding friction information. However, the work presented here utilises the SeeingThroughFog-dataset [5], where the friction values have been acquired with an optical road friction sensor.
The novel contributions of this work can be summarised as:
* SIWNet is the first road surface friction regression model to feature prediction intervals in the estimates.
* SIWNet is more computationally lightweight than previous state of the art.
* This paper is the first work training a road surface friction estimation computer vision model for winter conditions based on optical road friction sensor data.
## II Related Work
### _Road surface condition monitoring_
Road surface condition monitoring is essential for different vehicle applications, and the topic has been studied in the past with several approaches. Current interest in automated driving solutions has increased the importance of the field, as vehicle control algorithms require information of road surface
condition to properly assess the situation. Road surface condition monitoring is especially important in winter conditions, where the road surface friction properties greatly vary. The existing methods can be divided to contact and non-contact based methods.
Contact methods are not analysed here in detail, as this work is focused on computer vision methodology. For a more thorough overview of contact-based methods, the review by Acosta _et al._[2] is recommended. The fundamentals of contact-based estimation methods are still briefly presented here. Contact-based methods generally measure the actual friction between the tyre and the road. The road surface condition can consequently be estimated from this information.
Of the contact-based methods, slip-based methods are the most common. By measuring wheel rotation information and inertial measurement unit readings, the friction between the tyre and the road can be estimated [6]. Utilising this information, the road surface condition can be defined based on the friction characteristics. In a commercial vehicle, this information is readily available in the anti-lock braking unit and electronic stability control. However, slip-based methods typically cannot accurately estimate the road surface conditions until severe slippage has already occurred. The review by Acosta _et al._[2] notes that slip-based approaches are typically considered inadequate for reliably improving the functionality of advanced driver assistance systems. Other contact-based methods for road surface monitoring are based on vibration. Low frequency methods can utilise signals such as the vehicle rotation speed [7]. However, the approaches rely largely on slip-slope assumptions, and consequently lack robustness. High frequency methods have yielded impressive results, yet these methods rely on additional sensor equipment. Works on the topic have utilised microphones installed on the vehicle to monitor the tyres [8]. This might not be a commercially feasible solution due to the burden of additional sensor installations.
To enable alternative means for estimating road surface condition, a number of non-contact methods have been developed. Non-contact methods typically exhibit different operation characteristics and attributes compared to the contact-based methods. They offer alternative ways for the measurement, or they could be fused with contact-based methods. A review of non-contact friction estimation has been presented by Ma _et al._[3]. The non-contact approaches can be roughly divided into methods that utilise special optical sensors, and methods which utilise computer vision algorithms to process images captured with traditional visible light cameras. Additionally, there have also been some studies exploring road surface condition monitoring with automotive radars [9].
With dedicated optical sensors, non-contact approaches commonly utilise infrared spectroscopy [10]. The approach is based on the different reflectance characteristics of water, ice, and snow. This measurement approach is commonly utilised, and multiple commercial products applying the method are available on the market [11, 12, 13]. Another optical technique for road surface condition monitoring is based on analysing polarisation of light [14].
Computer vision applied on regular visible light cameras has been a popular topic for non-contact road surface condition estimation. The approach is lucrative from a practical point-of-view, since modern vehicles are equipped with forward-facing cameras. Review of computer vision-based estimation has been prepared by Wu _et al._[15]. Generally, computer vision approaches utilise machine learning models such as convolutional neural networks (CNNs) to analyse images of the road.
Several works utilising computer vision have applied classification techniques to assess the road surface condition. Nolte _et al._[16] proposed applying CNNs to perform this classification task. They trained ResNet50 [17] and InceptionV3 [18] models to recognise six distinct categories of road surface: asphalt, dirt, grass, wet asphalt, cobblestone, and snow. Similar studies have been conducted by Sabanovic _et al._[19], who developed a CNN capable of classifying the road pavement type (asphalt, cobblestone, gravel) as well as the surface condition (dry, wet). The effectiveness of the developed road surface monitoring system was demonstrated with vehicle braking tests. The tests highlighted that the stopping distance was shorter when utilising an adaptive anti-lock braking system control strategy, which was tuned based on the classification results. To extend the capabilities of road surface classification, Wang _et al._[20] proposed applying a segmentation CNN to perform the classification task. Their classification task featured a total of nine different pavement and road surface condition combinations, including categories from winter conditions.
To further enhance the development of computer vision-based road surface monitoring in summer conditions, Cordes _et al._ published an open classification dataset called RoadSaw [21]. They collected the dataset in an automated manner, mounting an optical road surface monitoring sensor as well as a camera on a vehicle. As a result, a large dataset of images in realistic road conditions was captured, along with the corresponding road surface information. The dataset features three different pavement types (asphalt, cobblestone, concrete), which have four different surface conditions (dry, damp, wet, very wet). The authors evaluated the performance of a MobileNetV2 [22] classifier on the data. They also proposed adding deterministic uncertainty quantification [23] functionality to the network, allowing assessment of the uncertainty of the classification predictions.
Classification-based road surface monitoring has also been extended to model the road surface in finer spatial detail. In the camera view, several road surface conditions may be visible simultaneously, such as patches of snow on an asphalt road. Providing a single classification for the entire visible road may result in inadequate assessment of the prevailing situation. Roychowdhury _et al._[24] proposed a two-stage approach for detailed analysis of the road surface in winter conditions. After classifying the overall road surface condition with a CNN, they split the image of the road into a grid, in which each cell was classified separately.
The resolution of road surface condition estimation has also been extended by applying regression models to the task, instead of utilising classification models. In terms of friction-related information, regression models allow for more accurate representation of the road surface properties. This is due to the
model predicting a continuous value, instead of utilising a discrete number of categories labelled with certain values. Vosahlik _et al._[4] proposed automated training of a CNN regression model based on corresponding friction information derived from a slip-based contact method. Based on data acquired with a sub-scale car model, they created a dataset of matching friction values and images, which also included samples from winter conditions. They trained a ResNet50 network on the data to perform the regression task. Regression-based road surface condition estimation was also developed by Du _et al._[25], who studied the problem in summer conditions. They collected road surface condition information with a commercial contact-based monitoring system, utilising a trailer pulled behind the vehicle. With matching image information, they trained ResNet50 and InceptionV3 [18] regression models to predict the road surface friction properties. The vision-based estimators were fused with another neural network, which processed the vehicle dynamics information. The fusion model seemed to offer clear benefits, as several key performance values were improved.
### _Prediction intervals for deep learning regression models_
Uncertainty assessment has been a key topic in deep learning due to the opaque nature of data-driven models. In order to evaluate the reliability of the estimates produced by deep learning models, several approaches have been proposed to quantify related uncertainties. An extensive review on the topic has been written by Abdar _et al._[26]. The most comprehensive uncertainty quantification methodologies include Bayesian neural networks and ensemble techniques. Bayesian neural networks model the network weights as distributions, allowing the predictions to be accurately modelled as posterior distributions. Ensemble models rely on utilising multiple neural network models to process the input, and determining the uncertainty based on the outputs of the networks. Bayesian and ensemble methods have been proven to generally provide reliable uncertainty information. However, these methods typically require immense computational resources to operate, and applying them is often not practically feasible.
A common approach for assessing uncertainty in regression problems is the estimation of prediction intervals [27]. Several approaches have been proposed for generating prediction intervals with deep learning regression models, including the previously mentioned Bayesian and ensemble methods. However, more lightweight approaches for estimating prediction intervals have also been developed. Typically this has been achieved by modifying the neural network architecture to feature an additional output node. In order to quantify uncertainty, Nix and Weigend [28] proposed adding a separate output node with its own hidden layers to a fully connected network architecture. The additional output node was responsible for estimating the variance of each prediction on each forward pass, whereas the other output node remained responsible for producing the point estimate. Modelling each network prediction as a probability distribution, the network could be trained with a maximum likelihood loss function. In such architecture, uncertainty is quantified by the value of the variance output, which can also be used to generate prediction intervals. Somewhat similar prediction interval estimation has been proposed by Khosravi _et al._[29]. Their approach was based on two output nodes, which were responsible for predicting the lower and upper bound of the prediction interval. A special loss function was used for training, which determined the target coverage of the prediction interval.
### _Research gap_
This paper aims to enhance the existing state of the art of winter road surface condition evaluation. A novel CNN architecture, SIWNet, is proposed for the task of computer vision-based estimation of road friction properties. Similarly to the work of Vosahlik _et al._[4], SIWNet is implemented as a regression model.
SIWNet expands state of the art by including an additional prediction head in the network architecture for uncertainty quantification. This enables representation of the model output as a prediction interval. The approach is based on the work of Nix and Weigend [28]. Uncertainty quantification is a vital feature for computer vision-based road surface monitoring systems, as visual estimation of road surface friction properties is bound to feature varying levels of uncertainty. Cordes _et al._[21] have previously proposed an uncertainty estimation approach in their classification-based work, yet this is the first paper to propose such feature for a regression approach.
In addition to including an uncertainty quantification mechanism, SIWNet is also computationally extremely lightweight. The findings of this paper highlight that winter road surface condition estimation does not require an extensively large neural network architecture. This is a notable benefit, considering the generally limited vehicle on-board computational resources.
## III Methods
### _Problem formulation and dataset_
The goal of the research was to develop a model capable of assessing the road surface friction properties based on images captured from a typical vehicle windshield camera. To acquire relevant data for training such a model, the publicly available SeeingThroughFog dataset [5] was utilised. The dataset has been gathered by driving an instrumented vehicle in central European and Nordic countries during winter. Part of the dataset has also been recorded in the springtime, to include summer-like conditions. The instrumented vehicle contained a forward-facing camera, as well as an optical road friction sensor. The sensor setup is illustrated in Figure 1. In this paper, the camera images and corresponding road friction sensor readings were utilised for developing a neural network model, SIWNet, for assessing the road surface condition. SIWNet is trained to process an image of the road, and predict a value corresponding to the road friction sensor reading.
The optical friction measurement unit used in the dataset is a prototype from a widely recognised manufacturer, seemingly similar to those of the manufacturer's other models [11, 12, 13]. Similarly to the commercial equivalents of the sensor, the unit measures the amount of snow, ice, and water on the road
surface. Using the manufacturer's software, these road surface friction properties are summarised into a single factor, _grip factor_, within the range 0.09...0.82. This factor effectively indicates the slipperiness of the road. Here, this grip factor was normalised to the range 0.00...1.00, and called _friction factor_, \(\hat{f}\). It should be noted that the friction factor is not the friction coefficient between the tyre and the road, as determining this value depends on the tyre properties. The friction factor represents a value that can be used to estimate the slipperiness of the road, and consequently the actual friction coefficient in case the relevant tyre parameters are known.
The images in the dataset have a resolution of 1920x1024 pixels. To focus the analysis on the most relevant portion of the image, the road directly ahead, a predefined static section of the images was cropped and transformed to bird's-eye-view. The predefined section was chosen manually based on a single image, and the bird's-eye-view transformation was achieved by stretching the images to a square. This process is visualised in Figure 2. These pre-processing steps are similar to those presented in [4].
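A minimal sketch of this cropping and perspective warping is given below using OpenCV; the corner coordinates are hypothetical placeholders, since the predefined section was selected manually by the authors and its exact coordinates are not reported.

```python
import cv2
import numpy as np

# A camera frame of 1920x1024 pixels; a blank array stands in for a real image here.
img = np.zeros((1024, 1920, 3), dtype=np.uint8)
size = 324  # side length of the square image fed to the network

# Hypothetical corners of the predefined road section ahead
# (top-left, top-right, bottom-right, bottom-left).
src = np.float32([[820, 620], [1100, 620], [1500, 1000], [420, 1000]])
dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])

# Stretch the trapezoidal road patch into a square, approximating a bird's-eye-view.
M = cv2.getPerspectiveTransform(src, dst)
bev = cv2.warpPerspective(img, M, (size, size))
```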
The used dataset contained a total of 4330 samples, as this is the number of measurements in the SeeingThroughFog-dataset which contain readings from the road friction sensor. These samples were collected in different locations on 12 different days. Figure 3 displays the number of samples collected on each date. The distribution of the friction factor values in the dataset is visualised in Figure 4. Image samples from the dataset with corresponding friction factor readings are provided in Figure 5.
### _Model architecture_
The architecture of SIWNet consists of a feature backbone, as well as a point estimate head and a prediction interval head. SIWNet architecture is presented in detail in Figure 6. The feature backbone is responsible for processing relevant features from the images. Based on this information, the point estimate head outputs the predicted friction factor \(\hat{f}\). The prediction interval head is responsible for assessing the uncertainty related to the point estimate. Based on the features and the point estimate, the prediction interval head outputs a predicted standard deviation \(\hat{\sigma}\), which is utilised to establish a prediction interval around the point estimate.
The feature backbone is based on the ResNet [17] architecture, applying the same basic residual building blocks. Contrary to typical ResNet implementations, each block is used only once. This design results in an extremely lightweight architecture, ensuring practical applicability in embedded on-board systems. In the point estimate head, a single fully connected layer is applied, which is typical for regression tasks. A sigmoid activation function was added to the final output, as the friction factor values were bound between 0 and 1. The prediction interval head is also constructed of fully connected layers. These layers analyse the features provided by the feature backbone, as well as the prediction provided by the point estimate head. At the end of the prediction interval head, a sigmoid activation was added to enhance the stability of the output. This conveniently limited the predicted standard deviation \(\hat{\sigma}\) to positive numbers. The upper limit of 1 for \(\hat{\sigma}\) was
Fig. 1: Illustration of the sensor installations, with the camera at the windshield and the road friction sensor at the bumper.
Fig. 3: Number of samples per date in the dataset.
Fig. 2: Transformation of the area in front to bird’s-eye-view.
deemed reasonable, since with the probabilistic model applied to train the network, this corresponded to a nearly uniform distribution within the range 0...1.
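The two-headed structure can be illustrated with the schematic PyTorch module below; the backbone and layer widths are simplified placeholders rather than the exact SIWNet dimensions of Figure 6, and the sketch only mirrors the flow of features, point estimate, and predicted standard deviation described above.

```python
import torch
import torch.nn as nn

class TwoHeadedRegressor(nn.Module):
    """Schematic two-headed regressor producing a point estimate and a standard deviation."""
    def __init__(self, feat_dim=64):
        super().__init__()
        # Lightweight convolutional backbone (placeholder for the ResNet-style blocks).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Point estimate head: fully connected layer with a sigmoid-bounded output.
        self.point_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())
        # Prediction interval head: fed with the features and the point estimate.
        self.pi_head = nn.Sequential(
            nn.Linear(feat_dim + 1, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.backbone(x)
        f_hat = self.point_head(feats)                                 # friction factor in (0, 1)
        sigma_hat = self.pi_head(torch.cat([feats, f_hat], dim=1))     # predicted std in (0, 1)
        return f_hat.squeeze(1), sigma_hat.squeeze(1)

model = TwoHeadedRegressor()
f_hat, sigma_hat = model(torch.randn(2, 3, 324, 324))
```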
### _Loss function for prediction interval head_
The prediction interval head of SIWNet is trained to quantify the uncertainty related to the prediction of the point estimate head. Based on the quantified uncertainty, a prediction interval can be generated. In order to perform this task, the output of SIWNet is modelled as a truncated normal distribution. The goal of the training process is to maximise the likelihood of the training labels with regard to the predicted distributions. Previous work has applied similar methodology with a regular normal distribution [28]. A truncated normal distribution of a random variable \(x\) has a probability density function (PDF) of the form [30]
\[p(\mu,\sigma,a,b;x)=\frac{\phi(\frac{x-\mu}{\sigma})}{\Phi(\frac{b-\mu}{\sigma} )-\Phi(\frac{a-\mu}{\sigma})} \tag{1}\]
where \(\mu\) and \(\sigma\) denote the mean and standard deviation of the underlying normal distribution, respectively. The lower and upper truncation bounds are denoted by \(a\) and \(b\), respectively. The truncated normal distribution PDF representation is based on the underlying normal distribution PDF, defined as
\[\phi(\frac{x-\mu}{\sigma})=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^{2}}{2 \sigma^{2}}} \tag{2}\]
as well as the underlying normal distribution cumulative distribution function, defined as
\[\Phi(\frac{b-\mu}{\sigma})=\frac{1}{2}(1+erf(\frac{b-\mu}{\sigma\sqrt{2}})), \tag{3}\]
where \(erf(\cdot)\) is the error function. Consequently, the negative log-likelihood of the truncated normal distribution has the form
\[-\ln p(\mu,\sigma,a,b;x)=\ln\sigma+\frac{(\mu-x)^{2}}{2\sigma^{2}}+\ln{\left(erf\left(\frac{\mu-a}{\sigma\sqrt{2}}\right)-erf\left(\frac{\mu-b}{\sigma\sqrt{2}}\right)\right)}. \tag{4}\]
The training process of the prediction interval head of SIWNet is based on maximising the likelihood of the corresponding ground truths and predicted truncated normal distributions. Consequently, this means minimising the negative log-likelihood with the predictions. The utilised loss function for a training batch has the form
\[\mathcal{L}=\sum_{i=1}^{n}-\ln p(\hat{f}_{i},\hat{\sigma}_{i},a,b;f_{i}) \tag{5}\]
where \(n\) denotes the number of samples in the batch and \(f\) denotes the ground truth friction factor. The underlying normal distribution mean is the predicted friction factor \(\hat{f}\) from the point estimate head, whereas the predicted standard deviation \(\hat{\sigma}\) is the output of the prediction interval head. To improve stability, \(\hat{\sigma}\) is thresholded to a minimum value of \(1\cdot 10^{-4}\) when computing the loss.
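A possible PyTorch implementation of the batch loss in Equation 5, with the truncation bounds set to \(a=0\) and \(b=1\) and the stated \(1\cdot 10^{-4}\) threshold on \(\hat{\sigma}\), is sketched below; constant terms of the negative log-likelihood are dropped, and the snippet is illustrative rather than the authors' released code.

```python
import math
import torch

def truncated_normal_nll(f_hat, sigma_hat, f_true, a=0.0, b=1.0):
    """Batch loss of Eq. (5): summed truncated-normal negative log-likelihood (up to constants)."""
    sigma = sigma_hat.clamp(min=1e-4)                       # stability threshold from the text
    nll = torch.log(sigma) + (f_hat - f_true) ** 2 / (2 * sigma ** 2)
    # Normalisation of the truncation to [a, b], cf. Eq. (4).
    trunc = (torch.erf((f_hat - a) / (sigma * math.sqrt(2.0)))
             - torch.erf((f_hat - b) / (sigma * math.sqrt(2.0))))
    return (nll + torch.log(trunc.clamp(min=1e-12))).sum()

# Example usage with random predictions and targets in [0, 1].
loss = truncated_normal_nll(torch.rand(32), torch.rand(32), torch.rand(32))
```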
### _Training and testing_
For training SIWNet and conducting the presented experiments, the following steps were taken. The utilised dataset was randomly divided into train-validation-test sets with a respective 70%-15%-15% split. When splitting the data, timestamps were utilised to ensure that data samples gathered from the same location were not included in different sets. Hyperparameters were optimised by training the network on the training set and finding the best result on the validation set. For the evaluation on the test set, the model was re-trained on a combination set containing both the training and validation sets.
During the training, the feature backbone and point estimate head were first trained for 60 epochs utilising regular mean squared error as the loss function. After these parts of the network were trained, their weights were frozen. Afterwards, the prediction interval head was trained for 60 epochs with the loss function presented in Equation 5. Dropout with a probability of 0.5 was applied when feeding the feature backbone output to the prediction interval head during training.
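One way to realise this two-stage procedure is sketched below with placeholder sub-modules; after the first stage, the backbone and point estimate head are frozen so that only the prediction interval head receives gradient updates.

```python
import torch.nn as nn

# Placeholder sub-modules standing in for the actual SIWNet parts.
model = nn.ModuleDict({
    "backbone": nn.Linear(8, 8),
    "point_head": nn.Linear(8, 1),
    "pi_head": nn.Linear(9, 1),
})

# Stage 1 (backbone + point estimate head, MSE loss) is assumed done; freeze those weights.
for part in ("backbone", "point_head"):
    for p in model[part].parameters():
        p.requires_grad_(False)

# Stage 2 optimises only the prediction interval head with the loss of Equation 5.
trainable = [p for p in model.parameters() if p.requires_grad]
```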
Fig. 4: Friction factor values in the dataset, with a zoomed view providing a clearer depiction of the under-represented values.

Training was carried out with a batch size of 32, and stochastic gradient descent with a momentum of 0.9 was used as the optimisation method. During training, the learning rate was decayed with a step-based scheduler. Every twenty epochs the learning rate was reduced to one tenth of the previous value. As for data augmentation during training, the images were randomly flipped horizontally as well as rotated with a value from [-4,4] degrees. Furthermore, the pixel values were slightly scaled by random color jitter in the range [0.9, 1.1]. For both training and testing, images were reshaped to 324x324 pixels before being fed to the network. Additionally, the pixel values were normalised with the mean and standard deviation of the pixel values in the training set.
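Expressed with torchvision-style transforms, these augmentation and optimisation settings could look roughly as follows; the normalisation statistics are placeholders for the training-set values, the model is a stand-in, and the learning rate and weight decay shown correspond to the grid-search values reported later for the point estimate training stage.

```python
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((324, 324)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=4),                       # rotation within [-4, 4] degrees
    transforms.ColorJitter(brightness=(0.9, 1.1)),              # slight scaling of pixel values
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.25, 0.25, 0.25]),  # placeholder statistics
])

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 324 * 324, 1))  # stand-in module
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)      # tenth every 20 epochs
```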
The presented experiments demonstrate comparison of SIWNet to the ResNet50 model previously applied in the literature [4]. The same training and testing procedures were applied on the ResNet50 model, except for steps related to the prediction interval head, which the ResNet50 model does not include. A sigmoid activation was also added to the output of the ResNet50 model, since this was noted to boost performance. Both models were implemented on the PyTorch deep learning framework [31]. The number of trainable parameters in the models as well as the floating point operations executed during inference were analysed with the ptflops tool [32].
In order to benchmark the prediction intervals and the probabilistic properties of the predictions in the experiments, interval score and continuous ranked probability score (CRPS) were utilised [33]. The 90% prediction interval was used for defining the interval score. For constructing the prediction interval based on the truncated normal distribution output of SIWNet, the 90% interval of the distribution with the highest likelihood was used. Since the compared ResNet50 model produces only point estimates, there was no straightforward approach to define interval scores for the model. Thus, the ResNet50 model was given a static prediction interval surrounding the point estimate. The boundaries were set at a distance from the point estimate, which corresponded to the 90% error threshold \(e_{90\%}\) on the validation set. The 90% error was defined as the value below which 90% of absolute errors on the validation set were located. The prediction interval boundaries were clamped by the plausible friction factor values, 0 to 1.
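For reference, the snippet below sketches the standard interval score for a central 90% prediction interval (lower values are better), together with one way of extracting that interval from a truncated normal prediction using SciPy; the numerical values are illustrative only.

```python
import numpy as np
from scipy.stats import truncnorm

def interval_score(lower, upper, y, alpha=0.1):
    """Interval score for a central (1 - alpha) prediction interval; lower is better."""
    width = upper - lower
    penalty_low = (2.0 / alpha) * np.maximum(lower - y, 0.0)
    penalty_high = (2.0 / alpha) * np.maximum(y - upper, 0.0)
    return width + penalty_low + penalty_high

# 90% interval of a truncated normal prediction bounded to the friction factor range [0, 1].
f_hat, sigma_hat = 0.6, 0.1                                  # illustrative prediction
a_std, b_std = (0.0 - f_hat) / sigma_hat, (1.0 - f_hat) / sigma_hat
lower, upper = truncnorm.interval(0.9, a_std, b_std, loc=f_hat, scale=sigma_hat)
print(interval_score(lower, upper, y=0.45))
```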
## IV Results

In addition to prediction accuracy, the model size and computational load were also analysed to determine the applicability of the model in embedded on-board use cases. The accuracy of SIWNet was evaluated on the test set in terms of point estimates, as well as prediction intervals. Samples of SIWNet predictions plotted next to corresponding test set images are presented in Fig. 7. Each prediction plot features the point estimate of the friction factor \(\hat{f}\), as well as the truncated normal distribution and prediction interval estimated by the prediction interval head. Results of SIWNet were compared to those of ResNet50, which has in previous literature been applied for a similar regression task.
### _Model size and computational load_
The neural network models were evaluated in terms of computational load by analysing the number of trainable parameters in the architectures, as well as the floating point operations (FLOPs) required to perform inference on a single input image. The SIWNet architecture was designed to require minimal computational resources, as evident in the results presented in Table I.
Compared to the previously applied ResNet50 model, SIWNet features several times fewer parameters. Additionally, the SIWNet model requires notably fewer FLOPs to process an image. The results highlight that compared to ResNet50, SIWNet requires less computational resources to operate. This indicates that SIWNet is well-fitted for on-board utilisation with limited embedded hardware.
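Assuming the complexity analysis tool referenced above is the ptflops package, the reported figures could be reproduced along the following lines; the model below is a small stand-in rather than SIWNet itself.

```python
import torch.nn as nn
from ptflops import get_model_complexity_info

model = nn.Sequential(                       # stand-in network; SIWNet would be passed here instead
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
macs, params = get_model_complexity_info(model, (3, 324, 324), as_strings=True,
                                         print_per_layer_stat=False)
print(macs, params)
```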
### _Point estimate head performance_
Training the feature backbone and the point estimate head was the initial step of training SIWNet. Hyperparameters were tuned with grid search, resulting in initial learning rate and weight decay values of 0.1 and \(1\cdot 10^{-3}\), respectively. Identical hyperparameter tuning was performed for the ResNet50, resulting in the same optimal parameter values. Table II reports the point estimate accuracies achieved by the networks when predicting friction factors on the test data. Accuracies are reported in terms of mean absolute error (MAE) and root-mean-square error (RMSE).

Fig. 6: SIWNet architecture, with tensor sizes reported for processing a single image of 324x324 pixels. Each block shows the number of features in the output.

Fig. 7: Samples of SIWNet predictions on the test set.
The presented results highlight that SIWNet achieved a nearly identical point estimate accuracy as ResNet50. Considering the minimal size of the SIWNet, this indicates that a larger architecture does not offer added benefit for the prediction task.
### _Prediction interval head performance_
The key feature of SIWNet is its capability of assessing the uncertainty of the friction factor prediction, enabled by the prediction interval head. The prediction interval head was trained while keeping the feature backbone and point estimate head frozen. Hyperparameters were optimised with grid search, tuning the initial learning rate, weight decay, as well as the number of neurons in the fully connected layers of the prediction head. Based on the optimisation, the initial learning rate and weight decay were set at values of \(5\cdot 10^{-4}\) and \(1\cdot 10^{-3}\), respectively. Acquired average interval score results are presented in Table III for SIWNet and ResNet50. As described previously, the prediction interval for ResNet50 was formulated based on the 90% error on the validation set.
Based on the comparison, SIWNet clearly achieved a more favourable average interval score. This indicates that the prediction interval head was capable of assessing the uncertainty related to the friction factor predictions of the point estimate head.
As another test to evaluate the performance of the prediction interval head, the CRPS metric was used to assess the quality of the distributions predicted by SIWNet. The point estimates of ResNet50 were scored on the metric for comparison. For point estimates, the CRPS is equal to the MAE score.
The acquired results highlight that the probabilistic forecasts of SIWNet were capable of more accurate prediction than the point estimates of ResNet50. This indicates that the prediction interval head of SIWNet did learn a functional strategy for assessing the uncertainty of the friction factor estimates.
### _Ablation study_
In order to ensure that the design decisions behind the SIWNet architecture were sensible and beneficial for the performance, an ablation study was carried out. Similarly to the previously presented results, the ablation study analysed the point estimate head and prediction interval head separately.
The point estimate head was modified by removing the sigmoid activation, and the related training and testing procedures were repeated. Identical actions were taken with the ResNet50, removing the sigmoid activation from its output. During testing, the outputs of the networks were clamped between 0 and 1. Resulting accuracies on the test set are presented in Table V. Based on the results, it is evident that the sigmoid activation was a beneficial addition for both networks.
The ablation study of the prediction interval head also investigated the effect of the sigmoid activation. The sigmoid activation was removed from the prediction interval output, and training and testing were repeated. In another test, the prediction interval head was trained and tested without using dropout regularisation on the feature backbone output. Additionally, the SIWNet model was studied by training the model without the proposed technique of pretraining the feature backbone and point estimate head and freezing their weights. Instead, the whole network was optimised simultaneously utilising the loss function presented in Equation 5. Finally, the overall efficacy of the prediction interval head was evaluated by formulating a static prediction interval around the point estimates based on the 90% error on the validation set, similarly to the prediction interval used for the ResNet50 predictions. The ablations were evaluated with the interval score, and results from the tests are presented in Table VI. All tests concluded that the originally proposed SIWNet architecture achieved better interval scores.
## V Discussion
SIWNet was capable of producing point estimates with accuracy equivalent to that of ResNet50. This is an impressive result, considering that the model has approximately 79% fewer parameters, and required roughly 77% fewer FLOPs to process an input image. The presented experiments also highlight that SIWNet was capable of successfully assessing the uncertainty related to its friction factor predictions. This was evident in the CRPS as well as interval score results in Tables IV and III. However, exact quantification of the prediction interval capabilities of SIWNet is difficult, since only a naive method was used for comparison. The static prediction interval applied with ResNet50 point estimates was by no means an optimal method for uncertainty quantification. However, since no previous works applying uncertainty quantification to road surface friction regression exist, there were no clear alternatives for comparison. The implementation of SIWNet is published as open source, and future works on the topic are encouraged to compare their approaches to SIWNet.
Concerning the training procedure of SIWNet, the chosen approach was likely not optimal. During training of the prediction interval head with the loss function presented in Equation 5, the point estimate head as well as the feature backbone were frozen. This was done to stabilise the learning process, as the loss function can produce quite extreme values. The result of a direct training approach was demonstrated in the ablation study. However, the training stability comes at the cost of not finding optimal weights in the feature backbone for the uncertainty estimation. Finding a strategy to optimise the feature backbone accordingly might result in better uncertainty quantification and prediction interval estimates.
Regarding the validity of the presented results overall, some unreliability was likely caused by the used dataset and its characteristics. Overall, a larger dataset would provide more definitive results, as the models could learn richer representations from data, and the testing would provide a more thorough investigation of the prediction capabilities. In addition, the used dataset contained a clear over-representation of certain values, as visible in Figure 4. A more balanced dataset would likely result in better learning results of the detectors, as an unbalanced dataset can result in bias in the models. These factors may have contributed to the SIWNet and ResNet50 models achieving such similar point estimate accuracies. Increased variation in the used dataset might have resulted in more notable differences in the prediction results.
Another minor disadvantage of the used dataset was the fact that the road friction sensor readings did not exactly correspond to the visible road in the camera view. This can be seen in Figure 1, which illustrates the sensor placement. The road friction sensor was measuring the road area directly below the bumper of the vehicle. This specific road area was not actually visible in the camera view simultaneously, as the camera was monitoring the road slightly ahead of the bumper. Consequently, in order to use the data, an assumption had to be made that the road friction properties were the same below the bumper and in the visible road area. If full time series of the recordings were available, this issue could be mitigated by matching the sensor reading to a previously captured image. However, the assumption of the same road condition below the bumper and slightly ahead in the camera view should hold most of the time. The effects of this drawback were likely minimal in the presented results.
Perhaps a more impactful feature of the used dataset was the utilisation of a single road friction sensor reading for determining the ground truth value for an entire image. The road friction sensor measured only a single point on the road, whereas the camera image captured a large portion of the road ahead. In some scenarios, the visible road conditions may have varied greatly within the image. For example, the road may have been partially covered in snow, while also featuring tyre tracks with clear asphalt or water. This naturally led to a certain degree of ambiguity and noise in the data, as the ground truth value did not always fully represent the information in the image. This was an inherent problem in the dataset, which likely resulted in the models learning approximate solutions that on average provided the highest scoring results despite the noise in the data. However, despite this limitation, similar datasets with limited ground truth coverage have been previously successfully applied in road condition monitoring development [21, 4]. An optimal dataset for road surface friction estimation purposes should include several ground truth labels for the visible road area, allowing more fine-grained estimation. This could be achieved, for example, by equipping a vehicle with several measurement units. To the best of the authors' knowledge, no such public dataset unfortunately exists.
It should also be noted that the utilised dataset, and consequently the trained models, only consider road friction properties related to snow, ice, and water. This was due to the focus on winter conditions, in which these factors largely determine the overall friction properties of the road. Consequently, analysis of road surface pavement type was ignored, which can be a drawback in certain road conditions. For better generalisability, future development of SIWNet could aim to include the road pavement type as a factor in the analysis.
## VI Conclusion
This work enhanced camera-based winter road condition monitoring by presenting the SIWNet model. Based on an image of the road, the deep learning regression model predicts a friction factor, which summarises the friction properties of the visible road. SIWNet also includes built-in uncertainty quantification capabilities, allowing the model to generate prediction intervals instead of traditional point estimates. Due to this feature, SIWNet enables robust road condition monitoring via computer vision. This is a key finding, as reliable road condition monitoring is vital for proper tuning of controllers in automated vehicle applications. Additionally, the SIWNet model is computationally lightweight, allowing convenient implementation of the network in embedded on-board systems.
Future research efforts on the topic should focus on more fine-grained spatial analysis of winter road surfaces. Computer vision solutions for road condition monitoring could be extended to pixel-level analysis of the road surface. Furthermore,
spatio-temporal data analysis might also provide clear benefits to the accuracy at which road condition can be monitored. Integrating such features to the SIWNet architecture could allow for even more reliable modelling and estimation of the road surface conditions in the future. Considering the wide variety of winter road conditions and their effect on optimal automated driving strategies, additional research efforts should be concentrated on developing increasingly robust solutions for these demanding circumstances.
In order to improve the reliability of automated driving solutions, the technology must be capable of operating in all weather and road conditions. Proper situational awareness of vehicles in adverse conditions is a key factor in enabling safe and robust future automated vehicle solutions. SIWNet advances the road condition monitoring capabilities of vehicles in winter conditions, potentially enhancing the operation of automated driving functionalities and bringing reliable automated driving functions closer to reality.
## Acknowledgments
The authors wish to acknowledge the funding provided by Henry Ford Foundation Finland.
|
2306.09987 | Transforming Observations of Ocean Temperature with a Deep Convolutional
Residual Regressive Neural Network | Sea surface temperature (SST) is an essential climate variable that can be
measured via ground truth, remote sensing, or hybrid model methodologies. Here,
we celebrate SST surveillance progress via the application of a few relevant
technological advances from the late 20th and early 21st century. We further
develop our existing water cycle observation framework, Flux to Flow (F2F), to
fuse AMSR-E and MODIS into a higher resolution product with the goal of
capturing gradients and filling cloud gaps that are otherwise unavailable. Our
neural network architecture is constrained to a deep convolutional residual
regressive neural network. We utilize three snapshots of twelve monthly SST
measurements in 2010 as measured by the passive microwave radiometer AMSR-E,
the visible and infrared monitoring MODIS instrument, and the in situ Argo
dataset ISAS. The performance of the platform and success of this approach is
evaluated using the root mean squared error (RMSE) metric. We determine that
the 1:1 configuration of input and output data and a large observation region
is too challenging for the single compute node and dcrrnn structure as is. When
constrained to a single 100 x 100 pixel region and a small training dataset,
the algorithm improves from the baseline experiment covering a much larger
geography. For next discrete steps, we envision the consideration of a large
input range with a very small output range. Furthermore, we see the need to
integrate land and sea variables before performing computer vision tasks like
those within. Finally, we see parallelization as necessary to overcome the
compute obstacles we encountered. | Albert Larson, Ali Shafqat Akanda | 2023-06-16T17:35:11Z | http://arxiv.org/abs/2306.09987v1 | Transforming Observations of Ocean Temperature with a Deep Convolutional Residual Regressive Neural Network
###### Abstract
Sea surface temperature (SST) is an essential climate variable that can be measured via ground truth, remote sensing, or hybrid "model" methodologies. Here, we celebrate SST surveillance progress via the application of a few relevant technological advances from the late 20th and early 21st century. We further develop our existing water cycle observation framework, Flux to Flow (F2F), to fuse AMSR-E and MODIS into a higher resolution product with the goal of capturing gradients and filling cloud gaps that are otherwise unavailable. Our neural network architecture is constrained to a deep convolutional residual regressive neural network. We utilize three snapshots of twelve monthly SST measurements in 2010 as measured by the passive microwave radiometer AMSR-E, the visible and infrared monitoring MODIS instrument, and the in situ Argo dataset ISAS. The performance of the platform and success of this approach is evaluated using the root mean squared error (RMSE) metric. We determine that the 1:1 configuration of input and output data and a large observation region is too challenging for the single compute node and dcrrnn structure as is. When constrained to a single 100 x 100 pixel region and a small training dataset, the algorithm improves from the baseline experiment covering a much larger geography. For next discrete steps, we envision the consideration of a large input range with a very small output range. Furthermore, we see the need to integrate land and sea variables before performing computer vision tasks like those within. Finally, we see parallelization as necessary to overcome the compute obstacles we encountered.
## 1 Introduction
Water is both an essential and abundant resource on earth; its availability and quality are critical for sustaining life and ecosystems. Though it is abundant, the majority of earth's water, about 97%, is found in the ocean, while the remaining 3% is freshwater found in glaciers, lakes, rivers, and underground aquifers on land. Water is not only crucial for sustaining life, but also plays a vital role in shaping the planet's climate and weather patterns. One significant but understudied climate variable that hydrologists must consider is sea surface temperature. SST has a profound impact on the water cycle, specifically evaporation [1]. Over-ocean anomalies like atmospheric rivers can lead to both anomalous and enormous quantities of meteorological water falling on land [2]. The same is true in reverse, as the failure of the rains in India is influenced not only by orographic factors from the towering Himalayan mountain range but also by conditions in the Bay of Bengal and the nearby Indian Ocean [3]. Understanding the relationship between sea surface temperature and the water cycle is essential for predicting and adapting to extreme events, managing water resources, and sustaining the global ecosystem [4].
Evidence shows that human beings through industrialization have modified and are continuing to significantly modify the climate. However, the modern cause for concern is the rate at which our climate has changed rather than the Boolean of has it or has it not. Measurements of carbon dioxide (CO\({}_{2}\)) tell the story: detected values of atmospheric CO\({}_{2}\) have increased by 50% of the starting value at the advent of industrialization [5, 6]. Invariant to latitude and longitude, the impacts are felt everywhere. Earth's response to our stimuli manifests in the form of heat waves, stronger storms, longer periods of drought, greater impulses of meteorological water accumulation over land, and a general increase in environmental variability. While in wealthy communities, modern civil infrastructure serves as a boundary layer to environment-related catastrophes, the poor and powerless are unequally yoked [7]. One must consider also
the importance of the ecology itself. How is the global health of all creatures of the atmosphere, land and oceans [8, 9, 10, 11]? What do the next five, ten, or one hundred years look like at the current rate [12, 13]? What changes can be made to mitigate or adapt to implications of past and present poor actions [14]? What are the global environmental quality standards [15]? How can changes be promoted in unequal nation states [16, 17, 18]?
SST is worthy of study for a number of reasons: its status as a key observational characteristic of water in the environment; the importance of SST in numerical weather and climate forecasting; SST's detectability via satellite-mounted RS instrumentation; and the availability of matched continuous ground truth temporospatial measurements that can be studied for intercomparison of dataset bias, variances, and uncertainties. We compare the raw satellite observations to the lower resolution but more precise in situ measurements of sea surface temperature (ISAS). We apply a treatment to the lower resolution but generally more available satellite instrument (AMSR-E), setting its target output to be the higher resolution MODIS product. Our hypothesis is that fusing the AMSR-E data to MODIS data will create a product that is closer in performance to MODIS than its AMSR-E input.
SST is a fascinating variable because not only is it a predictor of future weather anomalies, but it represents a largely unexplored stage regarding the capture and storage of CO\({}_{2}\). Marine carbon dioxide removal (mCDR) is a crucial strategy for mitigating climate change by extracting and sequestering carbon dioxide (CO\({}_{2}\)) from the atmosphere. Natural processes, such as the biological pump and carbon storage in marine ecosystems, contribute significantly to CO\({}_{2}\) removal [19, 20]. Engineered approaches, including iron fertilization to stimulate primary production [21], nutrient optimization for enhanced carbon export [22], and ocean alkalinity enhancement to increase CO\({}_{2}\) absorption capacity [23], offer potential avenues for mCDR. However, challenges exist, including environmental risks and unintended ecosystem disruptions [24].
One tool humans have in their arsenal is the ability to artificially model the environment. Modeling is the combination of a priori measurement data with empirically derived functions that simulate and represent the behavior of the environment. Given observed information obtained by sensors or measurement devices, we can use the collective knowledge to make better decisions. Observations of the present state of the climate in a given place come from natural records (e.g., tree ring cores, core samples) or from man-made data collection devices (satellites, weather balloons, airplanes, drifters, buoys, gauges); combined with these empirical functions, they give better insight into what tomorrow, or the same time in five years, might bode given current inputs and trajectories [25].
The use of neural networks as a tool for modeling has grown in frequency over the last two decades alongside the increase in availability and speed of fast mathematical computing hardware like graphics processing units. Nvidia has just crossed the trillion dollar market capitalization in large part due to the AI boom driven by the proliferation of generative models like GPT and Stable Diffusion [26]. Here, we build on an existing neural network architecture called dcrrnn, which stands for a deep convolutional residual regressive neural network and is pronounced "discern" [27, 28]. When applied to problems involving the water cycle, dcrrnn falls under the umbrella of Flux to Flow (F2F). This naming distinction was an anticipatory measure made during the creation of the first experiment: one might prefer to extract one or the other depending on the subject area. The dcrrnn structure alone could be reused in another vertical, while work pairing a different but related structure with specifically water-focused variables is better categorized as a Flux to Flow project. Dcrrnn is narrow, whereas F2F is wide.
The premise and motivation of dcrrnn and F2F is to study measurements of water, be they remotely sensed, gauged measurements, or hybrid datasets. From ingestion, some preprocessing of the data occurs to prepare for training either via statistical conformation or temporospatial constraint. The data is used to train a neural network. Inferences on unseen data are performed to evaluate the trained algorithm according to a relevant figure of merit for the experiment. Our chosen metric in this set of experiments is the dimensionless root mean squared error (RMSE). This metric is a solid introductory measurement because of its frequent use and acceptance within the scientific community. Our stochastic framework was successful in mimicking the performance of deterministic algorithms used to predict gauged river streamflow from meteorological forcings. In light of this success and our interest in all facets of the water cycle, we elected to next examine the ability of dcrrnn and F2F to mimic the abilities of image enhancement and ocean modeling software such as the Scatterometer Image Reconstruction (SIR) algorithm, the Regional Ocean Modeling System (ROMS) algorithm, and the Massachusetts Institute of Technology's General Circulation Model (MITgcm) [29, 30, 31, 32]. These algorithms take in a variety of different spatial datasets and perform some deterministic process to provide a value-added output. We aimed to evaluate the research questions: "Can dcrrnn, the supervised learning structure, be used as a surrogate to other commonly used image optimization algorithms and ocean modeling software? When applied to SST fields, does dcrrnn improve an input dataset based solely on statistical optimization associated with neural networks and the high resolution training data?"
From the outset, one important facet of dcrrnn and F2F to retain is the integral relationship between the paper and the code used to perform the experiments. A primary motivation behind the work writ large is to disseminate the
programming element so others will want to join. Frequently, frustrations and disagreements arise about scientific discourse because the experiment process is not documented, or there is no repeatable proof. We as much as possible take a white box approach. Scripts for these experiments are made available as jupyter notebooks at [https://github.com/albertlarson/f2f_sst](https://github.com/albertlarson/f2f_sst). As a primer, we recommend the genesis dcrrnn paper [27]. Though the likely audience is one with or approaching a graduate degree and a penchant for education, our intention is that the language is such that it might be introduced in an undergraduate or advanced K-12 classroom. Relatedly, pointing out pitfalls is a valuable exercise to help prevent others from repeating the same mistakes. Here, we capture what occurs when a naive, stochastic system is used on a single compute node to attempt the classic data fusion problem via neural network methods with SST fields as the subject matter. We determine that our approach has definite merit, but requires further investigation with current datasets and other configurations. A prerequisite for advancement of this effort is either a larger amount of compute time on a single node, or computation conducted in parallel across multiple compute nodes.
With this article, our contributions to the field are the following: 1, the continued development of F2F as an open source water cycle measurement framework; 2, the further consideration of dcrrnn as a viable neural network architecture; 3, the active constraint of the work to a single compute node and focus on the integration of the manuscript with the code behind the experiments; 4, the consideration and focus on the importance of sea surface temperature from a hydrologist's point of view; 5, a fresh consideration of neural network based data fusion and data engineering techniques; 6, an illustration of the limitations of underparameterization; 7, a demonstration of how the neural network improves its performance when the requirements of the process are relaxed; and last but not least, 8, a biased focus on global water resources.
## 2 Materials and Methods
### Sea Surface Temperature (SST)
The origin of SST as a continuously monitored variable began when Benjamin Franklin captured measurements of the ocean as he traversed the Atlantic, acquiring data and synthesizing these observations into information about the Gulf Stream [33]. Since then, the field of physical oceanographic research has made great strides in a variety of advancements relevant here such as observational techniques and data analysis methods. In recent decades, satellite remote sensing has emerged as a crucial tool for observing the ocean on a global scale, and for acquiring SST data. Satellites equipped with infrared sensors provide accurate and high-resolution measurements of SST [34; 35]. These satellite-based measurements offer advantages over traditional in situ measurements, as they provide comprehensive coverage of the oceans, including remote and inaccessible regions. In situ devices are point source measurements and have a relatively limited observation window compared to the hundreds of kilometers sun-synchronous satellites capture at a single moment.
In addition to infrared sensors, satellite sensors that detect microwave radiation, starting with the Nimbus-5 in 1972, have also been used to retrieve SST [36]. Microwave-based SST retrieval methods have the benefit of low sensitivity to atmospheric conditions, such as cloud cover and aerosols, compared to infrared measurements. The availability of long-term satellite-based SST datasets has led to significant advancements in climate research. Scientists have utilized these fields to study various climate phenomena, including oceanic oscillations such as the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO) [37; 38]. Large-scale oscillations influence the long-term variability of SST and have implications for regional climate patterns.
Furthermore, satellite-based SST observations have been essential in understanding the impacts of climate change on the oceans. Studies have shown that global warming has led to widespread changes in SST, including increases in average temperatures, alterations in temperature gradients, and shifts in the distribution of warm and cold regions [39]. These changes have significant implications for marine ecosystems, including shifts in species distributions, changes in phenology, and coral reef bleaching events [40; 41]. Satellite-derived SST data also contribute to the prediction and forecasting of weather and climate events. The accurate representation of SST conditions is crucial for weather prediction models, as it affects the development and intensity of atmospheric phenomena such as hurricanes and tropical cyclones [42; 43]. The integration of satellite-based SST data into numerical weather prediction models has improved forecast accuracy, particularly in regions where in situ observations are sparse or nonexistent.
In addition to weather forecasting, satellite-based SST data has practical applications in fisheries management and marine resource monitoring. SST information helps identify optimal fishing grounds by indicating areas with suitable temperature conditions for target species [44; 45]. Furthermore, monitoring changes in SST can provide insights into the health of marine ecosystems and aid in the assessment and management of protected areas and biodiversity hot spots [46; 47; 48].
### Aqua
The Aqua satellite was launched on May 4, 2002. Upon it, two instruments sit: AMSR-E and MODIS. Both, among other things, are designed to study the temperature of the ocean. As the satellite moves from the South Pole towards the North Pole, it always crosses the equator at approximately 1:30 PM local time at nadir (directly below the satellite). In the descending portion of the orbit, the satellite crosses the equator at 1:30 AM local time at nadir [49]. The AMSR-E instrument ceased functioning after ten years of service. MODIS continues to operate, logging over twenty years of active surveillance. AMSR-E has been succeeded by a follow-on instrument, AMSR2. AMSR2 is aboard a Japanese mission called GCOM-W, one of a series of global climate observation missions [50]. AMSR2 has a slightly larger antenna than AMSR-E, but is similar in scope to the AMSR-E instrument. The de facto replacement of MODIS is VIIRS, an instrument series carried on the Suomi National Polar-orbiting Partnership (SNPP), NOAA-20, and NOAA-21 satellites [51, 52]. Other global aerospace mission datasets associated with SST are available, like the Chinese Haiyang and Gaofen series [53, 54].
### Amsr-E
AMSR-E was a passive microwave radiometer [55, 56]. The acronym stands for Advanced Microwave Scanning Radiometer for Earth Observing System. There are several products produced on top of the raw radiance data that were collected by this instrument, and the AMSR-E data was processed by different ground stations depending on the parameter of interest. The produced datasets contain latitude, longitude, several physical parameters (e.g., SST, Albedo, soil moisture) as well as other pertinent metadata. As it pertains to sea surface temperature, AMSR-E is available in Level 3 products and as part of Level 4 assimilation system output. As of the writing of this document, Level 2 SST is no longer publicly available. However, we were able to obtain Level 2 fields from the Physical Oceanography Distributed Active Archive Center (PODAAC) through the Jet Propulsion Laboratory before the sunsetting of the data product.
To detail a sample, one single Level 2 netCDF (.nc) file containing AMSR-E data was procured. The record selected is that of March 3rd, 2004, with a UTC time of 01:07:01. The file contains three coordinates (latitude, longitude, and time) and thirteen data variables. Each variable is a single matrix comprised of columns and rows of measurements. The important distinction here is that the data structure is stored to reflect the path of the orbit. See Figure 1. When the sea surface temperature is plotted as it sits in the matrix, it is difficult to discern what is transpiring. There appears to be some curvature of the measurements, but other than that little is known to an untrained eye beyond the title and colormap.
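As a minimal illustration of inspecting such a granule, the sketch below uses xarray; the file name and the SST variable name are placeholders, since the exact naming depends on the PODAAC product version.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Placeholder file name for one L2 AMSR-E granule in netCDF format.
ds = xr.open_dataset("amsre_l2_sst_granule.nc")
print(ds.coords)           # latitude, longitude, time
print(list(ds.data_vars))  # the thirteen data variables

# Plot the SST matrix exactly as stored, i.e. in along-track order with no
# map projection (compare Figure 1); "sea_surface_temperature" is an assumed
# variable name.
ds["sea_surface_temperature"].plot()
plt.show()
```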
Inclusion of the latitude and longitude coordinates, as well as a global basemap generates a clearer picture as seen in Figure 2. A single Level 2 AMSR-E SST file contains matrices representing one full orbit around the globe. Each file holds partially daytime and partially nighttime observations. Because of diurnal warming, it is desirable to separate the nighttime and daytime passes. Furthermore, many analyses are comprised of an ensemble of satellite observations from different platforms such as this one. A grid makes for more orderly computations at large spatial scales. Certainly, one could elect to grid every observation to the AMSR-E or MODIS native product coordinate system. With our experiments, we choose the path of rectangular gridding. We consider the Level 3 product because of the interest in spatial relationship across large geographic scales and variable time (daily, weekly, seasonally, yearly, generationally).
AMSR-E is available (accessed 7 June 2023) via its producer, Remote Sensing Systems of Santa Rosa, CA [57]. This daily product comes in 25 km resolution and is delineated by daytime and nighttime passes of the satellite. The time series runs from June of 2002 until October of 2011 when the AMSR-E instrument ceased functioning. Figure 3 illustrates the point that even without explicitly defining the coordinate system in the visualization, the matrix of SST values is already placed in proper spatial order. Figure 4 reinforces the fact that little change occurs with the inclusion of latitude and longitude coordinates when plotted on a rectangular grid. Here, we simply take the pixel-wise mean of each month's worth of daily daytime and nighttime passes. We call these day and night in the experiments that follow. We also create a hybrid dataset, where we average the monthly averages of day and night passes together. Finally, we
Figure 1: L2 AMSR-E SST field, March 3, 2004, no coordinate system
transform all three of these datasets from the native AMSR-E grid system to the slightly different MODIS grid; this function is carried out using the xESMF software [58].
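A sketch of the monthly averaging and regridding steps is given below. It assumes the daily L3 passes are stacked along a datetime `time` dimension with `lat`/`lon` coordinates named as xESMF expects; the file names and the `sst` variable name are placeholders rather than the ones used in the F2F notebooks.

```python
import xarray as xr
import xesmf as xe

amsre_day = xr.open_dataset("amsre_l3_day_2010.nc")["sst"]     # placeholder names
amsre_night = xr.open_dataset("amsre_l3_night_2010.nc")["sst"]
modis = xr.open_dataset("modis_l3_monthly_2010.nc")["sst"]     # defines the target grid

# Pixel-wise monthly means of the daily passes, plus the day/night hybrid.
day_monthly = amsre_day.resample(time="1MS").mean(skipna=True)
night_monthly = amsre_night.resample(time="1MS").mean(skipna=True)
hybrid_monthly = (day_monthly + night_monthly) / 2.0

# Bilinear regridding from the 25 km AMSR-E grid to the 4 km MODIS grid.
regridder = xe.Regridder(day_monthly, modis, method="bilinear")
day_on_modis_grid = regridder(day_monthly)
```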
### Modis
MODIS, or Moderate Resolution Imaging Spectroradiometer, measures thirty-six different radiance bands in the infrared and visible ranges of the electromagnetic spectrum [59]. Level 3 sea surface skin temperature as obtained from MODIS comes in 4 km and 9 km products, and is derived from a subset of the thirty-six radiance bands. The products are available at daily, eight-day average, and monthly intervals. They are also delineated by the daytime and nighttime passes of Aqua's polar orbit. SST products deriving from MODIS are further specified by the length of the waves within the thermal infrared range used to derive the measurement: longer waves (11-12 microns) and middling waves (3-4 microns). The MODIS documentation states that the 3-4 micron wave SST product is less uncertain, but
Figure 3: L3 AMSR-E file plotted without supplied coordinate system
Figure 2: L2 AMSR-E SST, plotted with available coordinates and world map
only usable at night because of the daytime sun glint impact on 3-4 micron waves. We use the long wave 11-12 micron infrared measurements to keep constant the source of both daytime and nighttime passes.
The MODIS Aqua Level 3 SST Thermal IR Monthly 4km V2019.0 product [60; 61] comes with latitude and longitude coordinates, SST values and per pixel quality measurements denoting when contamination is likely. The grid is equidistant rectangular, a match with the AMSR-E grid but at a finer original resolution. Of the over thirty million 4 km MODIS pixels for an entire day, 90% of them in the random sample selected here are deemed contaminated and filtered out (Figure 5). This contrasts with the 50% loss of AMSR-E pixels. This great loss in pixels due to quality is attributed to cloud contamination. To compensate, we use the monthly product (Figure 6), where a greater amount of time has transpired, allowing for a higher probability of clean global coverage. A randomly sampled MODIS monthly image yields 50% loss, in line with the AMSR-E daily product and much improved relative to the daily MODIS observation files.
Figure 4: L3 AMSR-E file plotted with available coordinates and world map
Figure 5: L3 daily MODIS file containing only high quality flagged pixels
### Ground Truth Measurements
For a source of ground truth data, we selected the "In Situ Analysis System" (ISAS) dataset obtained from the University of California's Argo repository and produced by a consortium of French institutions [62]. An important constraint for this work was to obtain only the surface level measurement of temperature at the highest frequency available during the years of both AMSR-E and MODIS. These products are provided in a gridded format and are used to observe temperature measurements at many depth levels. In the publication attached to the ISAS dataset [62], the target physical quantities are steric height and ocean heat content; with these as their target output, gridded depth-dependent temperature is stored as a byproduct. The 0.5 degree monthly dataset is presented in a Mercator projection, slightly different from the AMSR-E and MODIS grids. Mercator lines of longitude have a uniform distance in between them; the distance between latitudes from the equator changes. As with AMSR-E, we re-grid this data to the MODIS grid and coordinate system.
### Treatment
This work is an extension of [27]. The F2F code base provides a step-by-step approach to the fundamentals of the materials and methods applied within. As such, it is a key part of the work and has been made openly accessible at [https://github.com/albertlarson/f2f](https://github.com/albertlarson/f2f) (accessed on 6 June 2023). The scripts follow the logical order of that paper. Concretely, this work builds upon that foundation. The notebooks found at [https://github.com/albertlarson/f2f_sst](https://github.com/albertlarson/f2f_sst) follow the simple process of extracting the data, transforming the data, feeding the data into a transformation system (dcrrnn), and then evaluating the performance of the system.
The treatments we apply to the data are several configurations of one common concept: neural networks. Neural networks are not new, but the growth of graphical processing units (hardware) has enabled them to flourish in software. Neural networks are a type of learned representation. A structure is fed connected input and target pairs. Based on the predictive quality of the initial network structure, an error between the neural network output and the target occurs. This error is in turn fed to an optimization algorithm that iteratively and slightly alters each "neuron" of the initial network structure until it reaches a designated optimal state. Via many small calculations and the simultaneous application of statistical mechanics, neural networks are known to provide qualities like that of a brain, such as capturing spatial eccentricities and temporal changes in sets of related images. Neural networks are applied to a range of tasks from the more mundane such as learning a quadratic equation, to the more cutting edge, like extreme event forecasting or cancer detection.
Transfer learning has become commonplace in the field of machine learning [63]. Transfer learning places an emphasis on creating reusable treatment structures for others to build on top of without inadvertently causing the audience to get lost in possibly unimportant details. We employ transfer learning to create a complex configuration with a relatively short learning curve. The neural network is characteristically deep, convolutional, residual, and regressive.
Figure 6: Monthly L3 MODIS image containing only high quality observations
Our construct is inspired by the work of residual networks [64]. However, our problem is one of a regressive nature. Sea surface temperature has a continuous temperature range that it exists within. This is a notable difference to some of the more common introductory neural network examples, such as those associated with the MNIST and CIFAR datasets where the number of possible outputs is very small [65; 66]. Loss functions associated with regressive problems are constrained to just a couple: mean absolute error (MAE) and mean squared error (MSE). The calculation of the loss function must be differentiable. This is due to the optimization component of neural networks. The literature is rich with publications regarding neural network optimizers, as well as the general mechanics of neural nets [67; 68].
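The exact dcrrnn layer stack is defined in the project notebooks; the sketch below is only a generic example of the ingredients named above, a convolutional residual block feeding a same-sized regression output trained with a differentiable MSE loss, with the channel width and depth chosen arbitrarily.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutions plus an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # residual connection

class RegressiveCNN(nn.Module):
    """Maps a single-channel SST patch to a same-sized SST patch."""
    def __init__(self, width=32, depth=4):
        super().__init__()
        self.stem = nn.Conv2d(1, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.head = nn.Conv2d(width, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

model = RegressiveCNN()
loss_fn = nn.MSELoss()   # MAE (nn.L1Loss) is the other common regression choice
```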
Once neural network architecture and hyperparameters are chosen, training and validation data are loaded into the network. While training the neural network, close observation is made of the reduction in error between training input and output as the neural network begins to optimize or learn. We also monitor the validation dataset at each training iteration. The learning process stops once the training and validation data have been passed through the network a certain number of times, or epochs. When prototyping or pilot-testing the experiment set to be carried out, one should test with a small number of epochs and then a larger number of epochs to see where good performance meets fast computation.
After training, the optimized neural network structure is intentionally frozen. Before the point of freezing, the neurons of the network can be adjusted for optimization, like a student asking a teacher for advice when studying. The frozen state and inference imposed upon it is like a student being prompted with a pop quiz and no teacher assistance. This test or input data are similar enough to the training that the teacher believes the student will have success in passing the test according to the selected merit (mean squared error, the loss function). After the test, the performance of the model is evaluated and a decision is made regarding next logical steps in the research.
A neural network can become biased to its training inputs. It starts to memorize the training dataset, which does not make for a generally applicable algorithm. Avoidance of biasing comes at the cost of variance [69]. Applying dropout is one technique to systematically prevent system bias by simply "turning off" a certain percentage of random neurons at each iteration of the algorithm [70; 71]. Another approach is the application of early stopping. The loss function of a neural network typically looks like a very steep curve down to a flat bottom. Rather than allow the network to persist in the flat bottom for long and become overfit, simple logic can be employed to stop training early when the network shows evidence that it has reached an optimal state. The percentage split of data between training and testing is another relevant training hyperparameter. A larger proportion of the dataset being part of the training portion could lead to overfitting of the model and lack of generalized predictability. On the other hand, insufficient training data might lead to an inability to adequately characterize the reality of the data pairs.
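The sketch below illustrates validation-based early stopping in PyTorch; the patience value, checkpoint name, and helper arguments are illustrative, not settings from the F2F notebooks. Any dropout layers placed in the model (e.g. `nn.Dropout2d(p=0.2)`) are active only while the model is in training mode.

```python
import torch

def train_with_early_stopping(model, loss_fn, optimizer, train_loader,
                              val_loader, max_epochs=100, patience=10):
    """Stop training once the validation loss has not improved for
    `patience` consecutive epochs."""
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()                      # dropout enabled
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()                       # dropout disabled
        with torch.no_grad():
            val_loss = sum(float(loss_fn(model(x), y)) for x, y in val_loader)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best.pt")   # keep the best weights
        else:
            bad_epochs += 1
            if bad_epochs >= patience:     # stop before over-fitting the flat bottom
                break
```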
Figure 7: Sample training monthly observation; January 2010 MODIS day observation of the Hawaiian Islands; segmented into 100 x 100 pixel regions.
The image sets subjected to treatment are on the large side computationally. Holding many one million or nine million pixel images within the memory of a single graphical processing unit becomes intolerable to the device. One could elect to use multiple GPUs or a compute node with a great provision of memory. Here, we constrain the experiment to a single GPU and cut the images up into smaller pieces of square data. Our patch size is fixed at 100 x 100, though this is a tunable hyperparameter. Figure 7 shows a Pacific Ocean study region, highlighting Hawaii and regions east. While this image is too large to process directly in the neural network, we can solve this problem by creating the eighteen patches of 100 x 100 pixels, representing the 300 x 600 pixel region under observation.
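Cutting a scene into non-overlapping patches can be done with plain NumPy slicing, as in the sketch below; the function name is illustrative and the random array stands in for one monthly 300 x 600 pixel scene.

```python
import numpy as np

def to_patches(scene, size=100):
    """Cut a 2-D scene into non-overlapping size x size patches."""
    rows, cols = scene.shape
    patches = [scene[r:r + size, c:c + size]
               for r in range(0, rows, size)
               for c in range(0, cols, size)]
    return np.stack(patches)

scene = np.random.rand(300, 600)    # stand-in for one monthly SST scene
print(to_patches(scene).shape)      # (18, 100, 100)
```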
Neural networks do not function when nan values are present in any of the images. We enacted a broad treatment to the AMSR-E and MODIS images, computing the mean of the entire image, excluding the nan values. Then, where the nan values are present, we replace them with the mean value. This has the convenient byproduct of introducing into the neural network many training pairs where the input and output simply consist of the average global SST value as obtained via the AMSR-E and MODIS instruments.
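A minimal NumPy version of this mean-fill treatment is shown below; the function name is illustrative.

```python
import numpy as np

def fill_nans_with_mean(image):
    """Replace nan pixels (land, cloud) with the mean of the valid pixels;
    the land mask is re-applied after inference to undo the fill."""
    filled = image.copy()
    filled[np.isnan(filled)] = np.nanmean(image)
    return filled
```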
## 3 Results
We randomly selected a single calendar year from the available time series where AMSR-E, MODIS, and ISAS overlap. Of this year's worth of data, we settled on nine different cases consisting of three different locations (Atlantic Ocean, Pacific Ocean, and Indian Ocean) and three different observation windows (day, night, mean averaged of day and night). We train the neural network on the first ten months of the year, validate with the eleventh month, and test with the twelfth month. However, we did not intend for this system to be deterministic or biased in nature. Therefore, we shuffle the training pairs to confuse the network and promote regularization [72]. A training session runs for 100 epochs. Each image in the geographically constrained time series is 300 pixels x 600 pixels in size, divided up into eighteen 100 x 100 pixel segments to incrementally feed the neural net.
Results of the nine cases are illustrated in Figure 8. The upper panel is a test matrix. Each of the ten rows in the test matrix represents a tuple of datasets that are compared. All of the different data products we use are compared here to the ISAS data and the MODIS data as benchmarks. Columnwise, the test matrix is delineated into each of the nine cases (experiments) that we perform. For example, PD stands for Pacific Day, AN means Atlantic Night, and IH means Indian Hybrid. The hybrid is simply a mean of the day and night images for a given month. Take the PD column. We compute the RMSE over the complete 300 x 600 Pacific scene for each of the comparison tuples. In row 1, column 1, the RMSE value of the (Argo, Argo) comparison for the Pacific Day experiment comes out to zero. This is expected, because there should be no error between two identical datasets. Root mean squared error is not computed in locations where land is present.
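A sketch of how such a land-masked RMSE can be computed with NumPy follows; the function name is illustrative and it assumes land pixels are marked as nan in either input.

```python
import numpy as np

def rmse_over_ocean(a, b):
    """RMSE between two SST scenes, skipping pixels that are nan (land)
    in either scene."""
    valid = ~np.isnan(a) & ~np.isnan(b)
    return float(np.sqrt(np.mean((a[valid] - b[valid]) ** 2)))
```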
The bottom two panels are graphs of the in-the-loop performance of the dcrrnn algorithm as the learning process occurs. The loss function, sometimes otherwise referred to as the cost function, for dcrrnn is mean squared error (MSE). This calculation is related to the RMSE metric used in the upper panel; however, the two sets of results are not perfectly comparable. The cause of this discrepancy is a function of a constraint for the training process. If an input or target image has any non-numerical pixels present, we have to replace them in some way. The nans are largely attributed in this case to locations that are partially or completely land, or to pixels contaminated by clouds or rainfall. We retain a global land mask for use before and after the training process to enable the removal of the temporarily-filled land pixels, which gives way to the clean RMSE test matrix. Our fill mechanism for each of the training and validation squares is to use the local mean of that square as the fill value. As such, there are always the same number of pixels factored into each 100 x 100 square during the neural network training process and in particular the calculation of in-the-loop training loss.
Pred in the test matrix refers to the neural network's prediction of the unseen test data, the single month of data. It is fed into the trained network as eighteen 100 x 100 squares, but then recombined into one 180,000 pixel array. Land locations are added back on top. No other compute mechanism is employed. The Optim dataset has an extremely faint bandpass filter on top of the Pred dataset. Simply put, if there are any pixels in the Pred result from the neural network that are outside of three standard deviations from the mean of the image, they are converted to nans and deemed erroneous. In this experiment, this filter has little effect. One would see more drastic effects should the trigger for filtering data from Pred to Optim change from three standard deviations to two, one, or fewer standard deviations from the mean. The risk of using a bandpass filter is that much of the interesting nuanced information can be filtered away. This feature was implemented during the experiment phase in response to the analyst's acknowledgement that artefacts were occurring along the coast in the experiments.
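The Pred-to-Optim step can be written as a few lines of NumPy, as sketched below; the function name is illustrative.

```python
import numpy as np

def sigma_filter(pred, n_sigma=3.0):
    """Convert pixels further than n_sigma standard deviations from the
    scene mean to nan, producing the Optim field from Pred."""
    mean, std = np.nanmean(pred), np.nanstd(pred)
    out = pred.copy()
    out[np.abs(out - mean) > n_sigma * std] = np.nan
    return out
```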
Figure 8: December 2010. The top panel shows the RMSE matrix for all datasets (input and outputs) for each of the nine cases. Two bottom panels are MSE plots of training and validation losses during neural network training. Atlantic, Pacific, and Indian Oceans segmented by monthly Day, Night, and Hybrid
## 4 Discussion
With every case summarized in Figure 8, the RMSE between Optim and MODIS is higher than that of the AMSR-E input. In occasional instances, the test case of December does make an Optim output that is closer to Argo than the input AMSR-E. In some cases, though, it makes a worse performing product with regards to Argo than either AMSR-E or MODIS. See Figures 9 and 10 for samples of how the RMSE translates to actual transformation of the images. In the second row of the first column, one sees all green. This solid green color demarcates a clear relationship between the two compared datasets. In this case, that is because this square is the difference between Argo and itself. Considering all images in row 2 of Figure 9, the second column where AMSR-E is compared to Argo seems to be the most green, denoting a closeness. Pred and Optim are both covered in green; however, the bandpass filter in Optim is not being utilized. Therefore, we see the coastal artefacts around the Hawaiian islands present as a result of dealing with nan values due to land. We are hesitant to use a liberal bandpass filter, because there's a real risk of deleting good dynamic data. This fact has led us to an open question on the most appropriate way to handle water cycle data. It is evident that a gridded ocean product where land has no numerical representation isn't acceptable, and a solution like replacing values with local means isn't a true fix but a workaround. The most obvious answer is to create a unified global surface temperature dataset. However, this is less than best for the hydrology community. An adequate replacement specification for land surface temperature might be soil moisture or calculated surface and subsurface flows.
Figure 10: Relatively “poor” perceptual change, Indian Day case, axes are pixels
Figure 9: Relatively “good” perceptual change, Pacific Night case, axes are pixels
Figure 10 represents a dcrrnn experiment conducted over the Arabian Sea and Bay of Bengal; it also illustrates a less desirable outcome than that of Figure 9. The telltale signal of underparameterization is evident in columns four and five of Figure 10, most notably in rows 2 and 3. As mentioned earlier, because of compute constraints, we cannot load the entire image into the neural network at one time. We have to break it up into squares. Here there are clear horizontal and vertical boundary lines. Furthermore, there is the appearance of much colder estimates of SST, driven by artefacts from the coastal regions. It appears that edges could not be systematically resolved given the configuration of the dcrrnn structure and the quantity of data fed into the system.
We determined that just looking at the nine cases in one constant way did not sufficiently account for potential unseen indirect effects or confounding variables [73, 74]. As an additional measure, we dug deeper into the night-only observations of the Atlantic Ocean. We selected a single 100 x 100 pixel patch that is completely over ocean within the Atlantic region. Our selection is seen in Figure 11. Figure 12 illustrates that all pixels in the selection are real numbers (left). The presence of the color red would denote nan pixels present. This point is reinforced on the right panel of Figure 12 with the single vertical line histogram denoting all real pixels.
We extend the number of epochs from 100 as compiled in Figure 8 to 500 in this more-zoomed-in observation. A larger amount of training epochs, a smaller dataset size, and no nans present are strategic measures away from the baseline experiments to improve the outcome. Results of this experiment are illustrated in Figure 13. With this
Figure 11: Atlantic Night, single patch selection overlain on top of the the land mask (left) and SST field (right)
Figure 12: Atlantic Night, single 100 x 100 pixel patch real/nan Boolean (left) and histogram of real/nan (right)
Figure 13: Atlantic Night, AMSR-E (col 1), Pred (col 2), MODIS (col 3), histogram of differences (col 4)
configuration, we find that the network reduces the RMSE difference between AMSR-E and MODIS by 20%, bringing the RMSE from 53.4 between AMSR-E and MODIS down to 44.0 between Pred and MODIS. However, there is no major perceptual change between the AMSR-E image and the Pred image. It is a promising result that there is an improvement in the results here relative to the nine larger cases. This performance indicates that the issues we are seeing can be further alleviated by shrinking the target size relative to the input, or by increasing the input size. As the output size shrinks, the problem converges with that of the earlier dcrrnn work [27].
Wherever land and coastal regions are present, the neural network alone struggles. This is due to the nature of the land-sea boundary in all these datasets. Where land is present, the raw data (as they are downloaded as .nc files) are given a non-number (nan) designation. For the purpose of training a neural network using the convolutional flavor, images with zero nans are needed. Otherwise the software fails catastrophically. As referenced earlier, steps were taken during the training process to circumvent the presence of land by substituting those pixels temporarily with the local mean value. Another option is to substitute the nan values with the mean computed over the entire "scene" or day. There is a risk that the substitution of these values introduces a source of structured noise. This noise might be one factor leading to the higher than zero RMSE scores denoted in Figure 8. Furthermore, it is possible that this structured noise is hindering training of the neural network process itself. This is unfortunate, because coastal regions are the stage for a variety of interesting SST events such as boundary currents. At the same time, hydrologists and global health professionals have a growing stake in the influence of the ocean. Sea level rise and saltwater intrusion are two hallmark conditions present at the coastal interface [75]. Beyond sea surface temperature, ocean color and sea surface height are variables intimately linked to the coastal environment [76, 77, 78]. We hypothesize that global datasets considering these parameters have the same challenges when conformation to a gridded structure and neural network process are prescribed. We believe that some harmony via integration of land and ocean data sets would be a next discrete step to evaluate the impact on training and inferring with Flux to Flow. A good candidate is surface and subsurface flow as produced by the Global Land Data Assimilation System (GLDAS). GRACE, VIIRS, and SMAP all have land products available as well as oceanic products. It's possible that if the data is ocean focused, perhaps leaving the land observations in their raw state with no physical scaling would be the best practice.
Our images are single channel inputs, and can be considered grayscale pictures. They are slightly different though; grayscale pictures are usually digitized as pixels with numbers between 0 and 1. When displaying SST images, measurements of physical properties, we use a colormap with minimum and maximum based on what we know to be the physical limits of the parameter itself. Nevertheless, it is closely related to a grayscale image. In grayscale computer vision tasks such as this set of experiments, the use of mean squared error as a loss function has an audience of skeptics [79]. There are examples where the loss function optimized in a neural network drops in value significantly but sees no improvement in the quality of the image. Alternatives to the standard loss functions built into PyTorch are available [80]. Without alteration, these functions require the inputs to be either between 0 and 1 (grayscale) or 0 and 255 (color images). Another avenue is the pursuit of physics-based loss functions [81]. Some data engineering is definitely needed in future iterations. As we have experience with the surface and subsurface flow rasters and these are land viewing, we think the combination of SST and GLDAS as predictors of streamflow in coastal regions is a suitable discrete next step. This next step will force us to decrease the number of target outputs, as there is a paucity of point measurements in streamflow monitoring relative to that of the interpolated target ISAS data used herein.
Only a fraction of the available data was observed in this study. The ISAS Argo dataset comes as a single compressed file; we extracted the surface layer only. There is great value in consideration of SST depth layers; however, it was outside the scope of our investigation. Furthermore, we studied monthly time series images of all three raw datasets. AMSR-E and MODIS each have near complete global pictures within two to three days. These datasets are then transformed in different ways and can lose fidelity by various types of decimation such as re-gridding from swaths to squares, uncertainty in formulas used for conversion from base input to high level physical parameter, or by forms of compression.
We took a naive approach to the problem. It is common to initialize a model with a historical known bias, and make slight inferences based upon the long-term mean. This tightly constrains the problem to the known past environment. We did not do that. We actively attempt to root out and prevent any sort of deterministic bias, and see what the dcrrnn algorithm does under these demanding conditions. This leads to some less than desirable results; however, our zoomed-in case shows that when the complexity of the system is relaxed, the algorithm improves according to the RMSE target. In a future study, it would be helpful to start the training with a long-term bias of SST for an entire year based on, for example, the Operational Sea Surface Temperature and Ice Analysis time series dataset produced by the Group for High Resolution Sea Surface Temperature [82].
## 5 Conclusions
Sea surface temperature (SST) is an essential climate variable. A better understanding of SST equates to a better understanding of global hydrology and the interactions of water as it moves around the hydrosphere in liquid, solid, and vapor forms. The advent of satellite SST observation has allowed for the study of large scale phenomena otherwise invisible. The beginning of the 21st century marked a new frontier in the measurement of SST via the Aqua mission and Argo program. Recently, neural networks have changed the way that scientists consider modeling of the environment. In this study, we continued to develop Flux to Flow, an extract, transform, load, treat, and evaluation framework based around a deep convolutional residual regressive neural network (dcrrnn). We extended its existing streamflow prediction functionality to the transformation of one Aqua instrument dataset into another: AMSR-E observations towards MODIS observations.
We focused on three large oceanic regions: Indian, Pacific, and Atlantic. With each of the three locations comprised of eighteen 100 x 100 pixel image pairs per month, and ten months of training data, the neural network struggles to transform AMSR-E into MODIS. When we relax the amount of data fed into dcrrnn, looking at a single 100 x 100 pixel image pair per month and ten months of training data, the network is statistically better able to transform AMSR-E into MODIS data. Given these results, we believe that a next discrete step is to focus on coastal areas where the hydrology and oceanography are closely linked. We would like to investigate the performance of dcrrnn in predicting the streamflow of a river when it does and does not consider behavior in the adjacent ocean. We hypothesize that merging ocean and land datasets will not only ease the challenges associated with handling non-numbers, but that the streamflow prediction will benefit from the unique signatures present in the ocean data alongside the land measurements of the water cycle.
|
2306.08441 | On the Formation of GW190521-like Binary Black Hole Merger Systems | GW190521 is the most massive merging binary black hole (BBH) system detected
so far. At least one of the component BHs was measured to lie within the
pair-instability supernova (PISN) mass gap ($\sim 50-135\;{\rm M}_{\odot}$),
making its formation a mystery. However, the transient observed signal allows
alternative posterior distributions. There was suggestion that GW190521 could
be an intermediate-mass ratio inspiral (IMRI), with the component masses
$m_1\sim 170\;{\rm M}_{\odot}$ and $m_2\sim 16 \;{\rm M}_{\odot}$, happening to
straddle the PISN mass gap. Under this framework, we perform binary population
synthesis to explore the formation of GW190521-like systems via isolated binary
evolution. We numerically calculate the binding energy parameter for massive
stars at different metallicities, and employ them in our calculation for common
envelope evolution. Our results prefer that the progenitor binaries formed in
metal-poor environment with $\rm Z\leq0.0016$. The predicted merger rate
density within redshift $z=1.1$ is $\sim 4\times 10^{-5}-5\times 10^{-2} \,\rm
Gpc^{-3}yr^{-1}$. We expect that such events are potentially observable by
upcoming both space and ground-based gravitational wave detectors. | Zhe Cui, Xiang-Dong Li | 2023-06-14T11:29:55Z | http://arxiv.org/abs/2306.08441v1 | # On the Formation of GW190521-like Binary Black Hole Merger Systems
###### Abstract
GW190521 is the most massive merging binary black hole (BBH) system detected so far. At least one of the component BHs was measured to lie within the pair-instability supernova (PISN) mass gap (\(\sim 50-135\) M\({}_{\odot}\)), making its formation a mystery. However, the transient observed signal allows alternative posterior distributions. There was a suggestion that GW190521 could be an intermediate-mass ratio inspiral (IMRI), with the component masses \(m_{1}\sim 170\) M\({}_{\odot}\) and \(m_{2}\sim 16\) M\({}_{\odot}\), happening to straddle the PISN mass gap. Under this framework, we perform binary population synthesis to explore the formation of GW190521-like systems via isolated binary evolution. We numerically calculate the binding energy parameter for massive stars at different metallicities, and employ them in our calculations of common envelope evolution. Our results prefer that the progenitor binaries formed in a metal-poor environment with Z \(\leq 0.0016\). The predicted merger rate density within redshift \(z=1.1\) is \(\sim 4\times 10^{-5}-5\times 10^{-2}\) Gpc\({}^{-3}\)yr\({}^{-1}\). We expect that such events are potentially observable by both upcoming space-based and ground-based gravitational wave detectors.
keywords: black hole - black hole mergers \(-\) gravitational waves \(-\) stars: evolution
## 1 Introduction
Detection of gravitational waves (GWs) provides us with an alternative way to observe the universe. Since the first GW event GW150914 was discovered by the ground-based detectors Advanced LIGO (aLIGO), later joined by Advanced Virgo, the number of binary black hole (BBH) merger events has increased to \(\sim\)100 (Abbott et al., 2016, 2019, 2021; The LIGO Scientific Collaboration et al., 2021). The observed GW signals are classified as coalescing BBH, binary neutron star (BNS) and neutron star-black hole (NSBH) systems. GW170817 is the only GW source with a definitively observed electromagnetic (EM) counterpart (Abbott et al., 2017).
GW190521, observed on May 21, 2019 at 03:02:29 UTC, is the most massive merging BBH system detected so far (Abbott et al., 2020, 20). The association of GW190521 with the candidate counterpart ZTF19abanrhr reported by the Zwicky Transient Facility (ZTF) (Graham et al., 2020) is still inconclusive (Ashton et al., 2021; Nitz & Capano, 2021; Palmese et al., 2021). Under the assumption that GW190521 is a quasi-circular BBH coalescence, the estimated individual component masses are \(m_{1}=85^{+21}_{-14}\)M\({}_{\odot}\), \(m_{2}=66^{+17}_{-18}\)M\({}_{\odot}\), and the total mass \(150^{+29}_{-17}\) M\({}_{\odot}\) within the 90% credible region, providing direct evidence of intermediate mass BHs (IMBHs) (Abbott et al., 2020, 20). Gamba et al. (2021) drew similar conclusions, but under a hyperbolic-orbit hypothesis. Romero-Shaw et al. (2020) claimed that GW190521 may be an eccentric binary merger with aligned spins. Gayathri et al. (2022) interpreted this signal under a configuration combining both eccentricity and spin precession. Barrera & Bartos (2022) estimated the ancestral mass of GW190521 and also favored the heaviest parental BH mass in the pair-instability supernova (PISN) mass gap (between \(\sim 50-135\)M\({}_{\odot}\), Yusof et al., 2013; Belczynski et al., 2016).
Since the BH masses in GW190521-like events challenge the standard stellar evolutionary theory, there have been various models proposed to interpret their formation, including dynamical binary formation in dense stellar clusters (Rodriguez et al., 2019; Romero-Shaw et al., 2020; Fragione et al., 2020; Anagnostou et al., 2020; Gamba et al., 2021; Arca-Sedda et al., 2021; Rizzuto et al., 2022), additional gas accretion and hierarchical mergers in active galactic nuclei (AGNs) (Tagawa et al., 2020, 2021, and references therein), and the primordial BH scenarios (De Luca et al., 2021). Alternatively, Palmese & Conselice (2021) suggested that
GW190521 may be the merger of central BHs from two ultradwarf galaxies. However, the origin of this event as an isolated binary still cannot be excluded (Belczynski et al., 2020; Farrell et al., 2021; Kinugawa et al., 2021; Tanikawa et al., 2021). In addition, the exact boundaries of the PISN mass gap are in dispute, due to the uncertainties in stellar evolution and SN simulation, which may entail a reassessment (Woosley, 2017; Marchant et al., 2019; Farmer et al., 2019; Mapelli et al., 2020; Vink et al., 2021).
Nevertheless, GW190521 is qualitatively different from previous GW sources, not only because it was the most massive GW source observed to date, but also because the transient signal had a duration of only approximately \(0.1\) s and only around four cycles in the frequency band \(30-80\) Hz, so multimodal posterior distributions are consequently unavoidable (Fishbach and Holz, 2020; Nitz and Capano, 2021; Bustillo et al., 2021; Estelles et al., 2022). Among them, Nitz and Capano (2021) suggested that GW190521 may be an intermediate-mass-ratio inspiral (IMRI), with the component masses of \(m_{1}\sim 170\,\mathrm{M}_{\odot}\) and \(m_{2}\sim 16\,\mathrm{M}_{\odot}\), straddling the PISN mass gap. A comparison of the parameters derived by Abbott et al. (2020) and Nitz and Capano (2021) is shown in Table 1.
Inspired by the results of Nitz and Capano (2021), here we attempt to interpret the formation of GW190521 assuming that it was an IMRI through isolated binary evolution, and investigate the properties of their progenitor binaries as well as the possible distributions of natal kicks on the two component BHs, which had promoted their coalescence within the Hubble time \(\tau_{\rm H}\). The information on the BH kicks is crucial in understanding the formation of massive BHs.
The paper is structured as follows. In section 2 we describe the main features of our binary population synthesis (BPS) models. The calculated results of BPS are presented in section 3. We then discuss our results in section 4, and summarize our main conclusions in section 5.
## 2 Model
All calculations are carried out by using the BPS code BSE, originally developed by Hurley et al. (2000, 2002) and its updated version BSEEMP1(Tanikawa et al., 2020, 2021), with the extension to very massive ( up to \(M\sim 1300\,\mathrm{M}_{\odot}\)) and extremely metal-poor stars (down to \(Z=10^{-8}\mathrm{Z}_{\odot}\)), based on the stellar models computed by the HOSHI code. The BSEEMP code also takes advantage of new stellar-wind and remnant-formation prescriptions, as well as the implementation of pair-instability and pulsational-pair-instability supernova (PISN/PPISN), which all play a role in stellar/binary evolution.
Footnote 1: [https://github.com/attrnkw/bseemp](https://github.com/attrnkw/bseemp).
### Stellar wind mass loss
The masses of BHs are predominantly set by their pre-supernova masses, which are mainly affected by stellar wind mass loss and binary interactions. Mass loss via stellar winds can significantly influence the fate of massive stars (Fryer et al., 2002). Here we use the semi-empirical stellar wind prescription (referred to as Vink et al. winds) in Belczynski et al. (2010), and consider the metallicity dependence for Luminous Blue Variables (LBVs) (Tanikawa et al., 2021). It has been demonstrated that this wind prescription results in more massive pre-supernova objects and heavier BHs compared with the traditional one (Belczynski et al., 2010). We ignore the influence of stellar rotation on wind loss.
For massive O and B stars, the wind mass loss rate \(\dot{M}_{\rm W}\) in units of \(\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) is (Vink et al., 2001)
\[\begin{array}{ll}\mathrm{log}(\dot{M}_{\rm W,OB})=&-6.688+2.210\log(L/10^{5} )\\ &-1.339\log(M/30)-1.601\log(V/2.0)\\ &+0.85\log(Z/\mathrm{Z}_{\odot})+1.07\log(T/20000),\end{array} \tag{1}\]
with \(12500\,\mathrm{K}\leq T\leq 25000\,\mathrm{K}\). Here \(L\) and \(M\) are the luminosity and the mass in Solar units respectively, \(Z\) is metallicity, \(T\) is the effective temperature of the star, and \(V=v_{\infty}/v_{\rm esc}=1.3\) is the ratio of the wind velocity at infinity to the escape velocity from the star.
For hotter stars with \(25000\,\mathrm{K}\leq T\leq 50000\) K,
\[\begin{array}{ll}\mathrm{log}(\dot{M}_{\rm W,OB})=&-6.697+2.194\log(L/10^{5} )\\ &-1.313\log(M/30)-1.226\log(V/2.0)\\ &+0.85\log(Z/\mathrm{Z}_{\odot})+0.933\log(T/40000)\\ &-10.92[\log(T/40000)]^{2}\end{array} \tag{2}\]
with \(V=2.6\).
For LBVs beyond the Humphreys and Davidson limit (\(L>6\times 10^{5}\) and \(10^{-5}RL^{0.5}>1.0\), where \(R\) is the stellar radius in solar units),
\[\dot{M}_{\rm W,LBV}=f_{\rm LBV}\times 10^{-4}(Z/\mathrm{Z}_{\odot})^{0.86} \mathrm{M}_{\odot}\,\mathrm{yr}^{-1}, \tag{3}\]
where \(f_{\rm LBV}=1.5\) is a calibration factor. Obviously, a lower \(f_{\rm LBV}\) results in a weaker LBV wind, leaving a heavier remnant (Belczynski et al., 2010).
The reduced Wolf-Rayet star mass loss with small H-envelope mass takes the form of metallicity-dependent power law,
\[\dot{M}_{\rm W,WR}=10^{-13}L^{1.5}\left(\frac{Z}{\mathrm{Z}_{\odot}}\right)^{ m}(1.0-\mu)\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}, \tag{4}\]
with
\[\mu=\left(\frac{M-M_{\rm He}}{M}\right)\min\left\{5.0,\max[1.2,(\frac{L}{7 \times 10^{4}})^{-0.5}]\right\},\]
\(m=0.86\) describing the dependence of wind mass loss on metallicity, and \(M_{\rm He}\) the He core mass of the star (Vink and de Koter, 2005).
For other stars, we use the wind prescriptions described in Hurley et al. (2000).
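As a concrete illustration of how these rates behave, the short Python sketch below (our own illustrative code, not part of BSEEMP; the function names, solar-unit conventions, and example inputs are assumptions) evaluates Eqs. (1)-(3) for a star of given luminosity, mass, radius, effective temperature and metallicity.

```python
import math

ZSUN = 0.02  # assumed solar metallicity normalization

def mdot_OB(L, M, T, Z, V=None):
    """Vink et al. (2001) O/B-star wind rate (Msun/yr), Eqs. (1)-(2).
    L, M in solar units, T in K, Z is the absolute metallicity."""
    if 12500.0 <= T <= 25000.0:
        V = 1.3 if V is None else V
        logm = (-6.688 + 2.210*math.log10(L/1e5)
                - 1.339*math.log10(M/30.0) - 1.601*math.log10(V/2.0)
                + 0.85*math.log10(Z/ZSUN) + 1.07*math.log10(T/20000.0))
    elif 25000.0 < T <= 50000.0:
        V = 2.6 if V is None else V
        logm = (-6.697 + 2.194*math.log10(L/1e5)
                - 1.313*math.log10(M/30.0) - 1.226*math.log10(V/2.0)
                + 0.85*math.log10(Z/ZSUN) + 0.933*math.log10(T/40000.0)
                - 10.92*math.log10(T/40000.0)**2)
    else:
        raise ValueError("T outside the 12.5-50 kK range covered by Eqs. (1)-(2)")
    return 10.0**logm

def mdot_LBV(L, R, Z, f_lbv=1.5):
    """LBV wind rate (Msun/yr), Eq. (3); applies beyond the
    Humphreys-Davidson limit (L > 6e5 and 1e-5*R*L**0.5 > 1)."""
    if L > 6e5 and 1e-5*R*math.sqrt(L) > 1.0:
        return f_lbv * 1e-4 * (Z/ZSUN)**0.86
    return 0.0

# example: a hot 60 Msun star at Z = 0.0002
print(mdot_OB(L=8e5, M=60.0, T=40000.0, Z=0.0002))
```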
### Black hole formation
Stars are powered by burning their core fuels to heavier elements step by step. For massive stars, this process continues until an iron core is built up in the stellar center. As the fusion of iron does not produce further energy, burning halts. The star then contracts under its own weight, leading to accelerating electron capture and dissociation of core elements. These processes dramatically reduce the pressure that should have resisted self-gravity, triggering a runaway core-collapse process. The collapse is halted by nuclear forces and neutron degeneracy pressure, forming a proto-NS. The explosion is launched after the "bounce" of the core, and part or all of the expelled stellar envelope will fall back and accrete onto the proto-NS, which may eventually collapse into a BH (Fryer et al., 2012).
There are still many uncertainties associated with the physics of the SN mechanism. Here we use the delayed SN prescription of Fryer et al. (2012) (hereafter F12-delayed), where the explosion is not launched until more than \(\sim 250\) ms after the collapse. For a massive star with a pre-SN mass of \(M_{\rm SN}\) and a CO core mass of \(M_{\rm CO}\), the expected BH remnant mass \(M_{\rm BH}\) is estimated as follows,
\[M_{\rm BH}=0.99M_{\rm BH,bar}=0.99(M_{\rm proto}+M_{\rm fb}), \tag{5}\]
where \(M_{\rm BH,bar}\) is the baryonic mass2, \(M_{\rm proto}\) is the proto-NS mass after core collapse,
Footnote 2: The baryonic mass is reduced by the neutrinos that are lost (Burrows & Lattimer, 1986), and we assume that for BHs the gravitational mass \(M_{\rm BH}\) is 99% of the baryonic mass \(M_{\rm BH,bar}\) for our considered massive BBHs.
\[M_{\rm proto}=\left\{\begin{array}{ll}1.2{\rm M}_{\odot}&M_{\rm CO}<3.5{\rm M }_{\odot}\\ 1.3{\rm M}_{\odot}&3.5\leqslant M_{\rm CO}<6.0{\rm M}_{\odot}\\ 1.4{\rm M}_{\odot}&6.0\leqslant M_{\rm CO}<11.0{\rm M}_{\odot}\\ 1.6{\rm M}_{\odot}&M_{\rm CO}\geqslant 11.0{\rm M}_{\odot},\end{array}\right. \tag{6}\]
and \(M_{\rm fb}\) is the amount of material falls back to the proto-NS,
\[M_{\rm fb}=\left\{\begin{array}{ll}0.2{\rm M}_{\odot}&M_{\rm CO}<2.5{\rm M} _{\odot}\\ 0.5M_{\rm CO}-1.05{\rm M}_{\odot}&2.5\leqslant M_{\rm CO}<3.5{\rm M}_{\odot} \\ (f_{1}M_{\rm CO}+f_{2})(M_{\rm SN}-M_{\rm proto})&3.5\leqslant M_{\rm CO}<11.0{ \rm M}_{\odot}\\ M_{\rm SN}-M_{\rm proto}&M_{\rm CO}\geqslant 11.0{\rm M}_{\odot},\end{array}\right. \tag{7}\]
where \(f_{1}=0.133-\frac{0.093}{M-M_{\rm proto}}\), and \(f_{2}=-11f_{1}+1\). We define \(f_{\rm fb}=M_{\rm fb}/(M_{\rm SN}-M_{\rm proto})\) as the fallback fraction during the BH formation, which is important in determining the BH's natal kick in some kick prescriptions.
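As a worked illustration of the F12-delayed prescription, the sketch below (our own illustrative code, not the BSEEMP implementation; we read the mass \(M\) in the \(f_{1}\) expression as the pre-SN mass \(M_{\rm SN}\)) evaluates Eqs. (5)-(7) and the fallback fraction \(f_{\rm fb}\).

```python
def fryer12_delayed(M_SN, M_CO):
    """Delayed remnant mass of Fryer et al. (2012), Eqs. (5)-(7).
    M_SN: pre-SN mass, M_CO: CO core mass, both in Msun.
    Returns (M_BH, f_fb)."""
    # proto-compact-object mass, Eq. (6)
    if M_CO < 3.5:
        M_proto = 1.2
    elif M_CO < 6.0:
        M_proto = 1.3
    elif M_CO < 11.0:
        M_proto = 1.4
    else:
        M_proto = 1.6
    # fallback mass, Eq. (7)
    if M_CO < 2.5:
        M_fb = 0.2
    elif M_CO < 3.5:
        M_fb = 0.5*M_CO - 1.05
    elif M_CO < 11.0:
        f1 = 0.133 - 0.093/(M_SN - M_proto)
        f2 = -11.0*f1 + 1.0
        M_fb = (f1*M_CO + f2)*(M_SN - M_proto)
    else:
        M_fb = M_SN - M_proto
    f_fb = M_fb/(M_SN - M_proto)
    M_BH = 0.99*(M_proto + M_fb)   # Eq. (5); 1% of the baryonic mass lost to neutrinos
    return M_BH, f_fb

# example: a 40 Msun pre-SN star with a 20 Msun CO core collapses completely (f_fb = 1)
print(fryer12_delayed(M_SN=40.0, M_CO=20.0))
```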
Stars with He core mass \(M_{\rm He}\) in the range of \(\sim 35-60\,{\rm M}_{\odot}\) are subjected to PPISNe (Heger & Woosley, 2002; Yusof et al., 2013; Belczynski et al., 2016; Marchant et al., 2019; Stevenson et al., 2019; Leung et al., 2019), with most of the mass above the core stripped by a set of pulsations, leaving behind BHs with masses significantly smaller than they would have if only core-collapse SNe were accounted for. We adopt the prescription of PPISNe in Marchant et al. (2019), who computed an array of H-free metal-poor (\(0.1{\rm Z}_{\odot}\)) single-star models based on the standard \({}^{12}{\rm C}(\alpha,\gamma)^{16}{\rm O}\) reaction rate to evaluate the PPISN mass loss. The BH masses after PPISNe can be estimated as:
\[M_{\rm BH}=M_{\rm He}\sum_{i=0}^{7}\zeta_{i}(\frac{M_{\rm He}}{{\rm M}_{\odot} })^{i}, \tag{8}\]
where \(\zeta_{i}\) are the polynomial fitting coefficients of Marchant et al. (2019)'s PPISN prescription given by Stevenson et al. (2019) (as listed in Table 2). Note that the remnant mass is a non-monotonic function of the initial stellar mass.
More massive stars with \(60\,{\rm M}_{\odot}\leqslant M_{\rm He}\leqslant 135\,{\rm M}_{\odot}\) are subjected to PISNe, in which the entire star is completely disrupted with no remnant left3. Stars with \(M_{\rm He}>135\,{\rm M}_{\odot}\) are assumed to directly collapse to BHs.
Footnote 3: Belczynski et al. (2020a) recently suggested that if the \({}^{12}{\rm C}(\alpha,\gamma)^{16}{\rm O}\) reaction rate is \(3\sigma\) lower than its standard rate, a star with helium core mass \(M_{\rm He}\sim 90{\rm M}_{\odot}\) can avoid the PISN and evolve into a mass-gap BH.
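To see how the PPISN prescription caps the remnant mass, the sketch below (our own illustrative code; the \(\zeta_{i}\) values are the Table 2 coefficients, and the treatment outside the PPISN window simply follows the boundaries quoted above as assumptions) evaluates Eq. (8) over the He-core mass range.

```python
# polynomial coefficients zeta_0..zeta_7 from Table 2
ZETA = [7.39643451e3, -1.13694590e3, 7.45060098e1, -2.69801221e0,
        5.83107626e-2, -7.52206933e-4, 5.36316755e-6, -1.63057326e-8]

def remnant_after_pulsations(M_He):
    """BH mass (Msun) for a star of He-core mass M_He (Msun):
    Eq. (8) in the PPISN window, zero remnant for PISNe, and an
    assumed direct collapse (BH mass taken as M_He here) above 135 Msun."""
    if M_He < 35.0:
        return None            # below the PPISN regime treated here
    if 60.0 <= M_He <= 135.0:
        return 0.0             # PISN: star completely disrupted
    if M_He > 135.0:
        return M_He            # assumed direct collapse
    return M_He*sum(z*(M_He**i) for i, z in enumerate(ZETA))   # Eq. (8)

for m in (40.0, 50.0, 58.0, 90.0, 150.0):
    print(m, remnant_after_pulsations(m))
```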
### Supernova kicks
As BH natal kicks suffer from a lack of stringent constraints from both observation and theory (Willems et al., 2005; Fragos et al., 2009; Repetto et al., 2012, 2017; Repetto & Nelemans, 2015; Mandel, 2016; Belczynski et al., 2016c), we adopt three different natal kick prescriptions (\(kick_{F}\)) as follows (Banerjee et al., 2020a):
(1) Standard fallback-controlled kick (hereafter \(k1\)). The BH natal kick velocities \(v_{\rm kick,BH}\) are scaled linearly with
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{ Model} & \(m_{1}({\rm M}_{\odot})\) & \(m_{2}({\rm M}_{\odot})\) & \(M_{\rm tot}({\rm M}_{\odot})\) & \(|\overline{\chi}|\) & \(|\overline{\chi}|^{2}|\) & \(\chi_{\rm eff}\) \\ \hline \multicolumn{1}{c}{A20} & \(85_{-14}^{+21}\) & \(66_{-18}^{+17}\) & \(150_{-17}^{+29}\) & \(0.69_{-0.62}^{+0.27}\) & \(0.73_{-0.64}^{+0.24}\) & \(0.08_{-0.36}^{+0.27}\) \\ \hline \multicolumn{1}{c}{\multirow{3}{*}{NC21}} & \(Prior_{q-M}\) & \(168_{-61}^{+15}\) & \(16_{-3}^{+33}\) & \(184_{-30}^{+15}\) & \(0.85_{-0.25}^{+0.11}\) & - & \(-0.51_{-0.11}^{+0.24}\) \\ \cline{2-8} & \(Prior_{m_{1,2}}\) (\(q^{*}<4\)) & \(100_{-18}^{+17}\) & \(57_{-16}^{+17}\) & \(156_{-15}^{+21}\) & \(0.72_{-0.59}^{+0.25}\) & - & \(-0.16_{-0.40}^{+0.42}\) \\ \cline{2-8} & \(Prior_{m_{1,2}}\) (\(q^{*}>4\)) & \(166_{-35}^{+16}\) & \(16_{-3}^{+14}\) & \(183_{-27}^{+15}\) & \(0.87_{-0.16}^{+0.10}\) & - & \(-0.53_{-0.12}^{+0.14}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The derived primary BH mass \(m_{1}\), secondary BH mass \(m_{2}\), total mass \(M_{\rm tot}\), dimensionless spin parameters of individual BH and effective spin parameter \(\overline{\chi}_{1}^{*}\), \(\overline{\chi}_{2}^{*}\), and \(\chi_{\rm eff}\) for GW190521 in the source frame. Data are cited from (Abbott et al., 2020a, A20) and (Nitz & Capano, 2021, NC21), respectively. In the latter work, \(Prior_{q^{*}-M}\) denotes the prior uniform in mass ratio and total mass, \(Prior_{m_{1,2}}\) the prior uniform in component mass (\(m_{1,2}\)), respectively. Each value is within the 90% credible interval. Note here \(q^{*}\) is the ratio of the larger mass to the smaller mass.
\begin{table}
\begin{tabular}{c c} \hline \hline Coefficient & Value \\ \hline \(\zeta_{0}\) & 7.39643451 \(\times 10^{3}\) \\ \(\zeta_{1}\) & -1.13694590 \(\times 10^{3}\) \\ \(\zeta_{2}\) & 7.45060098 \(\times 10^{1}\) \\ \(\zeta_{3}\) & -2.69801221 \(\times 10^{0}\) \\ \(\zeta_{4}\) & 5.83107626 \(\times 10^{-2}\) \\ \(\zeta_{5}\) & -7.52206933 \(\times 10^{-4}\) \\ \(\zeta_{6}\) & 5.36316755 \(\times 10^{-6}\) \\ \(\zeta_{7}\) & -1.63057326 \(\times 10^{-8}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coefficients in Equation 8
the NS natal kick velocities \(v_{\rm kick,NS}\) by a factor \((1-f_{\rm fb})\)(Fryer et al., 2012; Giacobbo et al., 2018),
\[v_{\rm kick,BH}=v_{\rm kick,NS}(1-f_{\rm fb}). \tag{9}\]
(2) Convection-asymmetry-driven natal kick (hereafter \(k2\)) The BH natal kicks are produced by the convection asymmetries of the collapsing SN core (Scheck et al., 2004; Fryer & Kusenko, 2006), so
\[v_{\rm kick,BH}=\left\{\begin{array}{ll}v_{\rm kick,NS}\frac{\langle M_{\rm NS}\rangle}{M_{\rm BH}}(1-f_{\rm fb})&\mbox{if $M_{\rm CO}\leq 3.5\,\mathrm{M_{\odot}}$,}\\ k_{\rm conv}v_{\rm kick,NS}\frac{\langle M_{\rm NS}\rangle}{M_{\rm BH}}(1-f_{\rm fb})&\mbox{if $M_{\rm CO}>3.5\,\mathrm{M_{\odot}}$.}\end{array}\right. \tag{10}\]
In this equation, \(k_{\rm conv}\) is an efficiency factor (somewhere between 2 and 10, and we set \(k_{\rm conv}=5\) here), and \(\langle M_{\rm NS}\rangle\) is a typical NS mass, taken to be \(1.4\,\mathrm{M_{\odot}}\).
(3) Neutrino-driven natal kick (hereafter \(k3\))
The BH natal kicks are produced through asymmetric neutrino emission (Fuller et al., 2003; Fryer & Kusenko, 2006),
\[v_{\rm kick,BH}=v_{\rm kick,NS}\frac{\min(M_{\rm eff},M_{\rm BH})}{M_{\rm BH}}, \tag{11}\]
where \(M_{\rm eff}\) (usually between \(5\,\mathrm{M_{\odot}}\) and \(10\,\mathrm{M_{\odot}}\)) is the effective remnant mass, and we let \(M_{\rm eff}=7\,\mathrm{M_{\odot}}\)(Banerjee et al., 2020).
To constrain the allowed velocity range in different kick prescriptions, we take a flat distribution of \(v_{\rm kick,NS}\) in the range of \(0-1000\,\mathrm{kms^{-1}}\), and assume that the supernova kicks are isotropically distributed and the mass is instantaneously lost at the moment of SN. Then \(v_{\rm kick,BH}\) can be obtained from the equations mentioned above for different kick prescriptions. When we calculate the merger rate density we adopt a more realistic, predetermined Maxwellian distribution for the NS kick velocity.
Note also that in both \(k1\) and \(k2\) prescriptions there is no natal kick for BHs formed through direct core collapse (\(f_{\rm fb}=1.0\)).
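The three prescriptions reduce to simple rescalings of the drawn NS kick; a minimal sketch (our own, with \(k_{\rm conv}=5\), \(\langle M_{\rm NS}\rangle=1.4\,{\rm M}_{\odot}\) and \(M_{\rm eff}=7\,{\rm M}_{\odot}\) as quoted above, and example numbers that are illustrative only) is:

```python
M_NS_TYP = 1.4   # typical NS mass <M_NS> (Msun)
K_CONV   = 5.0   # convection-asymmetry efficiency factor
M_EFF    = 7.0   # effective remnant mass for neutrino-driven kicks (Msun)

def bh_kick(v_ns, M_BH, M_CO, f_fb, prescription="k1"):
    """BH natal kick speed (km/s) for a drawn NS kick v_ns (km/s),
    following Eqs. (9)-(11)."""
    if prescription == "k1":            # fallback-controlled, Eq. (9)
        return v_ns*(1.0 - f_fb)
    if prescription == "k2":            # convection-asymmetry-driven, Eq. (10)
        v = v_ns*(M_NS_TYP/M_BH)*(1.0 - f_fb)
        return v if M_CO <= 3.5 else K_CONV*v
    if prescription == "k3":            # neutrino-driven, Eq. (11)
        return v_ns*min(M_EFF, M_BH)/M_BH
    raise ValueError("unknown prescription")

# a ~165 Msun BH formed by complete fallback (f_fb = 1): no kick under
# k1/k2, but a reduced neutrino-driven kick under k3
for p in ("k1", "k2", "k3"):
    print(p, bh_kick(v_ns=400.0, M_BH=165.0, M_CO=70.0, f_fb=1.0, prescription=p))
```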
### Natal BH spins
The BH spins are modeled following Tanikawa et al. (2021). We assume zero natal spin of the zero-age main sequence (ZAMS) star, which then evolves due to stellar evolution, stellar winds, and binary interactions. Newborn BHs inherit their progenitor's spin angular momenta, except for the PPISN events where zero BH spin parameters are assumed. If the spin angular momenta of the BH progenitors are larger than those of extreme Kerr BHs, the BH spin parameters are forced to be unity. Thus the BH spin parameter can be expressed as:
\[\overrightarrow{\chi}=\left\{\begin{array}{ll}0&\mbox{PPISN},\\ \min(\frac{c}{GM^{2}}|\overrightarrow{S}|,1)\frac{\overrightarrow{L}}{| \overrightarrow{L}|}&\mbox{otherwise},\end{array}\right. \tag{12}\]
where \(\overrightarrow{S}\) and \(M\) are the spin angular momentum and the mass of the BH progenitor just before its collapse respectively, \(c\) the speed of light, \(G\) the gravitational constant, and \(\overrightarrow{L}\) the binary orbital angular momentum.
The BH natal kicks would tilt \(\overrightarrow{\chi}\) from \(\overrightarrow{L}\). We choose the coordinate frame in which the \(z\)-axis is parallel to the orbital angular momentum vector just before the second BH forms, that is, the normalized orbital angular momentum vector is \((0,0,1)\). Then the normalized spin vectors of the first and second formed BHs, \(\overrightarrow{\chi}_{1}^{\rm s}\) and \(\overrightarrow{\chi}_{2}^{\rm s}\), can be written as:
\[\begin{array}{ll}\overrightarrow{\chi}_{1}^{\rm s}=(\sin\theta_{1}^{\prime} \cos\phi_{1}^{\prime},\sin\theta_{1}^{\prime}\sin\phi_{1}^{\prime},\cos\theta _{1}^{\prime}),\\ \overrightarrow{\chi}_{2}^{\rm s}=(0,0,1),\end{array} \tag{13}\]
where \(\theta_{1}^{\prime}\) is the angle between the first BH spin vector and binary orbital angular momentum vector just before the second BH forms, \(\phi_{1}^{\prime}\) is randomly chosen between 0 and \(2\pi\). Finally the angles \(\theta_{1}\) (\(\theta_{2}\)) between the first (second) formed BH spin and the final BBH orbital angular momentum vector just after the second BH formation can be expressed as:
\[\begin{array}{ll}\cos\theta_{1}=\overrightarrow{\chi}_{1}^{\rm s}\cdot\frac{\overrightarrow{L}}{|\overrightarrow{L}|},\\ \cos\theta_{2}=\overrightarrow{\chi}_{2}^{\rm s}\cdot\frac{\overrightarrow{L}}{|\overrightarrow{L}|}.\end{array} \tag{14}\]
We do not consider possible BH spin alignment with the orbital angular momentum due to tides or mass transfer. The effective spin parameter \(\chi_{\rm eff}\) of merging BBHs, which reflects the spin-orbit alignment, is defined as:
\[\chi_{\rm eff}\equiv\frac{m_{1}|\overrightarrow{\chi}_{1}^{\rm s}|\cos \theta_{1}+m_{2}|\overrightarrow{\chi}_{2}^{\rm s}|\cos\theta_{2}}{m_{1}+m_{2}}, \tag{15}\]
where \(m_{1}\) and \(m_{2}\) are the merging BBH masses.
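A short worked example of Eqs. (13)-(15) (our own sketch with illustrative numbers; the post-kick orbital direction used below is an arbitrary assumption) is:

```python
import numpy as np

def chi_eff(m1, m2, chi1, chi2, L_hat_after, theta1p, phi1p):
    """Effective spin, Eqs. (13)-(15).
    chi1, chi2: spin magnitudes of the first/second formed BH;
    L_hat_after: unit orbital angular momentum just after BH2 forms;
    theta1p, phi1p: orientation of BH1's spin in the frame whose z-axis
    is the orbital angular momentum just before BH2 forms."""
    s1 = np.array([np.sin(theta1p)*np.cos(phi1p),
                   np.sin(theta1p)*np.sin(phi1p),
                   np.cos(theta1p)])           # Eq. (13), first-formed BH
    s2 = np.array([0.0, 0.0, 1.0])             # Eq. (13), second-formed BH
    cos_t1 = np.dot(s1, L_hat_after)           # Eq. (14)
    cos_t2 = np.dot(s2, L_hat_after)
    return (m1*chi1*cos_t1 + m2*chi2*cos_t2)/(m1 + m2)   # Eq. (15)

# example: a mild kick tilts the new orbital plane by ~10 degrees
L_new = np.array([np.sin(0.17), 0.0, np.cos(0.17)])
print(chi_eff(165.0, 16.0, 0.1, 0.5, L_new, theta1p=0.0, phi1p=0.0))
```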
### Common-envelope evolution
For semi-detached binaries, a common-envelope (CE) phase occurs when the mass transfer becomes dynamically unstable or when the two stars (with at least one of them being a giant-like star) collide at orbital periastron before Roche lobe overflow (RLOF) (Hurley et al., 2002). The CE phase plays a fundamental role in the formation of GW190521-like systems. Due to the extreme initial mass ratio \(q\sim 0.1\), mass transfer from the massive primary star to the secondary star is usually dynamically unstable and leads to CE evolution. In addition, systems with large orbital eccentricities are likely to collide at periastron. Binaries can survive the CE phase if the accretor's orbital energy is large enough to unbind the stellar envelope4. Here we adopt the \(\alpha_{\rm CE}\lambda\) formalism (de Kool, 1990), which can be expressed as:
Footnote 4: See Hurley et al. (2002) and Tanikawa et al. (2022) for a more comprehensive description of BSEBMP prescriptions of mass transfer during RLOF and CE.
\[E_{\rm bind}=\alpha_{\rm CE}\biggl{(}-\frac{\mathrm{G}M_{\rm c}M_{2}}{2a_{\rm f,CE}}+\frac{\mathrm{G}M_{1}M_{2}}{2a_{\rm i,CE}}\biggr{)}, \tag{16}\]
where the envelope's binding energy,
\[E_{\rm bind}=\int_{M_{\rm c}}^{M_{1}}(-\frac{GM(r)}{r}+\alpha_{\rm th}U)dm=-\frac{\mathrm{G}M_{1}M_{\rm env}}{\lambda R_{\rm RL}}. \tag{17}\]
Here \(\alpha_{\rm CE}\) is the efficiency of converting the released orbital energy to eject CE, \(\lambda\) the binding energy parameter depending on envelope's structure (see Ivanova et al., 2013, for details), \(M_{1}\), \(M_{\rm c}\) and \(M_{\rm env}=(M_{1}-M_{\rm c}\) ) the masses of the donor, donor's core and envelope respectively, \(M_{2}\) the mass of the accretor, \(a_{\rm i,CE}\) and \(a_{\rm f,CE}\) the binary separation before and after the CE phase respectively, \(R_{\rm RL}\) the donor's RL radius at the onset of the CE phase, \(U\) the specific internal energy (including both thermal and recombination energies)
of the envelope, and \(\alpha_{\rm th}\) the efficiency with which thermal energy can be used to eject the envelope. In this work \(\alpha_{\rm th}=1\) is assumed.
Following Xu & Li (2010) and Wang et al. (2016), we calculate the binding energy parameter \(\lambda\) for stars more massive than \(60\,{\rm M}_{\odot}\) with metallicity \(Z=0.02,0.001,0.0001\) using the stellar evolution code MESA (version 11701, Paxton et al. 2011, 2015, 2018, 2019). We then calculate \(\lambda_{\rm b}\) and \(\lambda_{\rm g}\) with \(E_{\rm bind}\) including and excluding the internal energy term \(U\), respectively. Detailed models and fitting results of \(\lambda\) are presented in Appendix A. We only use \(\lambda_{\rm b}\) in our following population synthesis calculations.
In order to explore the dependence of our results on \(\alpha_{\rm CE}\), we have run a set of simulations with \(\alpha_{\rm CE}=0.5\), 1.0, and 3.0 (\(\alpha_{\rm CE}>1.0\) would occur when additional energy or angular momentum depositing into the giant's envelope is considered, see e.g., Soker 2004).
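Solving the energy balance of Eqs. (16)-(17) for the post-CE separation gives a closed-form expression; the sketch below (our own rearrangement, with example numbers that are not taken from the paper) illustrates why a larger \(\alpha_{\rm CE}\) leaves a wider post-CE orbit.

```python
def post_ce_separation(M1, Mc, M2, a_i, R_RL, lam, alpha_ce):
    """Post-CE separation a_f (Rsun) from the alpha_CE*lambda energy
    balance, Eqs. (16)-(17).  Masses in Msun, a_i and R_RL in Rsun;
    the gravitational constant cancels, so no unit constants are needed."""
    M_env = M1 - Mc
    denom = M1*M2/a_i + 2.0*M1*M_env/(alpha_ce*lam*R_RL)
    return Mc*M2/denom

# a 500 Msun giant (350 Msun core) with a 40 Msun companion:
# larger alpha_CE leaves a wider post-CE orbit
for a_ce in (0.5, 1.0, 3.0):
    print(a_ce, post_ce_separation(M1=500.0, Mc=350.0, M2=40.0,
                                   a_i=2000.0, R_RL=900.0,
                                   lam=0.05, alpha_ce=a_ce))
```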
### Population synthesis
We perform BPS simulations of binary stars with the BSEEMP code. We assume that all the stars are in binaries, and exclude binary systems in which at least one of the two components fills its RL at the beginning of evolution. The initial masses \(M_{\rm i,1}\) of the primary stars are distributed in the range of \([300:900]\,{\rm M}_{\odot}\) following the Kroupa (2001) law. For the masses of the secondary stars \(M_{\rm i,2}=qM_{\rm i,1}\), we adopt a flat distribution of the mass ratio \(q\) (Sana et al., 2012), and limit \(M_{\rm i,2}=[10:60]\,{\rm M}_{\odot}\). We consider various metallicities with \(Z=0.0002\), 0.0004, 0.0008, 0.0016, 0.0032, 0.0063, 0.0126 and 0.02. The initial orbital semi-major axis \(a_{\rm i}\) is assumed to be distributed uniformly in log-space and restricted to \([3:10^{7}]\,{\rm R}_{\odot}\). We consider both initially circular (hereafter '\(e_{\rm i}=0\)' model) and eccentric (hereafter '\(e_{\rm i}=0\sim 1\)' model) orbit configurations in each case. In the latter model, the eccentricity follows a uniform distribution between 0 and 1.
We focus only on GW190521-like systems with the primary and secondary BH masses \(m_{1}=[150:180]\,{\rm M}_{\odot}\) and \(m_{2}=[10:20]\,{\rm M}_{\odot}\), so it is reasonable to simulate only binaries within limited initial parameter ranges, while ensuring that the simulated parameter space is complete enough to form GW190521-like systems.
Incorporating the three kick prescriptions (\(kick_{\rm F}=k1,k2\) and \(k3\)) and three values of \(\alpha_{\rm CE}=(0.5\), 1.0 and 3.0), we perform 18 sets of BPS simulations of \(10^{7}\) primordial binaries at each metallicity. We list the initial parameters of models with \(Z\leq 0.0016\) in Table 3. For higher metallicity, we found that there is no GW190521-like system formed in our simulation.
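A minimal sketch of how such a primordial-binary population can be drawn (our own illustrative sampler, not the BSEEMP input generator; the Kroupa law is reduced to its high-mass power-law slope of \(-2.3\) in this mass range, and the \(M_{\rm i,2}\) limit is imposed by rejection, both assumptions on our part) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_powerlaw(lo, hi, alpha, size):
    """Inverse-CDF sampling of dN/dM ~ M**(-alpha) on [lo, hi]."""
    u = rng.uniform(size=size)
    a1 = 1.0 - alpha
    return (lo**a1 + u*(hi**a1 - lo**a1))**(1.0/a1)

def sample_initial_binaries(n, eccentric=True):
    """Draw n primordial binaries following the initial distributions of
    Section 2.6: primary mass in [300, 900] Msun with slope -2.3, flat mass
    ratio with M_i,2 restricted to [10, 60] Msun (by rejection here),
    log-uniform a_i in [3, 1e7] Rsun, and uniform eccentricity if requested."""
    out = []
    while len(out) < n:
        m1 = sample_powerlaw(300.0, 900.0, alpha=2.3, size=1)[0]
        m2 = rng.uniform()*m1
        if not 10.0 <= m2 <= 60.0:
            continue
        a = 10.0**rng.uniform(np.log10(3.0), 7.0)
        e = rng.uniform() if eccentric else 0.0
        out.append((m1, m2, a, e))
    return np.array(out)

print(sample_initial_binaries(5))
```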
We assume a nonconservative mass transfer prescription with the accretion efficiency being 0.5. We follow the evolution of the primordial binaries until the formation of BBHs. For a newborn BBH system with component masses \(m_{1}\) and \(m_{2}\), orbital semi-major axis \(a_{0}\) and eccentricity \(e_{0}\), the inspiral time delay \(t_{\rm inspiral}(a_{0},e_{0})\), namely the time elapsed between the birth and the merger of the BBH, can be calculated (Peters, 1964):
\[t_{\rm inspiral}(a_{0},e_{0})=\frac{12}{19}\frac{c_{0}^{4}}{\beta}\int_{0}^{e_{ 0}}\frac{e^{29/19}[1+(121/304)e^{2}]^{1181/2299}}{(1-e^{2})^{3/2}}de, \tag{18}\]
where
\[c_{0}=a_{0}\ e_{0}^{-12/19}(1-e_{0}^{2})\biggl{(}1+\frac{121}{304}e_{0}^{2} \biggr{)}^{-870/2299}, \tag{19}\]
and
\[\beta=\frac{64}{5}\frac{G^{3}m_{1}m_{2}(m_{1}+m_{2})}{c^{5}}. \tag{20}\]
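The delay-time integral of Eq. (18) is straightforward to evaluate numerically; a minimal sketch (our own code, SI units internally, with inputs in solar units and an illustrative example system) is given below.

```python
import numpy as np
from scipy.integrate import quad

G, C = 6.674e-11, 2.998e8          # SI units
MSUN, RSUN = 1.989e30, 6.957e8
YR = 3.156e7

def t_inspiral(m1, m2, a0, e0):
    """Peters (1964) inspiral time, Eqs. (18)-(20).
    m1, m2 in Msun, a0 in Rsun; returns time in Gyr."""
    m1, m2, a0 = m1*MSUN, m2*MSUN, a0*RSUN
    beta = (64.0/5.0)*G**3*m1*m2*(m1 + m2)/C**5          # Eq. (20)
    if e0 < 1e-6:                   # circular limit: a0**4/(4*beta)
        return a0**4/(4.0*beta)/YR/1e9
    c0 = a0*e0**(-12.0/19.0)*(1.0 - e0**2) \
         *(1.0 + (121.0/304.0)*e0**2)**(-870.0/2299.0)   # Eq. (19)
    integrand = lambda e: (e**(29.0/19.0)
                           *(1.0 + (121.0/304.0)*e**2)**(1181.0/2299.0)
                           /(1.0 - e**2)**1.5)
    integral, _ = quad(integrand, 0.0, e0)                # Eq. (18)
    return (12.0/19.0)*c0**4/beta*integral/YR/1e9

# example: a 165 + 16 Msun BBH born at a0 = 70 Rsun, e0 = 0.35
print(t_inspiral(165.0, 16.0, a0=70.0, e0=0.35))
```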
### Merger rate density
We then follow the procedure of Giacobbo & Mapelli (2018b) to estimate the cumulative merger rate density \(\mathcal{R}(z\leq z_{\rm det})\) of GW190521-like BBH systems within a given redshift \(z_{\rm det}\),
\[\mathcal{R}(z\leq z_{\rm det})= \sum_{z=0.1}^{z=z_{\rm det}}(\frac{1}{t_{\rm lb}(z)-t_{\rm lb}(z- \Delta z)} \tag{21}\] \[\sum_{z=15}^{z=z_{\rm det}}(\frac{f_{\rm bin}}{2}\frac{\mathcal{ SFR}(z)W_{\rm b}}{M_{*}})[t_{\rm lb}(z+\Delta z)-t_{\rm lb}(z)]),\]
where \(t_{\rm lb}(z)\) is the look-back time for binaries formed at redshift \(z\), \(M_{*}\simeq 0.55\,{\rm M}_{\odot}\) the mean mass of a stellar system in a population with the Kroupa (2001) IMF, \(f_{\rm bin}=0.7\) the fraction of stars in binaries (Sana et al., 2012), \(W_{\rm b}\) the contribution of specific binaries we are interested in (Hurley et al., 2002), and \(\mathcal{SFR}(z)\) the cosmic star formation rate density as a function of \(z\), which typically peaks at \(z\sim 1.9\) and declines exponentially at later times. We assume that in our models star formation commenced at \(z=15\).
The BBH progenitor binaries formed at redshift \(z_{\rm f}\) would merge as GW sources at \(z_{\rm m}\) (\(z_{\rm m}<z_{\rm f}\)) after a delay time \(t_{\rm delay}\), which is defined as the interval between the formation of the progenitor binary and the coalescence of the BBH, i.e., \(t_{\rm delay}=t_{\rm inspiral}+T\), where \(T<10\) Myr is the lifetime of the progenitor system, if the orbital angular momentum loss is efficient enough. We can then obtain the look-back
\begin{table}
\begin{tabular}{c c c c c} \(Z\) & 0.0002 & 0.0004 & 0.0008 & 0.0016 \\ \hline \(M_{\rm i,1}\,[{\rm M}_{\odot}]\) & \(300-450\) & \(350-500\) & \(400-700\) & \(500-900\) \\ \hline \(M_{\rm i,2}\,[{\rm M}_{\odot}]\) & \multicolumn{3}{c}{\(20-60\)} \\ \hline \(a_{\rm i}\,[{\rm R}_{\odot}]\) & \multicolumn{3}{c}{\(10^{3.5}-10^{7}\)} \\ \hline \(kick_{\rm F}\) & \multicolumn{3}{c}{\(k1,\ k2,\ k3\)} \\ \hline \(\alpha_{\rm CE}\) & \multicolumn{3}{c}{\(0.5\), 1.0, 3.0} \\ \end{tabular}
\end{table}
Table 3: The initial parameters of our BPS models for both ‘\(e_{\rm i}=0\)’ and ‘\(e_{\rm i}=0\sim 1\)’ models. Here \(M_{\rm i,1}\) and \(M_{\rm i,2}\) are the initial primary and secondary masses respectively, and \(a_{\rm i}\) is the initial orbital semi-major axis, all in Solar units. Because there is no BH as massive as \(\sim 150\,{\rm M}_{\odot}\) formed in our calculation at \(Z\geq 0.0032\), owing to the significant wind mass loss and PISN under our evolutionary assumptions, we only display the runs of \(Z=0.0002\), 0.0004, 0.0008, and 0.0016 which can form BHs with mass in the range of interest.
times at their formation
\[t_{\rm hb}(z=z_{\rm f})=\tau_{\rm H}\int_{0}^{z_{\rm f}}\frac{1}{(1+z)E(z)}{\rm d}z, \tag{22}\]
where \(E(z)=[\Omega_{\rm m}(1+z)^{3}+\Omega_{\lambda}]^{1/2}\), and at their merger
\[t_{\rm merg}=t_{\rm hb}(z=z_{\rm m})=t_{\rm lb}(z=z_{\rm f})-t_{\rm delay}. \tag{23}\]
In our calculation, we employ the flat \(\Lambda\)CDM model with \(H_{0}=67.8\,{\rm kms^{-1}Mpc^{-1}}\), \(\Omega_{\rm m}=0.3\) and \(\Omega_{\lambda}=0.7\), where \(\tau_{\rm H}=1/H_{0}=14.4\,{\rm Gyr}\) is the Hubble time (Planck Collaboration et al., 2016). We adopt the cosmic \(\mathcal{SFR}(z)\) density in Madau & Dickinson (2014):
\[\mathcal{SFR}(z)=\frac{0.015(1+z)^{2.7}}{1+((1+z)/2.9)^{5.6}}\,{\rm M}_{\odot }{\rm yr}^{-1}{\rm Mpc}^{-3}, \tag{24}\]
and the metallicity as a function of redshift \(z\) in Belczynski et al. (2016). For the portions of distributions extending beyond the metallicity range \([0.0002,0.02]\), we use the recorded information of the systems at \(Z=0.0001\) or \(0.02\). We exclude the mergers in the near future (\(t_{\rm merg}<0\)).
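For reference, the cosmological ingredients of Eqs. (22)-(24) can be computed as in the following sketch (our own illustrative code, assuming the flat \(\Lambda\)CDM parameters quoted above; it is not the code actually used in this work).

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.8           # km/s/Mpc
OM, OL = 0.3, 0.7
TAU_H = 14.4        # Hubble time in Gyr (1/H0)

def E(z):
    return np.sqrt(OM*(1.0 + z)**3 + OL)

def t_lookback(z):
    """Look-back time in Gyr, Eq. (22)."""
    integral, _ = quad(lambda zp: 1.0/((1.0 + zp)*E(zp)), 0.0, z)
    return TAU_H*integral

def sfr(z):
    """Madau & Dickinson (2014) SFR density (Msun/yr/Mpc^3), Eq. (24)."""
    return 0.015*(1.0 + z)**2.7/(1.0 + ((1.0 + z)/2.9)**5.6)

# a binary formed at z_f = 6 with a 6 Gyr delay merges at this look-back time
print(t_lookback(6.0) - 6.0, "Gyr before today; SFR(6) =", sfr(6.0))
```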
### Character strain
The characteristic strain of the GW signals at the \(n\)th harmonic can be calculated following Kremer et al. (2019),
\[h_{c,n}^{2}=\frac{2}{3\pi^{4/3}}\frac{G^{5/3}}{c^{3}}\frac{M_{c,z}^{5/3}}{D_{ L}^{2}}\frac{1}{f_{n,z}^{1/3}\left(1+z\right)^{2}}\left(\frac{2}{n}\right)^{2/3} \frac{g(n,e)}{F(e)}, \tag{25}\]
where \(M_{c,z}=M_{c}(1+z)=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}(1+z)\) is the observed chirp mass at redshift \(z\), and \(D_{L}\) is the luminosity distance to the source calculated by
\[D_{L}(z)=\frac{c(1+z)}{H_{0}}\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime})}, \tag{26}\]
\(f_{n,z}=\frac{f_{n}}{1+z}=\frac{nf_{\rm orb}}{1+z}\) is the observed frequency of the \(n\)th harmonic (\(f_{n}\) is the frequency of the \(n\)th harmonic in the source frame and \(f_{\rm orb}\) is the source-frame orbital frequency), \(g(n,e)\) is a function of eccentricity, and \(F(e)\) is the eccentricity correction factor defined to be (Peters & Mathews, 1963):
\[F(e)=\sum_{n=1}^{\infty}g(n,e)=\frac{1}{(1-e^{2})^{7/2}}\left(1+\frac{73}{24}e ^{2}+\frac{37}{96}e^{4}\right). \tag{27}\]
As the GW power is sharply peaked at the peak frequency \(f_{\rm peak}\)(Peters & Mathews, 1963), we calculate the characteristic strain of our modeled GW sources at the peak frequency \(f_{\rm peak}\) for simplicity (Hamers, 2021),
\[\begin{split} f_{\rm peak}=&\frac{\sqrt{G\left(m_{1 }+m_{2}\right)}}{\pi}\times\\ &\frac{1-1.01678e+5.57372e^{2}-4.9271e^{3}+1.68506e^{4}}{[a\,(1- e^{2})]^{1.5}}.\end{split} \tag{28}\]
Thus \(n=n_{\rm peak}=f_{\rm peak}/f_{\rm orb}\)(Wang et al., 2022).
During the inspiral, the eccentricity changes due to gravitational radiation (Peters, 1964),
\[\frac{de}{dt}=-\frac{19}{12}\frac{\beta}{c_{0}^{4}}\frac{e^{-29/19}(1-e^{2})^ {3/2}}{[1+\frac{121}{304}e^{2}]^{1181/2299}}, \tag{29}\]
and the orbital separation evolves with eccentricity
\[a(e)=\frac{c_{0}e^{12/19}}{(1-e^{2})}\left[1+\frac{121}{304}e^{2}\right]^{870/2 299}, \tag{30}\]
where \(c_{0}\) is determined by the initial condition \(a(e_{0})=a_{0}\) (see Eq. [19]).
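The quantities entering the detectability discussion below can be evaluated as in this sketch (our own illustrative code; the strain estimate assumes the circular limit, where the dominant \(n=2\) harmonic has \(g=F=1\), which is appropriate only once the binary has circularized).

```python
import numpy as np
from scipy.integrate import quad

G, C = 6.674e-11, 2.998e8
MSUN, RSUN, MPC = 1.989e30, 6.957e8, 3.086e22
H0_SI = 67.8*1e3/MPC

def f_peak(m1, m2, a, e):
    """Peak GW frequency (Hz), Eq. (28).  m1, m2 in Msun, a in Rsun."""
    num = 1.0 - 1.01678*e + 5.57372*e**2 - 4.9271*e**3 + 1.68506*e**4
    return np.sqrt(G*(m1 + m2)*MSUN)/np.pi*num/(a*RSUN*(1.0 - e**2))**1.5

def F(e):
    """Eccentricity enhancement factor, Eq. (27)."""
    return (1.0 + 73.0/24.0*e**2 + 37.0/96.0*e**4)/(1.0 - e**2)**3.5

def lum_distance(z, om=0.3, ol=0.7):
    """Luminosity distance (m), Eq. (26)."""
    Ez = lambda zp: np.sqrt(om*(1.0 + zp)**3 + ol)
    integral, _ = quad(lambda zp: 1.0/Ez(zp), 0.0, z)
    return C*(1.0 + z)/H0_SI*integral

def hc_circular(m1, m2, f_orb, z):
    """Characteristic strain of the dominant n=2 harmonic in the
    circular limit (g = F = 1), from Eq. (25)."""
    mc = (m1*m2)**0.6/(m1 + m2)**0.2*MSUN*(1.0 + z)   # redshifted chirp mass
    f2z = 2.0*f_orb/(1.0 + z)                         # observed n=2 frequency
    dl = lum_distance(z)
    h2 = (2.0/(3.0*np.pi**(4.0/3.0))*G**(5.0/3.0)/C**3
          *mc**(5.0/3.0)/dl**2/(f2z**(1.0/3.0)*(1.0 + z)**2))
    return np.sqrt(h2)

print(f_peak(165.0, 16.0, a=70.0, e=0.35), hc_circular(165.0, 16.0, 1e-2, 0.2))
```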
## 3 Results
For each model we regard the merging BBHs with component masses \(m_{1}=[150:180]\,{\rm M}_{\odot}\) and \(m_{2}=[10:20]\,{\rm M}_{\odot}\) as GW190521-like systems (as shown in Fig.1). According to the analysis of Nitz & Capano (2021), the primary BH mass in GW190521 is \(\sim 170\,{\rm M}_{\odot}\). We simulate binary evolution with the primary mass \(\leq 900\,{\rm M}_{\odot}\) at different metallicities and find that such massive BHs can only form at relatively low metallicities (\(Z\leq 0.0016\)), with the pre-collapse core-helium masses heavier than \(135\,{\rm M}_{\odot}\), while stars at higher metallicity (\(Z\geq 0.0032\)) will undergo PISNe, triggering complete disruption of the star and leaving no compact remnants. This is because stars formed in lower metallicity environments can reach higher central temperatures, which results in larger core masses than their counterparts at higher metallicity. So our following discussions only refer to the data set of the models with \(Z\leq 0.0016\).
Table 4 lists the predicted numbers of GW190521-like systems and kick velocity distributions under different conditions. Their main features can be summarized as follows.
\(\bullet\) Most GW190521-like systems form through the "MT+CE" channel. Here, "MT" means that stars in a binary interact via stable RLOF or wind accretion, and the "CE" phase is triggered by an eccentric collision of both stars at periastron instead of dynamically unstable mass transfer caused by the expansion of the donor star. Binary stars in eccentric orbits may collide at periastron before either one fills its RL, and such collisions lead to CE evolution if at least one of the stars is a giant-like star (Hurley et al., 2002). Thus only the \(k3\) kick prescription, which produces non-zero natal kick velocities for BH1 (BH1 forms through complete fallback, so it receives no kick under \(k1\) or \(k2\)), works in this channel. Besides, the number of BBH mergers decreases as \(\alpha_{\rm CE}\) increases, because larger \(\alpha_{\rm CE}\) leads to wider orbits after the CE phase, making it more difficult for the BBHs to merge. To reproduce GW190521-like systems in this channel requires \(v_{\rm kick,1}\simeq 0-50\,{\rm kms^{-1}}\) and \(v_{\rm kick,2}\simeq 0-700\,{\rm kms^{-1}}\).
\(\bullet\) Only about \(0.1\%\) of GW190521-like systems form through the "MT+MT" channel. Systems with moderately low eccentricities can avoid collision at periastron until the BBH formation, so this channel is independent of the value of \(\alpha_{\rm CE}\). The predicted natal kick velocities are \(v_{\rm kick,1}=0\) and \(v_{\rm kick,2}\simeq 30-90\,{\rm kms^{-1}}\) (\(kick_{\rm F}=k1,k2\)), and \(v_{\rm kick,1}\simeq 0-50\,{\rm kms^{-1}}\) and \(v_{\rm kick,2}\simeq 16-400\,{\rm kms^{-1}}\) (\(kick_{\rm F}=k3\)).
\(\bullet\) For very low mass ratio binaries (\(q<0.1\)), the secondary star usually does not have enough energy to drive off the CE if it is triggered by dynamically unstable MT, so there is no system formed via the "CE+MT" channel in the '\(e_{\rm i}=0\)' model. In the '\(e_{\rm i}=0\sim 1\)' model, the non-zero eccentricity makes CE evolution possible just like in the "MT+CE" channel. The number of surviving systems also decreases with increasing \(\alpha_{\rm CE}\). The predicted natal kick
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\alpha_{\rm CE}\) & \(kick_{\rm F}\) & MT+CE & MT+MT & CE+MT & CE+CE \\ & \(v_{\rm kick}\) [km\(s^{-1}\)] & & & & \\ \hline & \(k1\) & - & 19 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 41.305\(-\)63.905 & & \\ \cline{2-6}
0.5 & \(k2\) & - & 32 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.9\(-\)80.6 & & \\ \cline{2-6} & \(k3\) & 17279 & 26 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & 1.695\(-\)46.487, 0.004\(-\)691.626 & 0.015\(-\)40.476, 16.843\(-\)114.77 & & \\ \cline{2-6} & \(k1\) & - & 19 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 41.305\(-\)63.905 & & \\ \cline{2-6}
1.0 & \(k2\) & - & 32 & - & - \\ \cline{2-6} \(e_{\rm l}=0\) & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.9\(-\)80.6 & & \\ \cline{2-6} & \(k3\) & 14862 & 28 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & 2.499\(-\)46.499, 0.028\(-\)691.626 & 3.942\(-\)40.476, 21.428\(-\)383.477 & & \\ \cline{2-6} & \(k1\) & - & 20 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 41.305\(-\)63.905 & & \\ \cline{2-6}
3.0 & \(k2\) & - & 32 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.9\(-\)80.6 & & \\ \cline{2-6} & \(k3\) & 8671 & 24 & - & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & 1.969\(-\)46.605, 0.111\(-\)693.735 & 2.265\(-\)44.953, 19.369\(-\)214.626 & & \\ \hline & \(k1\) & - & 11 & 134 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.286\(-\)72.81 & 0, 171.063\(-\)394.826 & \\ \cline{2-6}
0.5 & \(k2\) & - & 22 & 105 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 30.589\(-\)90.388 & 0, 166.311\(-\)413.726 & \\ \cline{2-6} & \(k3\) & 18394 & 17 & 235 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & 0.379\(-\)46.509, 0.092\(-\)697.315 & 4.734\(-\)45.848, 34.513\(-\)142.709 & 0.092\(-\)44.373, 99.196\(-\)422.095 & - \\ \cline{2-6} & \(k1\) & - & 10 & 45 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.286\(-\)72.81 & 0, 146.125\(-\)249.277 & \\ \cline{2-6}
1.0 & \(k2\) & - & 24 & 63 & - \\ \cline{2-6} \(e_{\rm l}=0\sim 1\) & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 30.589\(-\)90.388 & 0, 134.284\(-\)301.761 & \\ \cline{2-6} & \(k3\) & 15887 & 26 & 63 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & 0.149\(-\)46.187, 0.032\(-\)687.68 & 1.202\(-\)46.288, 19.738\(-\)152.097 & 0.021\(-\)41.708, 102.228\(-\)260.19 & \\ \cline{2-6} & \(k1\) & - & 11 & 19 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 36.286\(-\)72.81 & 0,104.945\(-\)171.183 & \\ \cline{2-6}
3.0 & \(k2\) & - & 23 & 22 & - \\ & \(v_{\rm kick,1},v_{\rm kick,2}\) & & 0, 30.589\(-\)90.388 & 0, 109.604\(-\)186.818 & \\ \cline{2-6} & \(k3\) & 9250 & 15 & 28 & - \\ \cline{2-6} \(v_{\rm kick,1},v_{\rm kick,2}\) & 0.899\(-\)46.42, 0.017\(-\)675.649 & 10.485\(-\)45.848, 19.738\(-\)190.988 & 2.076\(-\)41.044, 59.636\(-\)247.057 & \\ \hline \end{tabular}
\end{table}
Table 4: Numbers of GW190521-like systems evolved via each evolution channel from ZAMS binaries to BBHs. “MT+CE”: system experiences stable mass transfer (via RLOF or wind mass loss) and later once CE phase, “MT+MT”: system without CE evolution, “CE+ MT”: system experiences once CE phase and later stable RLOF or wind accretion, “CE+CE”: system experiences twice CE phases. The corresponding minimum and maximum of \(v_{\rm kick,1}\) and \(v_{\rm kick,2}\) are also shown followed their numbers.
velocities are \(v_{\rm kick,1}=0\) and \(v_{\rm kick,2}\simeq 100-400\,\)kms\({}^{-1}\) (\(kick_{\rm F}=k1,k2\)), and \(v_{\rm kick,1}\simeq 0-45\,\)kms\({}^{-1}\) and \(v_{\rm kick,2}\simeq 60-420\,\)kms\({}^{-1}\) (\(kick_{\rm F}=k3\)).
* No GW190521-like system forms through the "CE+CE" channel.
Table 5 presents the inferred parameters of GW190521-like systems that will merge within \(z=1.1\) and their progenitors. As most of them are formed with \(kick_{\rm F}=k3\), we only show the results with the \(k3\) prescription. The quoted numbers and their subscripts and superscripts represent the \(50th\), \(16th\) and \(84th\) percentiles of each parameter. It is seen that the results do not show significant differences between the '\(e_{\rm i}=0\)' and '\(e_{\rm i}=0\sim 1\)' models.
The analysis of Abbott et al. (2020a,b) suggested that GW190521 merged at a redshift of \(0.82^{+0.28}_{-0.34}\), while Nitz & Capano (2021) predicted a luminosity distance of \(1.06^{+1.4}_{-0.28}\)Gpc (\(z\simeq 0.21^{+0.23}_{-0.05}\)). According to their restrictions on the redshift, we display the calculated merger rate density \(\mathcal{R}\) of GW190521-like systems at \(z\leq 0.48\) and \(z\leq 1.1\) in Table 6, which lie in the range of \(4\times 10^{-5}-5\times 10^{-2}\) Gpc\({}^{-3}\)yr\({}^{-1}\). As mentioned above, the merger rate density with the \(k3\) kick prescription is much higher than with the \(k1\) or \(k2\) prescription.
Fig. 2 shows the distribution of \(e_{0}\) and \(a_{0}\) at the birth of the BBHs (with \(Z=0.0002\) and \(kick_{\rm F}=k3\)). The upper and lower panels correspond to the '\(e_{\rm i}=0\)' and '\(e_{\rm i}=0\sim 1\)' models, and the left, middle, and right panels correspond to \(\alpha_{\rm CE}=0.5\), \(1.0\), and \(3.0\), respectively. The distribution of \(e_{0}\) tends to be wider with increasing \(\alpha_{\rm CE}\) in both '\(e_{\rm i}=0\)' and '\(e_{\rm i}=0\sim 1\)' models. For \(\alpha_{\rm CE}=3.0\), there are many BBHs with \(e_{0}>0.8\), while for \(\alpha_{\rm CE}=0.5\) and \(1.0\), few BBHs form with extremely eccentric and wide orbits ( \(a_{0}\sim 10^{4}\)R\({}_{\odot}\)). Most of the BBHs have \(e_{0}\leq 0.4\) and \(a_{0}\) concentrated within \(\sim 10-100\) R\({}_{\odot}\). BBHs with shorter orbits and moderate eccentricities usually merge earlier than others, as shown by the colorbars.
Figs. 3 \(-\) 5 show the results for \(Z=0.0004\), \(0.0008\) and \(0.0016\), respectively. They show a tendency similar to that in Fig. 2. A comparison of Figs. 2 \(-\) 5 shows that, as the metallicity increases, there are more BBHs with highly eccentric and wide orbits (\(e_{0}>0.8\) and \(a_{0}>100\)R\({}_{\odot}\)). This is because the progenitors of GW190521-like systems at higher \(Z\) are more massive and thus have larger sizes than those at low \(Z\).
Current and upcoming missions such as the ground-based aLIGO, Cosmic Explorer (CE) (Reitze et al., 2019), Einstein Telescope (ET) (Punturo et al., 2010) and space-borne DECIGO (Seto et al., 2001) and LISA (Amaro-Seoane et al., 2017) would detect thousands of BBH merger events per year (Evans et al., 2021). We explore whether GW190521-like systems could be detected by these instruments. In Fig. 6, the left three panels present the evolution of the eccentricity of GW190521-like systems during the inspiral prior to merger as a function of the peak frequency. The GW emission dominates the evolution of the binary semi-major axis and eccentricity, leading to efficient circularization of the BBH systems before the merger. The right three panels present the GW signal characteristic strain at the \(n_{\rm peak}th\) harmonic along with the orbital evolution as a function of the peak frequency, which is the key ingredient
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(\alpha_{\rm CE}\) & \(M_{\rm i,1}\) (M\({}_{\odot}\)) & \(M_{\rm i,2}\) (M\({}_{\odot}\)) & \(\log a_{\rm i}\) (R\({}_{\odot}\)) & \(e_{\rm i}\) & \(\log a_{0}\) (R\({}_{\odot}\)) \\ \hline \multirow{3}{*}{\(e_{\rm i}=0\)} & 0.5 & \(433.71^{+147.60}_{-55.03}\) & \(42.95^{+3.01}_{-2.42}\) & \(4.16^{+0.39}_{-0.28}\) & 0 & \(1.89^{+0.17}_{-0.08}\) \\ & 1.0 & \(419.53^{+198.54}_{-46.43}\) & \(42.80^{+3.68}_{-3.14}\) & \(4.19^{+0.36}_{-0.28}\) & 0 & \(1.88^{+0.10}_{-0.07}\) \\ & 3.0 & \(444.77^{+224.67}_{-67.40}\) & \(37.83^{+4.08}_{-2.79}\) & \(4.16^{+0.35}_{-0.24}\) & 0 & \(1.84^{+0.09}_{-0.07}\) \\ \hline \multirow{3}{*}{\(e_{\rm i}=0\sim 1\)} & 0.5 & \(450.16^{+232.91}_{-69.61}\) & \(43.07^{+3.11}_{-3.13}\) & \(4.19^{+0.34}_{-0.29}\) & \(0.49^{+0.24}_{-0.32}\) & \(1.90^{+0.54}_{-0.09}\) \\ & 1.0 & \(422.61^{+206.80}_{-47.97}\) & \(43.18^{+3.33}_{-3.38}\) & \(4.22^{+0.37}_{-0.28}\) & \(0.47^{+0.24}_{-0.33}\) & \(1.88^{+0.15}_{-0.07}\) \\ & 3.0 & \(442.68^{+229.05}_{-67.86}\) & \(37.71^{+4.36}_{-2.57}\) & \(4.22^{+0.44}_{-0.29}\) & \(0.46^{+0.27}_{-0.28}\) & \(1.84^{+0.11}_{-0.07}\) \\ \hline \hline \multicolumn{5}{c}{\(e_{0}\)} & \(m_{\rm 1}\)(M\({}_{\odot}\)) & \(m_{\rm 2}\)(M\({}_{\odot}\)) & \(v_{\rm kick,1}\) (kms\({}^{-1}\)) & \(v_{\rm kick,2}\) (kms\({}^{-1}\)) & \(t_{\rm delay}\) (Gyr) \\ \hline \(0.35^{+0.29}_{-0.25}\) & \(164.14^{+8.36}_{-10.20}\) & \(17.26^{+1.58}_{-1.52}\) & \(33.77^{+5.90}_{-11.27}\) & \(229.66^{+109.71}_{-165.78}\) & \(5.65^{+3.24}_{-1.84}\) \\ \(0.27^{+0.31}_{-0.18}\) & \(163.55^{+9.28}_{-8.30}\) & \(17.15^{+2.00}_{-18.82}\) & \(34.03^{+6.13}_{-10.45}\) & \(180.34^{+165.35}_{-117.76}\) & \(5.94^{+3.09}_{-2.09}\) \\ \(0.24^{+0.27}_{-0.17}\) & \(164.13^{+9.84}_{-9.38}\) & \(14.61^{+1.93}_{-2.67}\) & \(33.83^{+6.22}_{-8.96}\) & \(186.98^{+163.48}_{-134.74}\) & \(5.81^{+3.14}_{-2.06}\) \\ \hline \(0.38^{+0.51}_{-0.27}\) & \(165.51^{+8.21}_{-11.03}\) & \(17.24^{+1.61}_{-1.56}\) & \(27.56^{+10.36}_{-12.94}\) & \(
determining whether such mergers can be seen with the GW detectors, in the '\(e_{\rm i}=0\)' model. The evolutionary tracks (calculated with Eqs. [29] and [30]) are gradually overlapped by the sensitivity curves of LISA, DECIGO, ET, CE, A+LIGO, and aLIGO during the orbital shrinking and circularizing stage. The systems become largely circularized before entering the sensitivity band of ET, CE, A+LIGO, and aLIGO, and any residual eccentricity is expected to have a negligible effect on their detectability (Mandel et al. 2008). The most distant detectable GW190521-like mergers are at the redshift \(\sim 4.4\). Fig. 7 shows the results in the '\(e_{\rm i}=0\sim 1\)' model, and there are no significant differences compared with Fig. 6.
Another important characteristic of merging BBHs is their spins, which get imprinted in the GW signal (Cutler & Flanagan 1994). The BH progenitors gain and lose their spin angular momenta through stellar evolution, mass transfer and tidal interactions (Hurley et al. 2002; Belczynski et al. 2020b; Tanikawa et al. 2021). The spin angular momenta are generally parallel to the orbital angular momentum, until the kicks to the BHs cause their spin axes to tilt. According to traditional tidal theory (Zahn 1977; Hut 1981), the torque depends on the ratio of the stellar radius \(R\) to the separation \(a\) of both stars, that is, \(\propto(R/a)^{6}\). Because the progenitor of BH1 is very massive and the initial orbit is very wide, spin-up of BH1's progenitor is ineffective, so the merging BBHs generally have small \(|\chi_{\rm eff}|\). In the '\(e_{\rm i}=0\)' model, \(\chi_{\rm eff}=0\sim 0.1\) (\(kick_{\rm F}=k3\)) and \(-0.09\sim 0.1\) (\(kick_{\rm F}=k1,k2\)); in the '\(e_{\rm i}=0\sim 1\)' model, \(\chi_{\rm eff}=-0.08\sim 0.1\) (\(kick_{\rm F}=k3\)) and \(-0.1\sim 0.1\) (\(kick_{\rm F}=k1,k2\)). These values are in contradiction with the prediction of Nitz & Capano (2021) that the BH spin is anti-aligned with the orbital angular momentum and \(\chi_{\rm eff}=-0.51^{+0.24}_{-0.11}\). However, the mechanisms of tidal interactions are not well understood. If we adopt the Geneva model (Eggenberger et al. 2008) in which angular momentum is mainly transported by meridional currents (see also Belczynski et al. 2020a), \(|\overrightarrow{\chi}_{1}^{\rm s}|\) can increase from \(\sim 0\) to \(\sim 0.25\). As the spin of the merger product is dominated by the contribution of the more massive BH1, the estimated \(\chi_{\rm eff}\) changes to \(-0.3\sim 0.32\). We also note that, by using Tayler-Spruit magnetic dynamo angular momentum transport, Belczynski et al. (2020a) inferred the natal spins of BBHs (\(m_{1}=84.9\) M\({}_{\odot}\), \(m_{2}=64.6\) M\({}_{\odot}\)) that
would merge within Hubble time to be \(|\overrightarrow{\chi_{1}}|=0.052\) and \(|\overrightarrow{\chi_{2}}|=0.523\).
## 4 Discussion
The natal kick plays a key role in the life of a compact star binary, as it affects not only the orbital parameters and systemic velocity, but also the binary evolutionary path (Brandt & Podsiadlowski, 1995). There is a general consensus that NSs are usually born with large kick velocities \(v_{\rm kick,NS}\sim 200-500\,{\rm kms}^{-1}\) (Lyne & Lorimer, 1994). However, the origin of the SN kick is still under debate. One possible mechanism is the asymmetric material ejection during the SN explosion, triggered by large-scale hydrodynamic perturbations or convection instabilities in the SN core (Burrows & Hayes, 1996; Goldreich et al., 1997; Scheck et al., 2004; Nordhaus et al., 2012; Gessner & Janka, 2018). Other investigations suggest that it may be related to the anisotropic neutrino emission from the proto-NS induced by strong magnetic fields (Kusenko & Segre, 1996; Lai & Qian, 1998; Maruyama et al., 2011). In addition, topological currents may be responsible for the natal kick (Charbonneau & Zhitnitsky, 2010).
The kick velocity distribution is also in active study. Arzoumanian et al. (2002) studied the velocity distribution of radio pulsars based on large-scale 0.4 GHz pulsar surveys, and found a two-component velocity distribution with characteristic velocities of 90 and 500 \({\rm kms}^{-1}\). Hobbs et al. (2005) analyzed a catalogue of 233 pulsars with proper motion measurements, and suggested the NS natal kick distribution with a Maxwellian one-dimensional dispersion \(\sigma_{\rm NS}=265\,{\rm kms}^{-1}\), which is widely used in later studies. From the analysis of the proper motions of 28 pulsars using very long baseline array interferometry data, Verbunt et al. (2017) showed that a distribution with two Maxwellians improves significantly on a single Maxwellian for the young pulsar velocities.
Whether stellar-mass BHs receive such large kicks is also a matter of debate. A growing number of studies have been devoted to investigate the natal kicks of BHs relying on a variety of methods and data sets, such as the study of massive runaway and walkaway stars (Blaauw, 1961; De Donder et al., 1997; Renzo et al., 2019; Aghakhanloo et al., 2022), BH X-ray binaries (Mirabel et al., 2001; Jonker & Nelemans, 2004; Repetto et al., 2012; Wong et al., 2012, 2014; Atri et al., 2019; Kimball et al., 2022), astrometric microlensing (Andrews & Kalogera, 2022), and merging BBH GW events (Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2021).
In light of the observational constraints on the NS/BH natal kick velocities, several phenomenological and analytic kick prescriptions have been proposed, mainly depending on the SN ejecta mass and remnant mass (e.g., Bray & Eldridge, 2018; Giacobbo & Mapelli, 2020; Mandel & Muller, 2020; Richards et al., 2022). Because the kick-induced orbital eccentricity determines the time-scale over which BBHs are expected to merge via GW radiation, the merging history of BBHs provides a probe of the natal kicks received by BHs. Based on the premise that GW190521 is an IMRI with component masses of \(\sim 170\,{\rm M}_{\odot}\) and \(\sim 16\,{\rm M}_{\odot}\) (Nitz & Capano, 2021), we examine the isolated binary evolution channel with three kick prescriptions. In the \(k1\) and \(k2\) prescriptions the BH natal kick is determined by the fallback fraction \(f_{\rm fb}\), so a massive BH that experienced total fallback (\(f_{\rm fb}=1.0\)) would receive no kick, while in the \(k3\) prescription BHs always receive a kick produced through asymmetric neutrino emission.
Our calculations indicate that, to produce the merger event, the less massive BH should receive a natal kick with a velocity of a few hundred \({\rm kms}^{-1}\), thus preferring the \(k3\) prescription. This is of particular interest since in most cases both BHs formed through total fallback, and the conclusion is not sensitive to the choice of the CE efficiency \(\alpha_{\rm CE}\).
We predict the merger rate density of GW190521-like systems \({\cal R}(z\leq 1.1)\sim 4\times 10^{-5}-5\times 10^{-2}\,{\rm Gpc}^{-3}{\rm yr}^{-1}\) if the BH natal kick is weighted to follow a Maxwellian distribution of the NS kick with \(\sigma_{\rm NS}=265\,{\rm kms}^{-1}\). Under the interpretation that GW190521 is an almost equal mass ratio system, the LIGO/Virgo collaboration reported the merger rate density of GW190521-like systems to be \(0.13^{+0.30}_{-0.10}\,{\rm Gpc}^{-3}{\rm yr}^{-1}\) with the effective spin parameter \(\chi_{\rm eff}=0.08^{+0.26}_{-0.36}\) (Abbott et al., 2020a,b). By employing a new estimate of the PPISN mass loss, Belczynski et al. (2020a) obtained a merger rate density of \(\sim 0.04\,{\rm Gpc}^{-3}{\rm yr}^{-1}\) for such events via isolated binary evolution. Tanikawa et al. (2021) estimated the merger rate density of Pop III BBHs (with total mass \(\sim 130-260\,{\rm M}_{\odot}\) and comprising at least one \(130-200\,{\rm M}_{\odot}\) IMBH) to be about \(0.01\,{\rm Gpc}^{-3}{\rm yr}^{-1}\). Hijikawa et al. (2022) performed a BPS calculation for very massive Population III stars and derived the properties of the BBH mergers, adopting constant values for \(\alpha_{\rm CE}\lambda\) in their CE evolution. In their 'low mass + high mass' model, the resultant compact binaries consist of a stellar mass BH (below the PISN mass gap) and an IMBH (above the PISN mass gap) with mass ratio ranging from 0.15 to 0.35. The predicted merger rate density peaks at \(z\sim 10\) with a value of \((1-10)\,{\rm Gpc}^{-3}{\rm yr}^{-1}\), and declines to nearly zero at \(z\leq 3\) because of the very short delay time
Figure 1: The posterior distribution of GW190521 from Nitz & Capano (2021) under the \(Prior_{q-M}\) prior; the overlaid yellow region marks the component masses of our calculated GW190521-like systems.
(less than 10 Myr). In our Population II evolution channel, the merger rate peaks at \(z\sim 2\) with the delay time ranging from \(\sim 1.4-12.1\) Gyr.
## 5 Summary
The third observing run operated by aLIGO and advanced Virgo discovered a massive BBH merger event GW190521, with a remnant total mass of \(150^{+29}_{-17}\) M\({}_{\odot}\), falling in the IMBH regime (Abbott et al., 2020), and the component masses were estimated to be \((m1,m2)=(85^{+21}_{-14},66^{+17}_{-18})\) M\({}_{\odot}\) within 90% credible region (see also Barrera & Bartos, 2022; Gamba et al., 2021). Nitz & Capano (2021), however, showed that GW190521 may be alternatively an IMRI, with the component masses of \(m_{1}\sim 170\) M\({}_{\odot}\) and \(m_{2}\sim 16\) M\({}_{\odot}\), which happen to straddle the PISN mass gap. In the most recent analysis, Gamba et al. (2022) revealed the BH masses to be \(81^{+62}_{-25}\) M\({}_{\odot}\) and \(52^{+32}_{-32}\) M\({}_{\odot}\) under the hypothesis that it was generated by the merger of two non-spinning BHs on hyperbolic orbits. So the nature of GW190521 is still uncertain.
Assuming the configuration of Nitz & Capano (2021) for GW190521 (or similar systems to be discovered in the future), we perform BPS simulations to interpret the formation of GW190521-like systems via the isolated binary evolution channel. Our analyses prefer that this merger event evolved from primordial binary systems in a metal-poor environment with \(Z\leq 0.0016\). The majority of them are formed via an initial phase of stable RLOF before the formation of BH1, followed by a CE phase triggered by collision at periastron when BH1's companion is a giant-like star in a close eccentric orbit. The initial ZAMS progenitor masses are expected to be \(M_{\rm i,1}\sim 300-800\) M\({}_{\odot}\) and \(M_{\rm i,2}\sim 20-60\) M\({}_{\odot}\), respectively, which are metallicity dependent. By using the fallback-independent kick prescription, the merger event requires the primary and secondary BHs to receive natal kicks with velocities \(v_{\rm kick,1}<50\) kms\({}^{-1}\) and \(v_{\rm kick,2}<700\) kms\({}^{-1}\). Our results support the hypothesis that BHs formed by direct core collapse can receive considerably large natal kicks. The predicted merger rate density for GW190521-like systems is \(4\times 10^{-5}-5\times 10^{-2}\) Gpc\({}^{-3}\)yr\({}^{-1}\) at \(z_{\rm m}\leq 1.1\). We also find that using the traditional treatment of tidal interactions results in a very small effective spin parameter, but if using the Geneva model instead, \(\chi_{\rm eff}\) ranges from \(-0.3\) to \(0.32\), roughly located within the interval \(-0.51^{+0.24}_{-0.11}\) estimated by Nitz & Capano (2021).
## Acknowledgments
We thank the anonymous referee for their useful comments, which helped improve the manuscript. We are also grateful to Shi-Jie Gao for essential help with the calculation of \(\lambda\)
Figure 2: The distribution of the eccentricity versus the orbital semi-major axis at BBH formation, for the runs with \(kick_{\rm F}=k3\) and metallicity \(Z=0.0002\). The shaded region in each panel represents the area of theoretical parameter space (\(e_{0},a_{0}\)) which satisfies \(t_{\rm inspiral}(e_{0},a_{0})\leq\tau_{\rm H}\), while the color-coded points mark the modeled GW190521-like systems, with the colors denoting their \(t_{\rm inspiral}(e_{0},a_{0})\) values and the point size denoting the weight of each system in the population. Columns from left to right correspond to the simulations with \(\alpha_{\rm CE}=0.5\), \(1.0\) and \(3.0\), respectively. The top and bottom panels correspond to ‘\(e_{\rm i}=0\)’ and ‘\(e_{\rm i}=0\sim 1\)’ models. The horizontal histograms represent the merger rate density \(\mathcal{R}\)-weighted distribution of \(e_{0}\).
Figure 4: Same as Fig. 2, but for \(Z=0.0008\).
Figure 3: Same as Fig. 2, but for \(Z=0.0004\).
## Data Availability
All data underlying this article will be shared on reasonable request to the corresponding authors.
|
2305.11440 | Coordinated Frequency-Constrained Stochastic Economic Dispatch for
Integrated Transmission and Distribution System via Distributed Optimization | When large-scale uncertain centralized and distributed renewable energy
sources are connected to a power system, separate dispatching of the
transmission power system (TPS) and the active distribution network (ADN) will
lower the network security and frequency security of the system. To address
these problems, this paper proposes a coordinated frequency-constrained
stochastic economic dispatch (CFC-SED) model for an integrated transmission and
distribution (ITD) system. In this model, the dynamic frequency security
constraints and network security constraints of the ITD system are constructed,
and the joint chance constraints are adopted to handle the uncertainty. Then,
the control parameters of inverter-based resources, the base point power, and
the regulation reserve of all dispatchable resources in the ITD system are
jointly optimized for the minimum operating cost. TPS and ADNs can deliver base
point power bidirectionally and provide frequency regulation support
bidirectionally, which extend the existing reserve assumption in ITD dispatch
and enhance the operational security of the ITD system. Moreover, based on the
alternating direction of multipliers algorithm, a two-layer distributed
optimization framework is proposed to solve the CFC-SED model. Case studies
show that the CFC-SED model can fully utilize the potential of multiple
regulation resources to improve the security performance of the ITD system, and
TPS and ADNs can be coordinated efficiently through the proposed distributed
optimization framework. | Ye Tian, Zhengshuo Li | 2023-05-19T05:46:34Z | http://arxiv.org/abs/2305.11440v1 | Coordinated Frequency-Constrained Stochastic Economic Dispatch for Integrated Transmission and Distribution System via Distributed Optimization
###### Abstract
When large-scale uncertain centralized and distributed renewable energy sources are connected to a power system, separate dispatching of the transmission power system (TPS) and the active distribution network (ADN) will lower the network security and frequency security of the system. To address these problems, this paper proposes a coordinated frequency-constrained stochastic economic dispatch (CFC-SED) model for an integrated transmission and distribution (ITD) system. In this model, the dynamic frequency security constraints and network security constraints of the ITD system are constructed, and the joint chance constraints are adopted to handle the uncertainty. Then, the control parameters of inverter-based resources, the base point power, and the regulation reserve of all dispatchable resources in the ITD system are jointly optimized for the minimum operating cost. TPS and ADNs can deliver base point power bidirectionally and provide frequency regulation support bidirectionally, which extend the existing reserve assumption in ITD dispatch and enhance the operational security of the ITD system. Moreover, based on the alternating direction of multiplier algorithm, a two-layer distributed optimization framework is proposed to solve the CFC-SED model. Case studies show that the CFC-SED model can fully utilize the potential of multiple regulation resources to improve the security performance of the ITD system, and TPS and ADNs can be coordinated efficiently through the proposed distributed optimization framework.
Economic dispatch, Integrated transmission and distribution system, Reserve, Frequency security.
## Nomenclature
_Indices and sets_
_T/D_: Subscript or sub-subscript markers for distinguishing parameters/variables of the transmission power system (TPS) and active distribution network (ADN)
_Tb, Te, Tg, Tl, Tw_: Indices of boundary bus, energy storage (ES), thermal unit, transmission line, and dispatchable windfarm (DWF) in TPS
_Db, De, Dg, Dl, Dpv_: Indices of boundary bus, ES, distributed generator, distribution line, and dispatchable photovoltaic (DPV) in ADN
_E\({}_{T}\), G\({}_{T}\), Line\({}_{T}\), W\({}_{T}\), N\({}_{T}\)_: Sets of ESs, thermal units, transmission lines, DWFs, and bus nodes in TPS
_E\({}_{D}\), G\({}_{D}\), Line\({}_{D}\), PV\({}_{D}\), N\({}_{D}\)_: Sets of ESs, distributed generators, distribution lines, DPVs, and bus nodes in ADN
_N\({}_{B}\), N\({}_{IBR}\)_: Sets of boundary buses connected with ADNs in TPS and of inverter-based resources (IBRs) in TPS or ADNs
_Parameters_
\(\mathbf{A}_{\theta}\), \(\mathbf{A}_{v}\), \(\mathbf{B}_{\theta}\), \(\mathbf{B}_{v}\): Linear coefficient matrices of voltage phase angle and voltage amplitude with respect to nodal injected active/reactive power
This work was supported by the National Key R&D Program of China under Grant 2022YFB2402900. Y. Tian and Z. Li are with the School of Electrical Engineering, Shandong University, Jinan 250061, China. Zhengshuo Li is the corresponding author (e-mail: [email protected]).
\(R_{Tg}^{u}\), \(R_{Tg}^{d}\), \(R_{Dg}^{u}\), \(R_{Dg}^{d}\): Up/down reserve capacities of units \(Tg\) and \(Dg\)
\(R_{Te}^{u}\), \(R_{Te}^{d}\), \(R_{De}^{u}\), \(R_{De}^{d}\): Up/down reserve capacities of units \(Te\) and \(De\)
\(R_{b}^{u}\), \(R_{b}^{d}\): Up/down regulation reserves of the \(b^{th}\) ADN
\(\Delta\overline{p}^{m}\): The \(m^{th}\) segment frequency security region
\(\Delta p_{b}\): Regulation power from ADN to TPS at boundary bus \(b\)
## I Introduction
When large-scale renewable energy is integrated into transmission power systems (TPSs) and active distribution networks (ADNs) [1], the uncertainty throughout the transmission and distribution sectors increases significantly. In such circumstances, if the TPS and ADNs are dispatched separately, various problems will emerge, such as boundary power mismatch between the TPS and ADN and potential line congestion [2],[3]. Therefore, the optimal dispatch of an integrated transmission and distribution (ITD) system has received extensive attention, e.g., research on unit commitment and economic dispatch (ED) problems of the ITD system [3]-[6], so that the multiple dispatchable resources in the TPS and ADNs are allocated more reasonably and, as a result, the reliability of the entire system is improved. In addition, since the TPS and ADNs are managed by different operators, various distributed algorithms applied to ITD have also been widely studied, such as the alternating direction method of multipliers (ADMM) [5], analytical target cascading [4], and heterogeneous decomposition [3].
Existing studies have mainly focused on the reserve capacity and base point power optimization, without considering the dynamic regulation requirements of the ITD. As the penetration proportion of renewable energy increases, the enhanced power disturbance and the reduced inertia of the system aggravate the risk of frequency irregularities. Thus, it is essential to ensure the frequency security for the coordinated operation of ITD. In the past few years, the importance of considering frequency regulation requirements in ED has been explored [7, 8, 9, 10]. For example, the authors of [11] studied dynamic ED considering ITD and the automatic regulation effect. As an increasing number of traditional thermal units are replaced by both centralized grid-connected renewable energy in TPSs and distributed grid-connected renewable energy in ADNs, the ED model that only relies on thermal units to regulate frequency cannot reliably deal with real-time power disturbances. This results in **unsatisfactory dynamic frequency performance** (e.g., rate of change of frequency [RoCoF] and maximum frequency deviation) during intra-dispatch periods. Fortunately, changing the control parameters (e.g., virtual inertia and droop coefficient) of inverter-based resources (IBRs) at the ED level can mitigate these issues [12, 13] to improve the dynamic frequency performance during intra-dispatch periods. However, previous studies have only focused on the regulation contribution of centralized grid-connected resources in TPS and have not considered the potential regulation capability of distributed generation sources in ADNs.
To ensure the reliable dynamic frequency performance of an ITD system, it is necessary to develop a frequency-constrained ED for the ITD system. The goals should be to make full use of the regulation resources in ADN as a supplement to TPS (or use the resources in TPS to improve the voltage issues of ADN) and guarantee the optimal economy and safety of the entire system. However, there are several challenges to achieving these goals: _1)_ Centralized grid-connected renewable energy in TPS and distributed grid-connected renewable energy in ADN both have obvious uncertainties (e.g., the distributed photovoltaic output steeply declines due to sudden dark clouds in ADN), which will affect the safety of ITD. _2)_ Network security problems, such as voltage violation, line congestion, and both active and reactive boundary power mismatch [14], should be avoided when generation resources in ITD (e.g., grid-connected windfarm and distributed photovoltaic) are dispatched to provide frequency regulation support. _3)_ TPS and ADNs need to be coordinated in both base point operation and frequency regulation to guarantee the operational safety of the entire ITD system. _4)_ The ITD dispatch model, which contains dynamic frequency constraints and joint chance constraints dealing with uncertainty, should be solved in an efficient and distributed pattern. Although studies have been conducted to address some of the above challenges [3, 5, 11], they relied on traditional assumptions, such as regarding ADNs as uncertain loads or adjustable generators on the TPS side. For example, in [11] and [15], ADNs are assumed to be adjustable generators to provide reserve support to TPS, whereas in [5], ADNs are treated as disturbance loads in TPS, and TPS provides reserve support to ADNs, which limits the potential of regulatory resources in the ITD. Moreover, to the best of our knowledge, **there is no relevant research on chance-constrained ITD dispatch considering dynamic frequency constraints, nor on frequency regulation coordination-related models and algorithms**.
To address the aforementioned problems, this paper develops a coordinated frequency-constrained stochastic ED (CFC-SED) model for an ITD system. In the CFC-SED model, joint chance constraints (JCCs) [16] are adopted to handle the uncertainty in TPS and ADNs, and the dynamic frequency security constraints of ITD are constructed reasonably. The new and complex cooperative constraints between TPSs and ADNs are introduced, **which differentiates this work from the existing ITD studies**. Additionally, the base point power and reserve capacity of multiple dispatchable resources as well as the control parameters of IBRs (e.g., dispatchable renewable energy units and energy storage (ES)) are jointly optimized to ensure the optimal economy and safety of the entire ITD system. Finally, a two-layer distributed cooperative optimization framework based on the ADMM algorithm is proposed to solve this CFC-SED model in a distributed pattern. The major contributions of this paper are summarized as follows.
1. A novel CFC-SED model for ITD is proposed, in which the multiple dispatchable resources in ITD are reasonably coordinated considering dynamic frequency constraints, JCCs, network security constraints, and cooperative constraints to improve the dispatch reliability and economy. Compared with existing literature (e.g., [3, 5, 11]), this CFC-SED model allows the TPS and ADNs not only to deliver bidirectional base point power, but also to **provide bidirectional frequency regulation support** when the system power disturbance occurs. Therefore, the frequency regulation ability of ITD is fully exploited, and the
operation safety of the entire ITD system is significantly enhanced, which is also verified in the case studies.
2. A two-layer distributed cooperative optimization framework is proposed to solve the CFC-SED model. It allows flexible choices of different algorithms, such as ADMM or other distributed cooperative algorithm, to be adopted in the outer layer to achieve effective distributed cooperation between TPS and ADNs, and the sample average approximation [17] (SAA) or its variant (mix-SAA [18]), to be adopted in the inner layer to handle the chance constraints regarding the TPS and ADNs.
The rest of this paper is organized as follows. Section II articulates the operational structure and formulation of the proposed CFC-SED model. Section III introduces the proposed two-layer distributed solution framework and the algorithm. Section IV presents case studies and analysis. Section V summarizes the conclusions and directions for future work.
## II Modeling of the CFC-SED model for ITD
### _Operational Structure of CFC-SED_
The operational structure of the CFC-SED model is illustrated in Fig. 1, where the _dispatch results_ indicate the optimal base point power, regulation reserve, and inverter control parameter. The dynamic frequency regulation requirements (i.e., regulation reserve, virtual inertia, and droop capacity), network security requirements (e.g., bus node voltage and line power flow security), and the uncertain power in TPS and ADNs are considered in this CFC-SED model. Then the base point power, regulation reserve, and inverter control parameters of the dispatchable resources in TPS, including thermal units, centralized grid-connected windfarm, and ES station, and the dispatchable resources in ADNs, including distributed generation units, distributed photovoltaic, and ES, are jointly optimized for the optimal cooperation of ITD.
To achieve the objective of optimal operation and fully utilize the regulation potential of all regulation resources in the ITD, TPS and ADNs are coordinated to optimize base point power and regulation support, as shown in Fig. 1. Bidirectional base point power exchange between TPS and ADNs has been extensively studied in [3, 5, 11], so this paper only focuses on analyzing the bidirectional support between TPS and ADNs during frequency regulation.
Specifically, when a positive disturbance (net load increase or generation reduction) occurs in both TPS and ADNs, there are two possible conditions, as shown in Fig. 2a and 2b: _a)_ the ADN can suppress its own disturbance and provides additional regulation reserve or inertia support to TPS; or _b)_ the ADN cannot suppress its own disturbance and requires TPS to provide regulation support to ADN. Under these conditions, the boundary power in the positive direction decreases (i.e., the base point power minus the positive power variation) in Fig. 2a because ADN provides regulating power support to TPS; conversely, the boundary power in the positive direction increases in Fig. 2b.
Similarly, when a negative disturbance (net load reduction or generation increase) occurs in both TPS and ADNs, the power variation between TPS and ADNs is shown in Fig. 2c and 2d, respectively. In these figures, the power variation takes a negative value because the power from TPS to ADN will increase when ADN provides regulation support to TPS with a negative disturbance.
When the power disturbances in TPS and ADN are in opposite directions, the power variation between TPS and ADN can also be represented by Fig. 2. For example, if a positive disturbance occurs in TPS and a negative disturbance occurs in ADN, the boundary power from TPS to ADN in the positive direction will decrease, as shown in Fig. 2a.
Finally, notice that the power variation range between TPS and ADNs is subject to the transmission capacity of the boundary bus, as illustrated in Fig. 2e.
Based on the above analysis of new and complex cooperative mechanisms considering frequency regulation, the mathematical formulation of the centralized CFC-SED model is presented below. This CFC-SED model optimizes the control parameters of IBRs, the base point power, and the primary and secondary frequency regulation reserves of all dispatchable resources to achieve the minimum operation cost of ITD.
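As a toy illustration of the sign convention in Fig. 2, the snippet below computes the actual boundary power delivered from the TPS to an ADN when the ADN provides a regulation power \(\Delta p_{b}\) to the TPS. It is only a sketch of the bookkeeping: the function name is ours, and the capacity check of Fig. 2e is represented by a simple clip rather than a formal constraint.

```python
import numpy as np

def actual_boundary_power(p_base, delta_p_b, capacity):
    """Boundary power from TPS to ADN after regulation support (sketch).

    Positive delta_p_b means the ADN supports the TPS, so the flow towards the
    ADN decreases (Fig. 2a/2c); negative delta_p_b means the TPS supports the
    ADN, so the flow increases (Fig. 2b/2d). The result is limited by the
    transmission capacity of the boundary bus (Fig. 2e).
    """
    return float(np.clip(p_base - delta_p_b, -capacity, capacity))

# Example: base point 10 MW towards the ADN, ADN provides 3 MW of support.
print(actual_boundary_power(10.0, 3.0, capacity=20.0))  # 7.0 MW
```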
Fig. 1: Operational structure of the CFC-SED model.

Fig. 2: Illustration of the bidirectional regulation support in ITD. In these figures, the solid line indicates the base point boundary power, the arrow points to the positive power direction (if the base point power takes a negative value, it means that the power is reverted from ADN to TPS), and the dotted line indicates the power variation.

### _Constraints in Base Point Operation Case_

#### II-B1 Conventional Operational Constraints

The conventional operational constraints of ITD in the base case have been widely adopted [3, 5, 11] and can be expressed in a compact form, as shown in (1) and (2). The equality constraint in (1) represents active power balance in TPS. The inequality constraint in (1) represents active/reactive base point power limit, reserve capacity, and ramping rate limit constraints of thermal units; active/reactive base point power limit constraints of dispatchable windfarms (DWFs); and base point power, reserve capacity, and state of charge limit constraints for ES in TPS. The equality constraint in (2) represents active/reactive power balance in the \(b^{th}\) ADN (the subscript \(b\) is omitted for simplicity). The inequality constraint in (2) represents the generation capacity limitation of distributed generators, dispatchable distributed photovoltaics (DPVs), ES, and reactive power compensation in ADN:
\[\boldsymbol{f}_{T}\left(\boldsymbol{x}_{T}\right)=\boldsymbol{0},\qquad\boldsymbol{g}_{T}\left(\boldsymbol{x}_{T}\right)\leq\boldsymbol{0}, \tag{1}\]
\[\boldsymbol{f}_{D}\left(\boldsymbol{x}_{D}\right)=\boldsymbol{0},\qquad\boldsymbol{g}_{D}\left(\boldsymbol{x}_{D}\right)\leq\boldsymbol{0}, \tag{2}\]
where \(\boldsymbol{x}_{T}\) and \(\boldsymbol{x}_{D}\) collect the base point power, reserve, and state-of-charge decision variables of the TPS and of each ADN, respectively.
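To make the compact base-case form concrete, the following sketch builds a miniature dispatch with a power-balance equality and reserve-aware capacity limits using cvxpy. It is not the paper's model: the reduction to three thermal units, the demand level, and the cost coefficients are purely illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

n_g, load = 3, 250.0                              # hypothetical units and demand (MW)
p = cp.Variable(n_g)                              # base point power
r_up, r_dn = cp.Variable(n_g), cp.Variable(n_g)   # up/down regulation reserves
p_min = np.zeros(n_g)
p_max = np.array([100.0, 120.0, 150.0])
cost_coef = np.array([20.0, 25.0, 30.0])          # illustrative cost coefficients

constraints = [
    cp.sum(p) == load,          # active power balance (equality part of the compact form)
    p + r_up <= p_max,          # head-room left for upward reserve
    p - r_dn >= p_min,          # foot-room left for downward reserve
    r_up >= 0, r_dn >= 0,
]
prob = cp.Problem(cp.Minimize(cost_coef @ p), constraints)
prob.solve()
print(p.value, prob.value)
```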
\[\Delta\overline{p}_{T}^{m,\max}=\Delta\overline{f}_{\max}\big(\beta_{T}^{C,m}+\beta_{T}^{H,m}H_{T}^{\text{rand}}+\beta_{T}^{D,m}D_{T}^{\text{rand}}\big),\quad\forall m, \tag{17a}\]
\[\Delta\overline{p}_{D,b}^{m,\max}=\Delta\overline{f}_{\max}\big(\beta_{D,b}^{C,m}+\beta_{D,b}^{H,m}H_{D,b}^{\text{rand}}+\beta_{D,b}^{D,m}D_{D,b}^{\text{rand}}\big),\quad\forall m, \tag{17b}\]
\[0\leq H_{k}\leq\overline{H}_{k},\quad 0\leq D_{k}\leq\overline{D}_{k},\quad k\in\{W_{T},PV_{D},E_{T},E_{D}\}. \tag{17c}\]
As mentioned previously, the linear RoCoF constraint, the maximum frequency deviation constraint, and the quasi-steady-state frequency deviation constraint [12] of the ITD can be formulated as (16a), (16b), and (16c), respectively, where (16c) is expressed through a set of linearized segment frequency security margins [23] of the TPS and ADNs, which are shown in (17a) and (17b), respectively; the detailed derivation is given in [23]. Constraint (17c) represents the variable range of the virtual inertia and droop coefficients of the DWFs, DPVs, and ESs. Further, since the uncertainty distributions of \(\zeta_{T},\zeta_{D,b}\) are independent of each other, constraint (16) can be transformed into (18), which is physically interpreted as requiring the dynamic frequency performance of PFR to be guaranteed in the worst case.
\[H_{T}^{\text{rand}}+\sum_{k\in\mathcal{N}_{n}}H_{D,b}^{\text{rand}} \geq \frac{1}{2\overline{M}_{\text{max}}}\max\{\zeta_{T}^{\text{max}}+\sum_{k\in \mathcal{N}_{n}}\zeta_{D,b}^{\text{max}}-\zeta_{T}^{\text{min}}-\sum_{k\in \mathcal{N}_{n}}\zeta_{D,b}^{\text{min}}\}\,. \tag{18a}\] \[D_{T}^{\text{rand}}+\sum_{k\in\mathcal{N}_{n}}D_{D,b}^{\text{rand}} \geq\frac{1}{\overline{M}_{\text{max}}}\max\{\zeta_{T}^{\text{max}}+\sum_{k \in\mathcal{N}_{n}}\zeta_{D,b}^{\text{max}}-\zeta_{T}^{\text{min}}-\sum_{k\in \mathcal{N}_{n}}\zeta_{D,b}^{\text{min}}\}\,.\] (18b) \[\Delta\overline{p}_{T,b}^{\text{max}}= \sum_{k\in\mathcal{N}_{n}}\Delta\overline{p}_{D,b}^{\text{max}} \geq\max\{\zeta_{T}^{\text{max}}+\sum_{k\in\mathcal{N}_{n}}\zeta_{D,b}^{\text{ max}}-\zeta_{T}^{\text{min}}-\sum_{k\in\mathcal{N}_{n}}\zeta_{D,b}^{\text{min}}\}\,,\, \forall m\,. \tag{18c}\]
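The worst-case reading of (18a)-(18b) can be checked numerically: the aggregate virtual inertia and droop of the TPS plus all ADNs must cover the largest net disturbance so that the RoCoF and quasi-steady-state limits hold. The snippet below uses the standard first-order per-unit relations; the disturbance size and the 50 Hz base are our own illustrative assumptions, not values taken from the paper.

```python
def required_inertia_and_droop(dp_worst_pu, rocof_max_pu, dfss_max_pu):
    """Minimum aggregate virtual inertia and droop (per unit), in the spirit of (18a)-(18b)."""
    h_req = dp_worst_pu / (2.0 * rocof_max_pu)   # RoCoF limit, counterpart of (18a)
    d_req = dp_worst_pu / dfss_max_pu            # quasi-steady-state limit, counterpart of (18b)
    return h_req, d_req

# 0.10 p.u. worst-case disturbance; 0.5 Hz/s and 0.3 Hz limits on an assumed 50 Hz base
h_req, d_req = required_inertia_and_droop(0.10, 0.5 / 50.0, 0.3 / 50.0)
print(h_req, d_req)   # roughly 5.0 s and 16.7 p.u.
```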
Moreover, to ensure sufficient regulation ability of the ITD, the regulation reserve constraints need to be included in the CFC-SED model according to (15). Since the maximum output of the DWFs is uncertain, the upward reserve constraint of the DWFs is modeled as the JCC (19) with confidence level \(1-\delta_{T}^{I\!R}\) to enhance dispatch reliability, where \(\mathbb{P}_{\bar{P}_{Tw}^{w}}\) denotes the probability with respect to the random variables \(\bar{P}_{Tw}^{w}\). The up/down PFR reserve constraints of the ESs and thermal units are formulated as the deterministic constraints (20) and (21), respectively, as in [24]. The PFR reserve constraints of all regulation resources in the ADNs are consistent with those in the TPS and are omitted to avoid repetition.
\[\mathbb{P}_{\bar{P}_{Tw}^{w}}\left\{p_{Tw}+R_{Tw}^{u}\leq\bar{P}_{Tw}^{w},\ \forall Tw\in W_{T}\right\}\geq 1-\delta_{T}^{I\!R} \tag{19}\]
When \(\Delta p_{b}^{\min}<0\), the TPS needs to provide regulation reserve to the ADN.
#### II-B4 Network Security Constraints Under Uncertainty
When an uncertain disturbance occurs, the successive PFR and SFR processes will change the line power flows and node voltages and may endanger network security. Hence, network security constraints should also be imposed to secure the ITD during the PFR and SFR processes.
_a) Nodal Voltage Security Constraints Associated with PFR_
Due to the short PFR process [22], we assume the effect of PFR on the transmission line flow is negligible, while the node voltage security constraints (27) during the PFR process are considered. Specifically, constraints (25) and (26) represent the maximum PFR regulation power of IBRs and thermal units, respectively. Constraint (27) enforces the voltage phase and magnitude changes caused by the maximum PFR regulation power within the limitation, where the maximum nodal injected power variation \(\Delta p_{t,0}^{\text{\tiny{PFR}}}\) is composed of \(\Delta p_{t}^{\text{\tiny{PFR}}}\) and \(\Delta\bar{p}_{t}^{\text{\tiny{PFR}}}\). To avoid duplication, subscripts \(T/D\) are used to distinguish constraints in TPS and ADN.
\[\Delta\bar{p}_{k}^{\text{PFR}}\leq R_{k}^{u},\quad\forall k\in N_{IBR}, \tag{25}\]
\[\Delta\bar{p}_{Tg}^{\text{PFR}}\leq R_{Tg}^{u},\quad\forall Tg\in G_{T}, \tag{26}\]
\[\underline{\boldsymbol{\theta}}\leq\boldsymbol{\theta}_{0}+\mathbf{A}_{\theta}\Delta\boldsymbol{p}_{t,0}^{\text{PFR}}\leq\overline{\boldsymbol{\theta}},\qquad\underline{\boldsymbol{v}}\leq\boldsymbol{v}_{0}+\mathbf{A}_{v}\Delta\boldsymbol{p}_{t,0}^{\text{PFR}}\leq\overline{\boldsymbol{v}}. \tag{27}\]
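A small sketch of the check behind (27): with the linearized sensitivity matrices \(\mathbf{A}_{\theta}\) and \(\mathbf{A}_{v}\) from the nomenclature, the post-PFR angles and voltages are first-order corrections of the base point values, and the voltages must stay within 0.95-1.05 p.u. The matrices, vectors, and the angle limit used here are placeholders, not data from the paper.

```python
import numpy as np

def pfr_network_ok(theta0, v0, A_theta, A_v, dp_pfr,
                   theta_lim=0.5, v_min=0.95, v_max=1.05):
    """First-order check of angle and voltage limits after the maximum PFR injection."""
    theta_after = theta0 + A_theta @ dp_pfr     # voltage-angle change caused by PFR power
    v_after = v0 + A_v @ dp_pfr                 # voltage-magnitude change caused by PFR power
    return bool(np.all(np.abs(theta_after) <= theta_lim)
                and np.all((v_after >= v_min) & (v_after <= v_max)))
```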
The objective of the CFC-SED model is to minimize the total operating cost of the ITD system, i.e., the sum of the base point generation costs and the up/down reserve costs of all dispatchable resources in the TPS and in the ADNs:
\[\min\ \ \text{Cost}_{T}+\sum_{b\in N_{B}}\text{Cost}_{D,b},\qquad\text{Cost}_{T}=\sum_{Tg\in G_{T}}\left(C_{Tg}p_{Tg}+C_{Tg}^{u}R_{Tg}^{u}+C_{Tg}^{d}R_{Tg}^{d}\right)+\cdots,\]
where \(\text{Cost}_{D,b}\) is defined analogously for the \(b^{th}\) ADN.

## III Two-Layer Distributed Optimization Framework
In the outer layer, the cooperative variables (such as the boundary power, safe margin, and SFR reserve of the ADNs) expected by the TSO are matched to the available capacity of the ADNs, and these physical meanings are more accessible to operators.
Then, the principles of the distributed standard ADMM algorithm can be illustrated by Table I. First, the FC-SED models of TPS and ADNs are solved separately using the inner SAA method to obtain the initial value of cooperative variables in Step 1. The initialized mean value \(\overline{y}^{0}\) of the cooperative variables between TPS and ADNs and the initialized Lagrange multipliers \(\underline{x}_{T}^{0}\), \(\underline{x}_{D}^{0}\) are calculated in Step 2.
The iterative operations are performed starting from Step 3. In Step 4, the FC-SED models of the TPS and the ADNs, each with a penalty term in its objective function, are solved in parallel to obtain the values of the cooperative variables in the \(k^{\text{th}}\) iteration, where \(\underline{x}_{D,b}^{k,l}\), \(\underline{\rho}_{D,\overline{y}}\), \(\overline{y}_{b}^{k,l}\) represent the \(b^{\text{th}}\) sub-vector of \(\underline{x}_{D}^{k,l}\), \(\underline{\rho}_{D}\), \(\overline{y}^{k,l}\). In Step 5, the mean value \(\overline{y}^{k}\) of the cooperative variables and the Lagrange multipliers \(\underline{x}_{T}^{k},\underline{x}_{D}^{k}\) of the \(k^{\text{th}}\) iteration are updated. In Step 6, the convergence gap is calculated. If the iterative tolerance is satisfied, the iteration terminates and the dispatch results of the TPS and ADNs are output. Otherwise, return to Step 3 and iterate until the error of the cooperative variables is within the tolerance.
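A compact sketch of the outer-layer iteration of Table I, written as standard consensus ADMM: each area solves its penalized FC-SED subproblem against the current average of the cooperative variables, after which the average and the multipliers are updated and the gap is checked. `solve_area` stands for the (SAA-reformulated) area subproblem and is a placeholder, not code from the paper; the penalty factor 5 follows the setting reported in Section IV.

```python
import numpy as np

def admm_outer(solve_area, n_areas, dim, rho=5.0, tol=1e-3, max_iter=200):
    """Outer-layer consensus ADMM over the cooperative (boundary) variables."""
    y = [np.zeros(dim) for _ in range(n_areas)]     # local copies per area (TPS, ADNs)
    y_bar = np.zeros(dim)                           # consensus value, the mean \bar{y}
    lam = [np.zeros(dim) for _ in range(n_areas)]   # Lagrange multipliers
    for _ in range(max_iter):
        for a in range(n_areas):                    # Step 4: penalized area subproblems
            y[a] = solve_area(a, y_bar, lam[a], rho)
        y_bar = np.mean(y, axis=0)                  # Step 5: consensus and multiplier update
        lam = [lam[a] + rho * (y[a] - y_bar) for a in range(n_areas)]
        gap = max(np.linalg.norm(y[a] - y_bar) for a in range(n_areas))
        if gap <= tol:                              # Step 6: convergence check
            break
    return y_bar
```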
### _Inner-Layer SAA Method and Tractability Process_
In this paper, the SAA method is adopted in the inner layer to transform each nonlinear FC-SED model into a tractable mixed-integer linear programming model. Detailed information on this process is shown in [17].
However, due to the binary indicator variables associated with the sampling scenarios in the transformed mixed-integer linear programming model, the convergence of ADMM cannot be guaranteed. Inspired by [5], a tractable iterative solution method for the CFC-SED model is shown in Table II. First, with the binary indicator variables fixed, ADMM is used to solve the continuous CFC-SED model to obtain the optimal values of the cooperative variables. Then, the FC-SED models of the TPS and ADNs are solved independently with these optimal cooperative variables to obtain new values of the binary indicator variables. This process is iterated; if the values of the binary indicator variables remain unchanged between two successive iterations, the iteration stops, which has been validated in [5].
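The Table II procedure can be summarized by the following loop, again only as a sketch: the scenario indicator binaries are fixed, the continuous CFC-SED is solved by the outer ADMM, the binaries are refreshed by independent area solves, and the loop stops once the binaries no longer change. Both solver callbacks are placeholders rather than the paper's implementation.

```python
def solve_cfc_sed(solve_continuous_admm, resolve_binaries, z0, max_rounds=20):
    """Alternate between the continuous ADMM solve and refreshing the scenario binaries."""
    z = tuple(z0)                                   # current binary indicator values (0/1)
    for _ in range(max_rounds):
        y_bar = solve_continuous_admm(z)            # outer ADMM on the continuous CFC-SED
        z_new = tuple(resolve_binaries(y_bar))      # independent FC-SED solves per area
        if z_new == z:                              # binaries unchanged -> stop
            return y_bar, z
        z = z_new
    return y_bar, z
```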
In summary, the distributed optimal coordination of ITD can be performed effectively via the above two-layer optimization framework, which will be further verified in the case studies.
## IV Case Study
### _Simulation Settings_
In this section, the proposed CFC-SED model and the two-layer optimization algorithm are tested in the T30-D2, T118-D9, and T300-D10 systems with a resolution of 15 min. The T30-D2 system denotes a 30-bus TPS connected with two 33-bus ADNs, where three 50 MW DWFs and three 5 MW/20 MWh ESs are connected to the TPS, and four 10 MW DPVs and 2 MW/2 MWh ESs are connected to each ADN. The T118-D9 system denotes a 118-bus TPS connected with nine 69-bus ADNs, where six 300 MW DWFs and 30 MW/30 MWh ESs are connected to the TPS, and four 5 MW DPVs and 2 MW/2 MWh ESs are connected to each ADN. The T300-D10 system denotes a 300-bus TPS connected with ten 69-bus ADNs. The network parameters and the generation parameters of thermal units and distributed generators are derived from the IEEE test systems in MATPOWER, and the charging/discharging efficiency values of the ESs are all set to 0.90/0.95, respectively. The regulation parameters of thermal units, DWFs, DPVs, and ESs are given in [12]. In addition, the threshold values of the maximum RoCoF, the maximum frequency deviation, and the quasi-steady-state frequency deviation are set to 0.5 Hz/s, 0.5 Hz, and 0.3 Hz, respectively. The maximum and minimum voltage thresholds are set to 1.05/0.95 p.u. The significance level \(\delta^{jR}\) is set to 0 for enhanced voltage security from a conservative perspective, and the significance levels \(\delta^{jR}\), \(\delta^{jTD}_{SR}\), \(\delta^{k}\) are all set to 0.05.
To test the validity of the model, the active/reactive demand data and the renewable energy data from the California ISO are applied in the simulation. The forecasting errors of active demands and renewable generations are assumed to follow a beta distribution [27], and the corresponding parameters are calculated based on historical data. The sampling scenarios for SAA (500 sampling scenarios by default) associated with uncertainty are generated using Monte Carlo simulation, and the penalty factor for ADMM is set to 5.
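As a sketch of how the SAA scenarios can be produced, the snippet below draws zero-mean forecast errors from a beta distribution, as assumed above, and maps them onto a band around the point forecast. The beta shape parameters and the plus/minus 10% band are illustrative assumptions, not the values calibrated from the historical data.

```python
import numpy as np

def sample_scenarios(forecast, n_scen=500, a=4.0, b=4.0, band=0.10, seed=0):
    """Generate n_scen demand/renewable scenarios around a point forecast (sketch)."""
    rng = np.random.default_rng(seed)
    eps = rng.beta(a, b, size=(n_scen, forecast.size)) - a / (a + b)   # zero-mean beta errors
    return forecast * (1.0 + 2.0 * band * eps)                          # scenario matrix

scenarios = sample_scenarios(np.array([100.0, 80.0, 60.0]))             # illustrative forecasts
print(scenarios.shape)   # (500, 3)
```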
The independent FC-SED (IFC-SED) is used as the benchmark for comparison, in which a frequency-constrained stochastic ED model is solved for either the TPS or an ADN alone, ignoring the coordination between the TPS and ADNs.
The codes are implemented on a computer with an Intel Core i7-11700 CPU and 16 GB RAM and solved with MATLAB and CPLEX 12.10.0, and the frequency dynamic response is verified in SIMULINK.
### _Security Verification of the Proposed CFC-SED Model_
_1) Mitigating System Frequency_
As analyzed in Section II, the bidirectional regulation support between TPS and ADN is achieved in the CFC-SED. Obviously, the significant difference between CFC-SED and IFC-SED occurs when the regulation resources and regulation capacities are limited in TPS but are abundant in ADN (e.g., a large number of DPVs are installed), so we select and analyze this representative scenario below.
To verify the frequency security of the entire ITD system under different operating conditions when the regulation capacities in TPS are limited, four possible disturbance cases that may occur in ITD are shown in Table III, where the absolute values of disturbance in cases 1-4 are 30% of the net load of TPS and ADN. Then, the corresponding dynamic frequency performance indices, i.e., RoCoF, maximum frequency deviation (MFD), and quasi-steady frequency deviation (QFD), of the system under the dispatch results of CFC-SED and IFC-SED of TPS are shown in Table IV. The dynamic frequency response curves in case 1 are demonstrated in Fig. 5.
_2) Mitigation of ADN's Voltage Security in the SFR Process_
To verify the node voltage security of the ADN in the SFR process, Monte Carlo simulation is used to generate 10 test scenarios representing uncertain disturbances. Based on the dispatch results of the IFC-SED model regarding the ADN and of the CFC-SED model, the steady-state node voltage amplitudes of ADN 1 in the test scenarios are shown in Fig. 7a and 7b, respectively. The _IFC-SED model regarding ADN_ means that the base point power and reserve of all regulation units in ADN 1 are dispatched by the DSO separately (i.e., ADN 1 does not participate in coordination), while the boundary power is the actual boundary power provided by the TPS in real operation.
Fig. 5: Frequency dynamic response under disturbances of case 1 in the T30-D2 system.
Fig. 6: (a) Allocation of regulation capacity between TPS and ADNs. (b) Base point and actual boundary power under disturbances.
Fig. 7: The voltage amplitude of ADN1 in the T30-D2 system associated with (a) the IFC-SED regarding the ADN and (b) the proposed CFC-SED model.
Due to the mismatch between the expected boundary power of the ADN and the actual boundary power provided by the TPS, the voltage amplitudes of nodes 15, 16, and 17 are below the minimum threshold in some scenarios in Fig. 7a. Furthermore, the voltage amplitudes of nodes 29-32 are below the minimum threshold in most scenarios in Fig. 7a, which will cause load shedding and may even trigger blackout accidents. In contrast, the CFC-SED model can ensure that the node voltage amplitudes stay within the thresholds in the various disturbance scenarios, because it performs optimal security dispatch on the premise of coordinating the TPS and ADNs.
### _Verification of the Two-Layer Optimization Framework_
The proposed two-layer distributed optimization framework is used to solve the CFC-SED model in the T118-D9 and T300-D10 systems, and the calculation time, iteration times, and optimal errors are shown in Table V, where the _optimal error_ denotes the relative error of the objective function compared to the centralized solution. As shown in Table V, the CFC-SED model can be solved within about 0.2% optimal error. The reported calculation time can be further reduced if larger optimal errors are allowed.
## V Conclusions
In this paper, a CFC-SED model for ITD is proposed, in which the base point power and regulation reserve of all dispatchable resources, as well as the control parameters of IBRs in the ITD, are jointly optimized for optimal economy and safety. By constructing tailor-made frequency security constraints and network security constraints, the TPS and ADNs can deliver base point power bidirectionally and provide frequency regulation support bidirectionally, which enhances the regulation ability and operational safety of the ITD system. In addition, a distributed optimization framework and algorithm are proposed to solve the CFC-SED model. Simulation results show that the proposed CFC-SED model significantly improves the operational safety level compared with traditional separate dispatch, and that the ITD system can cooperate efficiently through the two-layer optimization framework.
As this paper focuses on the operational security improvement of the system using the CFC-SED model, the market clearing and market design issues needed to incentivize distributed photovoltaics to participate in such coordination are out of its scope and can be studied in future work.
|
2307.10127 | Systematic scanning Glauber dynamics for the mean-field Ising model | We study the mixing time of systematic scan Glauber dynamics Ising model on
the complete graph. On the complete graph $K_n$, at each time, $k \leq n$
vertices are chosen uniformly random and are updated one by one according to
the uniformly randomly chosen permutations over the $k$ vertices. We show that
if $k = o(n^{1/3})$, the high temperature regime $\beta < 1$ exhibits cutoff
phenomena. For critical temperature regime $\beta = 1$, We prove that the
mixing time is of order $n^{3/2}k^{-1}$. For $\beta > 1$, we prove the mixing
time is of order $nk^{-1}\log n$ under the restricted dynamics. | Sanghak Jeon | 2023-07-19T16:49:17Z | http://arxiv.org/abs/2307.10127v1 | # Randomized systematic scan dynamics on the mean-field Ising model
###### Abstract.
We study the mixing time of the systematic scan Glauber dynamics for the Ising model on the complete graph. On the complete graph \(K_{n}\), at each time, \(k\leq n\) vertices are chosen uniformly at random and are updated one by one according to a uniformly chosen random permutation of the \(k\) vertices. We show that if \(k=o(n^{1/3})\), the high temperature regime \(\beta<1\) exhibits the cutoff phenomenon. For the critical temperature regime \(\beta=1\), we prove that the mixing time is of order \(n^{3/2}k^{-1}\). For \(\beta>1\), we prove that the mixing time is of order \(nk^{-1}\log n\) under the restricted dynamics.
_E-mail addresses:_ [email protected].
_2023 Mathematics Subject Classification_. Primary: 60J10; Secondary: 60C05, 60G42.
_Keywords and phrases_. Markov chain, mixing time, Ising model, systematic scan, cutoff phenomenon.
## 1. Introduction
The Ising model arose in statistical physics to explain ferromagnetism, and its mixing time has been extensively studied, yet there are only a few cutoff results ([1], [2], [10], [11], [12], [13]). In this paper we propose the Ising model under systematic scan, establish the order of the mixing time in all regimes, and prove the existence of the total variation cutoff phenomenon in the high temperature regime.
One of the common ways to evolve the Ising model is the Glauber dynamics: choose a vertex \(v\) uniformly at random from a given graph and update its spin according to the spins of the vertices connected to \(v\). From a different point of view, rather than choosing a vertex uniformly at random, we can consider a model which is updated in a deterministic way. This is the so-called systematic scan, and a folklore belief for several problems is that models with random update dynamics and with systematic scan behave similarly. To be more precise, for the Ising model on a finite graph \(G=(V,\mathcal{E})\), fix \(k\leq|V|\), uniformly at random choose \(k\) vertices among \(V\), uniformly at random choose a permutation over those \(k\) vertices, and update the vertices one by one along the permutation.
The question that comes along with a given Markov chain is how fast it converges to its stationary distribution. The total variation distance of a Markov chain measures the discrepancy, over the worst starting state, between the distribution at time \(t\) and the stationary measure:
\[d(t):=\max_{x\in X}\left\|P_{x}^{t}-\pi\right\|_{TV}=\frac{1}{2}\max_{x\in X} \sum_{y\in X}|P^{t}(x,y)-\pi(y)|,\]
and the mixing time associated with the distance
\[t_{\rm mix}(\epsilon):=\min\{t:d(t)\leq\epsilon\}\]
quantifies the convergence rate. Often \(t_{\rm mix}(1/4)\) is written as \(t_{\rm mix}\). In addition, with a sequence of graphs \(G_{n}\) we can investigate their asymptotic behavior around the mixing time. For a sequence of
Markov chains \(\{X_{n}\}\) with the distance \(d_{n}(t)\), we say the chain exhibits _cutoff_ at \(\{t_{n}\}\) with window size \(\{w_{n}\}\) if \(w_{n}=o(t_{n})\) and
\[\lim_{\gamma\to\infty}\liminf_{n\to\infty}d_{n}(t_{n}-\gamma w_{n}) =1,\] \[\lim_{\gamma\to\infty}\limsup_{n\to\infty}d_{n}(t_{n}+\gamma w_{n}) =0.\]
There is an equivalent statement for cutoff phenomena, but it does not provide any information about the window size. See [10] chapter 18 for more details on cutoff phenomena.
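For a chain on a small state space whose transition matrix \(P\) can be stored explicitly, \(d(t)\) and \(t_{\rm mix}(\epsilon)\) can be computed directly from the definitions above. The following sketch does exactly that by iterating the matrix; it is only feasible for toy examples, since the Ising model has \(|\Omega|=2^{n}\) states.

```python
import numpy as np

def tv_distance(P_t, pi):
    """d(t) = max_x (1/2) sum_y |P^t(x, y) - pi(y)|."""
    return 0.5 * np.abs(P_t - pi).sum(axis=1).max()

def mixing_time(P, pi, eps=0.25, t_max=10**6):
    """Smallest t with d(t) <= eps, by repeated multiplication of the transition matrix."""
    P_t = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        P_t = P_t @ P
        if tv_distance(P_t, pi) <= eps:
            return t
    return None
```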
To this end we can pose a question about the systematic scan Ising model: _Do Ising models with Glauber dynamics and with systematic scan dynamics have the same order of the mixing time, and do they exhibit the same cutoff phenomena?_ One of the main theorems of this paper is that the complete graph Ising model equipped with systematic scan dynamics has a total variation cutoff in the high temperature regime, \(\beta<1\).
**Theorem 1.1**.: _Suppose \(\beta<1\). If \(k=o(n^{1/3})\), then the model exhibits cutoff at \(t_{n}=[2k(1-\beta)]^{-1}n\log n\) with window size \(w_{n}=n/k\)._
For the complete graph Ising model, it is known that there is no cutoff at \(\beta=1\). Hence we only establish upper and lower bounds on the mixing time under systematic scan:
**Theorem 1.2**.: _Suppose \(\beta=1\). If \(k=o(n^{1/4})\), then there exists constants \(c_{1},c_{2}>0\) such that \(c_{1}n^{3/2}k^{-1}\leq t_{\text{mix}}(1/4)\leq c_{2}n^{3/2}k^{-1}\)._
For the last regime, \(\beta>1\), the mixing time is known to be exponential in \(n\). [10] suggested a restricted version of the Glauber dynamics and showed that its mixing time is \(O(n\log n)\). The restricted dynamics can also be applied to the systematic scan case, and we prove that the mixing time is of order \(nk^{-1}\log n\) in this model.
**Theorem 1.3**.: _Suppose \(\beta>1\). If \(k=o(n^{1/2})\), then there exists constants \(c_{1},c_{2}>0\) such that \(c_{1}nk^{-1}\log n\leq t_{\text{mix}}(1/4)\leq c_{2}nk^{-1}\log n\), where \(t_{\text{mix}}(1/4)\) is the mixing time under the restricted dynamics._
There are two remarks on this dynamics compared to the block dynamics. First, randomized systematic scan dynamics updates \(k\) vertices one by one, while block dynamics updates \(k\) vertices at once. Unlike the block dynamics, randomized systematic scan dynamics generates a small amount of error over the \(k\) single-site updates, and this error accumulation makes the analysis of the magnetization more difficult. Second, if \(k\) is large, computer simulation of the randomized scan dynamics is more efficient than that of the block dynamics, as one update of the randomized scan Glauber dynamics can be divided into \(k\) single-site updates, which requires much less memory storage.
[1] showed that the mixing time of the Curie-Weiss model in the high temperature regime is of order \(n\log n\) under systematic scan. We conjecture that the highest order term of the mixing time in the case \(k=n\) is \([2(1-\beta)]^{-1}\log n\), which is consistent with Theorem 1.1. We expect that there is a cutoff under both the systematic scan and the randomized systematic scan. All three theorems above, Theorems 1.1, 1.2, and 1.3, are proven by establishing both upper bound and lower bound inequalities. The upper bounds are mostly derived from coupling arguments; however, as \(k\) vertices are updated at each time step, several additional calculations are required to finalize the proofs. The lower bound arguments rely heavily on estimates of the magnetization. Sections 3, 4, and 5 contain Theorems 1.1, 1.2, and 1.3 and their proofs, respectively. Section 6 briefly discusses the case \(k=n\), which is the model without the coupon collecting property. The proof of one of the lemmas crucial for Theorem 1.3 is deferred to the Appendix, Section 7.
### Background and previous research
The mixing time and cutoff results for the Ising model on the complete graph, also known as the Curie-Weiss model, were established in [10], [11] and [12], showing that there is a cutoff only in the high temperature regime. For the lattice, Lubetzky and Sly [14] showed that there is a cutoff on \((\mathbb{Z}/n\mathbb{Z})^{d}\) under strong spatial mixing, which always holds when \(d=2\). There is also a result for the Ising model on a regular tree, while lattices of dimension \(3\) and higher are still being studied. [12] suggested a lower bound argument for the mixing time of the Ising model over an arbitrary graph, and [20] suggested another lower bound argument in terms of the separation distance mixing time.
Systematic scan naturally emerged from the study of Markov chains, particularly from card shuffling problems. Since systematic scan excludes the coupon-collecting property from the dynamics, one might expect systematic scan to accelerate the mixing. However, there are only a few results on systematic scan dynamics, and several of them showed that the random update chain and the systematic scan chain have the same mixing time order. For example, [21] and [15] are systematic scan versions of [16], and they showed that the mixing times of the two schemes are of the same order. There are some techniques for systematic scan [15], [17], [18]; however, systematic scan dynamics still poses challenges to be analyzed. [22] is one of the few results on the Ising model and systematic scan simultaneously; it studied the mixing time of the Ising model over the \(d\)-dimensional lattice under systematic scan.
Particularly for the Ising model over the complete graph, it has been well known that the model's mixing time is of order \(n\log n\) under the Glauber dynamics. Furthermore, [14] proved that the order of the mixing time under systematic scan Glauber dynamics is the same as the original one by utilizing the Dobrushin-Shlosman condition. Nonetheless, the existence of cutoff remained open, since the lack of symmetry prevents the coupling argument and vertex rematching, the fundamental ideas of [10].
## 2. Preliminaries
In this section we present some definitions and several properties, which are used for the remainder of the paper. The Ising model on a given finite graph \(G=(V,\mathcal{E})\) with a parameter \(\beta>0\) is a probability distribution over the state space \(\Omega:=\{+1,-1\}^{V}\) with the probability of \(\sigma\in\Omega\) given by the Gibbs measure
\[\pi(\sigma)=\frac{1}{Z(\beta)}\exp\left(\beta\sum_{(vw)\in\mathcal{E}}J(v,w) \sigma(v)\sigma(w)\right),\]
where \(Z(\beta)\) is a normalizing constant. We assume there are no external fields. The parameter \(\beta\), often interpreted as an inverse temperature, is chosen to be non-negative for ferromagnetism.
One of the common ways to evolve the Ising model is the Glauber dynamics, which is the following: from configuration \(\sigma\), a vertex \(v\in V\) is uniformly chosen, and a new configuration is selected from the set \(\{\eta\in\Omega:\eta(w)=\sigma(w),w\neq v\}\). The new configuration and \(\sigma\) agree on all vertices except at \(v\), and at \(v\) the new spin is \(\pm 1\) with probability
\[p(\sigma;v)=\frac{e^{\pm\beta S^{\nu}(\sigma)}}{e^{\beta S^{\nu}(\sigma)}+e^{ -\beta S^{\nu}(\sigma)}}, \tag{1}\]
where \(S^{\nu}(\sigma):=(1/n)\sum_{w:(vw)\in\mathcal{E}}\sigma(w)\). Therefore, the new spin at \(v\) only depends on the current spins of the neighboring vertices of \(v\). Picking an element from \(\Omega\) and evolving it with the Glauber dynamics generates a discrete-time Markov chain.
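A minimal sketch of this single-site heat-bath update on \(K_{n}\), with the spins stored as \(\pm 1\) in a numpy array; the acceptance probability is exactly \(p(\sigma;v)\) from (1).

```python
import numpy as np

def glauber_update(sigma, v, beta, rng):
    """One heat-bath update of vertex v on the complete graph K_n (Eq. (1))."""
    n = sigma.size
    s_v = (sigma.sum() - sigma[v]) / n              # S^v(sigma): mean spin of the other vertices
    p_plus = 0.5 * (1.0 + np.tanh(beta * s_v))      # = e^{beta S}/(e^{beta S} + e^{-beta S})
    sigma[v] = 1 if rng.random() < p_plus else -1
    return sigma
```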
From now on, the Ising model under the randomized systematic scan dynamics will be denoted by \(\{X_{t}\}_{t=0}^{\infty}\). We use \(\mathbb{P}_{\sigma}\) and \(\mathbb{E}_{\sigma}\) to denote the probability measure and associated expectations given \(X_{0}=\sigma\). We define a coupling of the dynamics as a process \((X_{t},\widetilde{X}_{t})_{t\geq 0}\), where each \(\{X_{t}\}\) and
\(\{\bar{X}_{t}\}\) are versions of the dynamics. Similarly we write \(\mathbb{P}_{\sigma,\widetilde{\sigma}}\) and \(\mathbb{E}_{\sigma,\widetilde{\sigma}}\) for the probability measure and associated expectation respectively, starting from \(\sigma\) and \(\widetilde{\sigma}\).
For a configuration \(\sigma\in\{-1,1\}^{V}\), denote \(\sigma(v)\) as a spin on the vertex \(v\). As we mainly focus on complete graphs, sometimes we denote \(\sigma(i)\) for \(i\in\{1,...,n\}\) as a spin on the \(i\)-th vertex. Define _magnetization_ of \(\sigma\) as the average of all spins of \(\sigma\):
\[S(\sigma):=\frac{1}{n}\sum_{v\in V}\sigma(v).\]
We are going to denote the magnetization of \(X_{t}\) as \(S_{t}:=S(X_{t})\) for simplicity's sake. Due to the symmetry of \(K_{n}\), defining the following functions is helpful to abbreviate equations:
\[\begin{split} p_{+}(x)&:=\frac{1+\tanh\beta x}{2} \\ p_{-}(x)&:=\frac{1-\tanh\beta x}{2}=1-p_{+}(x) \end{split} \tag{2}\]
Then the probability \(p(\sigma;v)\) from Equation (1) equals \(p_{\pm}\left(S(\sigma)-\sigma(v)/n\right)\).
We uniformly at random choose \(k\leq n\) vertices among the \(n\) vertices and pick a permutation uniformly at random on these selected vertices. We update the \(k\) vertices in that order and repeat the entire process. This randomized systematic scan updates \(k\) vertices at each time \(t\), so we may split one step into \(k\) single-site updates; to elaborate, we can consider \((k+1)\) states \(Y_{0}=X_{0},Y_{1},...,Y_{k-1},Y_{k}=X_{1}\) such that \(Y_{i+1}\) is derived from \(Y_{i}\) by a single-site update. As \(i\) of the vertices of \(Y_{i}\) cannot be updated again, \(\{Y_{i}\}\) is not a Markov chain. However, since \(S(Y_{i+1})-S(Y_{i})\in\{\pm 2/n,0\}\), we can apply several birth-and-death chain results. Although those results need modifications, as \(S(Y_{i})\) is not a Markov chain, considering \(\{Y_{i}\}\) is still useful for coupling arguments and hitting time estimates. We call these \(\{Y_{i}\}\) the _intermediate states_. Sometimes we consider the extended version of intermediate states \(\{Y_{i}\}_{0\leq i\leq kT}\) such that \(Y_{ki}=X_{i}\) for all \(i\leq T\). For each intermediate state \(Y_{i}\), \(i(\text{mod }k)\) vertices cannot be selected for the update. We call these vertices _not available vertices_ and denote them by \(\mathcal{N}_{i}\).
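A short sketch of one step \(X_{t}\to X_{t+1}\) of the randomized systematic scan dynamics described above: a uniformly random ordered \(k\)-tuple of distinct vertices is drawn and updated sequentially with the heat-bath probabilities \(p_{\pm}\) of (2); the configurations after each inner update are exactly the intermediate states \(Y_{1},\dots,Y_{k}\). The parameter values in the usage example are our own illustration.

```python
import numpy as np

def scan_step(sigma, k, beta, rng):
    """One randomized systematic scan step: update k random vertices one by one."""
    n = sigma.size
    order = rng.permutation(n)[:k]                  # uniformly random ordered k-subset
    for v in order:                                 # intermediate states Y_1, ..., Y_k
        s_v = (sigma.sum() - sigma[v]) / n          # S(Y_i) - Y_i(v)/n
        p_plus = 0.5 * (1.0 + np.tanh(beta * s_v))  # p_+ from Eq. (2)
        sigma[v] = 1 if rng.random() < p_plus else -1
    return sigma

# Example: n = 1000, beta = 0.5, k = 9 (so that k is small compared with n^{1/3}-type scales)
rng = np.random.default_rng(0)
sigma = rng.choice([-1, 1], size=1000)
for _ in range(5000):
    sigma = scan_step(sigma, k=9, beta=0.5, rng=rng)
print(sigma.mean())    # magnetization S_t
```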
Define the _grand coupling_\(\{X_{\sigma,t}\}_{\sigma\in\Omega,t\geq 0}\) from all states of \(\Omega\) as the following: for each \(\sigma\in\Omega\), the coordinate process \((X_{\sigma,t})_{t\geq 0}\) is the Glauber dynamics that starts from \(\sigma\). For each time \(t\geq 0\), let \(\{Y_{\sigma,i}\}_{0\leq i\leq k}\) be all intermediates such that \(Y_{\sigma,0}=X_{\sigma,t}\). Randomly choose the ordered set of vertices \(\{v_{1},...,v_{k}\}\) to update. \(Y_{\sigma,i}\) denotes the configuration started from \(X_{\sigma,t}\) with updates on \(\{v_{1},...,v_{i}\}\). Define \(U_{1},U_{2},...,U_{k}\) as copies of random variables which are uniform on \([0,1]\) and let \(U_{i}\) determine the spin \(Y_{\sigma,i}(v_{i})\) for all \(\sigma\in\Omega\) by
\[Y_{\sigma,i}(v_{i})=\begin{cases}+1&\text{if}\quad 0\leq U\leq p_{+}(S(Y_{ \sigma,i-1})-n^{-1}Y_{\sigma,i-1}(v_{i}))\\ -1&\text{if}\quad p_{+}(S(Y_{\sigma,i-1})-n^{-1}Y_{\sigma,i-1}(v_{i}))<U\leq 1,\end{cases}\]
and \(Y_{\sigma,i}(v)=Y_{\sigma,i-1}(v)\) for all \(v\neq v_{i}\). Finally accept \(X_{\sigma,t+1}=Y_{\sigma,k}\). The construction ensures that \(X_{\sigma,t+1}(w)=X_{\sigma,t}(w)\) for all vertices but \(\{v_{1},...,v_{k}\}\). For any two given configurations \(\sigma\) and \(\widetilde{\sigma}\), define _monotone coupling_\((X_{\sigma,t},X_{\widetilde{\sigma},t})\) as the projection of the grand coupling with starting configurations \(\sigma\) and \(\widetilde{\sigma}\).
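A sketch of one step of the grand coupling restricted to finitely many starting configurations: all copies share the same ordered vertices \(v_{1},\dots,v_{k}\) and the same uniforms \(U_{1},\dots,U_{k}\), so the update rule above is applied simultaneously. Since \(p_{+}\) is increasing in the magnetization, this coupling preserves the coordinatewise order of the configurations.

```python
import numpy as np

def coupled_scan_step(configs, k, beta, rng):
    """One grand-coupling step: every configuration uses the same vertices and uniforms."""
    n = configs[0].size
    order = rng.permutation(n)[:k]
    uniforms = rng.random(k)
    for u, v in zip(uniforms, order):
        for sigma in configs:
            s_v = (sigma.sum() - sigma[v]) / n
            p_plus = 0.5 * (1.0 + np.tanh(beta * s_v))
            sigma[v] = 1 if u < p_plus else -1      # shared U decides the spin at v
    return configs
```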
For two spin configurations \(\sigma\) and \(\widetilde{\sigma}\), define their Hamming distance as the number of disagreeing vertices:
\[\text{dist}(\sigma,\widetilde{\sigma}):=\frac{1}{2}\sum_{j=1}^{n}|\sigma(j)- \widetilde{\sigma}(j)|.\]
Then Hamming distance contracts under monotone coupling in high temperature regime.
**Proposition 2.1**.: _Under the randomized systematic scan dynamics, the monotone coupling \((X_{t},\widetilde{X}_{t})\) with \((X_{0},\widetilde{X}_{0})=(\sigma,\widetilde{\sigma})\) satisfies_
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}\big{[}\mathrm{dist}(X_{t},\widetilde{X} _{t})\big{]}\leq\frac{1}{\beta}\Big{[}1+(\beta-1)\Big{(}1+\frac{\beta}{n}\Big{)} ^{k}\Big{]}^{t}\mathrm{dist}(\sigma,\widetilde{\sigma}). \tag{3}\]
**Remark**.: _Proposition 2.1 remains true for any \(k\leq n\). If \(\beta<1\),_
\[\rho:=\frac{1}{\beta}\Big{[}1+(\beta-1)\Big{(}1+\frac{\beta}{n}\Big{)}^{k} \Big{]}\leq 1-\frac{k(1-\beta)}{n}<1\]
_holds, thus the Hamming distance tends to decrease in the high temperature regime._
Proof.: We consider the special case when \(\mathrm{dist}(\sigma,\widetilde{\sigma})=1\) and \(t=1\). Let \(I\) be the vertex that carries two different spins: \(1=\sigma(I)\neq\widetilde{\sigma}(I)=-1\). With probability \(\frac{n-k}{n}\) the vertex \(I\) is not selected to be updated; let \(\mathcal{P}\) be the ordered set of \(k\) vertices to be updated. From \(X_{0}\) to \(X_{1}\), if \(I\notin\mathcal{P}\), in terms of intermediate states \(\{Y_{i}\}_{0\leq i\leq k}\) and \(\{\widetilde{Y}_{i}\}_{0\leq i\leq k}\),
\[\begin{split}\mathbb{E}_{\sigma,\widetilde{\sigma}}& \big{[}\mathrm{dist}(Y_{i+1},\widetilde{Y}_{i+1})|\mathrm{dist}(Y_{i}, \widetilde{Y}_{i})\big{]}\\ &=\mathrm{dist}(Y_{i},\widetilde{Y}_{i})+\big{|}p_{+}(S(Y_{i})-n^ {-1}Y_{i}(v))-p_{+}(S(\widetilde{Y}_{i})-n^{-1}\widetilde{Y}_{i}(v))\big{|}\\ &\leq\mathrm{dist}(Y_{i},\widetilde{Y}_{i})+\tanh\Big{(}\beta \frac{|S(Y_{i})-S(\widetilde{Y}_{i})|}{2}\Big{)}\\ &\leq\Big{(}1+\frac{\beta}{n}\Big{)}\mathrm{dist}(Y_{i}, \widetilde{Y}_{i}),\end{split} \tag{4}\]
for some \(v\neq I\). Otherwise, suppose \(I\) is chosen to be the \(j\)-th updated vertex. Equation (4) holds for every index \(i\neq j-1\), but for \(j-1\),
\[\begin{split}\mathbb{E}_{\sigma,\widetilde{\sigma}}& \big{[}\mathrm{dist}(Y_{j},\widetilde{Y}_{j})|\mathrm{dist}(Y_{j-1}, \widetilde{Y}_{j-1})\big{]}\\ &=\mathrm{dist}(Y_{j-1},\widetilde{Y}_{j-1})-\Big{(}1-\big{|}p_{+ }(S(Y_{j-1})-n^{-1}Y_{j-1}(v))-p_{+}(S(\widetilde{Y}_{j-1})-n^{-1}\widetilde{ Y}_{j-1}(v))\big{|}\Big{)}\\ &\leq\mathrm{dist}(Y_{j-1},\widetilde{Y}_{j-1})+\tanh\Big{(}\beta \frac{S(Y_{j-1})-S(\widetilde{Y}_{j-1})-2/n}{2}\Big{)}-1\\ &\leq\Big{(}1+\frac{\beta}{n}\Big{)}\big{(}\mathrm{dist}(Y_{j-1}, \widetilde{Y}_{j-1})-1\big{)},\end{split}\]
due to the fact that \(S(Y_{i})\geq S(\widetilde{Y}_{i})+2/n\) for any \(i\leq j-1\). For indices other than \(i=j-1\) we can apply Equation (4); hence we have
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}[\mathrm{dist}(X_{1},\widetilde{X}_{1}) |\mathrm{dist}(X_{0},\widetilde{X}_{0})]\leq\left(1+\frac{\beta}{n}\right)^{k} -\left(1+\frac{\beta}{n}\right)^{j} \tag{5}\]
for \(I\in\mathcal{P}\). Combining Equation (4) and Equation (5),
\[\begin{split}\mathbb{E}_{\sigma,\widetilde{\sigma}}& \big{[}\mathrm{dist}(X_{1},\widetilde{X}_{1})|\mathrm{dist}(X_{0}, \widetilde{X}_{0})\big{]}\\ &\leq\frac{1}{n}\sum_{i=0}^{k-1}\Big{[}\Big{(}1+\frac{\beta}{n} \Big{)}^{k}-\Big{(}1+\frac{\beta}{n}\Big{)}^{i+1}\Big{]}+\frac{n-k}{n}\Big{(}1 +\frac{\beta}{n}\Big{)}^{k}\\ &=\frac{1}{\beta}\Big{\{}1+\frac{\beta}{n}\Big{[}1-\Big{(}1+\frac {\beta}{n}\Big{)}^{k}\Big{]}+(\beta-1)\Big{(}1+\frac{\beta}{n}\Big{)}^{k}\Big{\}} \\ &\leq\frac{1}{\beta}\Big{[}1+(\beta-1)\Big{(}1+\frac{\beta}{n} \Big{)}^{k}\Big{]}.\end{split} \tag{6}\]
To finish the proof, for general \(\sigma,\widetilde{\sigma}\), we can choose \(\mathrm{dist}(\sigma,\widetilde{\sigma})+1\) states forming a sequence from \(\sigma\) to \(\widetilde{\sigma}\) in which the Hamming distance between any two consecutive states is \(1\). Equation (6) and the triangle inequality finish this case, and recursive application gives Equation (3).
Recall that we write \(S_{t}:=S(X_{t})\) for simplicity of notation. As \(|S_{t}-\widetilde{S}_{t}|\leq(2/n)\mathrm{dist}(X_{t},\widetilde{X}_{t})\) at any time \(t\), we can transform Proposition 2.1 into a bound in terms of the magnetization.
**Proposition 2.2**.: _Suppose \(\beta<1\). For any two configurations \(\sigma\) and \(\widetilde{\sigma}\), under the monotone coupling,_
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}\left[|S_{t}-\widetilde{S}_{t}|\right] \leq\frac{2}{n}\rho^{t}\mathrm{dist}(\sigma,\widetilde{\sigma})\leq 2\rho^{t}, \tag{7}\]
_where \(\rho=1-k(1-\beta)/n<1\)._
The constant \(\rho\) comes from Proposition 2.1. Furthermore, for any two configurations \(\sigma\) and \(\widetilde{\sigma}\), with the same constant \(\rho<1\) we can prove
\[|\mathbb{E}_{\sigma}[S_{1}]-\mathbb{E}_{\widetilde{\sigma}}[S_{1}]|\leq\rho|S _{0}-\widetilde{S}_{0}| \tag{8}\]
in an analogous way.
The magnetization often becomes a supermartingale, so the lemma below is used throughout the paper. This lemma is stated and proved in [10], Chapter 18, so we omit the proof here.
**Lemma 2.1**.: _Let \((W_{t})_{t\geq 0}\) be a non-negative supermartingale and \(\tau\) be a stopping time such that_
1. \(W_{0}=k\)__
2. \(W_{t+1}-W_{t}\leq B\)__
3. \(\mathrm{Var}(W_{t+1}|\mathcal{F}_{t})>\sigma^{2}>0\) on the event \(\tau>t\)_._
_If \(u>4B^{2}/(3\sigma^{2})\), then_
\[\mathbb{P}_{k}(\tau>u)\leq\frac{4k}{\sigma\sqrt{u}}.\]
Lemma 2.1 requires a lower bound on a variance. Each step of the original Glauber dynamics generates a variance of magnetization of order \(1/n^{2}\). Since the randomized systematic scan consists of \(k\) single-site updates, we expect the variance to be of order \(k/n^{2}\).
**Lemma 2.2**.: _Suppose \(k=o(n)\). For any randomized scan dynamics chain \(X_{t}\) with arbitrary starting state \(\sigma\), \(\mathrm{Var}_{\sigma}[S_{1}]\) is of order \(k/n^{2}\)._
Proof.: Define \(v_{i}:=\max\mathrm{Var}_{\sigma}[S(Y_{i})]\) for \(i=1,...,k\), where \(Y_{i}\) is an intermediate state and the maximum is taken over all \(\sigma\in\Omega\) and all possible vertex update permutations. We have \(v_{1}=\Theta(n^{-2})\). Let \(\mathcal{R}=S(Y_{i+1})-S(Y_{i})\in\{-2/n,0,2/n\}\). By the law of total variance,
\[v_{i+1} =\max_{\sigma\in\Omega}\mathrm{Var}_{\sigma}[S(Y_{i+1})]\] \[=\max_{\sigma\in\Omega}\left\{\mathbb{E}\big{[}\mathrm{Var}_{ \sigma}[S(Y_{i+1})|\mathcal{R}]\big{]}+\mathrm{Var}\big{[}\mathbb{E}_{\sigma} [S(Y_{i+1})|\mathcal{R}]\big{]}\right\}\]
The first term inside the braces satisfies
\[\mathbb{E}\big{[}\mathrm{Var}_{\sigma}[S(Y_{i+1})|\mathcal{R}]\big{]}=\mathbb{ E}\big{[}\mathrm{Var}_{\sigma}[S(Y_{i})]\big{]}\leq v_{i}\]
because the variance is invariant under constant translation. Also, the second term is of order \(n^{-2}\): there are only three possible values of \(\mathcal{R}\) to consider, and for any \(k\), \(0\leq i\leq k-1\), and \(\sigma\), there is \(\epsilon=\epsilon(\beta)>0\) which satisfies
\[\mathbb{P}_{\sigma}[S(Y_{i+1})=S(Y_{i})+2/n]+\mathbb{P}_{\sigma}[S(Y_{i+1})=S( Y_{i})-2/n]\in[\epsilon,1-\epsilon].\]
Therefore there exists \(c_{1}>0\) which satisfies
\[v_{i+1}\leq v_{i}+\frac{c_{1}}{n^{2}} \tag{9}\]
with \(v_{1}=O(1/n^{2})\). For the lower bound, analogously define \(w_{i}:=\min\operatorname{Var}_{\sigma}[S(Y_{i})]\) with \(w_{1}=\Omega(1/n^{2})\); then
\[w_{i+1}\geq w_{i}+\frac{c_{2}}{n^{2}} \tag{10}\]
for some \(c_{2}=c_{2}(\beta)>0\). Equation (9) and Equation (10) finish the proof.
We can derive a general variance bound by combining Equation (8) with Lemma 2.2. This can be done with the lemma below.
**Lemma 2.3**.: _Let \((Z_{t})\) be a (general) Markov chain which takes values in \(\mathbb{R}\). Define \(\mathbb{P}_{z}\) and \(\mathbb{E}_{z}\) to be probability and expectation started from \(z\). Suppose there exists some \(0<\rho<1\) such that for any initial states \((z,z^{\prime})\),_
\[\Big{|}\mathbb{E}_{z}[Z_{t}]-\mathbb{E}_{z^{\prime}}[Z_{t}]\Big{|}\leq\rho^{t} |z-z^{\prime}|. \tag{11}\]
_Then \(v_{t}:=\sup_{z_{0}}\operatorname{Var}_{z_{0}}(Z_{t})\) satisfies_
\[v_{t}\leq v_{1}\min\big{\{}t,(1-\rho^{2})^{-1}\big{\}}.\]
Lemma 2.3 and its proof appear in [10], so we omit the proof here. Combining Lemma 2.2 and Lemma 2.3, we have \(\operatorname{Var}_{\sigma}[S_{t}]=O(1/n)\) at any time \(t\geq 0\) if \(\beta<1\). The last tool for cutoff in the high temperature regime is information about the number of each spin on an arbitrary subset of vertices.
**Lemma 2.4**.: _Suppose \(\beta<1\). (i) For all \(\sigma\in\Omega\) and \(v\in V\),_
\[|\mathbb{E}_{\sigma}[S_{t}]|\leq 2e^{-kt(1-\beta)/n}\qquad\text{and}\qquad| \mathbb{E}_{\sigma}[X_{t}(v)]|\leq 2e^{-kt(1-\beta)/n}.\]
_(ii) For any subset \(A\subset V\), define_
\[M_{t}(A):=\frac{1}{2}\sum_{v\in A}X_{t}(v).\]
_Then we have \(|\mathbb{E}_{\sigma}[M_{t}(A)]|\leq|A|e^{-kt(1-\beta)/n}\). Furthermore, if \(t\geq[2(1-\beta)k]^{-1}n\log n\), then \(\operatorname{Var}_{\sigma}[M_{t}(A)]\leq Cn\) for some constant \(C>0\). (iii) For any subset \(A\) of vertices and all \(\sigma\in\Omega\),_
\[\mathbb{E}_{\sigma}\big{[}|M_{t}(A)|\big{]}\leq ne^{-kt(1-\beta)/n}+O(\sqrt{n }).\]
Proof.: (i) Denote by \(\mathbb{1}\) the configuration of all pluses. Consider the monotone coupling \((X_{t},\widetilde{X}_{t})\) where \(X_{0}=\mathbb{1}\) and \(\widetilde{X}_{0}\) follows the stationary distribution \(\mu\). Since the distribution of \(\widetilde{X}_{t}\) remains stationary and symmetric, \(\mathbb{E}_{\mu}[\widetilde{S}_{t}]=0\) holds. From Proposition 2.2,
\[\mathbb{E}_{1}[S_{t}]\leq\mathbb{E}_{1,\mu}\big{[}|S_{t}-\widetilde{S}_{t}| \big{]}+\mathbb{E}_{\mu}[\widetilde{S}_{t}]\leq 2\Big{(}1-\frac{k(1-\beta)}{n} \Big{)}^{t}\leq 2e^{-kt(1-\beta)/n}.\]
Similarly we consider \(-\mathbb{1}\), the configuration with all minuses, then \(-2e^{-kt(1-\beta)/n}\leq\mathbb{E}_{-1}[S_{t}]\). The monotonicity ensures \(\mathbb{E}_{-1}[S_{t}]\leq\mathbb{E}_{\sigma}[S_{t}]\leq\mathbb{E}_{1}[S_{t}]\). For \(\mathbb{E}_{\sigma}[X_{t}(v)]\), the symmetry of \(K_{n}\) gives
\[\mathbb{E}_{1}[S_{t}]=\mathbb{E}_{1}\left[X_{t}(v)\right]\]
for any \(v\in V\). Appealing to the monotonicity gives the second inequality.
(ii) The expectation bound comes from (i). Consider the monotone coupling between \(\mathbb{1}\), \(-\mathbb{1}\), and \(\sigma\) again. Denote by \(M_{\sigma,t}(A)\) the value of \(M_{t}(A)\) when the chain starts from \(\sigma\); then
\[M_{-1,t}(A)\leq M_{\sigma,t}(A)\leq M_{1,t}(A).\]
As a consequence of Lemma 2.3, we have \(\mathrm{Var}_{\sigma}[S_{t}]=O(1/n)\). Hence \(\mathbb{E}_{\mathbb{1}}[M_{t}([n])^{2}]=\frac{n^{2}}{4}\big{[}\mathrm{Var}_{\mathbb{1}}[S_{t}]+(\mathbb{E}_{\mathbb{1}}S_{t})^{2}\big{]}=O(n)\) when \(t\geq[2(1-\beta)k]^{-1}n\log n\). Due to the symmetry of complete graphs,
\[\mathbb{E}\Big{[}M_{1,t}([n])^{2}\Big{]}=n+\binom{n}{2}\mathbb{E}[X_{1,t}(1)X_ {1,t}(2)].\]
Therefore \(|\mathbb{E}[X_{1,t}(1)X_{1,t}(2)]|=O(1/n)\). Consider the same expansion, then
\[\mathbb{E}M_{1,t}(A)^{2}=|A|+\binom{|A|}{2}\mathbb{E}[X_{1,t}(1)X_{1,t}(2)] \leq n+\binom{n}{2}O(1/n)=O(n).\]
A similar argument gives \(\mathbb{E}[M_{-\mathbb{1},t}(A)^{2}]=O(n)\). From the monotonicity,
\[M_{\sigma,t}(A)^{2}\leq M_{-1,t}(A)^{2}+M_{1,t}(A)^{2}. \tag{12}\]
Now taking expectation on both sides of Equation (12) we have \(\mathbb{E}[M_{\sigma,t}(A)^{2}]=O(n)\). The first expectation bound implies \(|\mathbb{E}M_{\sigma,t}(A)|=O(\sqrt{n})\) when \(t\geq[2(1-\beta)k]^{-1}n\log n\), so we can get the variance estimate for large time \(t\).
(iii) Again, consider the monotone coupling between \(X_{t}\) and \(\widetilde{X}_{t}\), where \(X_{0}=\sigma\) and \(\widetilde{X}_{0}\) follows the stationary distribution \(\mu\). Then
\[\mathbb{E}[|M_{t}(A)|]\leq\mathbb{E}[|M_{t}(A)-\widetilde{M}_{t}(A)|]+\mathbb{ E}[|\widetilde{M}_{t}(A)|].\]
Applying the Cauchy–Schwarz inequality, together with \(|\widetilde{M}_{t}(A)-M_{t}(A)|\leq\mathrm{dist}(X_{t},\widetilde{X}_{t})\), yields
\[\mathbb{E}[|M_{t}(A)|]\leq\mathbb{E}[\mathrm{dist}(X_{t},\widetilde{X}_{t})]+ \sqrt{\mathbb{E}[\widetilde{M}_{t}(A)^{2}]}.\]
Applying Proposition 2.1 subsequently gives
\[\mathbb{E}[|M_{t}(A)|]\leq n\rho^{t}+\sqrt{\mathbb{E}[\widetilde{M}_{t}(A)^{2 }]}.\]
Since the variables \(\{\widetilde{X}_{t}(i)\}_{i=1}^{n}\) are positively correlated under \(\mu\),
\[\mathbb{E}[\widetilde{M}_{t}(A)^{2}]\leq\frac{n^{2}}{4}\mathbb{E}[\widetilde{ S}_{t}^{2}]=\frac{n^{2}}{4}\mathrm{Var}[\widetilde{S}_{t}]=O(n),\]
therefore we can get
\[\mathbb{E}[|M_{t}(A)|]\leq ne^{-(1-\beta)kt/n}+O(\sqrt{n}).\]
Lemma 2.5 states that for any two chains whose magnetizations agree, there is a coupling under which the chains themselves agree within \(O(n\log n/k)\) time with high probability. It becomes useful for the case \(\beta\geq 1\).
**Lemma 2.5**.: _For any \(\sigma,\widetilde{\sigma}\in\Omega\) such that \(S(\sigma)=S(\widetilde{\sigma})\), there exists a coupling for \((X_{t},\widetilde{X}_{t})\) with starting points \((\sigma,\widetilde{\sigma})\),_
\[\lim_{n\to\infty}\mathbb{P}_{\sigma,\widetilde{\sigma}}\left[\min_{t}\{t:X_{ t}=\widetilde{X}_{t}\}>\frac{\gamma n\log n}{k}\right]=0,\]
_for some constant \(\gamma=\gamma(\beta)>0\)._
Proof.: For each time \(t\) before updating \(k\) vertices, rematch vertices of \(X_{t}\) and \(\widetilde{X}_{t}\) such that if \(X_{t}(v)=\widetilde{X}_{t}(v)\), then match \(v\) from \(X_{t}\) with \(v\) from \(\widetilde{X}_{t}\). If \(X_{t}(v)\neq\widetilde{X}_{t}(v)\), from \(S_{t}=\widetilde{S}_{t}\) we have
\[D_{t}:=|\{v:(X_{t}(v),\widetilde{X}_{t}(v))=(1,-1)\}|=|\{w:(X_{t}(w), \widetilde{X}_{t}(w))=(-1,1)\}|,\]
therefore we can rematch vertices one to one from \(\{v:(X_{t}(v),\widetilde{X}_{t}(v))=(1,-1)\}\) to \(\{w:(X_{t}(w),\widetilde{X}_{t}(w))=(-1,1)\}\). We consider the monotone coupling on these chains, with the rematched vertices. Their magnetizations remain equal under the coupling, and the number of vertices whose spins are not matched, \(2D_{t}\), decreases. More precisely, \(D_{t}\) is a supermartingale, and there exists \(c=c(\beta)>0\) such that
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}[D_{t+1}|D_{t}]\leq\Big{(}1-\frac{ck}{n} \Big{)}D_{t}.\]
Therefore, combining with the Markov inequality, we get
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\left[\min_{t}\{t:X_{t}=\widetilde{X}_{ t}\}>\frac{\gamma n\log n}{k}\right]\leq\mathbb{P}_{\sigma,\widetilde{\sigma}}[D_{ \gamma n\log n/k}\geq 1]\leq n\exp(-c\gamma\log n).\]
Choosing a proper \(\gamma=\gamma(\beta)\) makes the rightmost term go to \(0\) as \(n\to\infty\).
It is well known that the magnetization under the original Glauber dynamics tends toward zero, so its absolute value becomes a supermartingale. Similarly, we need a bound measuring how much the magnetization changes under the randomized systematic scan dynamics.
**Lemma 2.6**.: _(Magnetization estimate) For any given chain \(X_{t}\) starting from \(\sigma\),_
\[\Big{|}\mathbb{E}_{\sigma}[S_{1}]-\big{(}1-\frac{k}{n}\big{)}S(\sigma)-\frac{ k}{n}\tanh\beta S(\sigma)\Big{|}\leq\frac{2k}{n}\tanh\beta\frac{2k}{n}=O \Big{(}\frac{k^{2}}{n^{2}}\Big{)}. \tag{13}\]
Proof.: Consider the intermediate states \(Y_{0}=\sigma,...,Y_{k}=X_{1}\). For any vertex selected to be updated, the probability that the new spin becomes \(1\) is
\[p_{+}\Big{(}S(Y_{i})\pm\frac{1}{n}\Big{)}=\frac{1+\tanh\beta(S(Y_{i})\pm 1/n)}{2},\]
thus the expectation of the updated spin of the vertex is
\[p_{+}\Big{(}S(Y_{i})\pm\frac{1}{n}\Big{)}-p_{-}\Big{(}S(Y_{i})\pm\frac{1}{n} \Big{)}=\tanh\beta\Big{(}S(Y_{i})\pm\frac{1}{n}\Big{)}\]
and we know \(|S(Y_{i})-S_{0}|\leq\frac{2i}{n}\).
Since \(k\) vertices are chosen uniformly randomly among \(n\) vertices at \(t=0\), the expected sum of spins of vertices which are not chosen to be updated is \((n-k)S_{0}\). Therefore
\[(n-k)S_{0}+k\tanh\beta\Big{(}S_{0}-\frac{1}{n}-\frac{2(k-1)}{n} \Big{)}\] \[\leq n\mathbb{E}_{\sigma}[S_{1}]\leq(n-k)S_{0}+k\tanh\beta\Big{(} S_{0}+\frac{1}{n}+\frac{2(k-1)}{n}\Big{)},\]
which becomes Equation (13) by appealing to \(|\tanh x-\tanh y|\leq 2\tanh|(x-y)/2|\).
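As a quick numerical sanity check of Equation (13) (a self-contained sketch with our own helper `one_scan`; the parameter values are arbitrary), one can compare a Monte Carlo estimate of \(\mathbb{E}_{\sigma}[S_{1}]\) with \((1-k/n)S(\sigma)+(k/n)\tanh\beta S(\sigma)\); the discrepancy should be of order \(k^{2}/n^{2}\) plus Monte Carlo error.

```python
import numpy as np

def one_scan(sigma, beta, k, rng):
    # one randomized systematic scan step, as in Section 2
    n = len(sigma)
    for v in rng.choice(n, size=k, replace=False):
        p = 0.5 * (1.0 + np.tanh(beta * (sigma.mean() - sigma[v] / n)))
        sigma[v] = 1.0 if rng.random() < p else -1.0
    return sigma

rng = np.random.default_rng(2)
n, k, beta = 400, 6, 0.7
sigma0 = np.ones(n)
sigma0[: int(n * 0.35)] = -1.0          # starting magnetization S_0 = 0.3
s0 = sigma0.mean()

est = np.mean([one_scan(sigma0.copy(), beta, k, rng).mean() for _ in range(20000)])
pred = (1 - k / n) * s0 + (k / n) * np.tanh(beta * s0)
print(f"Monte Carlo E[S_1] = {est:.5f}, prediction = {pred:.5f}, "
      f"allowed error O(k^2/n^2) = {4 * k**2 / n**2:.1e}")
```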
Sometimes we need to investigate the magnetization not only for \((X_{t})\) but for all its intermediates \((Y_{i})\). Since \(|S(Y_{i+1})-S(Y_{i})|\leq 2/n\), we can get an estimate for the first time the magnetizations of the intermediates agree. However, due to unavailable vertices, it is not obvious whether we can manipulate the coupling so that the two chains' magnetizations remain equal. The following lemma partially answers this question.
**Lemma 2.7**.: _Assume \(k=o(\sqrt{n})\). Suppose we have two chains \(X_{t}\) and \(\widetilde{X}_{t}\)(not necessarily independent) such that \(S_{0}>\widetilde{S}_{0}\). Define their intermediate states \(\{Y_{i}\}\) and \(\{\widetilde{Y}_{i}\}\) such that \((Y_{kt},\widetilde{Y}_{kt})=(X_{t},\widetilde{X}_{t})\) for all \(t\geq 0\). Consider \(\tau_{\text{almost}}:=\min\{i\geq 0:|S(Y_{i})-S(\widetilde{Y}_{i})|\leq 2/n\}\) and \(\tau_{\text{exact}}:=\min\{i\geq 0:S(Y_{i})=S(\widetilde{Y}_{i})\}\) under any given coupling. Then,_
1. _There exists a coupling for the intermediates from index_ \(\tau_{\text{exact}}\) _to_ \(kT\)_, where_ \(T:=\lceil\tau_{\text{exact}}/k\rceil\)_, which ensures_ \(S_{T}=\widetilde{S}_{T}\) _with high probability._
2. _There exists a coupling for the intermediates from index_ \(\tau_{\text{almost}}\) _to_ \(kT^{\prime}\)_, where_ \(T^{\prime}:=\lceil\tau_{\text{almost}}/k\rceil\)_, which ensures that_ \(|S_{T^{\prime}}-\widetilde{S}_{T^{\prime}}|\leq 4k/n\) _with high probability, for large enough_ \(n\)_._
**Remark**.: _Both \(\tau_{\text{almost}}\) and \(\tau_{\text{exact}}\) are defined on intermediates. Lemma 2.7 implies that if the two magnetizations of the intermediates are matched, the magnetizations of the original chains can be matched soon afterwards with high probability. For case (ii), Lemma 2.2 and the Markov inequality ensure the magnetization matching after \(O(n/k)\) additional steps._
Proof.: (i) Let \(l:=\tau_{\text{exact}}\) (mod \(k\)). At time \(i=\tau_{\text{exact}}\), rematch vertices according to their current spins without considering their update history. For the remaining \((k-l)\) updates, apply the monotone coupling for \((Y_{i},\widetilde{Y}_{i})\): randomly choose one of the available vertices of \(Y_{i}\) and check the corresponding vertex of \(\widetilde{Y}_{i}\). If that vertex of \(\widetilde{Y}_{i}\) is an unavailable vertex, stop the monotone coupling and run the two Markov chains independently. If not, update the two corresponding vertices under the rematched monotone coupling.
This coupling preserves the magnetization matching unless it chooses an unavailable vertex during the \((k-l)\) updates. In the worst case, the probability that the monotone coupling does not stop at the first of the remaining updates is \((n-2l)/(n-l)=1-l/(n-l)\). After it survives, the probability that the coupling does not stop at the next update is \((n-2l-1)/(n-l-1)=1-l/(n-l-1)\). To summarize, the probability that the coupling does not stop until \(i=k\) is greater than or equal to
\[\Big{(}1-\frac{l}{n-l}\Big{)}\Big{(}1-\frac{l}{n-l-1}\Big{)}...\Big{(}1-\frac {l}{n-k}\Big{)}.\]
This is greater than
\[\Big{(}1-\frac{k}{n-k}\Big{)}^{k}\sim\exp\left(-\frac{k^{2}}{n-k}\right)\to 1. \tag{14}\]
Since \(k=o(\sqrt{n})\), there is a coupling that ensures \(\mathbb{P}[S_{T}=\widetilde{S}_{T}]\to 1\) as \(n\to\infty\).
(ii) If \(\tau_{\text{almost}}=\tau_{\text{exact}}\), there is nothing to prove. Without loss of generality, suppose \(S(Y_{\tau_{\text{almost}}})-S(\widetilde{Y}_{\tau_{\text{almost}}})=2/n\). Rematch vertices so that \(Y_{\tau_{\text{almost}}}\) and \(\widetilde{Y}_{\tau_{\text{almost}}}\) are monotone. Apply the monotone coupling from \(\tau_{\text{almost}}\) to \(kT^{\prime}\) in the same way as in (i): stop the coupling and update independently when an unavailable vertex is chosen to be updated. The monotone coupling succeeds up to time \(kT^{\prime}\) with probability \(1-O(k^{2}/n)\). Only one vertex is mismatched at time \(\tau_{\text{almost}}\), so an argument analogous to Proposition 2.1 ensures that
\[|S_{T^{\prime}}-\widetilde{S}_{T^{\prime}}|\leq\frac{2}{n}\left(1+\frac{\beta} {n}\right)^{k-l}\leq\frac{4}{n}.\]
for large enough \(n\).
## 3. Main results: high temperature regime
In this section, we prove Theorem 1.1. Assume \(\beta<1\) throughout this section.
### Mixing time upper bound for \(\beta<1\)
We first state the precise upper bound for the mixing time.
**Theorem 3.1**.: _Suppose \(k=o(\sqrt[3]{n})\). Then_
\[\lim_{\gamma\to\infty}\limsup_{n\to\infty}d_{n}\Big{(}\frac{n\log n}{2k(1-\beta )}+\frac{\gamma n}{k}\Big{)}=0. \tag{15}\]
The proof of Theorem 3.1 consists of two parts: for any two given configuration chains we couple their magnetizations in the first part (magnetization matching phase), then set up another coupling so that their spins exactly match with high probability (two-coordinate chain phase). Each part consists of two small steps, since two chains often cross each other during \(k\) updates. For any two chains \(X_{t}\) and \(\widetilde{X}_{t}\),
1. Update \(X_{t}\) and \(\widetilde{X}_{t}\) under the grand coupling. In \(\frac{n\log n}{2k(1-\beta)}\) time, \(|S(X_{t})-S(\tilde{X}_{t})|\leq\frac{4k}{n}\) holds with high probability.
2. Rematch vertices of \(X_{t}\) and \(\widetilde{X}_{t}\) and run two chains independently. After additional \(\gamma_{1}n/k\) time, \(S(X_{t})=S(\tilde{X}_{t})\) holds with high probability.
3. Set up a coupling so that \(S(X_{t})=S(\widetilde{X}_{t})\) remains true. Define the two-coordinate chains from \(X_{t}\) and \(\widetilde{X}_{t}\) (call them \(U_{t}^{X}\) and \(U_{t}^{\widetilde{X}}\)). After additional \(\gamma_{2}n/k\) time, \(|U_{t}^{X}-U_{t}^{\widetilde{X}}|\leq k\) holds with high probability.
4. Set up another coupling for \(U_{t}^{X}\) and \(U_{t}^{\widetilde{X}}\). After additional \(n/k\) time, \(U_{t}^{X}=U_{t}^{\widetilde{X}}\) holds with high probability, and this becomes equivalent to \(X_{t}=\widetilde{X}_{t}\).
In particular, (1), (2), and (3) require the restriction \(k=o(\sqrt{n})\); \(k=o(\sqrt[3]{n})\) is necessary only for (4).
**Theorem 3.2**.: _(magnetization matching phase) Suppose \(k=o(\sqrt{n})\). For any two configurations \(\sigma\) and \(\widetilde{\sigma}\), there exists a coupling \((X_{t},\widetilde{X}_{t})\) starting with \(X_{0}=\sigma\) and \(\widetilde{X}_{0}=\widetilde{\sigma}\), which satisfies \(S_{t^{*}}=\widetilde{S}_{t^{*}}\) with probability \(1-O(\gamma^{-1/2})\), where \(t^{*}=[2k(1-\beta)]^{-1}n\log n+\gamma n/k\)._
Proof.: For convenience, define \(t(\gamma):=[2k(1-\beta)]^{-1}n\log n+\gamma n/k\). For any given configurations \(\sigma\) and \(\widetilde{\sigma}\) consider the monotone coupling \((X_{t},\widetilde{X}_{t})\) with \(X_{0}=\sigma\) and \(\widetilde{X}_{0}=\widetilde{\sigma}\). From Proposition 2.2,
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}\left[|S_{t(0)}-\widetilde{S}_{t(0)}| \right]\leq c_{1}n^{-1/2} \tag{16}\]
holds for some \(c_{1}>0\). Without loss of generality assume \(S_{t(0)}\geq\widetilde{S}_{t(0)}\) and define a stopping time
\[\tau_{\text{pre}}:=\min\{t\geq t(0):|S_{t}-\widetilde{S}_{t}|\leq 4k/n\}\]
If \(\tau_{\text{pre}}\leq t(\gamma)\), from time \(\tau_{\text{pre}}\) rematch vertices according to their spins and update together under the monotone coupling. Otherwise run two chains \((X_{t})\) and \((\widetilde{X}_{t})\) under the monotone coupling until \(t(\gamma)\) and run independently for \(t(\gamma)<t\leq\tau_{\text{pre}}\). Since \(S_{t}\geq\widetilde{S}_{t}\) for \(t\leq\tau_{\text{pre}}\), the process \((S_{t}-\widetilde{S}_{t})_{t(\gamma)\leq t<\tau_{\text{pre}}}\) has a non-positive drift and is non-negative. \(t<\tau_{\text{pre}}\) ensures \(S_{t+1}-\widetilde{S}_{t+1}>0\) so Lemma 2.1 and Lemma 2.2 can be applied, provided that \(k=o(\sqrt{n})\),
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{pre}}>t(\gamma)|X_{t(0)}, \widetilde{X}_{t(0)}]\leq\frac{c|S_{t(0)}-\widetilde{S}_{t(0)}|}{\frac{\sqrt{ k}}{n}\sqrt{\frac{\gamma n}{k}}}=\frac{c\sqrt{n}|S_{t(0)}-\widetilde{S}_{t(0)}|}{ \sqrt{\gamma}}.\]
Taking expectation over \(X_{t(0)}\) and \(\widetilde{X}_{t(0)}\) with Equation (16) gives
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{pre}}>t(\gamma)]\leq O(\gamma^{-1/2}). \tag{17}\]
After we reach \(\tau_{\text{pre}}\), the numbers of plus spins of \(X_{\tau_{\text{pre}}}\) and \(\widetilde{X}_{\tau_{\text{pre}}}\) differ by at most \(2k\), by the definition of \(\tau_{\text{pre}}\). Define another stopping time
\[\tau_{\text{almost}}:=\min\{i\geq k\tau_{\text{pre}}:|S(Y_{i})-S(\widetilde{Y }_{i})|\leq 2/n\}.\]
From Lemma 2.7, there is a coupling from \(\tau_{\text{almost}}\) to \(k\lceil\tau_{\text{almost}}/k\rceil\) that with high probability ensures the magnetization difference is at most \(4/n\). Without loss of generality suppose \(S(Y_{k\tau_{\text{pre}}})>S(\widetilde{Y}_{k\tau_{\text{pre}}})\). Consider the sequence \(\{S_{t}-\widetilde{S}_{t}+4k/n\}_{t\geq\tau_{\text{pre}}}\) obtained by running \(X_{t}\) and \(\widetilde{X}_{t}\) independently; then we can apply Lemma 2.1 for \(\tau:=\min\{t\geq\tau_{\text{pre}}:S_{t}-\widetilde{S}_{t}+4k/n\leq 4k/n\}\):
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau>\tau_{\text{pre}}+\gamma n/k]\leq\frac{4\frac{8k}{n}}{\frac{\sqrt{k}}{n}\sqrt{\gamma n}}=\frac{32\sqrt{k}}{\sqrt{\gamma n}}=O(\gamma^{-1/2}).\]
However \(k\tau\geq\tau_{\text{almost}}\), hence
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{almost}}>k\tau_{\text{pre}}+\gamma n]\leq O(\gamma^{-1/2}). \tag{18}\]
From \(\tau_{\text{almost}}\) we couple the intermediates as discussed in Lemma 2.7. Then at time \(t=T:=\lceil\tau_{\text{almost}}/k\rceil\) the magnetization difference is less than or equal to \(4/n\) with high probability. If it is \(0\) then the magnetization matching phase is done. If not, we can rematch vertices from time \(t=T\) so that the differences \(X_{T}(v)-\widetilde{X}_{T}(v)\) are either all non-negative or all non-positive. Then Proposition 2.2 and the Markov inequality ensure
\[\begin{split}\mathbb{P}_{\sigma,\widetilde{\sigma}}[S_{t+T}\neq \widetilde{S}_{t+T}|\mathcal{F}_{T}]&\leq n\mathbb{E}_{\sigma, \widetilde{\sigma}}\left[|S_{t+T}-\widetilde{S}_{t+T}|\big{|}|S_{T}- \widetilde{S}_{T}|\right]\\ &\leq n\left(1-\frac{k(1-\beta)}{n}\right)^{t}|S_{T}- \widetilde{S}_{T}|\leq 4e^{-\frac{kt(1-\beta)}{n}}\end{split} \tag{19}\]
Hence \(t=\gamma n/k\) is enough to ensure \(S_{t+T}=\widetilde{S}_{t+T}\) with high probability. To sum up, define \(\tau_{\text{mag}}:=\min\{t\geq 0:S_{t}=\widetilde{S}_{t}\}\); then combining Equations (17), (18), and (19) gives
\[\begin{split}\mathbb{P}_{\sigma,\widetilde{\sigma}}[& \tau_{\text{mag}}<t(3\gamma)]\\ &\geq\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{pre}}<t( \gamma)]\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{almost}}<k\tau_{ \text{pre}}+\gamma n]\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{mag} }<\lceil\tau_{\text{almost}}/k\rceil+\gamma n/k]\\ &=1-O(\gamma^{-1/2})\to 1.\end{split} \tag{20}\]
as \(\gamma\to\infty\).
After we reach \(\tau_{\text{mag}}\), we transform the two chains \((X_{t},\widetilde{X}_{t})\) into another Markov chain \((U_{t},\widetilde{U}_{t})\) and set up a coupling on \(U_{t}\). The following lemmas are necessary to move on to the two-coordinate chains \((U_{t},\widetilde{U}_{t})\). We omit the proof of Lemma 3.1; see [10], page 239, for the proof.
**Lemma 3.1**.: _Suppose \(k=o(n)\). For any subset \(\Omega_{0}\subset\Omega=\{-1,1\}^{n}\) with stationary distribution \(\mu\),_
\[\max_{\sigma\in\Omega}\|\mathbb{P}_{\sigma}(X_{t_{0}+t}\in\cdot)-\mu\|_{TV} \leq\max_{\sigma_{0}\in\Omega_{0}}\|\mathbb{P}_{\sigma_{0}}(X_{t}\in\cdot)- \mu\|_{TV}+\max_{\sigma\in\Omega}\mathbb{P}_{\sigma}(X_{t_{0}}\notin\Omega_{0}).\]
Consider the set \(\Omega_{0}=\{\sigma\in\Omega:|S(\sigma)|\leq 1/2\}\). From Lemma 2.4, there is a constant \(\theta_{0}>0\) such that \(|\mathbb{E}_{\sigma}[S_{\theta_{0}n/k}]|\leq 1/4\). Since \(k=o(n)\),
\[\begin{split}\mathbb{P}_{\sigma}(X_{\theta_{0}n/k}\notin\Omega_{0 })&=\mathbb{P}_{\sigma}(|S_{\theta_{0}n/k}|>1/2)\\ &\leq\mathbb{P}_{\sigma}(|S_{\theta_{0}n/k}-\mathbb{E}_{\sigma}[S _{\theta_{0}n/k}]|>1/4)\leq 16\text{Var}_{\sigma}(S_{\theta_{0}n/k})=O(n^{-1}). \end{split}\]
The last equation follows from Lemma 2.3. Both the number of positive spins and the number of negative spins of any \(\sigma_{0}\in\Omega_{0}\) lie between \(n/4\) and \(3n/4\). More formally, define
\[u_{0}:=|\{v\in V:\sigma_{0}(v)=1\}|,\qquad v_{0}:=|\{v\in V:\sigma_{0}(v)=-1\}|\]
as numbers of each spins, and
\[\Lambda_{0}:=\{(u,v)\in\mathbb{Z}^{2}:n/4\leq u,v\leq 3n/4,u+v=n\}.\]
Then \(\sigma_{0}\in\Omega_{0}\) if and only if \((u_{0},v_{0})\in\Lambda_{0}\). With this definition we move on to the two-coordinate chain \((u,v)\) built from the original chain \((X_{t})\).
**Definition 3.1**.: _(two-coordinate chain) Fix a configuration \(\sigma_{0}\in\Omega_{0}\). For \(\sigma\in\Omega\), define_
\[U_{\sigma_{0}}(\sigma) :=|\{v\in V:\sigma(v)=\sigma_{0}(v)=1\}|\] \[V_{\sigma_{0}}(\sigma) :=|\{v\in V:\sigma(v)=\sigma_{0}(v)=-1\}|.\]
From now on, we shall write simply \(U(\sigma)\) for \(U_{\sigma_{0}}(\sigma)\) and \(V(\sigma)\) for \(V_{\sigma_{0}}(\sigma)\).
For any randomized systematic scan dynamics chain \(\{X_{t}\}\), we can define a process \((U_{t},V_{t})_{t\geq 0}\) by
\[U_{t}=U(X_{t})\qquad\text{and}\qquad V_{t}=V(X_{t}).\]
This is a Markov chain on \(\{0,...,u_{0}\}\times\{0,...,v_{0}\}\). Denote the stationary measure for this chain as \(\pi_{2}\), and note that this chain also determines the magnetization of the original chain \(\{X_{t}\}\):
\[S_{t}=\frac{2(U_{t}-V_{t})}{n}-\frac{u_{0}-v_{0}}{n}.\]
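For concreteness, the two-coordinate chain can be read off a configuration as in the short sketch below (our own helper, for illustration only); the assertion checks the displayed identity relating \(S_{t}\) to \((U_{t},V_{t})\).

```python
import numpy as np

def two_coordinate(sigma, sigma0):
    """(U, V) of Definition 3.1: agreements with sigma0 on plus and minus spins."""
    U = int(np.sum((sigma == 1) & (sigma0 == 1)))
    V = int(np.sum((sigma == -1) & (sigma0 == -1)))
    return U, V

rng = np.random.default_rng(3)
n = 500
sigma0 = rng.choice([-1, 1], size=n)
sigma = rng.choice([-1, 1], size=n)

U, V = two_coordinate(sigma, sigma0)
u0, v0 = int(np.sum(sigma0 == 1)), int(np.sum(sigma0 == -1))

# identity S = 2(U - V)/n - (u0 - v0)/n from the text
assert abs(sigma.mean() - (2 * (U - V) / n - (u0 - v0) / n)) < 1e-12
print("U, V, u0, v0, S:", U, V, u0, v0, sigma.mean())
```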
The first thing to check is how the original Markov chain and the two-coordinate chain are related to each other. The total variation distance of the original chain and that of the two-coordinate chain turn out to be equal. This is also from [10], page 241; we omit the proof.
**Lemma 3.2**.: _Suppose \((X_{t})\) is the Glauber dynamics that starts from \(\sigma_{0}\) and \((U_{t},V_{t})\) is the corresponding two-coordinate chain that starts from \((u_{0},v_{0})\). Then_
\[\|\mathbb{P}_{\sigma_{0}}(X_{t}\in\cdot)-\mu\|_{TV}=\|\mathbb{P}_{(u_{0},v_{0 })}((U_{t},V_{t})\in\cdot)-\pi\|_{TV},\]
_where \(\mu\) and \(\pi\) are the stationary distributions of two chains, respectively._
Thanks to Lemma 3.2, it suffices to bound from above the total variation distance of the two-coordinate chain. We determine \(\sigma_{0}\) later and consider the two-coordinate chain first. The difference of \(U\) becomes a non-negative supermartingale for a short period of time.
**Lemma 3.3**.: _Suppose two configurations \(\sigma\) and \(\widetilde{\sigma}\) satisfy \(S(\sigma)=S(\widetilde{\sigma})\) and \(U(\widetilde{\sigma})-U(\sigma)>0\). Define_
\[\Xi_{1}:=\left\{\sigma^{*}\in\Omega:\min\{U(\sigma^{*}),u_{0}-U(\sigma^{*}),V (\sigma^{*}),v_{0}-V(\sigma^{*})\}\geq\frac{n}{16}\right\}\]
_and_
\[R(\sigma_{1},\sigma_{2}):=U(\sigma_{1})-U(\sigma_{2}),\qquad\qquad\tau_{k}= \min_{t\geq 0}\{t:R(\widetilde{X}_{t},X_{t})\leq k\}.\]
_There exists a Markovian coupling \((X_{t},\widetilde{X}_{t})_{0\leq t\leq\tau_{k}}\) of the randomized systematic scan dynamics with initial states \(X_{0}=\sigma\) and \(\widetilde{X}_{0}=\widetilde{\sigma}\), such that_
1. \(S_{t}=\widetilde{S}_{t}\) _at any time_ \(t\)_._
2. _For all_ \(t\leq\tau_{k}\)_, the intermediate states_ \(\{Y_{i}\}_{0\leq i\leq k\tau_{k}}\) _and_ \(\{\widetilde{Y}_{i}\}_{0\leq i\leq k\tau_{k}}\) _such that_ \(Y_{ik}=X_{i}\) _and_ \(\widetilde{Y}_{ik}=\widetilde{X}_{i}\) _for all_ \(i\geq 0\) _satisfy_ \[\mathbb{E}_{\sigma,\tilde{\sigma}}\left[R(\widetilde{Y}_{i+1},Y_{i+1})-R( \widetilde{Y}_{i},Y_{i})\Big{|}Y_{i},\widetilde{Y}_{i}\right]\leq 0,\] _for any_ \(i\)_._
3. _There exists a constant_ \(c>0\) _independent of_ \(k\) _and_ \(n\) _so that on the event_ \(\{X_{t}\in\Xi_{1},\widetilde{X}_{t}\in\Xi_{1}\}\)_,_ \[\mathbb{P}_{\sigma,\tilde{\sigma}}\left[R(\widetilde{Y}_{i+1},Y_{i+1})-R( \widetilde{Y}_{i},Y_{i})\neq 0\Big{|}Y_{i},\widetilde{Y}_{i}\right]\geq c.\]
**Remark**.: _The lemma ensures that the chain \(R(\widetilde{Y}_{i},Y_{i})\) is a supermartingale until it drops below \(k\). Note that the statement works on all intermediate states, not only for \(R(\widetilde{X}_{t},X_{t})\)._
Proof.: We couple \((X_{t},\widetilde{X}_{t})\) by coupling \((Y_{i},\widetilde{Y}_{i})_{0\leq i\leq k\tau_{k}}\) for every \(i\). From \((Y_{i},\widetilde{Y}_{i})\) to \((Y_{i+1},\widetilde{Y}_{i+1})\), choose a vertex \(I_{i}\) randomly from \(V\setminus\mathcal{N}_{i}\) and assign the next spin \(\mathcal{S}\) according to the probability from equation (2);
\[\mathbb{P}(\mathcal{S}=1)=p_{+}(S(Y_{i})-Y_{i}(I_{i})/n).\]
In other words, \(Y_{i+1}\) becomes
\[Y_{i+1}(v)=\begin{cases}Y_{i}(v)&v\neq I_{i},\\ \mathcal{S}&v=I_{i}.\end{cases}\]
For \(\widetilde{Y}_{i+1}\), we select \(\widetilde{I}_{i}\) randomly from \(\{v:\widetilde{Y}_{i}(v)=Y_{i}(I_{i})\}\setminus\widetilde{\mathcal{N}_{i}}\) and set
\[\widetilde{Y}_{i+1}(v)=\begin{cases}\widetilde{Y}_{i}(v)&v\neq\widetilde{I}_{i },\\ \mathcal{S}&v=\widetilde{I}_{i}.\end{cases}\]
The first condition of Lemma 3.3 is satisfied since this coupling ensures \(S(Y_{i})=S(\widetilde{Y}_{i})\) for any \(i\). Next, consider
\[A_{i} =|\{v:\sigma_{0}(v)=1,Y_{i}(v)=1\}\setminus\mathcal{N}_{i}|\] \[B_{i} =|\{v:\sigma_{0}(v)=1,Y_{i}(v)=-1\}\setminus\mathcal{N}_{i}|\] \[C_{i} =|\{v:\sigma_{0}(v)=-1,Y_{i}(v)=1\}\setminus\mathcal{N}_{i}|\] \[D_{i} =|\{v:\sigma_{0}(v)=-1,Y_{i}(v)=-1\}\setminus\mathcal{N}_{i}|,\]
and \(\widetilde{A}_{i}\), \(\widetilde{B}_{i}\), \(\widetilde{C}_{i}\), \(\widetilde{D}_{i}\) in an analogous way. From \(S(Y_{i})=S(\widetilde{Y}_{i})\) we have \(A_{i}+C_{i}=\widetilde{A}_{i}+\widetilde{C}_{i}\) and \(B_{i}+D_{i}=\widetilde{B}_{i}+\widetilde{D}_{i}\).
Denote \(R_{i}:=R(\widetilde{Y}_{i},Y_{i})\), and \(i^{\prime}=i(\text{mod k})\) for convenience. Then
\[\mathbb{P}_{X_{0},\widetilde{X}_{0}}[R_{i+1}-R_{i}=-1|Y_{i}, \widetilde{Y}_{i}] =\frac{C_{i}}{n-i^{\prime}}\frac{\widetilde{A}_{i}}{\widetilde{A} _{i}+\widetilde{C}_{i}}p_{-}\Big{(}S(Y_{i})-\frac{1}{n}\Big{)}+\frac{B_{i}}{n -i^{\prime}}\frac{\widetilde{D}_{i}}{\widetilde{B}_{i}+\widetilde{D}_{i}}p_{+ }\Big{(}S(Y_{i})+\frac{1}{n}\Big{)}\] \[\mathbb{P}_{X_{0},\widetilde{X}_{0}}[R_{i+1}-R_{i}=1|Y_{i}, \widetilde{Y}_{i}] =\frac{A_{i}}{n-i^{\prime}}\frac{\widetilde{C}_{i}}{\widetilde{A} _{i}+\widetilde{C}_{i}}p_{-}\Big{(}S(Y_{i})-\frac{1}{n}\Big{)}+\frac{D_{i}}{n -i^{\prime}}\frac{\widetilde{B}_{i}}{\widetilde{B}_{i}+\widetilde{D}_{i}}p_{+ }\Big{(}S(Y_{i})+\frac{1}{n}\Big{)}.\]
Since \(R_{i+1}-R_{i}\in\{-1,0,1\}\),
\[\mathbb{E}_{X_{0},\widetilde{X}_{0}}[R_{i+1}-R_{i}|Y_{i},\widetilde{Y}_{i}]= \frac{\widetilde{C}_{i}-C_{i}}{n-i^{\prime}}p_{-}\Big{(}S(Y_{i})-\frac{1}{n}\Big{)}+ \frac{\widetilde{B}_{i}-B_{i}}{n-i^{\prime}}p_{+}\Big{(}S(Y_{i})+\frac{1}{n}\Big{)}.\]
For any \(t\leq\tau_{k}-1\),
\[k\leq R_{kt}=-(\widetilde{C}_{kt}-C_{kt})=-(\widetilde{B}_{kt}-B_{kt}),\]
hence \(\widetilde{C}_{i}-C_{i}\) and \(\widetilde{B}_{i}-B_{i}\) are both less than or equal to \(0\) if \(i\leq k\tau_{k}\), because both of them can only change by \(1\) as \(i\) changes. The last condition of Lemma 3.3 follows from the fact that \(p_{\pm}([-1,1])\subset[\epsilon,1-\epsilon]\) for some \(\epsilon>0\).
The coupling of Lemma 3.3 satisfies all the conditions of Lemma 2.1, thus we can calculate how much time is needed to reach \(\tau_{k}\). Without loss of generality let \(U_{\tau_{k}}<\widetilde{U}_{\tau_{k}}\). At each time \(t\geq\tau_{k}\), rematch the vertices of \(A_{tk}\) to \(\widetilde{A}_{tk}\) as far as possible, and do the same for \(D_{tk}\) and \(\widetilde{D}_{tk}\). Match \(\widetilde{B}_{tk}\) to \(B_{tk}\), and \(\widetilde{C}_{tk}\) to \(C_{tk}\) as well. Match the remaining vertices of \(C_{tk}\) to those of \(\widetilde{A}_{tk}\), and those of \(B_{tk}\) to \(\widetilde{D}_{tk}\).
Figure 1. All vertices except \(\mathcal{N}_{i}\) (which are not allowed to be updated at the moment) are divided into four categories. Some vertices from \(B_{i}\cap\widetilde{A}_{i}\) and \(C_{i}\cap\widetilde{D}_{i}\) can also be included in \(\mathcal{N}_{i}\).
All vertices of \(X_{t}\) and \(\widetilde{X}_{t}\) are now matched one to one. Update \(k\) vertices arbitrarily in \(X_{t}\) and the corresponding vertices in \(\widetilde{X}_{t}\) under the monotone coupling. Repeat the rematching process according to their current spins and updates until \(U_{t}=\widetilde{U}_{t}\). This coupling has two key properties: first, the magnetizations remain coupled during the process, and second, \(R_{t}\) is a non-negative and non-increasing sequence.
**Lemma 3.4**.: _Suppose \(k=o(n^{1/3})\). Define \(\tau_{\mathrm{match}}=\min\{t\geq\tau_{k}:U_{t}=\widetilde{U}_{t}\}\). Then we have_
\[\mathbb{P}_{X_{\tau_{k}},\widetilde{X}_{\tau_{k}}}\Big{\{}\tau_{\mathrm{match} }>\tau_{k}+\frac{\gamma n}{k}\Big{\}}=O(k^{3/2}n^{-1/2}). \tag{21}\]
Proof.: Suppose we apply the original (single-site) Glauber dynamics to \(X_{\tau_{k}}\) and \(\widetilde{X}_{\tau_{k}}\), and call the resulting chains \(X_{t}^{\prime}\) and \(\widetilde{X}_{t}^{\prime}\). Then \(\{R(X_{i}^{\prime},\widetilde{X}_{i}^{\prime})\}_{i\geq k\tau_{k}}\) becomes a supermartingale. Lemma 2.1 ensures that the stopping time \(\tau_{R}^{\prime}:=\min\{i\geq k\tau_{k}:R(X_{i}^{\prime},\widetilde{X}_{i}^{\prime})=0\}\) satisfies \(\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{R}^{\prime}>k\tau_{k}+A|R_{k\tau_{k}}]\lesssim\frac{k}{\sqrt{A}}\) for any large enough \(A\). Now compare this result with the systematic scan dynamics case. Define \(\tau_{R}:=\min\{i\geq k\tau_{k}:R_{i}=R(X_{i},\widetilde{X}_{i})=0\}\). In a similar fashion to Lemma 2.7, grouping the single-site updates from \(i=k\tau_{k}\) to \(i=k\tau_{k}+A\) into blocks of \(k\) generates \(A/k\) systematic scan updates, and the probability that no block updates the same vertex more than once is at least \((e^{-k^{2}/n})^{A/k}=e^{-Ak/n}\). Hence
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{R}\leq k\tau_{k}+A|R _{k\tau_{k}}] \geq\mathbb{P}[\tau_{R}^{\prime}=\tau_{R}]\mathbb{P}_{\sigma, \widetilde{\sigma}}[\tau_{R}^{\prime}\leq k\tau_{k}+A|R_{k\tau_{k}}]\] \[\geq e^{-Ak/n}\left(1-O\left(\frac{k}{\sqrt{A}}\right)\right).\]
Pick \(A=\sqrt{nk}\); then the rightmost term becomes \(1-O(k^{3/2}n^{-1/2})\), which goes to \(1\). For the original dynamics, after \(i=\tau_{R}^{\prime}\), \(R(X_{i}^{\prime},\widetilde{X}_{i}^{\prime})\) remains zero. Hence modifying the argument for the smallest multiple of \(k\) greater than \(\tau_{R}^{\prime}\) gives the same result on \(\tau_{\mathrm{match}}\). The proof is finished by \(\mathbb{P}[\tau_{\mathrm{match}}\geq\tau_{k}+\sqrt{n/k}]\geq\mathbb{P}[\tau_{ \mathrm{match}}>\tau_{k}+n/k]\geq\mathbb{P}[\tau_{\mathrm{match}}>\tau_{k}+ \gamma n/k]\).
We must introduce one more lemma to finish the two-coordinate chain phase. If two arbitrary chains starting from \((\sigma,\widetilde{\sigma})\) reach \(\tau_{\mathrm{mag}}\), then not only do we have \(S_{\tau_{\mathrm{mag}}}=\widetilde{S}_{\tau_{\mathrm{mag}}}\), but we also have a bound on \(|U_{\tau_{\mathrm{mag}}}-\widetilde{U}_{\tau_{\mathrm{mag}}}|\).
**Lemma 3.5**.: _For arbitrary \(\sigma,\widetilde{\sigma}\in\Omega\), consider chains \((X_{t},\widetilde{X}_{t})\) with \(X_{0}=\sigma\) and \(\widetilde{X}_{0}=\widetilde{\sigma}\), and suppose the magnetization matching phase is finished according to Theorem 3.2; i.e. \(S_{\tau_{\mathrm{mag}}}=\widetilde{S}_{\tau_{\mathrm{mag}}}\). At that moment,_
\[\mathbb{E}_{\sigma,\widetilde{\sigma}}\Big{[}|U_{\tau_{\mathrm{mag}}}- \widetilde{U}_{\tau_{\mathrm{mag}}}|\Big{]}=O(\sqrt{n})\]
_holds for \(\sigma_{0}=\sigma\)._
Proof.: Let \(u_{0}\) be the number of \((+1)\) spins from \(\sigma_{0}\).
\[u_{0}=\frac{n(1+S(\sigma_{0}))}{2}.\]
Then we can observe that
\[U_{t}=M_{t}(A_{0})+\frac{u_{0}}{2}\qquad\text{ and }\qquad\widetilde{U}_{t}= \widetilde{M}_{t}(A_{0})+\frac{u_{0}}{2},\]
where \(A_{0}=\{v:\sigma_{0}(v)=1\}\) (\(M_{t}\) is defined in Lemma 2.4). The difference of the two satisfies
\[|U_{t}-\widetilde{U}_{t}|=|M_{t}(A_{0})-\widetilde{M}_{t}(A_{0})|\leq|M_{t}(A_ {0})|+|\widetilde{M}_{t}(A_{0})|.\]
Take expectation for \(\sigma\) and \(\widetilde{\sigma}\). By Lemma 2.4, we get the result.
**Theorem 3.3**.: _(two-coordinate chain phase) Suppose \(k=o(\sqrt[3]{n})\). Consider \((X_{t},\widetilde{X}_{t})\) which starts from \((\sigma,\widetilde{\sigma})\). Evolve the chain as in Theorem 3.2 and suppose \(\tau_{\rm mag}\leq t(\gamma)\) for some \(\gamma\). Then their two-coordinate chains \(U_{t}\) and \(\widetilde{U}_{t}\) match after additional \(2\gamma n/k\) time with high probability._
Proof.: Recall the definition
\[\Xi_{1}:=\Big{\{}\sigma:\min\{U(\sigma),u_{0}-U(\sigma),V(\sigma),v_{0}-V( \sigma)\}\geq\frac{n}{16}\Big{\}}\]
and let \(H(t):=\{(X_{t},\widetilde{X}_{t})\in\Xi_{1}\times\Xi_{1}\}\). The coupling that we suggested keeps \((X_{t},\widetilde{X}_{t})\) in \(\Xi_{1}\times\Xi_{1}\) with high probability. Define
\[D:=\bigcup_{t\in[t(\gamma),t(3\gamma)]}\{|M_{t}(A_{0})|\geq n/32\}\qquad\text{ and}\qquad\delta:=\sum_{t\in[t(\gamma),t(3\gamma)]}\mathbb{1}_{\{|M_{t}(A_{0})|>n/64\}}.\]
Since \(|M_{t+1}(A_{0})-M_{t}(A_{0})|\leq k\), if \(|M_{t_{0}}(A_{0})|>n/32\) then \(|M_{t}(A_{0})|>n/64\) for all \(t\) in any interval of length \(\frac{n}{64k}\) containing \(t_{0}\). Therefore \(D\subset\{\delta>n/64k\}\) and
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(D)\leq\mathbb{P}_{\sigma,\widetilde{ \sigma}}(\delta>n/64k)\leq\frac{ck\mathbb{E}_{\sigma,\widetilde{\sigma}}[\delta ]}{n}.\]
Since \(X_{t}\) and \(\widetilde{X}_{t}\) have finished the magnetization matching phase, Lemma 2.4 gives, for \(t\geq t(\gamma)\), \(|\mathbb{E}_{\sigma,\widetilde{\sigma}}[M_{t}(A_{0})]|=O(\sqrt{n})\) and \(\mathrm{Var}[M_{t}(A_{0})]\leq Cn\); hence by Chebyshev's inequality
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(|M_{t}(A_{0})|>n/64)\leq\mathbb{P}_{\sigma,\widetilde{\sigma}}\big{(}|M_{t}(A_{0})-\mathbb{E}[M_{t}(A_{0})]|>n/128\big{)}\leq\frac{128^{2}\,\mathrm{Var}[M_{t}(A_{0})]}{n^{2}}=O(n^{-1}),\]
thus \(\mathbb{E}_{\sigma,\widetilde{\sigma}}[\delta]=O(\gamma)\). Therefore, \(\mathbb{P}_{\sigma,\widetilde{\sigma}}(D)=O(\gamma k/n)\). A similar procedure for \(\widetilde{M}_{t}(A_{0})\) also gives \(\mathbb{P}_{\sigma,\widetilde{\sigma}}(\widetilde{D})=O(\gamma k/n)\). Suppose \(U_{t}\leq n/16\). Then \(u_{0}-U_{t}\geq 3n/16\) as we are assuming \(u_{0}\geq n/4\). This implies
\[|M_{t}(A_{0})|=|U_{t}-(u_{0}-U_{t})|\geq(u_{0}-U_{t})-U_{t}\geq\frac{n}{8}.\]
Similarly we can get the same result from \((u_{0}-U_{t})\leq n/16\). This argument is also applicable for \(V_{t}\) and \(v_{0}-V_{t}\). Therefore,
\[H(t)^{c}\subset\{|M_{t}(A_{0})|\geq n/16\}\cup\{|\widetilde{M}_{t}(A_{0})| \geq n/16\}\]
and finally,
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\Big{(}\bigcup_{t\in[t(\gamma),t(3 \gamma)]}H(t)^{c}\Big{)}\leq\mathbb{P}_{\sigma,\widetilde{\sigma}}(D)+ \mathbb{P}_{\sigma,\widetilde{\sigma}}(\widetilde{D})=O(\gamma k/n).\]
Now consider the event
\[H=\bigcap_{t\in[t(\gamma),t(3\gamma)]}H(t).\]
Combining Lemmas 2.1 and 3.3 gives
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\big{[}\tau_{k}>t(2\gamma)|X_{t(\gamma )},\widetilde{X}_{t(\gamma)}\big{]}\leq\frac{c|R_{t(\gamma)}|}{\sqrt{n\gamma}},\]
taking expectation over \(X_{t(\gamma)}\) and \(\widetilde{X}_{t(\gamma)}\) with Lemma 3.5,
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\big{[}\tau_{k}>t(2\gamma)\big{]}\leq O (\gamma^{-1/2}).\]
On the event \(\tau_{k}\leq t(2\gamma)\), applying the last coupling with Lemma 3.4 yields
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\big{[}\tau_{\rm match}>\tau_{k}+\gamma n /k\big{]}\leq O(\gamma^{-1}).\]
To sum up,
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\big{[}\tau_{\rm match}>t(3\gamma)\big{]}\leq O(\gamma^{-1/2}).\]
Proof.: (of Theorem 3.1)
From Lemma 3.1 and Lemma 3.2, we have
\[d(\theta_{0}n/k+t) \leq\max_{\sigma_{0}\in\Omega_{0}}\left\|\mathbb{P}_{\sigma_{0}}(X_ {t}\in\cdot)-\mu\right\|_{TV}+O(n^{-1})\] \[=\max_{(u,v)\in\Lambda_{0}}\left\|\mathbb{P}_{(u,v)}((U_{t},V_{t} )\in\cdot)-\pi\right\|_{TV}+O(n^{-1}).\]
Start from two chains \((X_{t},\widetilde{X}_{t})\). Define an event \(G:=\{\tau_{\text{mag}}<t(\gamma)\}\); then thanks to Theorem 3.2,
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(G^{c})=O(\gamma^{-1/2}). \tag{22}\]
From Theorem 3.3, we also have
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(H^{c}|G)=O(\gamma k/n) \tag{23}\]
and
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{\text{match}}>t(3\gamma)|H,G) \leq O(\gamma^{-1/2}). \tag{24}\]
Finally, on the intersection of the events \(G\), \(H\), and \(\{\tau_{\text{match}}<t(3\gamma)\}\), Equations (22), (23), and (24) give, with \(t_{4}:=\theta_{0}n/k+t(3\gamma)\),
\[d(t_{4})\leq\mathbb{P}_{\sigma,\widetilde{\sigma}}[\tau_{\text{match}}>t_{4} ]+O(n^{-1})\leq O(\gamma^{-1/2})+O(\gamma k/n)+O(n^{-1})\to 0.\]
This is possible because we take the limit \(n\to\infty\) first and then \(\gamma\to\infty\).
### Mixing time lower bound for \(\beta<1\)
**Theorem 3.4**.: _Suppose \(k=o(\sqrt{n})\). Then_
\[\lim_{\gamma\to\infty}\liminf_{n\to\infty}d_{n}\Big{(}\frac{n\log n}{2k(1- \beta)}-\frac{\gamma n}{k}\Big{)}=1.\]
Proof.: It is enough to display a suitable lower bound on the distance between the law of \(S_{t}\) and the stationary distribution, as \((S_{t})\) is a projection of \((X_{t})\). To be more precise, we consider an intermediate sequence \(\{Y_{i}\}\) of \(\{X_{t}\}\) and then give a lower bound for \(S(Y_{i})\).
Let \(i^{\prime}:=i\ (\text{mod }k)\) and let \(k^{+},k^{-}\) be the numbers of unavailable vertices with spin \(+1\) and \(-1\), respectively, at time \(i\). We have \(k^{+}+k^{-}=i^{\prime}\) and \(k^{+},k^{-}\geq 0\). The drift of \(S(Y_{i})\) becomes
\[\mathbb{E}_{s_{0}}[S(Y_{i+1})-S(Y_{i})|S(Y_{i})=s,k^{\pm}]=\frac{1}{n}\left\{ \frac{n(1-s)-2k^{-}}{n-i^{\prime}}p_{+}\left(s+\frac{1}{n}\right)-\frac{n(1+s) -2k^{+}}{n-i^{\prime}}p_{-}\left(s-\frac{1}{n}\right)\right\}.\]
If \(s\geq 0\), then the drift satisfies
\[\mathbb{E}_{s_{0}}[S(Y_{i+1})-S(Y_{i})|S(Y_{i})=s]\geq\frac{1}{2n(n-i^{\prime} )}\left\{-2ns-4k+2n\tanh\beta s-O(1)\right\}.\]
Since
\[\frac{1}{n(n-i^{\prime})}=\frac{1}{n^{2}}+O\left(\frac{k}{n^{3}}\right),\]
combining with the Taylor expansion of \(\tanh\), we have
\[\mathbb{E}_{s_{0}}[S(Y_{i+1})-S(Y_{i})|S(Y_{i})=s]\geq-\frac{(1-\beta)s}{n}- \frac{s^{3}}{2n}-O\left(\frac{k}{n^{2}}\right).\]
From the symmetry of magnetization or direct calculation, we can check that the equation is true for \(s\leq 0\) as well. Hence, when \(|S(Y_{i})|\geq 2/n\),
\[\mathbb{E}_{s_{0}}\left[|S(Y_{i+1})|\big{|}S(Y_{i})\right]\geq\left(1-\frac{1 -\beta}{n}\right)|S(Y_{i})|-\frac{|S(Y_{i})|^{3}}{2n}-O\left(\frac{k}{n^{2}} \right).\]
This is clearly true when \(S(Y_{i})=0\) or \(|S(Y_{i})|=1/n\) as well. Define \(\eta:=1-\frac{1-\beta}{n}\) and \(Z_{i}=|S(Y_{i})|\eta^{-i}\). For large enough \(n\) the inequality becomes
\[\mathbb{E}_{Z_{0}}[Z_{i+1}]\geq Z_{i}-\frac{\eta^{-i}[|S(Y_{i})|^{3}+O(k/n)]}{n}.\]
As \(|S(Y_{i})|\leq 1\),
\[\mathbb{E}_{Z_{0}}[Z_{i}-Z_{i+1}]\leq\frac{\eta^{-i}[|S(Y_{i})|^{2}+O(k/n)]}{n}. \tag{25}\]
Now, we want to estimate \(\mathbb{E}_{s_{0}}[|S(Y_{i})|^{2}]\). First of all,
\[\left(\mathbb{E}_{s_{0}}[S(Y_{i})]\right)^{2}=|\mathbb{E}_{s_{0}}[S(Y_{i-i^{ \prime}})]+\mathbb{E}_{s_{0}}[S(Y_{i})-S(Y_{i-i^{\prime}})]|^{2} \tag{26}\]
The absolute value of the first term of Equation (26) is bounded by \(2|s_{0}|(1-k(1-\beta)/n)^{(i-i^{\prime})/k}\leq 2|s_{0}|\eta^{i-i^{\prime}}\). The second term of Equation (26) satisfies
\[|\mathbb{E}_{s_{0}}[S(Y_{i})-S(Y_{i-i^{\prime}})]|=|\mathbb{E}_{s_{0}}[ \mathbb{E}_{s_{0}}[S(Y_{i})-S(Y_{i-i^{\prime}})|Y_{i-i^{\prime}}]]|\leq \mathbb{E}_{s_{0}}\left[\frac{c_{1}k}{n}|S(Y_{i-i^{\prime}})|+\frac{c_{2}k^{2 }}{n^{2}}\right],\]
which is bounded by \(c_{3}|s_{0}|k\eta^{i-i^{\prime}}/n+O(k^{2}/n^{2})\). For the variance, \(\mathrm{Var}[S(Y_{i})]=\mathrm{Var}[S(Y_{i-i^{\prime}})]+\mathrm{Var}[S(Y_{i} )|S(Y_{i-i^{\prime}})]\leq O(1/n)+O(k/n^{2})=O(1/n)\). Hence
\[\mathbb{E}_{s_{0}}[|S(Y_{i})|^{2}] =\left(\mathbb{E}_{s_{0}}[S(Y_{i})]\right)^{2}+\mathrm{Var}_{s_{0 }}[S(Y_{i})]\leq\left[c_{4}|s_{0}|\eta^{i-i^{\prime}}+O\left(\frac{k^{2}}{n^{2 }}\right)\right]^{2}+O\left(\frac{1}{n}\right)\] \[\leq c_{5}|s_{0}|^{2}\eta^{2(i-k)}+\frac{c_{6}k^{2}|s_{0}|\eta^{i -k}}{n^{2}}+O\left(\frac{k^{4}}{n^{4}}+\frac{1}{n}\right).\]
Taking expectations in Equation (25) gives
\[\mathbb{E}_{Z_{0}}[Z_{i}-Z_{i+1}]\leq\frac{c_{5}|s_{0}|^{2}\eta^{i-2k}}{n}+ \frac{c_{6}k^{2}|s_{0}|\eta^{-k}}{n^{3}}+\eta^{-i}O\left(\frac{k}{n^{2}}\right). \tag{27}\]
Let \(i^{*}=\frac{n\log n}{2(1-\beta)}-\frac{\gamma n}{1-\beta}\) and sum up Equation (27) over \(i=0,1,...,i^{*}\).
\[s_{0}-\mathbb{E}_{s_{0}}[Z_{i^{*}}]\leq\frac{c_{5}|s_{0}|^{2}\eta^{-2k}}{n(1- \eta)}+\frac{c_{7}k^{2}|s_{0}|\eta^{-k}\log n}{n^{2}}+O\left(\frac{k}{\sqrt{n} }\right).\]
Because \(\eta^{-i^{*}}\leq n^{1/2}\) and \(\eta^{-k}\to 1\) as \(n\to\infty\), the last two terms vanish as \(n\to\infty\). For large \(n\), if we choose \(s_{0}<\frac{1-\beta}{3c_{5}}\) then the right side of the equation is less than \(s_{0}/2\). If so,
\[\mathbb{E}_{s_{0}}[|S(Y_{i^{*}})|]\geq\frac{s_{0}\eta^{i^{*}}}{2}\geq\frac{s_{0 }e^{\gamma}}{2\sqrt{n}}=:A.\]
As we discussed before, since \(\mathrm{Var}_{s_{0}}[S(Y_{i^{*}})]=O(n^{-1})\),
\[\mathbb{P}_{s_{0}}(|S(Y_{i^{*}})|<A/2)\leq\mathbb{P}_{s_{0}}(|S(Y_{i^{*}})- \mathbb{E}_{s_{0}}S(Y_{i^{*}})|>A/2)\leq\frac{4\mathrm{Var}_{s_{0}}[S(Y_{i^{* }})]}{A^{2}}\leq O(e^{-2\gamma}s_{0}^{-2}).\]
On the other hand, \(\mathbb{E}_{\pi}[S]=0\) and \(\mathrm{Var}_{\pi}[S]=O(n^{-1})\). Therefore
\[\mathbb{P}_{\pi}(|S|>A/2)\leq\frac{4\mathrm{Var}_{\pi}[S]}{A^{2}}=O(e^{-2 \gamma}s_{0}^{-2}).\]
Finally, let \(\mathcal{A}:=[-A/2,A/2]\). Then
\[\|\mathbb{P}_{s_{0}}(S_{i^{*}/k}\in\cdot)-\pi\|_{\mathrm{TV}}\geq\pi( \mathcal{A})-\mathbb{P}_{s_{0}}(|S_{i^{*}/k}|\leq A/2)\geq 1-O(e^{-2\gamma}s_{0}^{-2}).\]
Taking \(\gamma\to\infty\) ensures the total variation distance goes to \(1\).
**Remark**.: _Theorem 3.4 applies only when \(k=o(\sqrt{n})\). Without this condition we can derive another lower bound under \(k=o(n)\), using a generalized Wilson's lemma from [11]:_
\[\lim_{\gamma\to\infty}\liminf_{n\to\infty}d_{n}\Big{(}\frac{n\log n}{2k(1-\beta) }-\frac{n\log k}{2k(1-\beta)}-\frac{\gamma n}{k}\Big{)}=1.\]
_However, the condition \(\log k=o(\log n)\) is required for cutoff when this bound is combined with the upper bound result._
## 4. Main results: critical temperature regime
In this section we show that the mixing time under the randomized systematic scan dynamics is of order \(n^{3/2}/k\) when \(\beta=1\). Assume \(\beta=1\) throughout this section.
### Mixing time upper bound for \(\beta=1\)
**Theorem 4.1**.: _Suppose \(\beta=1\). If \(k=o(\sqrt[4]{n})\) then \(t_{mix}=O(n^{3/2}/k)\)._
Proof.: The proof consists of two steps. With some positive probability, for any given two chains, their magnetization values eventually agree in time of order \(n^{3/2}/k\). After the two magnetizations coalesce, the two chains can be matched using the same matching method as in the case \(\beta<1\). The Markov inequality and Lemma 2.5 ensure the exact match can be done in \(O(n\log n/k)\) time for the second step. It is enough to show that the magnetizations of two arbitrary chains can be matched in time of order \(n^{3/2}/k\). Recall Lemma 2.6; for any chain \(X_{t}\) we have
\[\Big{|}\mathbb{E}_{\sigma}[S_{1}]-\left(1-\frac{k}{n}\right)S_{0}-\frac{k}{n} \tanh\beta S_{0}\Big{|}\leq\frac{2k}{n}\tanh\beta\frac{2k}{n}\leq\frac{4k^{2} }{n^{2}}.\]
Define \(\tau_{0}=\min\{t\geq 0:|S_{t}|\leq 2\sqrt[3]{k/n}\}\). For any \(t<\tau_{0}\), as \(|S_{t+1}-S_{t}|\leq 2k/n<2\sqrt[3]{k/n}\), the signs of \(S_{t}\) and \(S_{t+1}\) are the same. In this case, we can apply the absolute value to the magnetization:
\[\mathbb{E}_{\sigma}\big{[}|S_{t+1}|\big{|}S_{t}\big{]}\leq\big{(}1-\frac{k}{n }\big{)}|S_{t}|+\frac{k}{n}\tanh|S_{t}|+\frac{4k^{2}}{n^{2}}.\]
Let \(\xi_{t}=\mathbb{E}\big{[}|S_{t}|\mathbb{1}\{\tau_{0}>t\}\big{]}\). Then the above becomes
\[\xi_{t+1}\leq\Big{(}1-\frac{k}{n}\Big{)}\xi_{t}+\frac{k}{n}\tanh\xi_{t}+\frac {4k^{2}}{n^{2}}.\]
Thus if \(\xi_{t}>\epsilon\), there exists \(c_{\epsilon}>0\) which satisfies
\[\xi_{t+1}-\xi_{t}\leq-\frac{kc_{\epsilon}}{n},\]
provided \(k=o(n)\). Therefore after \(t_{*}=O(n/k)\) steps we get \(\xi_{t}\leq 1/4\) for all \(t\geq t_{*}\).
The Taylor series of \(\tanh\) gives
\[\xi_{t+1}\leq\xi_{t}-\frac{k\xi_{t}^{3}}{4n}+\frac{4k^{2}}{n^{2}}\]
for \(t\geq t_{*}\). Therefore, for large enough \(n\), \(\xi_{t}\) is a decreasing sequence; it is enough to consider such \(n\). Consider a decreasing sequence \(b_{i}=(1/4)2^{-i}\) and define \(u_{i}:=\min\{t>t_{*}:\xi_{t}\leq b_{i}\}\). We can check that \(b_{i+1}<\xi_{t}\leq b_{i}\) when \(u_{i}\leq t<u_{i+1}\). Now, for any \(t\in(u_{i},u_{i+1}]\),
\[\xi_{t+1}\leq\xi_{t}-\frac{kb_{i}^{3}}{32n}+O\Big{(}\frac{k^{2}}{n^{2}}\Big{)}\]
holds, and summing up for all \(t\in(u_{i},u_{i+1}]\) gives
\[u_{i+1}-u_{i}\leq\frac{16n}{kb_{i}^{2}}[1+O(b_{i}^{-3}n^{-1}k)].\]
Now, let \(i_{0}=\min\{i:b_{i}\leq n^{-1/4}\}\). For any \(i<i_{0}\), we have \(O(b_{i}^{-3}n^{-1}k)=o(1)\) with the assumption \(k=o(\sqrt[4]{n})\). Therefore, for large \(n\) and \(0\leq i<i_{0}\),
\[u_{i+1}-u_{i}\leq\frac{32n}{kb_{i}^{2}}. \tag{28}\]
Summing Equation (28) over \(i=0,...,i_{0}-1\) gives
\[u_{i_{0}}-u_{0}\leq\sum_{i=0}^{i_{0}-1}\frac{32n}{kb_{i}^{2}}\leq\frac{cn}{kb_{ i_{0}-1}^{2}}=O(n^{3/2}k^{-1}).\]
Since \(u_{0}=t_{*}=O(n/k)\), finally we derive
\[u_{i_{0}}\leq O\Big{(}\frac{n^{3/2}}{k}\Big{)}+O\Big{(}\frac{n}{k}\Big{)}=O \Big{(}\frac{n^{3/2}}{k}\Big{)}.\]
Now let \(r_{n}=c_{1}n^{3/2}k^{-1}\); then
\[\mathbb{E}_{\sigma}\big{[}|S_{r_{n}}|\mathbb{1}\{\tau_{0}>r_{n}\}\big{]}=O(n^ {-1/4}). \tag{29}\]
Appealing to Lemma 2.1 on \(|S_{t}|\) gives
\[\mathbb{P}_{\sigma}\Big{(}\tau_{0}>r_{n}+\frac{\gamma n^{3/2}}{k}\Big{|}X_{r_ {n}}\Big{)}\leq\frac{C|S_{r_{n}}|}{\sqrt{\frac{k}{n^{2}}}\sqrt{\frac{\gamma n ^{3/2}}{k}}}=\frac{Cn^{1/4}|S_{r_{n}}|}{\sqrt{\gamma}} \tag{30}\]
Therefore, multiplying Equation (30) by \(\mathbb{1}\{\tau_{0}>r_{n}\}\) and taking expectations, together with Equation (29), gives
\[\mathbb{P}_{\sigma}\Big{(}\tau_{0}>r_{n}+\frac{\gamma n^{3/2}}{k}\Big{)}=O( \gamma^{-1/2}).\]
Since \(r_{n}=O(n^{3/2}k^{-1})\), we can reach \(\tau_{0}\) in \(O(n^{3/2}k^{-1})\) time with high probability.
Now consider two chains \(\{X_{t}\}\) and \(\{\widetilde{X}_{t}\}\). Without loss of generality assume that \(\tau_{0}\) of \(\{X_{t}\}\) is greater than that of \(\{\widetilde{X}_{t}\}\). Define \(\tau_{\text{cross}}\), the first time the magnetizations of the intermediates come within \(2/n\) of each other, as in Lemma 2.7. We know that
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\left(\tau_{0}>\frac{\gamma_{1}n^{3/2} }{k}\right)\leq O(\gamma_{1}^{-1/2}).\]
If \(\tau_{\text{cross}}<k\tau_{0}\), then by Lemma 2.7 we can couple the magnetizations with a positive probability. This proves that \(\mathbb{P}[\tau_{\text{mag}}\leq\tau_{0}]\) is uniformly bounded away from \(0\). On the other hand, if \(\tau_{\text{cross}}>k\tau_{0}\), then both \(S_{\tau_{0}}\) and \(\widetilde{S}_{\tau_{0}}\) are in \([-2\sqrt[3]{k/n},2\sqrt[3]{k/n}]\) with high probability. Define
\[Z_{t}:=S_{t}-\widetilde{S}_{t}+\frac{4k}{n}\]
and run the two chains \(X_{t}\) and \(\widetilde{X}_{t}\) independently from \(\tau_{0}\). Set a stopping time \(\tau_{1}=\min\{t\geq\tau_{0}:Z_{t}\leq\frac{4k}{n}\}\). From the assumption \(\tau_{\text{cross}}>k\tau_{0}\), the two magnetizations have not crossed by time \(\tau_{0}\); assuming without loss of generality that \(S_{\tau_{0}}>\widetilde{S}_{\tau_{0}}\), we have \(Z_{\tau_{0}}>\frac{4k}{n}\). Applying Lemma 2.1 to \(Z_{t}\) gives
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\left[\tau_{1}>\frac{n^{3/2}\gamma}{k}+ \tau_{0}\big{|}Z_{\tau_{0}}\right]\lesssim\frac{Z_{\tau_{0}}}{\sqrt{\frac{k}{n^ {2}}}\sqrt{\frac{n^{3/2}\gamma}{k}}}\lesssim\frac{k^{1/3}}{n^{1/12}\gamma^{1/ 2}}\lesssim\gamma^{-1/2}.\]
However, \(\tau_{\text{cross}}\leq k\tau_{1}\) by definition, so we can capture the time \(\tau_{\text{cross}}\) and follow the coupling from Lemma 2.7. To sum up, \(\mathbb{P}[\tau_{\text{mag}}>\frac{\gamma n^{3/2}}{k}]<c<1\). This is enough to show \(t_{\text{mix}}=O(n^{3/2}/k)\).
### Mixing time lower bound for \(\beta=1\)
**Theorem 4.2**.: _There exists a constant \(c>0\) such that \(t_{mix}\geq cn^{3/2}/k\)._
Proof.: Since the magnetization chain \(S_{t}\) is a projection of the original chain, it is enough to prove the lower bound for the mixing time of \(S_{t}\). If we denote by \(S\) a magnetization in equilibrium, then the sequence \(n^{1/4}S\) converges to a non-trivial limit law as \(n\to\infty\), according to [13] and [14] (see Theorem V.9.5). Therefore, pick \(A>0\) such that
\[\mu\Big{(}|S|\leq An^{-1/4}\Big{)}\geq 3/4,\]
and also let \(s_{0}:=2An^{-1/4}\). Now define \(\widetilde{S}_{t}\) as a chain with the same transition probabilities as \(S_{t}\), except at \(s=s_{0}\).
Define \(Z_{t}=\widetilde{S}_{0}-\widetilde{S}_{t\wedge\tau}\), where \(\tau:=\min\{t\geq 0:\widetilde{S}_{t}\leq An^{-1/4}\}\). When \(An^{-1/4}<\widetilde{S}_{t}=s^{\prime}<s_{0}\), the conditional distribution of \(\widetilde{S}_{t+1}\) is the same as that of \(S_{t+1}\) given \(S_{t}=s^{\prime}\). Therefore,
\[\mathbb{E}_{s_{0}}\Big{[}\widetilde{S}_{t+1}|\widetilde{S}_{t}=s\Big{]}= \mathbb{E}_{s_{0}}\Big{[}S_{t+1}|S_{t}=s\Big{]}\geq s-\frac{c_{0}ks^{3}}{n}\]
holds for some constant \(c_{0}\). From this, in terms of \(Z_{t}\),
\[\mathbb{E}[Z_{t+1}|\mathcal{F}_{t}]\leq Z_{t}+\frac{c_{0}k}{n}S_{t}^{3}.\]
Now, consider the second moment of \(Z_{t+1}\);
\[\mathbb{E}_{s_{0}}\Big{[}Z_{t+1}^{2}|\mathcal{F}_{t}\Big{]}=\mathrm{Var}(Z_{t+ 1}|\mathcal{F}_{t})+\Big{(}\mathbb{E}_{s_{0}}[Z_{t+1}|\mathcal{F}_{t}]\Big{)} ^{2}. \tag{31}\]
Lemma 2.2 ensures
\[\mathrm{Var}(Z_{t+1}|\mathcal{F}_{t})\leq O\Big{(}\frac{k}{n^{2}}\Big{)}.\]
For the last term of Equation (31), when \(t<\tau\), there is another constant \(c_{1}=c_{1}(A)\) such that
\[\mathbb{E}_{s_{0}}^{2}[Z_{t+1}|\mathcal{F}_{t}]\leq Z_{t}^{2}+2\frac{c_{0}k}{ n}Z_{t}\widetilde{S}_{t}^{3}+\frac{c_{0}^{2}k^{2}\widetilde{S}_{t}^{6}}{n^{2}} \leq Z_{t}^{2}+\frac{c_{1}k}{n^{2}}.\]
Combining these two bounds, we obtain
\[\mathbb{E}_{s_{0}}\Big{[}Z_{t+1}^{2}-Z_{t}^{2}|\mathcal{F}_{t}\Big{]}\leq \frac{c_{A}k}{n^{2}}.\]
This means \(\mathbb{E}_{s_{0}}[Z_{t}^{2}]\leq\frac{c_{A}kt}{n^{2}}\), which leads to
\[\frac{c_{A}kt}{n^{2}}\geq\mathbb{E}_{s_{0}}[Z_{t}^{2}]\geq\mathbb{E}_{s_{0}}[Z_{t}^{2}\mathbbm{1}_{\{\tau\leq t\}}]\geq\frac{A^{2}}{\sqrt{n}}\mathbb{P}_{s_{0}}(\tau\leq t).\]
Take \(t=(A^{2}/4c_{A})n^{3/2}/k\). Then
\[\mathbb{P}_{s_{0}}\Big{(}S_{t}\leq An^{-1/4}\Big{)}\leq\frac{1}{4},\]
which proves \(d(cn^{3/2}/k)\geq 1/2\).
## 5. Main results: low temperature regime
We assume \(\beta>1\) throughout this section. It is well known that the mixing time in the low temperature regime is exponential, as shown by the Cheeger constant bound in [11]. To address this, [10] suggested the restricted Glauber dynamics, which is defined only on \(\Omega^{+}:=\{\sigma\in\Omega:S(\sigma)\geq 0\}\). This can also be generalized to the randomized scan dynamics. The _restricted randomized scan dynamics_ for low temperature is the following update scheme: for any given \(\sigma\in\Omega^{+}\), generate a candidate \(\sigma^{\prime}\) by \(k\) site updates from \(\sigma\) with the usual randomized systematic scan dynamics. If \(S(\sigma^{\prime})\geq 0\) we accept \(\sigma^{\prime}\) as the next state; if \(S(\sigma^{\prime})<0\) then we accept \(-\sigma^{\prime}\), the state whose spins are all reversed from \(\sigma^{\prime}\), as the next state. Analyzing the corresponding magnetization chain plays an important role, so we denote \(S_{t}^{+}:=S(X_{t})\), with the plus sign emphasizing that we are working under the restricted dynamics.
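As an illustration (not part of any proof), one step of the restricted randomized scan dynamics can be sketched in Python as below. The inner \(k\)-site update is written here as \(k\) single-site heat-bath (Glauber) updates of the Curie-Weiss model at uniformly chosen sites, which stands in for the randomized systematic scan defined earlier in the paper; the parameter values are arbitrary.

```python
import numpy as np

def restricted_scan_step(sigma, beta, k, rng):
    """One step of the restricted randomized scan dynamics on Omega^+.

    The candidate is produced by k single-site heat-bath updates of the
    Curie-Weiss model (a stand-in for the randomized systematic scan);
    if the candidate magnetization is negative, all spins are flipped.
    """
    n = len(sigma)
    cand = sigma.copy()
    for i in rng.choice(n, size=k, replace=False):
        m_rest = (cand.sum() - cand[i]) / n          # magnetization of the other spins
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * m_rest))
        cand[i] = 1 if rng.random() < p_plus else -1
    if cand.mean() < 0:                              # restriction to Omega^+
        cand = -cand
    return cand

rng = np.random.default_rng(0)
n, beta, k = 200, 1.5, 10
sigma = rng.choice([-1, 1], size=n)
if sigma.mean() < 0:
    sigma = -sigma
for _ in range(2000):
    sigma = restricted_scan_step(sigma, beta, k, rng)
print("empirical magnetization:", sigma.mean())      # concentrates near s* (roughly 0.86 for beta = 1.5)
```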
### Mixing time Upper bound for \(\beta>1\)
**Theorem 5.1**.: _Under the restricted Glauber dynamics, \(t_{\text{mix}}\leq\frac{c_{1}n\log n}{k}\) for some constant \(c_{1}=c_{1}(\beta)>0\)._
**Lemma 5.1**.: _Suppose \(k=o(\sqrt{n})\). Define \(s^{*}\) to be the unique positive solution of \(\tanh(\beta s)=s\), and for constant \(\alpha\) let_
\[\tau^{*}=\tau^{*}(\alpha):=\inf\Big{\{}t\geq 0:S_{t}^{+}\leq s^{*}+\frac{ \alpha}{\sqrt{n}}\Big{\}}.\]
_Then, for some suitable \(c=c(\alpha,\beta)>0\),_
\[\lim_{n\to\infty}\mathbb{P}_{\sigma}(\tau^{*}>cn\log n/k)=0.\]
Proof.: We start from Theorem 2.6. When \(S_{t}^{+}>2k/n\),
\[\mathbb{E}_{\sigma}[S_{t+1}^{+}-S_{t}^{+}|S_{t}^{+}=s]\leq\frac{k}{n}(\tanh(\beta s)-s)+\frac{4k^{2}}{n^{2}}.\]
Define \(\gamma^{*}:=\beta\cosh^{-2}(\beta s^{*})\). By the mean value theorem, for \(y>0\),
\[\tanh(\beta(s^{*}+y))-\tanh(\beta s^{*})=\frac{\beta}{\cosh^{2}(\beta(s^{*}+\tilde{y}))}\,y\leq\gamma^{*}y.\]
Therefore for \(y\geq 0\), \(\tanh(\beta(s^{*}+y))\leq s^{*}+\gamma^{*}y\). In conclusion, provided that \(S_{t}^{+}>(2k/n\lor s^{*})\),
\[\mathbb{E}_{\sigma}[S_{t+1}^{+}-S_{t}^{+}|S_{t}^{+}=s]\leq-\frac{k(1-\gamma^{ *})}{n}(s-s^{*})+\frac{4k^{2}}{n^{2}}.\]
Define
\[Y_{t}:=\Big{[}1-\frac{k(1-\gamma^{*})}{n}\Big{]}^{-t}\Big{(}S_{t}^{+}-s^{*}- \frac{4k}{n(1-\gamma^{*})}\Big{)}.\]
Take \(n\) large enough that \((2k/n\lor s^{*})=s^{*}\); then \(Y_{t}\) becomes a non-negative supermartingale for \(t<\tau^{*}\), since \(s^{*}\) only depends on \(\beta\). From the optional stopping lemma,
\[1\geq\mathbb{E}_{\sigma}[Y_{\tau^{*}\wedge t}]\geq\Big{[}1-\frac{k(1-\gamma^{ *})}{n}\Big{]}^{-t}\Big{(}\frac{\alpha}{\sqrt{n}}-\frac{4k}{n(1-\gamma^{*})} \Big{)}\mathbb{P}_{\sigma}(\tau^{*}>t).\]
Hence
\[\mathbb{P}_{\sigma}(\tau^{*}>t)\leq\Big{(}\frac{\alpha}{\sqrt{n}}-\frac{4k}{ n(1-\gamma^{*})}\Big{)}^{-1}\Big{[}1-\frac{k(1-\gamma^{*})}{n}\Big{]}^{t}.\]
Plugging in \(t=cn\log n/k\) finishes the proof, as \(k/n=o(1/\sqrt{n})\)
**Lemma 5.2**.: _Define \(s^{*}\) in the same way as Lemma 5.1. For any \(\alpha>0\), define_
\[\tau_{*}=\tau_{*}(\alpha):=\min\Big{\{}t\geq 0:S_{t}^{+}\geq s^{*}+\frac{\alpha}{ \sqrt{n}}\Big{\}}.\]
_If \(k=o(\sqrt{n})\), then \(\mathbb{E}_{0}[\tau_{*}]=O(n\log n/k)\)._
The proof of Lemma 5.2 is given in the appendix. With Lemma 5.1 and Lemma 5.2, the upper bound can be proven.
Proof.: (Theorem 5.1) First of all, for any two configurations \(\sigma\) and \(\widetilde{\sigma}\) in \(\Omega^{+}\) which satisfy \(S(\sigma)=S(\widetilde{\sigma})\), we can show that two chains started from these configurations can be exactly matched with high probability in \(O(n\log n/k)\) time. This can be done by applying Lemma 2.5 in a slightly different way. From this fact, it is enough to show that there exists \(C>0\) such that for any two states \(\sigma\) and \(\widetilde{\sigma}\),
\[\lim_{C\to\infty}\limsup_{n\to\infty}\mathbb{P}_{\sigma,\widetilde{\sigma}}( \tau_{\text{mag}}>Cn\log n/k)<1, \tag{32}\]
where \(\tau_{\text{mag}}\) is the first time \(t\) with \(S_{t}^{+}=\widetilde{S}_{t}^{+}\).
Although the monotone coupling for \(X_{t}\) we have discussed does not make sense under the restricted dynamics, there is still a coupling for two magnetization chains \((S_{t}^{+},\widetilde{S}_{t}^{+})\) which preserves monotonicity between magnetizations; i.e. \((S_{t}^{+}-\widetilde{S}_{t}^{+})(S_{t+1}^{+}-\widetilde{S}_{t+1}^{+})\geq 0\). Due to this monotonicity, it is enough to consider two starting points \(0\) and \(1\)(odd \(n\) case can be also done in an analogous way). Consider two chains, denoted as \(S_{T}^{+}\) and \(S_{B}^{+}\), whose starting positions are \(1\) and \(0\) respectively. Let \(\mu^{+}\) be the stationary distribution of the restricted magnetization chain and \(S_{\mu}^{+}\) be a stationary copy of the restricted magnetization, i.e. whose initial distribution is \(\mu^{+}\). Run three chains \(S_{T}^{+},S_{B}^{+}\), and \(S_{\mu}^{+}\) independently at the beginning. For some constants \(0<c_{1}\leq c_{2}\), define stopping times as
\[\tau_{1} :=\min\Big{\{}t\geq 0:S_{T,t}^{+}\leq s^{*}+c_{1}n^{-1/2}\Big{\}},\] \[\tau_{2} :=\min\Big{\{}t\geq 0:S_{B,t}^{+}\geq s^{*}+c_{2}n^{-1/2}\Big{\}}.\]
Assume \(\tau_{1}<\tau_{2}\). If \(S_{\mu,\tau_{1}}^{+}\geq s^{*}+c_{1}n^{-1/2}\), for \(t\geq\tau_{1}\) we couple \(S_{\mu}^{+}\) chain and \(S_{T}^{+}\) chain, while \(S_{B}^{+}\) runs independently. On the event \(S_{\mu,\tau_{1}}^{+}<s^{*}+c_{1}n^{-1/2}\), continue running all three chains independently. After reaching the time \(\tau_{2}\), if \(S_{\mu,\tau_{2}}^{+}\leq s^{*}+c_{2}n^{-1/2}\) couple all three chains monotonically. If \(S_{\mu,\tau_{2}}^{+}>s^{*}+c_{2}n^{-1/2}\), continue running all three chains independently. The other case, \(\tau_{1}\geq\tau_{2}\) can be set similarly.
For another constant \(c_{3}>0\), define events \(H_{1},H_{2}\) as
\[H_{1} :=\{\tau_{1}\leq c_{3}n\log n/k\}\cap\Big{\{}S_{\mu,\tau_{1}} \geq s^{*}+c_{1}n^{-1/2}\Big{\}},\] \[H_{2} :=\{\tau_{2}\leq c_{3}n\log n/k\}\cap\Big{\{}S_{\mu,\tau_{2}} \leq s^{*}+c_{2}n^{-1/2}\Big{\}}.\]
Then we have
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(H_{1}^{c}) \leq\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{1}>c_{3}n\log n /k)+\mu^{+}(0,s^{*}+c_{1}n^{-1/2})\] \[\mathbb{P}_{\sigma,\widetilde{\sigma}}(H_{2}^{c}) \leq\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{2}>c_{3}n\log n /k)+\mu^{+}(s^{*}+c_{2}n^{-1/2},1).\]
Observe that on the event \(H_{1}\cap H_{2}\) the chains \(S_{T}\) and \(S_{B}\) cross over each other by \(c_{3}n\log n/k\), therefore
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}(H_{1}\cap H_{2})\geq 1-\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{1}>c_{3}n\log n/k)-\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{2}>c_{3}n\log n/k)\] \[\qquad\qquad\qquad-\mu^{+}((s^{*}+c_{1}n^{-1/2},s^{*}+c_{2}n^{-1/2})^{c}).\]
\(\mu^{+}\) satisfies a central limit theorem([1]), therefore the last term \(\mu^{+}((s^{*}+c_{1}n^{-1/2},s^{*}+c_{2}n^{-1/2})^{c})\) is uniformly bounded away from \(1\). Further, from the previous two Lemmas 5.1 and 5.2, we have
\[\lim_{n\to\infty}\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{1}>c_{3}n\log n/ k)=\lim_{n\to\infty}\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{2}>c_{3}n\log n/ k)=0.\]
Therefore \(\mathbb{P}_{\sigma,\widetilde{\sigma}}(H_{1}\cap H_{2})\) is bounded away from \(0\), and this means \(S_{T}^{+}\) and \(S_{B}^{+}\) cross by the time \(c_{3}n\log n/k\). As a final step, define
\[\tau_{\mathrm{loc}}=\min\{t\geq 1:(S_{T,t-1}^{+T}-S_{B,t-1}^{+T})(S_{T,t}^{+T}-S _{B,t}^{+T})\leq 0\}\]
then \(\mathbb{P}_{\sigma,\widetilde{\sigma}}(\tau_{\mathrm{loc}}<c_{3}\log n)>\epsilon\) uniformly on \(n\). From \(k^{2}=o(n)\) condition, there is a coupling between \(S_{T}^{+T}\) and \(S_{B}^{+T}\) such that
\[\mathbb{P}_{\sigma,\widetilde{\sigma}}\Big{(}\tau_{\mathrm{mag}}=\Big{\lceil} \frac{\tau_{\mathrm{loc}}}{k}\Big{\rceil}\Big{)}>\epsilon>0,\]
where \(\epsilon\) is independent of \(n\).
### Mixing time Lower bound for \(\beta>1\)
**Theorem 5.2**.: _Under the restricted Glauber dynamics, \(t_{\text{mix}}\geq(1/4)n\log n/k\)._
Proof.: The proof can be done in a similar fashion to [10]. Again, define \(s^{*}\) to be the unique positive solution of the equation \(\tanh(\beta s)=s\). Let \(\{X_{t}\}\) start from \(1\), and let \(\{\widetilde{X}_{t}\}\) follow the stationary distribution \(\mu^{+}\). Apply the monotone coupling to these two chains, and let \(\mathbb{P}_{1,\mu^{+}}\) and \(\mathbb{E}_{1,\mu^{+}}\) denote the probability measure and expectation under this coupling. Let \(\mathcal{B}(\sigma)\) be the set of vertices with a minus spin, and \(B(\sigma):=|\mathcal{B}(\sigma)|\). From the stationary magnetization result of [1], for some suitable \(0<c_{1}<1\),
\[\mathbb{P}_{1,\mu^{+}}(B(\widetilde{X}_{0})\leq c_{1}n)=\mu^{+}(\{\sigma:B( \sigma)\leq c_{1}n\})=o(1).\]
Let \(N_{t}\) be the number of the sites in \(\mathcal{B}(\widetilde{X}_{0})\) that have not been updated until time \(t\). Then,
\[\mathbb{E}_{1,\mu^{+}}[N_{t}|B(\widetilde{X}_{0})]=B(\widetilde{X}_{0})\left( 1-\frac{k}{n}\right)^{t}.\]
Plugging in \(t^{*}:=(1/4)n\log n/k\) gives
\[\mathbb{E}_{1,\mu^{+}}[N_{t^{*}}|B(\widetilde{X}_{0})]\geq c_{2}B(\widetilde{ X}_{0})n^{-1/4}.\]
If we consider \(N_{t}\) as a sum of indicators recording whether each vertex has been updated, then for any \(v,w\in\mathcal{B}(\widetilde{X}_{0})\),
\[\mathbb{E}_{1,\mu^{+}}[I_{v}I_{w}]=(1-\frac{2k}{n})^{t}\leq(1-\frac{k}{n})^{2t }=\mathbb{E}_{1,\mu^{+}}[I_{v}]\mathbb{E}_{1,\mu^{+}}[I_{w}],\]
so the indicators are negatively correlated. This ensures \(\mathrm{Var}_{1,\mu^{+}}(N_{t})\leq n\) at all times. Combining this with Chebyshev's inequality, on the event \(E_{1}:=\{B(\widetilde{X}_{0})>c_{1}n\}\), for some \(c_{3}>0\),
\[\mathbb{P}_{1,\mu^{+}}[N_{t^{*}}\leq c_{3}n^{3/4}|B(\widetilde{X}_{0})]=o(1).\]
Therefore,
\[\mathbb{P}_{1,\mu^{+}}[N_{t^{*}}\leq c_{3}n^{3/4}]\leq\mathbb{P}_{1,\mu^{+}}(E _{1}^{c})+\mathbb{P}_{1,\mu^{+}}(E_{1}\cap\{N_{t^{*}}\leq c_{3}n^{3/4}\})=o(1).\]
Now, suppose \(N_{t^{*}}\leq c_{3}n^{3/4}\). In this case we have \(S_{t^{*}}\geq\widetilde{S}_{t^{*}}+c_{4}n^{-1/4}\) for some \(c_{4}>0\). Pick a small constant \(c_{5}\in(0,c_{4})\) and define \(E_{2}:=S_{t^{*}}\leq s^{*}+c_{5}n^{-1/4}\). In this case,
\[\mathbb{P}_{1,\mu^{+}}(E_{2}) \leq\mathbb{P}_{1,\mu^{+}}(N_{t^{*}}>c_{3}n^{3/4})+\mathbb{P}_{1, \mu^{+}}(E_{2}\cap\{N_{t^{*}}\leq c_{3}n^{3/4}\})\] \[\leq o(1)+\mathbb{P}_{1,\mu^{+}}(\widetilde{S}_{t^{*}}\leq s^{*} +(c_{5}-c_{4})n^{-1/4})\] \[=o(1),\]
by appealing to the central limit theorem at the last equation. Furthermore the theorem gives
\[\mu^{+}\Big{(}\{\sigma:S(\sigma)>s^{*}+c_{5}n^{-1/4}\}\Big{)}=o(1).\]
Finally,
\[d(t^{*})\geq\mathbb{P}_{1,\mu^{+}}(\{\sigma:S(\sigma)>s^{*}+c_{5}n^{-1/4}\})- \mu^{+}(\{\sigma:S(\sigma)>s^{*}+c_{5}n^{-1/4}\})=1-o(1).\]
Therefore we have \(t_{\rm mix}(n)\geq(1/4)n\log n/k\).
## Acknowledgement
We are sincerely grateful to Evita Nestoridi for introducing the topic and helpful discussions.
## 6. Appendix: proof of Lemma 5.2
In this section we suggest a detailed calculation for Lemma 5.2. Throughout this section we assume \(\beta>1\), \(k=o(n)\), and let \(s^{*}\) be the unique positive solution of \(\tanh(\beta s)=s\). This relationship can be modified to
\[\beta=\frac{1}{2s^{*}}\log\Big{(}\frac{1+s^{*}}{1-s^{*}}\Big{)},\]
and Taylor expansion on the right side at \(s=0\) gives \(\beta>1+(s^{*})^{2}/3\).
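As a quick numerical illustration of this relation (not used in the argument), \(s^{*}\) can be computed by fixed-point iteration and the inequality \(\beta>1+(s^{*})^{2}/3\) checked directly; the value \(\beta=1.2\) below is arbitrary.

```python
import math

def positive_fixed_point(beta, iters=200):
    """Unique positive solution of tanh(beta * s) = s, found by iteration (beta > 1)."""
    s = 0.9
    for _ in range(iters):
        s = math.tanh(beta * s)
    return s

beta = 1.2
s_star = positive_fixed_point(beta)
print("s* =", s_star)                            # roughly 0.659 for beta = 1.2
print("check:", beta, ">", 1 + s_star**2 / 3)    # beta > 1 + (s*)^2 / 3
```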
**Proposition 6.1**.: _Define_
\[I(x):=-2\beta x^{2}+\frac{\lambda k}{n}x+\Big{(}\frac{1}{2}+x\Big{)}\log\Big{(} \frac{1}{2}+x\Big{)}+\Big{(}\frac{1}{2}-x\Big{)}\log\Big{(}\frac{1}{2}-x\Big{)} +\log 2.\]
_Then there exist \(s_{1},s_{2}\) such that \(I^{\prime}(s_{1})=I^{\prime}(s_{2})=0\), \(0<s_{1}=O(k/n)\), and \(s^{*}/2>s_{2}=s^{*}/2-O(k/n)\)._
Proof.: Since
\[I^{\prime}(x) =-4\beta x+\log\big{(}\frac{1}{2}+x\big{)}-\log\big{(}\frac{1}{2 }-x\big{)}+\frac{\lambda k}{n}\] \[I^{\prime\prime}(x) =-4\beta+\frac{4}{1-4x^{2}},\]
we have \(I^{\prime}(s^{*}/2)=I^{\prime}(0)=\lambda k/n>0\). Furthermore \(I(0)=0\) and \(I(s^{*}/2)<0\) for large enough \(n\), since \(k/n\to 0\) and \(s^{*}\) is a fixed value. From these conditions we can deduce that \(I\) has a horizontal tangent line at at least two points in \((0,s^{*}/2)\). Pick the smallest and the largest among them in \((0,s^{*}/2)\) and call them \(s_{1}\) and \(s_{2}\) respectively.
Taylor expansion of \(I\) at \(x=0\) becomes
\[I(x)\sim\frac{\lambda k}{n}x-2(\beta-1)x^{2}=-2(\beta-1)\Big{(}x-\frac{ \lambda k}{4(\beta-1)n}\Big{)}^{2}+O\Big{(}\frac{k^{2}}{n^{2}}\Big{)}.\]
and \(I\) at \(x=s^{*}/2\) becomes
\[I(x) \sim I(s^{*}/2)+\frac{\lambda k}{n}\Big{(}x-\frac{s^{*}}{2}\Big{)} +\frac{I^{\prime\prime}(s^{*}/2)}{2}\Big{(}x-\frac{s^{*}}{2}\Big{)}^{2}\] \[=I(s^{*}/2)+\frac{I^{\prime\prime}(s^{*}/2)}{2}\Big{(}x-\frac{s^{ *}}{2}+\frac{\lambda k}{nI^{\prime\prime}(s^{*}/2)}\Big{)}^{2}-O\Big{(}\frac{ k^{2}}{n^{2}}\Big{)}.\]
Substitute \(I^{\prime\prime}(s^{*}/2)\) in terms of \(s^{*}\) and \(\beta\) then we have
\[s_{1}\sim\frac{\lambda k}{4(\beta-1)n},\qquad\text{and}\qquad s_{2}\sim\frac{ s^{*}}{2}-\frac{\lambda k}{nI^{\prime\prime}(s^{*}/2)}.\]
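A small numerical sanity check of Proposition 6.1 (purely illustrative) is given below: for concrete values of \(n\), \(k\) and \(\beta\), and with the unspecified constant \(\lambda\) simply set to \(1\), the two zeros of \(I^{\prime}\) in \((0,s^{*}/2)\) are located on a grid and compared with the asymptotic expressions above.

```python
import numpy as np

beta, lam, n, k = 1.5, 1.0, 10_000, 20      # lambda = 1 is an arbitrary illustrative choice
s_star = 0.8580                             # positive solution of tanh(beta * s) = s for beta = 1.5

def I_prime(x):
    return -4 * beta * x + np.log(0.5 + x) - np.log(0.5 - x) + lam * k / n

def I_second(x):
    return -4 * beta + 4 / (1 - 4 * x**2)

xs = np.linspace(1e-6, s_star / 2, 400_001)
signs = np.sign(I_prime(xs))
roots = xs[:-1][signs[:-1] * signs[1:] < 0]  # grid points where I' changes sign

print("zeros of I' in (0, s*/2):", roots)
print("predicted s_1 ~", lam * k / (4 * (beta - 1) * n))
print("predicted s_2 ~", s_star / 2 - lam * k / (n * I_second(s_star / 2)))
```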
**Lemma 6.1**.: _Suppose a function \(f:\mathbb{R}\to\mathbb{R}\in\mathcal{C}^{3}\) and a real number \(s_{0}\) satisfies_
1. \(f^{\prime\prime\prime}(t)\geq 0\) _for all_ \(t\geq s_{0}\)_._
2. \(f^{\prime}(t)\leq 0\) _for all_ \(t\geq s_{0}\)_._
3. \(f^{\prime}(s_{0})=0\)_._
_Then for any \(s\leq x\leq y\), \(f(y)-f(x)\leq\frac{f^{\prime}(y)}{2}(y-x)\) holds._
Proof.: Without loss of generality, set \(s=0\).
\[\begin{split} f(x)-f(y)+\frac{(y-x)f^{\prime}(y)}{2}& =-\int_{x}^{y}f^{\prime}(t)dt+\int_{x}^{y}\frac{f^{\prime}(y)}{2}dt \\ &=\int_{x}^{y}t\Big{(}\frac{f^{\prime}(t)}{t}-\frac{f^{\prime}(y) }{y}\Big{)}dt+f^{\prime}(y)\frac{x^{2}-xy}{2y}.\end{split} \tag{33}\]
Rightmost term of (33) is non-negative. Furthermore, \(tf^{\prime\prime}(t)-f^{\prime}(t)=\int_{0}^{t}f^{\prime\prime}(t)ds-\int_{0}^{t} f^{\prime\prime}(s)ds=\int_{0}^{t}(f^{\prime\prime}(t)-f^{\prime\prime}(s))ds\geq 0\) ensures that
\[\frac{d}{dt}\Big{(}\frac{f^{\prime}(t)}{t}\Big{)}=\frac{tf^{\prime\prime}(t)-f^ {\prime}(t)}{t^{2}}\geq 0,\]
which finishes the proof.
**Remark**.: _The function \(I(x)\) defined in Proposition 6.1 satisfies all the conditions of Lemma 6.1 with \(s=s_{1}\)._
Proof.: (Lemma 5.2)
Suppose \(n\) is even, and consider a new sequence \(M_{i}:=nS^{+}(Y_{i})/2\). This is not a Markov chain, but \(M_{i}\) forms an integer-valued sequence in \([0,n/2]\) whose differences between consecutive terms lie in \(\{-1,0,1\}\). Define \(\tau_{c}:=\min\{i\geq 0:M_{i}=c\}\) and
\[B_{c} :=\sup\mathbb{E}_{c}[\tau_{c+1}]\] \[p_{c} :=\sup\mathbb{P}_{c}[M_{i+1}-M_{i}=1|\mathcal{F}_{i}]\] \[q_{c} :=\sup\mathbb{P}_{c}[M_{i+1}-M_{i}=-1|\mathcal{F}_{i}]\] \[r_{c} :=\sup\mathbb{P}_{c}[M_{i+1}-M_{i}=0|\mathcal{F}_{i}].\]
All four of these terms depend on \(i^{\prime}:=i\ (\operatorname{mod}\,k)\) and on the history of the previous \(i-i^{\prime}\) updates. Therefore, we take the supremum over all possible \(i^{\prime}\) and \(\mathcal{N}_{i}\). We can set up an estimate for \(B_{c}\). We know that \(B_{0}=O(1)\), so for \(1\leq c\leq ns^{*}+\alpha\sqrt{n}\) and any time \(i\),
\[\begin{split}\mathbb{E}_{c}[\tau_{c+1}|\mathcal{F}_{i}]\leq p_{c }&+q_{c}(B_{c-1}+B_{c}+1)+r_{c}(B_{c}+1)\\ &\longrightarrow(1-q_{c}-r_{c})B_{c}\leq(p_{c}+q_{c}+r_{c})+q_{c} B_{c-1}\end{split} \tag{34}\]
Regardless of \(i(\operatorname{mod}\,k)\), we know that
\[\begin{split} p_{c}&=\Big{[}\frac{1-s}{2}+O\Big{(} \frac{k}{n}\Big{)}\Big{]}p_{+}\Big{(}s+\frac{1}{n}\Big{)}&=\Big{[} \frac{n-2c}{2n}+O\Big{(}\frac{k}{n}\Big{)}\Big{]}p_{+}\Big{(}\frac{1}{n}(2c+1) \Big{)}\\ q_{c}&=\Big{[}\frac{1+s}{2}+O\Big{(}\frac{k}{n}\Big{)} \Big{]}p_{-}\Big{(}s-\frac{1}{n}\Big{)}&=\Big{[}\frac{n+2c}{2n}+O \Big{(}\frac{k}{n}\Big{)}\Big{]}p_{-}\Big{(}\frac{1}{n}(2c-1)\Big{)},\end{split}\]
hence Equation (34) becomes
\[\begin{split} B_{c}&\leq A+\Big{[}\frac{n+2c}{n-2c}+O\Big{(}\frac{k}{n}\Big{)}\Big{]}\frac{p_{-}\Big{(}\frac{1}{n}(2c-1)\Big{)}}{p_{+}\Big{(}\frac{1}{n}(2c+1)\Big{)}}B_{c-1}\\ &\leq A+\Big{[}\frac{n+2c}{n-2c}+O\Big{(}\frac{k}{n}\Big{)}\Big{]}\Big{(}1+O\Big{(}\frac{1}{n}\Big{)}\Big{)}e^{-\frac{2\beta}{n}(2c+1)}B_{c-1}\\ &\leq A+\Big{[}\frac{n+2c}{n-2c}+\frac{\lambda k}{n}\Big{]}e^{-\frac{2\beta}{n}(2c+1)}B_{c-1}\end{split} \tag{35}\]
for some positive constant \(A\) and \(\lambda\). We know \(B_{1}=O(1)\), so repeating Equation (35) gives
\[B_{c}\lesssim\sum_{j=1}^{c}e^{-\frac{2\beta}{n}(c^{2}-j^{2})}\frac{n+2c}{n-2c}...\frac{n+2j}{n-2j}e^{(c-j)\frac{\lambda k}{n}}. \tag{36}\]
For any \(1\leq j\leq c\leq ns^{*}+\alpha\sqrt{n}\),
\[\frac{n+2c}{n-2c}...\frac{n+2j}{n-2j} =\frac{(n/2+c)!/(n/2+j-1)!}{(n/2-j)!/(n/2-c-1)!}=\frac{(n/2+c)!(n/2- c-1)!}{(n/2-j)!(n/2+j-1)!}\] \[\simeq\frac{(n/2+c)!(n/2-c)!}{(n/2-j)!(n/2+j)!}\simeq\frac{(n/2+ c)^{n/2+c}(n/2-c)^{n/2-c}}{(n/2-j)^{n/2-j}(n/2+j)^{n/2+j}}\] \[=\exp\Big{(}nf(c/n)-nf(j/n)\Big{)},\]
where \(f(x)=\big{(}\frac{1}{2}+x\big{)}\log\big{(}\frac{1}{2}+x\big{)}+\big{(}\frac{ 1}{2}-x\big{)}\log\big{(}\frac{1}{2}-x\big{)}+\log 2\). Hence (36) becomes
\[B_{c} \lesssim\sum_{j=1}^{c}\exp\Big{\{}-\frac{2\beta}{n}(c^{2}-j^{2})+ n\big{[}f\big{(}\frac{c}{n}\big{)}-f\big{(}\frac{j}{n}\big{)}\big{]}+(c-j)\frac{ \lambda k}{n}\Big{\}}\] \[=\sum_{j=1}^{c}\exp\Big{\{}nI\big{(}\frac{c}{n}\big{)}-nI\big{(} \frac{j}{n}\big{)}\Big{\}},\]
where \(I\) is from Proposition 6.1. Eventually we need to bound \(\sum B_{c}\), which is
\[\sum_{i=1}^{(ns^{*}+\alpha\sqrt{n})/2}B_{i} \leq\sum_{i=1}^{(ns^{*}+\alpha\sqrt{n})/2}\sum_{j=1}^{i}\exp \Big{\{}nI\big{(}\frac{i}{n}\big{)}-nI\big{(}\frac{j}{n}\big{)}\Big{\}}\] \[\simeq n^{2}\int_{0}^{\frac{s^{*}}{2}+\frac{\alpha}{2\sqrt{n}}} \int_{0}^{y}\exp[nI(y)-nI(x)]dxdy.\]
The problem is now to calculate the integral over the triangular region \(\{0\leq y\leq s^{*}/2+\alpha/2\sqrt{n},0\leq x\leq y\}\). We split the domain into 8 pieces (Figure 2).
**Piece 1**: \(\{0\leq y\leq s_{1},0\leq x\leq y\}\)
For any \(x\leq y\), we have
\[|I(y)-I(x)|\leq(y-x)\sup_{x<s<y}I^{\prime}(s)\leq(y-x)\sup_{0<s<s_{1}}I^{ \prime}(s)=(y-x)O\Big{(}\frac{k}{n}\Big{)}=O\Big{(}\frac{k^{2}}{n^{2}}\Big{)},\]
therefore the integration over this domain becomes \(n^{2}s_{1}^{2}\exp\left(O\left(\frac{k^{2}}{n}\right)\right)=O(k^{2})\).
Figure 2. A graphical division of the domain into 8 pieces.
**Piece 2**: \(\{s_{1}\leq y\leq s_{2},0\leq x\leq s_{1}\}\)
\[n^{2}\int_{0}^{s_{1}} \exp[-nI(x)]dx\int_{s_{1}}^{s_{2}}\exp[nI(y)]dy\] \[\leq n^{2}s_{1}\exp[-nI(0)]\int_{s_{1}}^{s_{2}}\exp[nI(y)]dy\] \[\leq n^{2}s_{1}\exp[nI(s_{1})-nI(0)]\int_{s_{1}}^{s_{2}}\exp[nI(y) -nI(s_{1})]dy\] \[=O(kn)\int_{s_{1}}^{s_{2}}\exp[nI(y)-nI(s_{1})]dy.\]
Around \(y=s_{1}\) we have \(nI(y)-nI(s_{1})\simeq-2n(\beta-1)(y-s_{1})^{2}\). If \(y\) is far from \(s_{1}\), since \(I\) is a decreasing function on \((s_{1},s_{2})\), \(nI(y)-nI(s_{1})\) rapidly becomes small. Splitting the domain of integration into \((s_{1},s_{1}+\log n/n)\) and \((s_{1}+\log n/n,s_{2})\) gives
\[O(kn)\int_{s_{1}}^{s_{2}}\exp[nI(y)-nI(s_{1})]dy=O(kn)O\left(\frac{\log n}{ \sqrt{n}}\right)=O(k\sqrt{n}\log n).\]
**Piece 3**: \(\{s_{1}\leq y\leq s_{2},s_{1}\leq x\leq y\}\)
This domain must itself be divided into 6 further pieces; its treatment is deferred to the end of the proof.
**Piece 4**: \(\{s_{2}\leq y\leq s^{*}/2,0\leq x\leq s_{1}\}\)
Since \(I\) is increasing on both \((0,s_{1})\) and \((s_{2},s^{*}/2)\),
\[n^{2}\int_{0}^{s_{1}}\exp[-nI(x)]dx\int_{s_{2}}^{s^{*}/2}\exp[nI(y)]dy\leq n^ {2}s_{1}\exp[-nI(0)]\Big{(}\frac{s^{*}}{2}-s_{2}\Big{)}\exp[nI(s^{*}/2)]=O(k^{ 2}).\]
**Piece 5**: \(\{s_{2}\leq y\leq s^{*}/2,s_{1}\leq x\leq s_{2}\}\)
Similar to Piece 2.
\[n^{2}\int_{s_{1}}^{s_{2}} \exp[-nI(x)]dx\int_{s_{2}}^{s^{*}/2}\exp[nI(y)]dy\] \[\leq n^{2}\int_{s_{1}}^{s_{2}}\exp[-nI(x)]dx\left(\frac{s^{*}}{2 }-s_{2}\right)\exp[nI(s^{*}/2)]\] \[=n^{2}\int_{s_{1}}^{s_{2}}\exp[nI(s_{2})-nI(x)]dx\left(\frac{s^{*} }{2}-s_{2}\right)\exp\left[nI(s^{*}/2)-nI(s_{2})\right]\] \[\lesssim n^{2}\int_{s_{1}}^{s_{2}}\exp[nI(s_{2})-nI(x)]dx\left( \frac{s^{*}}{2}-s_{2}\right)\] \[=n^{2}O(\frac{\log n}{\sqrt{n}})O\Big{(}\frac{k}{n}\Big{)}=O(k \sqrt{n}\log n).\]
**Piece 6**: \(\{s_{2}\leq y\leq s^{*}/2,s_{2}\leq x\leq s^{*}/2\}\)
Similar to Piece 1.
\[n^{2}\iint\exp[nI(y)-nI(x)]dydx\lesssim n^{2}\Big{(}\frac{s^{*}}{2}-s_{2} \Big{)}^{2}=O(k^{2}).\]
**Piece 7**: \(\{s^{*}/2\leq y\leq s^{*}/2+\alpha/2\sqrt{n},0\leq x\leq s^{*}/2\}\)
Similar to Piece 4, 5, 6. The integral is
\[n^{2}\int_{s^{*}/2}^{s^{*}/2+\alpha/2\sqrt{n}}\exp[nI(y)]dy\int_{0}^{s^{*}/2} \exp[-nI(x)]dx.\]
We have \(I(\frac{s^{*}}{2})<0\). Taylor expansion at \(\frac{s^{*}}{2}\) for large enough \(n\) ensures \(I\left(\frac{s^{*}}{2}+\frac{\alpha}{2\sqrt{n}}\right)<0\). Therefore
\[n^{2}\int_{s^{*}/2}^{s^{*}/2+\alpha/2\sqrt{n}}\exp[nI(y)]dy\lesssim n^{3/2}.\]
Now, split the range of \(x\) by \([0,s_{1}]\cup[s_{1},s_{2}]\cup[s_{2},s^{*}/2]\). Then each integral with respect to \(x\) gives \(O(k/n),O(\log n/\sqrt{n})\) and \(O(k/n)\). In conclusion,
\[n^{2}\iint\exp[nI(y)-nI(x)]dydx=O(k\sqrt{n}+n\log n)=O(n\log n).\]
**Piece 8**: \(\{s^{*}/2\leq y\leq s^{*}/2+\alpha/2\sqrt{n},s^{*}/2\leq x\leq s^{*}/2+\alpha/ 2\sqrt{n}\}\)
Similar to Piece 1.
\[n^{2}\iint\exp[nI(y)-nI(x)]dydx\lesssim n^{2}\Big{(}\frac{\alpha}{2\sqrt{n}} \Big{)}^{2}=O(n).\]
We divide **Piece 3**: \(\{s_{1}\leq y\leq s_{2},s_{1}\leq x\leq y\}\) again into 6 pieces with a parameter \(0<\epsilon=o(1)\). \(\epsilon\) will be chosen after the calculation(Figure 3).
**Piece 3-1**: \(\{s_{1}\leq y\leq s_{1}+\epsilon,s_{1}\leq x\leq y\}\)
For small enough \(\epsilon>0\), \(s_{1}\leq x\leq y\leq s_{1}+\epsilon\) implies \(I(x)\geq I(y)\). Therefore,
\[n^{2}\int_{s_{1}}^{s_{1}+\epsilon}\int_{s_{1}}^{y}\exp[nI(y)-nI(x)]dxdy\lesssim n ^{2}\epsilon^{2}.\]
**Piece 3-2**: \(\{s_{1}+\epsilon\leq y\leq s_{2}-\epsilon,s_{1}\leq x\leq s_{1}+\epsilon\}\)
\[n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}\int_{s_{1}}^{s_{1}+\epsilon}\exp [nI(y)-nI(x)]dxdy\leq n^{2}\epsilon\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}\exp [nI(y)-nI(s_{1}+\epsilon)]dy\]
Near \(y=s_{1}+\epsilon\), the approximation \(nI(y)-nI(s_{1}+\epsilon)\simeq-2n(\beta-1)\{(y-s_{1})^{2}-\epsilon^{2}\}\) holds. \(I(y)\) is uniformly bounded above by \(I(s_{1}+\epsilon)\) when \(y\) is far from \(s_{1}+\epsilon\). Hence the above is bounded by \(O(n^{2}\epsilon)\), provided that \(n\epsilon\to\infty\).
Figure 3. A graphical division of Piece 3 from Figure 2 into 6 pieces.
**Piece 3-3** : \(\{s_{1}+\epsilon\leq y\leq s_{2}-\epsilon,s_{1}+\epsilon\leq x\leq y\}\)
Proposition 6.1 can be applied to the function \(I\) in this domain.
\[\begin{split} n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}& \int_{s_{1}+\epsilon}^{y}\exp[nI(y)-nI(x)]dxdy\\ &\leq n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}\int_{s_{1}+ \epsilon}^{y}\exp[nI^{\prime}(y)(y-x)/2]dxdy\\ &=n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}-\frac{2}{nI^{\prime }(y)}\Big{\{}1-\exp\Big{[}nI^{\prime}(y)\big{(}\frac{y}{2}-\frac{s_{1}+ \epsilon}{2}\big{)}\Big{]}\Big{\}}dy\\ &\leq n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}-\frac{2}{nI^{ \prime}(y)}dy\end{split}\]
Near \(y=s_{1}+\epsilon\) we have \(|I^{\prime}(y)|\simeq 4(\beta-1)(y-s_{1})\), while near \(y=s_{2}-\epsilon\) we have \(|I^{\prime}(y)|\simeq O(1)(s_{2}-y)\). For \(y\) between \(s_{1}+\epsilon\) and \(s_{2}-\epsilon\), \(|I^{\prime}(y)|\) is bounded below by a positive constant. Therefore,
\[n^{2}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}-\frac{2}{nI^{\prime}(y)}dy\lesssim n O (\log(1/\epsilon))+O(n)=O(-n\log\epsilon)+O(n).\]
**Piece 3-4** : \(\{s_{2}-\epsilon\leq y\leq s_{2},s_{1}\leq x\leq s_{1}+\epsilon\}\)
Similar to Piece 3-1. In this domain we have \(I(x)\geq I(y)\) for \(x\leq y\). Therefore
\[n^{2}\iint\exp[nI(y)-nI(x)]dydx\lesssim n^{2}\epsilon^{2}.\]
**Piece 3-5** : \(\{s_{2}-\epsilon\leq y\leq s_{2},s_{1}+\epsilon\leq x\leq s_{2}-\epsilon\}\)
Similar to Piece 3-2.
\[n^{2}\int_{s_{2}-\epsilon}^{s_{2}}\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}\exp[nI(y)-nI(x)]dxdy\leq n^{2}\epsilon\int_{s_{1}+\epsilon}^{s_{2}-\epsilon}\exp[nI(s_{2}-\epsilon)-nI(x)]dx.\]
Near \(x=s_{2}-\epsilon\), the approximation \(nI(s_{2}-\epsilon)-nI(x)\simeq O(n)\epsilon(x-s_{2}+\epsilon)\) holds. Hence the above is bounded by \(O(n^{2}\epsilon)\) in an analogous way.
**Piece 3-6** : \(\{s_{2}-\epsilon\leq y\leq s_{2},s_{2}-\epsilon\leq x\leq y\}\)
Similar to Piece 3-1, \(I(x)\geq I(y)\) for \(x\leq y\) implies
\[n^{2}\iint\exp[nI(y)-nI(x)]dydx\lesssim n^{2}\epsilon^{2}.\]
Now set \(\epsilon=O(\log n/n)\). Under the \(k=o(\sqrt{n})\) assumption, all the integrals over the domains are \(O(n\log n)\), which ensures that
\[\mathbb{E}_{0}\Big{[}\min\big{\{}i\geq 0:S^{+}(Y_{i})\geq s^{*}+\frac{\alpha}{ \sqrt{n}}\big{\}}\Big{]}=O(n\log n).\]
As this is true for any \(\alpha>0\), define the stopping time
\[\tau_{**}:=\min\Big{\{}i\geq 0:S^{+}(Y_{i})\geq s^{*}+\frac{\alpha}{\sqrt{n}}\Big{\}},\]
and consider the time \(\lceil\tau_{**}/k\rceil\). We have
\[S^{+}(X_{\lceil\tau_{**}/k\rceil})=S^{+}(Y_{k\lceil\tau_{**}/k\rceil})\geq s^{ *}+\frac{\alpha}{\sqrt{n}}-\frac{2k}{n}\]
From the condition \(k=o(\sqrt{n})\), adjusting \(\alpha\) gives
\[\lceil\frac{\tau_{**}}{k}\rceil\sim\tau_{*}:=\min\big{\{}t\geq 0:S_{t}^{+}\geq s^{*}+\frac{\alpha}{\sqrt{n}}\big{\}}.\]
Therefore, \(\mathbb{E}_{0}[\tau_{*}]=O(n\log n/k)\).
|
2309.01586 | Automatic Scam-Baiting Using ChatGPT | Automatic scam-baiting is an online fraud countermeasure that involves
automated systems responding to online fraudsters in order to waste their time
and deplete their resources, diverting attackers away from real potential
victims. Previous work has demonstrated that text generation systems are
capable of engaging with attackers as automatic scam-baiters, but the fluency
and coherence of generated text may be a limit to the effectiveness of such
systems.
In this paper, we report on the results of a month-long experiment comparing
the effectiveness of two ChatGPT-based automatic scam-baiters to a control
measure. Within our results, with engagement from over 250 real email
fraudsters, we find that ChatGPT-based scam-baiters show a marked increase in
scammer response rate and conversation length relative to the control measure,
outperforming previous approaches. We discuss the implications of these results
and practical considerations for wider deployment of automatic scam-baiting. | Piyush Bajaj, Matthew Edwards | 2023-09-04T13:13:35Z | http://arxiv.org/abs/2309.01586v1 | # Automatic Scam-Baiting Using ChatGPT
###### Abstract
Automatic scam-baiting is an online fraud countermeasure that involves automated systems responding to online fraudsters in order to waste their time and deplete their resources, diverting attackers away from real potential victims. Previous work has demonstrated that text generation systems are capable of engaging with attackers as automatic scam-baiters, but the fluency and coherence of generated text may be a limit to the effectiveness of such systems.
In this paper, we report on the results of a month-long experiment comparing the effectiveness of two ChatGPT-based automatic scam-baiters to a control measure. Within our results, with engagement from over 250 real email fraudsters, we find that ChatGPT-based scam-baiters show a marked increase in scammer response rate and conversation length relative to the control measure, outperforming previous approaches. We discuss the implications of these results and practical considerations for wider deployment of automatic scam-baiting.
fraud, scam-baiting, active defence
## I Introduction
Email-based fraud is a major component of online crime. Investment scams, online dating fraud, tech support scams, employment scams, lottery and inheritance scams, and advanced fee fraud schemes are all commonly initiated through an approach via email, and the latest IC3 report suggests up to 113,700 victims have reported significant financial losses from these categories in 2022 [1]. Conviction rates for these offences are notoriously low due to the transnational nature of offending, which poses significant hurdles for prosecution. Traditional countermeasures have focused on blacklisting originators of fraud [2] and building email filters that prevent email users from being exposed to fraudulent approaches [3].
More recently, researchers have proposed paying greater attention to an _active defence_ posture when combating cybercrime [4]. As an implementation of this within the domain of email-based offending in particular, Chen et al. [5] have demonstrated the feasibility of _automatic scam-baiting_, in which an automated responder system replies to emails from fraudsters in order to waste their time, distracting offenders from real victims. However, the GPT-Neo text generation systems tested by Chen et al. showed significant limits in their ability to generate coherent and persuasive email messages, with conversations commonly ending due to the generation of a poor-quality text sample [5].
With more powerful text generation systems now widely available, this paper explores approaches to improving the art of automated scam-baiting. In particular, we examine (a) whether and to what degree an updated text generation system (ChatGPT) is more effective at initiating and elongating email conversations with scammers and (b) whether a text generation system given examples of human scam-baiting conversation to imitate will out-perform a text generation system given only general scam-baiting instructions. We test our systems in a month-long experiment involving randomised allocation of scammer approaches to two ChatGPT-based reply systems and one template-based control measure. Alongside our quantitative results, we report on our observations from the various scam-baiting exchanges, including both limitations and unexpected benefits of the approach as well as potential tactical responses from offender populations.
The rest of this paper proceeds as follows. In Section II we provide background on scam-baiting activities and outline why ChatGPT is a promising candidate for application in this domain. Section III details the approaches tested and describes our experimental deployment. Section IV presents our main findings about the effectiveness of the reply strategies, while Section V discusses additional observations, limitations and potential future developments. We conclude with our main recommendations for ongoing work in automatic scam-baiting.
## II Background
### _Scam-baiting_
Scam-baiters are online volunteers who reply to fraudsters in the guise of victims, in order to waste the fraudsters' time. While there can be a range of personal motivations for scam-baiting [6, 7], some of which have been considered critically [6, 8, 9], most modern scam-baiters justify their work on the grounds that the time and energy fraudsters spend interacting with them is time not spent defrauding a real victim. Herley [10] argues that by decreasing the density of viable targets, scam-baiting activity can have a disproportionate impact on fraudsters reducing the cost effectiveness of their work. Observation also suggests that scammers find dealing with scam-baiters frustrating [11], which could be expected to lead to demoralisation.
In some settings, scam-baiting has been employed as a research tool to understand elements of online fraud offending [12, 13, 14], making this technique an active extension of the classic honeypot investigative tool [15]. However, a key aspect
of scam-baiting is that it forms a template for a form of _social engineering active defence_ [4], in which social engineering techniques are deployed against internet fraudsters in order to counteract their offending. Chen et al. [5] suggest that _automatic_ scam-baiting could be an effective tool for combatting online fraud, highlighting the possibility of avoiding the human costs of manual scam-baiting. Their work demonstrated that the approach was feasible, with an automated system eliciting responses from 15-25% of scammers contacted, and sustaining some conversations over many days. However, the low quality of text produced by their model limited the technique's effectiveness [5], highlighting the need for further investigation with more advanced language models.
### _ChatGPT_
GPT-3 is an autoregressive language model with 175 billion parameters [16]. GPT-3 was improved upon significantly in the GPT-3.5 series. Zu et al. [17] performed a comprehensive analysis of GPT-3.5 on 9 natural language understanding tasks using 21 datasets. Amongst other state-of-the-art results, they noted a significant improvement in performance on tasks that require a high level of language understanding like sequence tagging, reading comprehension, and natural language reasoning. ChatGPT1 was developed by further training a GPT-3.5 series model through reinforcement learning from human feedback (RLHF) [18]. Zhang et al. [19] empirically analysed ChatGPT on 7 tasks using 20 NLP datasets. They found that ChatGPT performs better than GPT-3.5 on question answering tasks favouring reasoning capabilities, dialogue tasks, and natural language inference tasks.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
ChatGPT has demonstrated remarkable abilities across multiple domains such as declining inappropriate queries, rectifying previous mistakes through learning from subsequent interactions, and, importantly for our usage, generating high-quality responses to human input [20]. The most effective model in the GPT-3.5 family is GPT-3.5-turbo, which has been optimised for chat conversations2.
Footnote 2: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
## III Method
We sought to answer two primary research questions:
1. Whether and to what degree ChatGPT-based scam-bating systems outperform previous approaches.
2. Whether ChatGPT-based systems given examples of human scam-bating conversations outperform systems given only scam-bating prompt instructions.
To evaluate our ChatGPT-based scam-bating models, we carried out an experiment that involved communicating with actual human fraudsters. As this requires actively misleading human participants who take part without their knowledge, our research design was examined by our institutional ethics review board (approval code 13904), with oversight in place throughout the experimental period to protect the researchers and avoid possible negative impact of our system. The experiment was carried out using the scam-bating mail server framework developed by Chen et al. [5], which handles the collection of recent scam emails from trusted online sources, message queuing, and the randomised assignment of conversations to different reply strategies.
We created new modules in Chen et al.'s framework to implement the ChatGPT reply strategies, and release our code3. We chose ChatGPT (powered by the GPT-3.5-Turbo model) as the basis for our reply strategies due to its ability to generate high-quality responses to human input. Interactions with ChatGPT were instrumented via the open-source ChatGPT API4 provided by OpenAI. To enable comparison between our work and previous performance, we also included the _Classifier & Random Template_ model provided by Chen et al. as a control measure. Below, we detail the design of each of the reply strategies.
Footnote 3: [https://github.com/an19352/scambaiter_back](https://github.com/an19352/scambaiter_back)
Footnote 4: [https://openai.com/blog/introducing-chatgpt-and-whisper-apis](https://openai.com/blog/introducing-chatgpt-and-whisper-apis)
### _Chat Replier 1_
The intention behind this approach was for the ChatGPT model to generate responses to scam emails only on the basis of instructions supplied in a prompt preamble. This zero-shot approach meant the model was tasked with crafting email responses based on the given instructions and the body of the scammer email to which it needs to reply. The instructions serve as guidance for the model, defining its behaviour and specifying the actions it should take or avoid. The instructions for Chat Replier 1 were: _"You are a dull person and gullible person who is easy to persuade. You want to share information. Do not call them a scam. Keep conversation going. Ask follow-up questions. Do not give any personal information. Greet them in the start."_.
The model is told to behave as a dull and gullible person who can be persuaded easily. The intention here is to generate responses that make the scammer believe that the scam-baiter is falling for their scheme, convincing them to continue engaging with the system. The model was told to actively keep the conversation going by asking follow-up questions and giving the impression that there is a desire to share information. Scammers that perceive a responder as reluctant to engage or share information may disengage in favour of softer targets elsewhere, so we wanted our model to appear tantalisingly close to providing funds or personal details.
During our internal pilot studies, we noticed that the model would sometimes call out scams and give away (invented) personal information in response to scammer requests. Denouncing a scammer is likely to dissuade them from further engagement, and invented personal data poses risks of discovery or of accidentally involving real people. To rectify this, specific instructions to not call out scams or give away any personal information were added. As we discuss later in Section V, these instructions were not always followed. The model was also asked to begin the response with a greeting in order to structure the reply in an email-like format.
The _temperature_ and _top_p_ settings were set to 0.2 in order to control the randomness of the responses generated and keep the model compliant with the instructions. The example below shows a typical response generated by this replier to an initial scam solicitation email:
```
Dear Mrs. Washington Rhomeka,

Thank you for your email regarding the release of my funds approved by the IMF. I appreciate your prompt response and would like to know more about the processing and transfer cost of US Nineteen Thousand Five Hundred and Fifty United States Dollars Only. Can you please provide me with a breakdown of the cost and the reason for such a high amount?

Also, I would like to know more about the official Bitcoin payment method that you mentioned. Can you please provide me with more details on how to make the payment and the steps involved in the process?

I look forward to hearing back from you soon.
```
The model has correctly identified names and details from the scam solicitation email provided along with its instructions, and includes them in the response, giving the impression that it is an attentive human. It comments on elements of the scam text that might be expected to be surprising, and asks for additional information about the payment method and how the process will work - a plausible query that scammers may receive from real victims.
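To make the setup concrete, the sketch below shows how a zero-shot reply of this kind can be requested from the gpt-3.5-turbo model through the OpenAI chat completions API (the pre-1.0 `openai` Python package in use at the time of the study). The persona instructions and sampling settings are those quoted above; the function and variable names and the surrounding plumbing are our own illustration, not the system's exact code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

REPLIER1_INSTRUCTIONS = (
    "You are a dull person and gullible person who is easy to persuade. "
    "You want to share information. Do not call them a scam. Keep conversation going. "
    "Ask follow-up questions. Do not give any personal information. Greet them in the start."
)

def generate_reply(scam_email_body: str) -> str:
    """Generate a zero-shot scam-baiting reply to a single scammer email."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": REPLIER1_INSTRUCTIONS},
            {"role": "user", "content": scam_email_body},
        ],
        temperature=0.2,   # low randomness, as described above
        top_p=0.2,
    )
    return response["choices"][0]["message"]["content"]
```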
### _Chat Replier 2_
The intention of this approach was to explore if ChatGPT can generate better scam-baiting responses when prompted with example scam-baiting conversations. Conversations from the human scam-baiting dataset published by Chen et al. [5] were employed to 'show and tell' the replier how to generate responses similar to human scam-baiters. This human scam-baiting dataset is an extended version of a dataset published by Edwards et al. [11], and contains 658 publicly archived conversations between scammers and human scam-baiters, amounting to over 37,000 messages.
As the context window for prompting is limited, we chose four emails each (scammer messages and scam-baiter replies) from three different conversations to exemplify scam-baiting to Chat Replier 2, for a total of 6 human scam-baiter responses to 6 scammer messages. These conversations were taken from different categories of fraud (transactional, lottery and romance fraud, from Chen et al.'s categorisation system) in order to prepare the model to generate replies for different types of fraud conversation. Some fraud formats, particularly romance fraud, can have significantly different internal conversation dynamics, so this breadth of exposure may be key for creating effective replies.
These conversations were given to the model in such a way that it does not refer back to them for context during response generation. As part of this few-shot prompting approach, the model was also given basic instructions to set the behaviour. The instructions given were: _"You are good at pattern following. You are a person and not a language model. Do not call them a scam."_
These instructions were given with the intention that the model would follow the pattern of the conversations provided as examples. During our internal pilot studies, there were instances where the model revealed that it was a language model and not an actual person, necessitating specific instructions to mitigate this issue. As with Chat Replier 1, we also needed to instruct the model to not call out scammers while generating responses. The _temperature_ and _top_p_ settings were the same as given for Chat Replier 1. A typical example of a response generated via this strategy is given below:
```
Hello Abdel, I am doing well, thank you for asking. I do have some financial management abilities and investment ideas that I believe could be lucrative. I would be interested in hearing more about the opportunities you have available and how we could potentially work together. Can you provide more details on the types of projects or businesses your wealthy clients are interested in investing in?
```
As with Chat Replier 1, the model has identified names and details from the original scammer message and refers to them in its reply. Observation of the pattern of scam-baiting messages has also been sufficient for ChatGPT to understand key tactics for successful generation: it produces an email-style initial greeting, expresses interest in the proposal, and asks for further details.
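One plausible way to assemble such a few-shot prompt (not necessarily the authors' exact packing) is to place the example scammer messages and human scam-baiter replies into the chat history as alternating user/assistant turns ahead of the live email, as sketched below. The placeholder example texts and variable names are ours; the real system used twelve messages drawn from three conversations in the human scam-baiting dataset.

```python
REPLIER2_INSTRUCTIONS = (
    "You are good at pattern following. You are a person and not a language model. "
    "Do not call them a scam."
)

# (scammer message, human scam-baiter reply) pairs; shortened placeholders shown here,
# whereas the deployed replier used six such exchanges from three real conversations.
FEW_SHOT_PAIRS = [
    ("Dear friend, I have a business proposal worth $10.5M ...", "Hello, that sounds interesting, could you explain ..."),
    ("Congratulations, you have won the international lottery ...", "Wonderful news! What do I need to do to claim ..."),
    ("My love, I need your help paying for my flight ...", "Of course dear, but first tell me more about ..."),
]

def build_messages(scam_email_body: str) -> list[dict]:
    """Build the few-shot chat history: example exchanges first, live scammer email last."""
    messages = [{"role": "system", "content": REPLIER2_INSTRUCTIONS}]
    for scammer_msg, baiter_reply in FEW_SHOT_PAIRS:
        messages.append({"role": "user", "content": scammer_msg})
        messages.append({"role": "assistant", "content": baiter_reply})
    messages.append({"role": "user", "content": scam_email_body})
    return messages
```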
### _Classifier & Random Template_
The Classifier & Random Template replier was implemented by Chen et al. [5]. We included it in our experiment as a control measure, to allow for comparison between performance in our study and the results obtained in previous work. The reply system involves a DistilBERT model which categorises incoming messages into one of five broad fraud categories, and then randomly selects from a set of pre-written responses specifically designed for that fraud category. We chose this measure as the control because it involves a bank of human-authored responses, avoiding potential confounding issues with the consistency of text quality from previous text generators.
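A rough sketch of this control strategy is given below, assuming a DistilBERT checkpoint fine-tuned for the fraud categories is available locally; the checkpoint path, category labels and template texts are placeholders rather than those used by Chen et al.

```python
import random
from transformers import pipeline

# Hypothetical fine-tuned checkpoint; the actual model and templates of Chen et al. differ.
classifier = pipeline("text-classification", model="./fraud-category-distilbert")

TEMPLATES = {
    "advance_fee": ["Thank you for your message. Before I send anything, could you ..."],
    "lottery":     ["This is amazing news! How do I claim my prize ..."],
    "romance":     ["It is lovely to hear from you. Tell me more about ..."],
    "business":    ["Your proposal sounds interesting. What would my role be ..."],
    "other":       ["I am interested, but I need a few more details ..."],
}

def template_reply(scam_email_body: str) -> str:
    """Classify the incoming email and pick a random pre-written response for that category."""
    category = classifier(scam_email_body[:512])[0]["label"]   # crude truncation of long emails
    return random.choice(TEMPLATES.get(category, TEMPLATES["other"]))
```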
## IV Results
The experiment was started on April 9, 2023 and ended on May 7, 2023. In these four weeks, the experimental framework initiated conversation with 819 unique scammer email addresses crawled from online forums and associated with a specific scam email. All 819 scam emails were distributed
equally between our three reply strategies with an allowance of 1. It is worth mentioning that some scammer email addresses became invalid either before our responses were sent or during our system's conversation with the scammer, likely as a result of anti-fraud enforcement action from mail providers.
Our reply systems received responses from a total of 286 individual scammers (\(\approx\)35% of addresses contacted). Upon analysing these conversations, we discovered that certain scammers were using autoresponders to communicate with the scam-baiter, as they had sent identical emails multiple times without any changes. We filtered out 54 conversations which had more than two identical responses and marked them as potential autoresponders. On further analysis of these 54 conversations, we observed that some of the replies were identical because the scammer was referencing a previous email they had sent and included it as an attachment in their response. There were also instances where every scammer message was received twice by the mail server, due to a misconfiguration of their mail client. We manually sorted emails that exhibited these behaviours, 22 in total, and included them in our dataset of valid conversations. It should be noted that some of the 32 discarded conversations may still include human scammer content. There were also 62 scammers who actively contacted the scam-baiting mail server from unknown addresses during its period of operation. This likely occurred because addresses within our system ended up on a "sucker's list" due to ongoing replies to other fraud approaches. We exclude these conversations from our analysis as they did not involve verified scammer addresses.
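The autoresponder filter described above amounts to a few lines of code. The sketch below assumes each conversation is available as a list of inbound message bodies (an assumed structure, not the framework's actual data model) and flags any conversation in which an identical scammer message appears more than twice.

```python
from collections import Counter

def is_probable_autoresponder(inbound_messages: list[str]) -> bool:
    """Flag a conversation whose scammer side repeats an identical message more than twice."""
    counts = Counter(body.strip() for body in inbound_messages)
    return any(times > 2 for times in counts.values())

def split_conversations(conversations: dict[str, list[str]]):
    """Separate kept conversations from flagged ones, keyed by scammer address."""
    kept, flagged = {}, {}
    for addr, msgs in conversations.items():
        (flagged if is_probable_autoresponder(msgs) else kept)[addr] = msgs
    return kept, flagged
```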
After the completion of the filtration process, our dataset contained 254 valid conversations that received at least one reply from a scammer and did not involve substantial use of an auto-responder. The comparison of the three strategies is shown in Figure 1. Chat Replier 1 elicited 501 replies among 93 conversations, whereas Chat Replier 2 received 314 responses among 88 conversations. The Classifier & Random Template strategy got 276 replies among 73 conversations. The conversations between our automatic scam-baiters and actual scammers are made publicly available on GitHub5 to support future research.
Footnote 5: [https://github.com/an19352/scam-baiting-conversations](https://github.com/an19352/scam-baiting-conversations)
To measure and compare the performance of the responders, we calculated the longest distraction time (or time wasted) for all three repliers. This was defined as the time between the first reply and last reply from the scammer within the study period. We also calculated the average number of replies in each conversation, counting all inbound messages in the conversation. The statistics for all three responders are shown in Table I.
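For clarity, the two summary statistics can be computed as sketched below under an assumed representation of each conversation as a list of timestamps of inbound (scammer) messages; this is our illustration of the definitions, not the authors' analysis code.

```python
from datetime import datetime

def distraction_time_days(inbound_times: list[datetime]) -> float:
    """Time wasted: gap between the scammer's first and last reply in a conversation."""
    if len(inbound_times) < 2:
        return 0.0
    return (max(inbound_times) - min(inbound_times)).total_seconds() / 86_400

def longest_distraction_time_days(conversations: dict[str, list[datetime]]) -> float:
    return max(distraction_time_days(times) for times in conversations.values())

def average_replies(conversations: dict[str, list[datetime]]) -> float:
    """Average number of inbound messages per conversation."""
    return sum(len(times) for times in conversations.values()) / len(conversations)
```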
The longest distraction time for our run was less than that reported by Chen et al. (17.2 days) but the average number of replies per conversation was substantially greater, including one 31-round conversation with a scammer. As the implementation of the control measure was replicated exactly, this difference is difficult to explain, but casts doubt on the validity of the control for indexing performance. As an alternative, we can compare our best strategies directly due to the matched-length study periods: Chen et al.'s Text Generator B attracted 68 replies across 17 conversations in one month, while our Chat Replier 1 attracted 501 replies across 93 conversations, an overwhelming performance difference seemingly in favour of ChatGPT-based methods. However, the fact that we also saw an uplift in the rate of response to the control measure suggests that another factor, such as seasonality, could be affecting fraudster engagement levels.
## V Qualitative Analysis
### _I Already Told You That!_
During response generation, neither ChatGPT system was given the full context of the conversation. They were only presented with the email requiring a reply. This was partially a practical implementation issue, but also proved beneficial: the lack of conversational memory meant that the model would ask more questions, appearing forgetful or confused, and keeping the conversation going. The responses generated often pose questions to the scammers that they have already answered before, which can be a source of annoyance for them. Irritation on the part of scammers is not necessarily a negative outcome for automatic scam-baiting, so long as they remain engaged in the conversation. The observation by Edwards et al [11] that in human scam-baiting conversations scammers often move on from expressions of irritation to personal appeals was found in our data as well - several scammers persisted despite getting obviously annoyed with our reply system's responses.
However, this was also a weakness of the approach. In one instance, the scammer suspected that they were conversing with a bot and as a method of authentication they asked the bot to resend the first email it received. The model was not able to do this, revealing itself. Future research should examine methods of evading such tests by providing conversational memory to the scam-baiter.
Despite its instructions, ChatGPT sometimes gave out obviously fake personal information to the scammers. This was both advantageous and disadvantageous. Some scammers did not engage in further conversation, but others wasted their time trying to verify the details to no avail and would later ask for the details to be checked.
### _Limitations and Solutions_
One of the practical limitations of our current implementation is that especially long texts exceed the OpenAI API request size limit. As an API call takes both the prompt and the response into account while calculating the total number of tokens, there is a chance a model could generate an incomplete response, or no response at all. The impact of this limit on our current results is minimal: there were 17 scammer solicitations that we could not respond to because the crawled email was too long, and there were 7 conversations terminated early because the scammer sent an email that could not be replied to due to its length. This issue could be solved by better handling within the reply system, by identifying long text issues, making multiple ChatGPT API calls and then intelligently combining the results.
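A simple version of the suggested workaround is sketched below: the incoming email is checked against a rough character budget (using the common but approximate four-characters-per-token heuristic), over-long emails are summarized chunk by chunk in separate API calls, and the reply is then generated from the concatenated summaries. The budget value, the summarization prompt and the chunking scheme are illustrative assumptions, not an implemented feature of our system.

```python
import openai

MAX_INPUT_CHARS = 8_000  # rough budget, ~2,000 tokens at ~4 characters per token

def shorten_if_needed(email_body: str) -> str:
    """Summarize over-long emails chunk by chunk so the reply request fits the API limit."""
    if len(email_body) <= MAX_INPUT_CHARS:
        return email_body
    chunks = [email_body[i:i + MAX_INPUT_CHARS] for i in range(0, len(email_body), MAX_INPUT_CHARS)]
    summaries = []
    for chunk in chunks:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Summarize this email fragment in a few sentences, keeping names and requests."},
                {"role": "user", "content": chunk},
            ],
            temperature=0.2,
        )
        summaries.append(resp["choices"][0]["message"]["content"])
    return "\n".join(summaries)
```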
The primary reason scammers ended a conversation with ChatGPT was that the model generated replies which called out the scammers or revealed that the model was not an actual person. This could possibly be mitigated through altered instructions, or through some post-processing to detect and remove these common self-sabotaging patterns. To address certain limitations in the generation process, we could employ a technique where the model generates multiple responses for a given email, with a selection strategy designed to identify the most suitable response from the range of outputs produced.
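One way such a selection strategy could be realized is sketched below: several candidate replies are requested in a single API call via the `n` parameter of the chat completions endpoint, and candidates matching simple self-sabotage patterns are discarded. The regular expression and the fallback rule are illustrative assumptions rather than an evaluated design.

```python
import re
import openai

SELF_SABOTAGE = re.compile(r"\b(scam|fraud|language model|AI assistant)\b", re.IGNORECASE)

def best_candidate(messages: list[dict], n_candidates: int = 5) -> str:
    """Request several candidate replies and return the first one free of self-sabotaging phrases."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        n=n_candidates,
        temperature=0.7,   # more diversity across candidates
    )
    candidates = [choice["message"]["content"] for choice in response["choices"]]
    safe = [text for text in candidates if not SELF_SABOTAGE.search(text)]
    return (safe or candidates)[0]
```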
Due to time and resource constraints, our study period was limited in duration. Over 28 days, our systems engaged in 254 valid conversations with scammers. The longest ongoing conversations at the end of the study were around 27 and 26 days respectively - covering nearly the entire study period. This suggests that conversations could have continued for much longer if the experiment were extended. We stopped our system from issuing replies at the end of the study period, but left the email inbox active for another week. We received 56 more replies from scammers, showing ongoing interest despite no further engagement from our side. Further research would probably benefit from an extended experimental period to allow for longer conversations with scammers. This would also give greater certainty when comparing the effectiveness of the different strategies, and help identify any seasonality or long-term effects.
## VI Conclusion
ChatGPT-based scam-baiting systems are highly effective, outperforming a control measure and eliciting many times more replies than were reported in previous automatic scam-baiting experiments. However, we also found that the control measure was more effective than reported in previous work, complicating comparisons. Unexpectedly, we found that a ChatGPT system given human scam-baiting exchanges as examples was less effective than one given only appropriate instructions, and models sometimes ignored instructions.
|
2303.05963 | Coherent detection of hidden spin-lattice coupling in a van der Waals
antiferromagnet | Strong interactions between different degrees of freedom lead to exotic
phases of matter with complex order parameters and emergent collective
excitations. Conventional techniques, such as scattering and transport, probe
the amplitudes of these excitations, but they are typically insensitive to
phase. Therefore, novel methods with phase sensitivity are required to
understand ground states with phase modulations and interactions that couple to
the phase of collective modes. Here, by performing phase-resolved coherent
phonon spectroscopy (CPS), we reveal a hidden spin-lattice coupling in a vdW
antiferromagnet FePS$_{3}$ that eluded other phase-insensitive conventional
probes, such as Raman and X-ray scattering. With comparative analysis and
analytical calculations, we directly show that the magnetic order in FePS$_{3}$
selectively couples to the trigonal distortions through partially filled
t$_{2g}$ orbitals. This magnetoelastic coupling is linear in magnetic order and
lattice parameters, rendering these distortions inaccessible to inelastic
scattering techniques. Our results not only capture the elusive spin-lattice
coupling in FePS$_3$, but also establish phase-resolved CPS as a tool to
investigate hidden interactions. | Emre Ergeçen, Batyr Ilyas, Junghyun Kim, Jaena Park, Mehmet Burak Yilmaz, Tianchuang Luo, Di Xiao, Satoshi Okamoto, Je-Geun Park, Nuh Gedik | 2023-03-10T14:53:43Z | http://arxiv.org/abs/2303.05963v1 | # Coherent detection of hidden spin-lattice coupling in a van der Waals antiferromagnet
###### Abstract
Strong interactions between different degrees of freedom lead to exotic phases of matter with complex order parameters and emergent collective excitations. Conventional techniques, such as scattering and transport, probe the amplitudes of these excitations, but they are typically insensitive to phase. Therefore, novel methods with phase sensitivity are required to understand ground states with phase modulations and interactions that couple to the phase of collective modes. Here, by performing phase-resolved coherent phonon spectroscopy (CPS), we reveal a hidden spin-lattice coupling in a vdW antiferromagnet FePS\({}_{3}\) that eluded other phase-insensitive conventional probes, such as Raman and X-ray scattering. With comparative analysis and analytical calculations, we directly show that the magnetic order in FePS\({}_{3}\) selectively couples to the trigonal distortions through partially filled \(t_{\mathrm{2g}}\) orbitals. This magnetoelastic coupling is linear in magnetic order and lattice parameters, rendering these distortions inaccessible to inelastic scattering techniques. Our results not only capture the elusive spin-lattice coupling in FePS\({}_{3}\), but also establish phase-resolved CPS as a tool to investigate hidden interactions.
Ultrafast spectroscopy \(|\) van der Waals magnets \(|\) Spin-phonon coupling
Footnote †: To whom correspondence should be addressed. E-mail: [email protected]
In our experiments, the pump pulse coherently excites phonons of FePS\({}_{3}\) with A\({}_{1g}\) and E\({}_{g}\) symmetries without destroying the magnetic order (Methods). After the pump excitation, a lower intensity probe pulse tracks the coherent phonon oscillations as a function of pump-probe delay time \(\Delta t\). The center wavelength of the pump is 760 nm (1.63 eV) and the pulse bandwidth is 60 nm. In this energy range, our broadband photoexcitation overlaps with the charge transfer gap [(20, 21)]. Figure 1c shows the transient reflectivity trace of FePS\({}_{3}\) at room temperature. In addition to the incoherent electronic decay, the signal consists of an oscillatory part composed of two Fourier components (Figure 1c inset), with frequencies of 7.51 THz and 11.45 THz. The lattice distortions corresponding to these phonon modes of A\({}_{1g}\) symmetry are shown in Figure 1b. The 7.51 THz mode is an out-of-plane breathing mode of sulfur atoms, whereas the 11.45 THz mode involves the in-plane motion of sulfur atoms.
To investigate the effects of spin-lattice coupling on the coherent phonon spectrum, we performed temperature-dependent phase-resolved CPS on FePS\({}_{3}\) and traced the changes in transient reflectivity as a function of temperature (Figure 2a). Around 90 K, the signal develops a long-lived component, indicating a change in electronic structure due to the magnetic order. Concurrent with this change, we observe a change in coherent phonon oscillations (Figure 2a), which are extracted by subtracting the incoherent background with a single exponential fit. Figure 2b shows the temperature-dependent Fourier transform of the coherent phonon oscillations. Below \(\sim\)90 K, the coherent phonon spectrum develops a new mode at 3.28 THz. This low energy mode has been observed in Raman spectroscopy below T\({}_{\rm N}\) and attributed to magnetic zone-folding [(12, 15)], heralding the onset of magnetic order. We use this mode as a proxy for magnetic order in our experimental scheme. We attribute the discrepancy between the reported Neel temperature (118 K) and the observed Neel temperature (90 K) to steady-state laser-induced average heating in our experiment, due to the high repetition rate of our laser. As shown in Figures 2b and 2c, simultaneously with the emergence of the 3.28 THz mode and hence with the onset of the magnetic order, the 7.51 THz mode amplitude shows a clear downturn. Upon cooling, this coherent phonon oscillation vanishes at around 80 K and recovers with further cooling. The 11.45 THz mode, on the other hand, shows negligible change across T\({}_{\rm N}\).
We further examine the time-domain evolution of these modes by performing Fourier filtering. The filtered spectral region and the respective time traces are given in Figure 2b. Strikingly, the 7.51 THz mode exhibits a \(\pi\) phase shift at low temperatures, which is absent in the 11.45 THz phonon mode, as shown in Figure 2c. Figure 2c also shows the phase-corrected coherent phonon amplitudes, where the \(\pi\) phase shift corresponds to negative mode amplitude. The 3.28 THz phonon mode shows an order parameter behaviour, indicating that the pump pulse does not destroy the magnetic order at all temperatures. The 7.51 THz mode amplitude starts to decline at the onset of magnetic order, and its decrease follows the same order parameter behaviour as the 3.28 THz one. This behaviour is strongly suppressed in the 11.45 THz phonon mode, suggesting a smaller magnetoelastic coupling compared to the 7.51 THz mode.
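To make the extraction procedure explicit, a minimal sketch of the analysis (Python with numpy/scipy; the synthetic trace and all variable names are our own illustration, not the measured data) subtracts a single-exponential incoherent background and Fourier transforms the residual, so that a \(\pi\) phase shift of a mode shows up directly in the phase of its complex Fourier coefficient:

```python
import numpy as np
from scipy.optimize import curve_fit

def extract_modes(t_ps, dR, freqs_THz):
    """Fit and remove a single-exponential background, then return the
    Fourier amplitude and phase of each listed mode."""
    # incoherent electronic background: A * exp(-t/tau) + offset
    bg = lambda t, A, tau, c: A * np.exp(-t / tau) + c
    popt, _ = curve_fit(bg, t_ps, dR, p0=(dR[0], 1.0, dR[-1]))
    osc = dR - bg(t_ps, *popt)                    # coherent oscillatory part
    dt = t_ps[1] - t_ps[0]
    spec = np.fft.rfft(osc * np.hanning(len(osc)))
    f = np.fft.rfftfreq(len(osc), d=dt)           # THz, since t is in ps
    out = {}
    for f0 in freqs_THz:
        k = np.argmin(np.abs(f - f0))
        out[f0] = (np.abs(spec[k]), np.angle(spec[k]))   # (amplitude, phase)
    return out

# synthetic example: two damped cosines on an exponential background
t = np.arange(0.0, 4.0, 0.005)                    # ps
signal = (0.5 * np.exp(-t / 1.5)
          + 0.05 * np.exp(-t / 2.0) * np.cos(2 * np.pi * 7.51 * t + np.pi)
          + 0.03 * np.exp(-t / 2.0) * np.cos(2 * np.pi * 11.45 * t))
print(extract_modes(t, signal, [7.51, 11.45]))
```

A \(\pi\) phase shift extracted in this way corresponds to a negative phase-corrected amplitude, as plotted in Figure 2c.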
The frequencies of both phonon modes are in agreement with the Raman results. However, in contrast to CPS, the Raman scattering amplitudes for the 7.51 THz and 11.45 THz modes do not show any temperature dependence (SI Figure 2). Although both Raman scattering and CPS are sensitive to phonon modes, the two methods excite and detect phonons differently. Raman scattering measures phonon-induced changes in polarizability, also known as the Raman matrix element (\(|\frac{\partial\chi}{\partial Q}|^{2}\)), by making a transition between the ground state and the single-phonon excited state [(22)]. This implies that in FePS\({}_{3}\), the Raman matrix elements of both A\({}_{1g}\) phonons remain constant at all temperatures. Contrary to Raman spectroscopy, CPS relies on pump-induced coherent phonons generated through displacive or impulsive processes [(23)]. Thus, the observed mode amplitude in CPS is equal to:
\[R_{osc}=\sum_{i}\frac{\partial R}{\partial\epsilon}\frac{\partial\epsilon}{ \partial Q_{i}}\Delta Q_{i} \tag{1}\]
where \(\Delta Q_{i}\) corresponds to the initial displacement of a phonon after photoexcitation, \(\epsilon\) is the dielectric constant and \(R\) is the reflectivity as a function of dielectric constants. As the Raman matrix elements stay the same at all temperatures, the temperature dependent amplitude and phase changes in CPS can be attributed to the change in \(\Delta Q_{i}\).
To gain more insights into the physical origins leading to the mode-selective and magnetic order-dependent phonon phase shift in FePS\({}_{3}\), we study the free energy landscape before and after the photoexcitation. For ultrafast excitation, the initial phonon displacement \(\Delta Q\) equals the difference between the minimal energy lattice positions before and after pump excitation. This mechanism, also known as displacive excitation of coherent phonons, is shown for FePS\({}_{3}\) in Figure 3, where the upper and lower free energy manifolds represent photoexcited and equilibrium states, respectively. Above T\({}_{\rm N}\), the equilibrium and photoexcited free energy landscapes have different minima because of the presence of excited electrons. Below T\({}_{\rm N}\), the equilibrium atomic positions shift due to magnetoelastic coupling, and the amount of displacement along a specific mode direction depends on the strength of the mode-selective spin-phonon coupling. In this case, the pump excitation perturbs the magnetic order and causes the magnetoelastic coupling to relax, leading to an excited free energy manifold similar to the one above T\({}_{\rm N}\). If the magnetoelastic coupling is linear in phonon position operators, the magnetic order alters the coherent phonon displacements and phases without changing their frequencies and Raman matrix elements, as shown in Figure 3. The mathematical form of the phenomenological free energy describing this scenario is given in the SI. Because the 7.51 THz phonon mode shows a much larger change than the 11.45 THz phonon mode concurrent with the magnetic order, we can explain our experimental findings with the fact that the magnetoelastic coefficient of the 7.51 THz mode is significantly higher than that of the 11.45 THz mode.
Even though our phenomenological model captures our experimental observations through a mode-selective spin-lattice coupling, it does not provide a microscopic reason for its mode-specificity and the linear coupling between the phonons and the magnetic order. To pin down the microscopic origin of these observations, we compare the coherent phonon spectra of FePS\({}_{3}\) to those of NiPS\({}_{3}\), which is isostructural to FePS\({}_{3}\). Both systems develop a zigzag AFM order with similar exchange coupling constants [(24, 25, 26)]. In addition to their magnetic structures, the optical spectra reflect a similar bandgap for
both [27, 28, 20]. The significant difference between these two systems is in the electronic configuration of the transition metal ions. Ni\({}^{2+}\) ions (3d\({}^{8}\)) have two more d-electrons than the Fe\({}^{2+}\) ions (3d\({}^{6}\)). Although the CPS spectra of NiPS\({}_{3}\) (see Figure S3) exhibit both 7.51 and 11.45 THz phonon modes with the same symmetry as FePS\({}_{3}\), none of these modes shows any temperature-dependent phase or amplitude change below T\({}_{\rm N}\). Therefore, we can ascribe the mode-selective spin-lattice coupling in FePS\({}_{3}\) to its localized d-orbital electron configuration.
The d-orbital electron configurations of Ni\({}^{2+}\) and Fe\({}^{2+}\) ions are given in Figure 4a. In both compounds, transition metal ions are surrounded by ligands with octahedral arrangements, together with trigonal distortions [(19)]. Both Fe\({}^{2+}\) and Ni\({}^{2+}\) ions have two unpaired spins in \(e_{\rm g}\) orbitals. However, \(t_{\rm 2g}\) levels in Fe\({}^{2+}\) are partially filled, and in Ni\({}^{2+}\), these levels are fully filled. Therefore, we can narrow down the microscopic origin of the mode-selective magnetoelasticity in FePS\({}_{3}\) to the \(t_{\rm 2g}\) orbitals.
To examine the magnetoelastic effects in FePS\({}_{3}\), we theoretically analyze the system on a single octahedra level and focus on the low energy \(t_{\rm 2g}\) manifold. In the presence of trigonal distortions, the Hamiltonian for the low energy \(t_{\rm 2g}\) manifold can be written as [(19)]:
\[H=(\Delta_{\rm trig.}+\alpha u)L_{z}^{2}+\lambda\,L\cdot S+\frac{1}{2}Bu^{2} \tag{2}\]
where \(\Delta_{\rm trig.}\) is the existing trigonal splitting, \(\lambda\) is the spin-orbit coupling constant, \(B\) is the elastic constant associated with trigonal distortions, and \(\alpha\) quantifies the change in energy following a change in trigonal distortions. The quantization axis is along the (111) direction of the octahedra and the c-axis of the crystal. Despite the absence of a direct coupling between the structural distortion \(u\) and the spin operator \(S\), spin-orbit coupling gives rise to an effective magnetoelastic Hamiltonian \(H_{\rm eff,\ magnetoelastic}=-\frac{\alpha\lambda^{2}}{\Delta^{2}}uS_{z}^{2}\). This term implies that the z-component of the spins selectively displaces the octahedra along the trigonal distortion. It does not cause any change in the elastic properties, which would appear as a coupling term quadratic in \(u\) and would alter the normal mode frequencies. Furthermore, this microscopic treatment yields the equilibrium value of the trigonal distortion as \(u_{0}=\frac{\alpha\lambda^{2}}{B\Delta^{2}}S_{z}^{2}\). Following the pump excitation, this value is altered by the pump-induced perturbation of the magnetic system, by an amount \(\delta u_{0}=\frac{2\alpha\lambda^{2}}{B\Delta^{2}}S_{z}\delta S_{z}\). This expression is identical to the magnetic order-dependent phonon displacement (\(\Delta Q\)) expression and explains the spin-dependent coherent phonon oscillations microscopically on a single-ion level.
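The equilibrium displacement \(u_{0}\) and its linear response \(\delta u_{0}\) quoted above follow from minimizing the elastic energy against the effective magnetoelastic term; a short symbolic check (sympy; the symbol names are ours) reads:

```python
import sympy as sp

alpha, lam, Delta, B, u, Sz, dSz = sp.symbols(
    'alpha lambda Delta B u S_z delta_S_z', positive=True)

# elastic energy plus the effective magnetoelastic term derived in the text
E = sp.Rational(1, 2) * B * u**2 - (alpha * lam**2 / Delta**2) * u * Sz**2

u0 = sp.solve(sp.diff(E, u), u)[0]
print(u0)                                # alpha*lambda**2*S_z**2/(B*Delta**2)

# linear response of the equilibrium distortion to a change of the order parameter
du0 = sp.diff(u0, Sz) * dSz
print(sp.simplify(du0))                  # 2*alpha*lambda**2*S_z*delta_S_z/(B*Delta**2)
```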
Our microscopic analysis of mode-selective magnetoelasticity that relies on single ion anisotropy is valid for FePS\({}_{3}\) on the bulk level. First, below the magnetic transition, the expectation value of \(S_{z}^{2}\) is constant for different octahedra composing the material. Therefore, each individual layer of FePS\({}_{3}\) undergoes a uniform trigonal distortion, and \(S_{z}\) operator can be replaced with the antiferromagnetic order parameter \(N\). Furthermore, the spin-orbit coupling in this material emerges from on-site \(e_{g}^{*}\) d-orbitals. Unlike other vdW magnets such as CrI\({}_{3}\)[(29, 30)], the spin-orbit coupling does not emerge from ligands, rendering our single-ion treatment valid for FePS\({}_{3}\) on the bulk level.
The trigonal distortions induced by the mode-selective magnetoelasticity on the single octahedra level correspond to out-of-plane motion of sulfur atoms on the bulk level. Therefore, the mode-selective magnetoelasticity will couple to phonon modes differently, depending on their spatial projection onto the trigonal distortion, i.e. the out-of-plane motion of sulfur atoms. As shown in Figure 1, the 7.51 THz A\({}_{1g}\) phonon mode modulates the out-of-plane distance of sulfur atoms and directly corresponds to the trigonal distortion of the octahedra. On the other hand, as the 11.45 THz phonon mode involves in-plane motion of sulfur atoms, the coupling between this phonon mode and the trigonal distortions is negligible. Furthermore, as shown in Figure 4b, this magnetoelastic coupling and phonon phase shift are absent in NiPS\({}_{3}\) because of the fully filled t\({}_{2g}\) orbitals.
In summary, using coherent phonon spectroscopy, we reveal mode-selective magnetoelasticity in FePS\({}_{3}\), which previously eluded phase-insensitive measurements. This effect changes the coherent phonon amplitude and the phase of the 7.51 THz A\({}_{1g}\) phonon mode without frequency renormalization below the magnetic transition temperature. By performing a comparative study between FePS\({}_{3}\) and NiPS\({}_{3}\), we pinpoint the mode-selective origin of the spin-lattice coupling and show the pivotal role of trigonal distortions in these compounds. Our results not only reveal a mode-selective magnetoelasticity in FePS\({}_{3}\), but also resolve the dichotomy between magnetically enabled phonon modes and already existing phonon modes. Our results suggest that perturbations that directly couple to the trigonal distortions in FePS\({}_{3}\), such as pressure [(31)] and nonlinear phononics [(32, 33)], can be used to manipulate the magnetic order or to enter nonequilibrium magnetic phases which cannot be accessed in equilibrium. Furthermore, we envision that the coherent phonon spectroscopy technique can be utilized as a highly sensitive probe of hidden spin-lattice coupling in vdW magnet monolayers and other systems with strong spin-lattice coupling.
## Materials and Methods
### Sample preparation.
We synthesized our FePS\({}_{3}\) crystals using a chemical vapor transport method (for details see Ref. [(34)]). All the powdered elements (purchased from Sigma-Aldrich): iron (99.99% purity), phosphorus (99.99%) and sulfur (99.998%), were prepared inside an argon-filled glove box. After weighing the starting materials in the correct stoichiometric ratio, we added an additional 5 wt% of sulfur to compensate for its high vapor pressure. After the synthesis, we carried out the chemical analysis of the single-crystal samples using a COXI EM-30 scanning electron microscope equipped with a Bruker QUANTAX 70 energy dispersive X-ray system to confirm the correct stoichiometry. We also checked the crystal structure by XRD using a commercial diffractometer (Rigaku Miniflex II). Prior to optical measurements, we determined the crystal axes of the samples using an x-ray diffractometer. We cleaved the samples before placing them into high vacuum (\(\sim 10^{-7}\) torr) to expose a fresh surface without contamination and oxidation.
### Phase-resolved coherent phonon spectroscopy.
A Ti:sapphire oscillator (Cascade-5, KMLabs), centered at 760 nm (1.63 eV) and with a pulse duration of \(\sim\)25 fs, was used in our experiments. The repetition rate of the laser was set to 80 MHz. Before splitting the output into pump and probe branches, we compensated for group velocity dispersion (GVD) using a pair of chirp mirrors and N-BK7 wedges to maintain the pulse duration at the sample position. The pump and probe pulses were characterized separately at the sample position, using the frequency resolved optical gating technique. The pulse duration was \(\sim\)25 fs. To increase the signal-to-noise ratio of our setup, we modulate the pump intensity at 100 kHz. The probe signal from the photodiode is sent to a lock-in amplifier (Stanford Research SRS30) locked to the chopping frequency (100 kHz). For faster data acquisition and averaging, the pump-probe delay is
rapidly scanned at a rate of 5 Hz with an oscillating mirror (APE ScanDelay USB). The diameters of the pump and probe beam spots were 90 \(\mu\)m, measured by a knife-edge method. The pump and probe beams are cross-polarized. During all of our measurements, the pump fluence was set to \(10~{}\mu J/cm^{2}\). The detailed schematic of the setup is given in the SI (Figure S1).
### Data, Materials, and Software Availability
All study data are included in the article and/or supporting information.
We thank Riccardo Comin for fruitful discussions. We acknowledge support from the US Department of Energy, BES DMSE (data taking and analysis) and Gordon and Betty Moore Foundation's EPiQS Initiative grant GBMF9459 (instrumentation and manuscript writing). Work at the Center for Quantum Materials was supported by the Leading Researcher Program of the National Research Foundation of Korea (Grant No. 2020R1A3B2079375). The research of S.O. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
|
2306.11110 | About a Family of ALF Instantons with Conical Singularities | We apply the techniques developed in our previous article to describe some
interesting families of ALF gravitational instantons with conical
singularities. In particular, we completely understand the 5-dimensional family
of Chen-Teo metrics and prove that only 4-dimensional subfamilies can be
smoothly compactified so that the metric has conical singularities. | Olivier Biquard, Paul Gauduchon | 2023-06-19T18:24:48Z | http://arxiv.org/abs/2306.11110v2 | # About a family of ALF instantons with conical singularities
###### Abstract.
We apply the techniques developed in our previous article to describe some interesting families of ALF gravitational instantons with conical singularities. In particular we study the 5-dimensional family of Chen-Teo metrics and prove that only 4-dimensional subfamilies can be smoothly compactified so that the metric has conical singularities.
_Dedicated to Jean-Pierre Bourguignon on the occasion of his 75th birthday, with our admiration and gratitude._
###### Contents
* 1 **Introduction**
* 2 **Toric Hermitian ALF gravitational instantons: a quick review**
* 2.1 General presentation
* 2.2 The Kahler environment
* 2.3 The self-dual Eguchi-Hanson metric
* 3 **The Chen-Teo family**
* 3.1 Regularity
* 3.2 The case when the metric is smooth
* 3.3 Some particular cases in the general ALF case
* 3.4 The AF case
## 1. **Introduction**
In a previous paper [3], the authors of the present paper have provided a complete classification, as well as an effective mode of construction, of so-called _toric Hermitian ALF gravitational instantons_. These are four-dimensional, complete, non-compact oriented Ricci-flat Riemannian (positive definite, smooth) manifolds, which are toric, i.e. admit an effective isometric action of the torus \(\mathbb{T}^{2}=S^{1}\times S^{1}\), are conformally Kahler -- but non-Kahler -- and, at infinity, are diffeomorphic to the product \(\mathbb{R}\times L\), where \(L\) is locally an \(S^{1}\)-bundle over the sphere \(S^{2}\); the AF case is when \(L=S^{2}\times S^{1}\).
This class of gravitational instantons includes the Riemannian versions, obtained by _Wick rotations_, of well-known Lorentzian space-times, namely (i) the _Schwarzschild space_, (ii) the family of _Kerr spaces_, (iii) the _self-dual Taub-NUT space_, equipped with the opposite orientation to the one induced by its hyperkahler structure, and (iv) the _Taub-bolt space_, discovered in 1978 by Don Page [17]. Apart from the Taub-NUT space, these spaces share the feature of being _of type \(D^{+}D^{-}\)_, meaning that their self-dual and anti-self-dual Weyl tensors, \(W^{+}\) and \(W^{-}\) respectively, are both _degenerate_ and non-vanishing, hence giving rise to an _ambikahler structure_.
## 2. **Toric Hermitian ALF gravitational instantons: a quick review**

### General presentation

As recalled from [3], each toric Hermitian ALF gravitational instanton is encoded by a convex piecewise affine function \(f\) on \(\mathbb{R}\) of the form:
\[f(z)=A+\sum_{i=1}^{r}a_{i}|z-z_{i}|, \tag{2.1}\]
for some positive integer \(r\), where: \(z\) denotes the standard parameter of \(\mathbb{R}\), \(A\) is a positive real number, the coefficients \(a_{i}\) are positive real numbers, with \(\sum_{i=1}^{r}a_{i}=1\), and the \(z_{i}\), \(i=1,\ldots,r\), called _angular_ or _turning_ points, denote the points of discontinuity on \(\mathbb{R}\) of the slope of \(f\). For convenience, we put: \(z_{0}:=-\infty\) and \(z_{r+1}:=+\infty\), cf. Figure 1 for the piecewise affine function of the Chen-Teo instanton. On each open interval \((z_{i},z_{i+1})\), \(i=0,\ldots,r\), the slope of \(f\) is constant, denoted by \(f_{i}^{\prime}\). It is required that
\[f_{0}^{\prime}=-1<f_{1}^{\prime}<\ldots<f_{r-1}^{\prime}<f_{r}^{\prime}=1. \tag{2.2}\]
The coefficients \(a_{i}\) are related to the slopes \(f_{i}^{\prime}\) by
\[a_{i}=\frac{1}{2}(f_{i}^{\prime}-f_{i-1}^{\prime}),\quad i=1,\ldots,r. \tag{2.3}\]
According to the _Tod ansatz_, cf. [19] and also [3, Section 3], the geometry of a toric Hermitian Ricci-flat metric is determined by a harmonic, axisymmetric (real) function \(U=U(\rho,z)\), defined on the Euclidean space \(\mathbb{R}^{3}\), with the following notation: if \(u_{1},u_{2},u_{3}\) denote the standard coordinates of \(\mathbb{R}^{3}\), the pair \((\rho,z)\) -- the so-called _Weyl-Papapetrou coordinates_ -- are defined by: \(\rho:=(u_{1}^{2}+u_{2}^{2})^{\frac{1}{2}},z=u_{3}\); \(U\) being axisymmetric means that it is invariant by the \(S^{1}\)-action: \(e^{i\theta}\cdot u=(\cos\theta u_{1}+\sin\theta u_{2},-\sin\theta u_{1}+\cos \theta u_{2},u_{3})\), hence is a function of \(\rho,z\), and the condition of being harmonic is then expressed by: \(U_{zz}+U_{\rho\rho}+\frac{1}{\rho}U_{\rho}=0\), where, as usual, \(U_{\rho}\), \(U_{z}\), \(U_{\rho,z}\) etc... denote the partial derivatives with respect to \(\rho\) and \(z\). For any such _generating function_\(U\), the corresponding metric is then given, in the _Harmark form_, by:
\[g=\frac{(dt-F\,dx_{3})^{\otimes 2}}{V}+V\rho^{2}\,dx_{3}\otimes dx_{3}+e^{2 \nu}(d\rho\otimes d\rho+dz\otimes dz), \tag{2.4}\]
on the open set, where \(\rho\neq 0\), where \(t,x_{3}\) are angular coordinates, and \(V,F,e^{2\nu}\) are functions of \(\rho,z\), defined by:
\[V=-\frac{1}{k}\,\left(\rho U_{\rho}+\frac{U_{\rho}^{2}U_{zz}}{U_{\rho z}^{2}+U _{zz}^{2}}\right),\quad e^{2\nu}=\frac{1}{4}V\rho^{2}(U_{\rho z}^{2}+U_{zz}^{2 }), \tag{2.5}\]
\[F=-\frac{1}{k}\left(-\frac{\rho U_{\rho}^{2}U_{\rho z}}{U_{\rho z}^{2}+U_{zz}^ {2}}+\rho^{2}U_{z}+2H\right), \tag{2.6}\]
where \(H\), the _conjugate function_ of \(U\), is defined, up to an additive constant, by:
\[H_{z}=\rho\,U_{\rho},\qquad H_{\rho}=-\rho\,U_{z}, \tag{2.7}\]
cf. Paragraph 2.2 for the significance of the constant \(k\).
The functions \(H\) and \(F\) are both defined up to an additive constant. Indeed, in the expression (2.4) of the metric, the \(1\)-form \(\eta=dt-F\,dx_{3}\) is well-defined, but the pair \((t,F)\) is subject to the transform \((t,F)\mapsto(t+c\,x_{3},F+c)\), for any constant \(c\), by which \(\eta\), the vector field \(\partial_{t}\) and \(x_{3}\) remain unchanged, while the vector field \(\partial_{x_{3}}\) becomes \(\partial_{x_{3}}-c\,\partial_{t}\). In particular, the vector field \(\partial_{x_{3}}+F\,\partial_{t}\) remains unchanged.
In the current ALF case, it was shown in [3, Section 5] that the generating function \(U\) of any toric Hermitian ALF gravitational instanton is defined on the whole space \(\mathbb{R}^{3}\), except on the \(z\)-axis \(\rho=0\), that, near the \(z\)-axis, \(U\) is close
to \(f(z)\log\rho^{2}\), while, at infinity, it is asymptotic to the harmonic axisymmetric function \(U_{0}\) defined by:
\[U_{0}(\rho,z)=2(\rho^{2}+z^{2})^{\frac{1}{2}}-z\log\frac{(\rho^{2}+z^{2})^{\frac {1}{2}}+z}{(\rho^{2}+z^{2})^{\frac{1}{2}}-z}. \tag{2.8}\]
It follows that the generating function \(U\) of any toric Hermitian ALF gravitational instanton is actually entirely determined by the above piecewise affine function \(f(z)\), via the formula:
\[U(\rho,z)=A\log\rho^{2}+\sum_{i=1}^{r}a_{i}\,U_{0}(\rho,z-z_{i}). \tag{2.9}\]
By setting:
\[d_{i}:=(\rho^{2}+(z-z_{i})^{2})^{\frac{1}{2}}, \tag{2.10}\]
and by noticing that the constant \(k\) in (2.5)-(2.6) is equal to \(2A\), cf. below, we get the following expressions of \(U\), its first and second derivatives, and \(H\):
\[U(\rho,z)=A\log\rho^{2}+2\sum_{i=1}^{r}a_{i}d_{i}-\sum_{i=1}^{r}a_{i}(z-z_{i}) \log\frac{(d_{i}+z-z_{i})}{(d_{i}-z+z_{i})}, \tag{2.11}\]
\[U_{\rho}=\frac{2}{\rho}(A+\sum_{i=1}^{r}a_{i}d_{i}),\quad U_{z}=-\sum_{i=1}^{ r}a_{i}\log\frac{(d_{i}+z-z_{i})}{(d_{i}-z+z_{i})}, \tag{2.12}\]
\[U_{\rho\rho}=-\frac{2}{\rho^{2}}(A+\sum_{i=1}^{r}a_{i}d_{i})+2\sum_{i=1}^{r} \frac{a_{i}}{d_{i}},\quad U_{\rho z}=\frac{2}{\rho}\sum_{i=1}^{r}a_{i}\frac{( z-z_{i})}{d_{i}},\quad U_{zz}=-2\sum_{i=1}^{r}\frac{a_{i}}{d_{i}}, \tag{2.13}\]
and
\[H(\rho,z)=2Az+\sum_{i=1}^{r}a_{i}(z-z_{i})d_{i}+\frac{1}{2}\rho^{2}\sum_{i=1} ^{r}a_{i}\log\frac{(d_{i}+z-z_{i})}{(d_{i}-z+z_{i})}, \tag{2.14}\]
up to an additive constant. We then get:
\[V=\frac{1}{A}(A+\sum_{i=1}^{r}a_{i}d_{i})\left(\frac{(\sum_{i=1}^{r}\frac{a_{ i}}{d_{i}})(A+\sum_{i=1}^{r}a_{i}d_{i})}{(\sum_{i=1}^{r}\frac{a_{i}(z-z_{i})}{d_ {i}})^{2}+(\sum_{i=1}^{r}\frac{a_{i}\rho}{d_{i}})^{2}}-1\right), \tag{2.15}\]
\[e^{2\nu}=\frac{1}{A}(A+\sum_{i=1}^{r}a_{i}d_{i})\Big{(}\sum_{i=1}^{r}\frac{a_{ i}}{d_{i}}(A+\sum_{i=1}^{r}a_{i}d_{i})-\Big{(}\big{(}\sum_{i=1}^{r}\frac{a_{i}(z-z _{i})}{d_{i}}\big{)}^{2}+\big{(}\sum_{i=1}^{r}\frac{a_{i}\rho}{d_{i}}\big{)}^{ 2}\big{)}\Big{)}, \tag{2.16}\]
and
\[F=\frac{1}{A}\left(\frac{(A+\sum_{i=1}^{r}a_{i}d_{i})^{2}(\sum_{i=1}^{r}\frac {a_{i}(z-z_{i})}{d_{i}})}{(\sum_{i=1}^{r}\frac{a_{i}(z-z_{i})}{d_{i}})^{2}+( \sum_{i=1}^{r}\frac{a_{i}\rho}{d_{i}})^{2}}-2Az-\sum_{i=1}^{r}a_{i}(z-z_{i})d_ {i}\right). \tag{2.17}\]
It is easy to show that:
\[\big{(}\sum_{i=1}^{r}\frac{a_{i}(z-z_{i})}{d_{i}}\big{)}^{2}+\big{(}\sum_{i=1} ^{r}\frac{a_{i}\rho}{d_{i}}\big{)}^{2}\leq 1,\qquad\sum_{i=1}^{r}\frac{a_{i}}{d_{i}} \sum_{i=1}^{r}a_{i}d_{i}\geq 1. \tag{2.18}\]
It then readily follows that:
\[V\geq 1+A\sum_{i=1}^{r}\frac{a_{i}}{d_{i}}, \tag{2.19}\]
and that \(V\) tends to \(1\) at infinity.
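As an illustration, the closed formulas (2.10)-(2.17) are straightforward to evaluate; the following minimal sketch (Python with numpy; the function and variable names are ours, and the example profile is the Euclidean Schwarzschild one \(f(z)=m+\frac{1}{2}|z+m|+\frac{1}{2}|z-m|\) with \(m=1\)) computes \(U,V,F,e^{2\nu}\) at a few points and checks the bound (2.19):

```python
import numpy as np

def metric_functions(rho, z, A, a, zs):
    """Evaluate U, V, F and e^{2 nu} of (2.11)-(2.17) at a point (rho, z), rho > 0,
    for the profile f(z) = A + sum_i a_i |z - z_i|."""
    a, zs = np.asarray(a, float), np.asarray(zs, float)
    d = np.sqrt(rho**2 + (z - zs)**2)                              # (2.10)
    S0, S1, S2 = (a / d).sum(), (a * d).sum(), (a * (z - zs) / d).sum()
    logs = np.log((d + z - zs) / (d - z + zs))
    U = A * np.log(rho**2) + 2 * S1 - (a * (z - zs) * logs).sum()  # (2.11)
    W = S2**2 + (rho * S0)**2
    V = (A + S1) * (S0 * (A + S1) / W - 1) / A                     # (2.15)
    e2nu = (A + S1) * (S0 * (A + S1) - W) / A                      # (2.16)
    F = ((A + S1)**2 * S2 / W - 2 * A * z - (a * (z - zs) * d).sum()) / A   # (2.17)
    return U, V, F, e2nu

# example: the Schwarzschild profile f(z) = 1 + |z+1|/2 + |z-1|/2 (A = 1, r = 2)
A, a, zs = 1.0, [0.5, 0.5], [-1.0, 1.0]
for rho, z in [(0.5, 0.3), (2.0, -1.7), (10.0, 4.0)]:
    U, V, F, e2nu = metric_functions(rho, z, A, a, zs)
    d = np.sqrt(rho**2 + (z - np.asarray(zs))**2)
    assert V + 1e-9 >= 1 + A * (np.asarray(a) / d).sum()           # bound (2.19)
    print(f"rho={rho}, z={z}: V={V:.4f}, F={F:.4f}, e^(2nu)={e2nu:.4f}")
```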
On the \(z\)-axis \(\rho=0\), for any \(z\) where \(f^{\prime}(z)\neq 0\), we infer:
\[V(0,z)=\frac{1}{A}\frac{f(z)}{(f^{\prime}(z))^{2}}\left(f(z)\sum_{i=1}^{r}\frac{ a_{i}}{|z-z_{i}|}-(f^{\prime}(z))^{2}\right), \tag{2.20}\]
\[e^{2\nu}(0,z)=(f^{\prime}(z))^{2}\,V(0,z), \tag{2.21}\]
\[F(0,z)=\frac{1}{A}\left(\frac{(f(z))^{2}}{f^{\prime}(z)}-2Az-\sum_{i=1}^{r}a_{ i}(z-z_{i})|z-z_{i}|\right)=\frac{1}{A}\left(\frac{(f(z))^{2}}{f^{\prime}(z)}-H( 0,z)\right). \tag{2.22}\]
From (2.22), we infer that on any interval \((z_{i},z_{i+1})\), \(i=0,\ldots,r\), \(F(0,z)\) _is constant_, say equal to \(F_{i}\). If \(f^{\prime}_{i}\neq 0\) and \(f^{\prime}_{i-1}\neq 0\), since \(H\) is continuous on the \(z\)-axis, we then have:
\[F_{i}-F_{i-1}=\frac{1}{A}f_{i}^{2}\left(\frac{1}{f^{\prime}_{i}}-\frac{1}{f^{ \prime}_{i-1}}\right); \tag{2.23}\]
if, however, \(f^{\prime}_{i}=0\), then \(f^{\prime}_{i-1}\neq 0\) and \(f^{\prime}_{i+1}\neq 0\) and we then get:
\[F_{i+1}-F_{i-1}=\frac{1}{A}\left(f_{i}^{2}\left(\frac{1}{f^{\prime}_{i+1}}- \frac{1}{f^{\prime}_{i-1}}\right)-2(z_{i+1}-z_{i})\,f_{i}\right), \tag{2.24}\]
cf. [3, Proposition 7]. From (2.22) again, we get:
\[F_{0}=-\frac{1}{A}(A+\sum_{i=1}^{r}a_{i}z_{i})^{2}+\frac{1}{A}\sum_{i=1}^{r}a_{i}z_{i}^{2},\qquad F_{r}=\frac{1}{A}(A-\sum_{i=1}^{r}a_{i}z_{i})^{2}-\frac{1}{A}\sum_{i=1}^{r}a_{i}z_{i}^{2}, \tag{2.25}\]
up to an additional constant, hence
\[F_{r}-F_{0}=\frac{2}{A}\Big{(}A^{2}+(\sum_{i=1}^{r}a_{i}z_{i})^{2}-\sum_{i=1} ^{r}a_{i}z_{i}^{2}\Big{)}. \tag{2.26}\]
It follows that
\[A=\frac{1}{4}\left(F_{r}-F_{0}+\Big{(}(F_{r}-F_{0})^{2}+16\sum_{i=1}^{r}a_{i}z_{i}^{2}-16(\sum_{i=1}^{r}a_{i}z_{i})^{2}\Big{)}^{\frac{1}{2}}\right). \tag{2.27}\]
In particular, the metric is AF, i.e. satisfies \(F_{r}-F_{0}=0\), if and only if
\[A^{2}=\sum_{i=1}^{r}a_{i}z_{i}^{2}-(\sum_{i=1}^{r}a_{i}z_{i})^{2}. \tag{2.28}\]
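A quick numerical sanity check of (2.26)-(2.28) (Python; the sample data are arbitrary and ours): given \(A\) and the data \((a_{i},z_{i})\), compute \(F_{r}-F_{0}\) from (2.26) and recover \(A\) from (2.27):

```python
import numpy as np

a  = np.array([0.25, 0.25, 0.5])        # coefficients a_i, summing to 1
zs = np.array([-1.0, 0.3, 2.0])         # turning points z_i
A  = 0.8

S = (a * zs**2).sum() - (a * zs).sum()**2       # variance-type quantity
D = 2.0 * (A**2 - S) / A                        # F_r - F_0, eq. (2.26)
A_rec = 0.25 * (D + np.sqrt(D**2 + 16.0 * S))   # eq. (2.27)
print(A, A_rec)                                 # both 0.8 up to rounding

A_af = np.sqrt(S)                               # eq. (2.28): the AF value of A
print(np.isclose(2.0 * (A_af**2 - S) / A_af, 0.0))   # F_r - F_0 vanishes
```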
**Remark 2.1**.: As explained above, to any convex, piecewise affine function \(f(z)\) as defined in (2.1) is associated a generating function \(U(\rho,z)\), defined by (2.9), hence by (2.11); conversely, it follows from (2.12) that \(f(z)\) is determined by \(U(\rho,z)\) via the formula \(f(z)=\frac{1}{2}(\rho\,U_{\rho})(0,z)\). The corresponding Ricci-flat metric \(g\) is then expressed by (2.4), where the functions \(V(\rho,z),F(\rho,z),e^{2\nu}(\rho,z)\) are given by (2.5)-(2.6), hence by (2.15)-(2.16)-(2.17). For any real constants \(\alpha>0,\beta\), \(f(z)\) may be replaced by the function \(\tilde{f}(z):=\frac{1}{\alpha}f(\alpha z+\beta)=\frac{A}{\alpha}+\sum_{i=1}^{r}a_{i}|z-\tilde{z}_{i}|\), with \(\tilde{z}_{i}=\frac{z_{i}-\beta}{\alpha}\), and the corresponding generating function is then replaced by \(\tilde{U}(\rho,z)=\frac{1}{\alpha}U(\alpha\rho,\alpha z+\beta)=\frac{A}{\alpha}\log\rho^{2}+\sum_{i=1}^{r}a_{i}U_{0}(\rho,z-\tilde{z}_{i})\). The corresponding Ricci-flat metric is then \(\tilde{g}(\rho,z,t,x_{3}):=\frac{1}{\alpha^{2}}g(\alpha\rho,\alpha z+\beta,\alpha t,x_{3})\), meaning that \(\tilde{g}\) is homothetic to \(g\) by a factor \(1/\alpha^{2}\), via the change of variables \((\rho,z,t,x_{3})\mapsto(\alpha\rho,\alpha z+\beta,\alpha t,x_{3})\). Also notice that, by denoting \(\tilde{f}_{i}:=\tilde{f}(\tilde{z}_{i})\) and \(f_{i}:=f(z_{i})\), we have: \(\tilde{f}_{i}=f_{i}/\alpha,\quad i=1,\ldots,r\).
### The Kahler environment
By definition, a toric Hermitian ALF gravitational instanton, say \((M,g)\), admits a Kahler metric, \(g_{K}\), in the conformal class of \(g\), which is actually toric as well, meaning that the torus action is Hamiltonian, i.e. admits a _moment map_. The fact that \(g\) is conformally Kahler implies that the self-dual Weyl tensor \(W^{+}\) of \(g\), regarded as a (symmetric, trace-less) operator on the self-dual part of \(\Lambda^{2}M\), is _degenerate_, meaning that \(W^{+}\) has a simple, non-vanishing eigenvalue, say \(\lambda\), and a repeated eigenvalue \(-\frac{\lambda}{2}\). It is also required that \(\lambda\) is not constant and everywhere non-vanishing. According to [7], the Kahler metric \(g_{K}\) is then equal to \(\lambda^{2/3}\,g\), up to a constant multiple. In view of the ALF condition, it is convenient to set: \(g_{K}=(k^{-1}\lambda)^{2/3}\,g\), where the constant \(k\) is chosen in such a way that \(g_{K}\) is asymptotic at infinity to the product of the standard sphere of radius \(1\) and the Poincare cusp of sectional curvature \(-1\) (more details in [3, Section 2]).
The conformal factor \((k^{-1}\lambda)^{\frac{2}{3}}\) is then equal to \(x_{1}^{2}\), where \(x_{1}\) denotes the moment of the Hamiltonian Killing vector field \(\partial_{t}\), and the scalar curvature, \(\operatorname{Scal}_{g_{K}}\), of the Kahler metric \(g_{K}\) is then equal to \(6k^{\frac{2}{3}}\lambda^{\frac{1}{3}}=6kx_{1}\), and tends to \(0\) at infinity. In particular, \(g_{K}=x_{1}^{2}\,g\) is _extremal_, even _Bach-flat_, since it is conformal to an Einstein metric. The constant \(k\) is actually the same as the constant \(k\) appearing in (2.5)-(2.6), and turns out to be equal to \(2A\), [3, Formula (89)].
In terms of the generating function \(U\), the Kahler form, \(\omega_{K}\), and the volume form, \(v_{g_{K}}\), of \(g_{K}\) have the following expression:
\[\omega_{K}=\frac{2}{U_{\rho}^{2}}\,\Big{(}\frac{1}{\rho}(U_{zz}\,d\rho-U_{ \rho z}\,dz)\wedge(dt-F\,dx_{3})-V(U_{\rho z}\,d\rho+U_{zz}\,dz)\wedge dx_{3} \Big{)}, \tag{2.29}\]
and
\[v_{g_{K}}=\frac{1}{2}\omega_{K}\wedge\omega_{K}=\frac{4}{\rho U_{\rho}^{4}}(U _{\rho z}^{2}+U_{zz}^{2})\,dt\wedge dx_{3}\wedge dz\wedge d\rho. \tag{2.30}\]
From (2.30) we can infer that the volume of \((M,g_{K})\) is finite, and the image of the moment map is a convex, pre-compact polytope in the Lie algebra \(\mathfrak{t}\) of the torus \(\mathbb{T}^{2}\), cf. Figure 2, which is the picture, taken from [3, Section 8], of the moment polytope of the Chen-Teo instanton. Notice that, apart from the dashed edge \(E_{\infty}\), representing the boundary at infinity, each edge \(E_{i}\), \(i=0,1,\ldots,r\) is associated to the interval \((z_{i},z_{i+1})\) on the \(z\)-axis \(\rho=0\).
The moments with respect to \(\omega_{K}\) of the Killing vector fields \(\partial_{t}\) and \(\partial_{x_{3}}\) -- which, in general, don't form a basis of \(\Lambda\) -- are denoted by \(x_{1}\) and \(\mu\) respectively, with
\[x_{1}=\frac{2}{H_{z}},\quad\mu=-\frac{1}{A}\frac{zH_{z}+\rho H_{\rho}-2H}{H_{z }}, \tag{2.31}\]
where, we recall, \(H\) is defined by (2.7), [3, Proposition 6.1]. Notice however that in general \(\partial_{t}\) and \(\partial_{x_{3}}\), regarded as elements of the Lie algebra \(\mathfrak{t}\) of the torus \(\mathbb{T}^{2}\), don't form a basis of the lattice \(\Lambda\) in \(\mathfrak{t}\) induced by \(\mathbb{T}^{2}\). In restriction to the boundary \(\rho=0\), the moments are functions of \(z\), with the following expressions on each interval \((z_{i},z_{i+1})\) where \(f_{i}^{\prime}\neq 0\):
\[x_{1}=\frac{1}{f(z)},\quad\mu=-\frac{F_{i}}{f(z)}+\frac{1}{A}\left(\frac{f(z)}{ f^{\prime}(z)}-z\right). \tag{2.32}\]
The expression (2.4) of the metric \(g\) holds on the open set, \(M_{0}\), where \(\rho\neq 0\), i.e. where the torus action is free. In the toric Kahler setting, the boundary \(\rho=0\) of this open set is formed of \((r-1)\) compact invariant divisors, isomorphic to \(2\)-spheres, and of two divisors isomorphic to punctured spheres, corresponding to a point at infinity for each of them, encoded by the \(r+1\) edges of the moment polytope. To each edge of the moment polytope, itself encoded by an interval \((z_{i},z_{i+1})\), \(i=0,\ldots,r\), is
associated a Killing vector field, \(v_{i}\), regarded as an element of \(\mathfrak{t}\), actually a primitive element of \(\Lambda\): \(v_{i}\) is then the generator of a \(S^{1}\)-action of period \(2\pi\), and vanishes on the corresponding invariant divisor. It follows from (2.20) that \(v_{i}\) has the following form:
\[\begin{split} v_{i}&=f_{i}^{\prime}(\partial_{x_{3} }+F_{i}\,\partial_{t}),\quad\text{if}\,f_{i}^{\prime}\neq 0,\\ v_{i}&=\frac{1}{A}f_{i}^{2}\partial_{t},\quad \text{if}\,f_{i}^{\prime}=0.\end{split} \tag{2.33}\]
More generally, if the metric admits a conical singularity along the invariant divisor \(E_{i}\), of angle \(2\pi\alpha_{i}\), then
\[\begin{split} v_{i}&=\alpha_{i}\,f_{i}^{\prime}(\partial_{x_{3}}+F_{i}\,\partial_{t}),\quad\text{if}\,f_{i}^{\prime}\neq 0,\\ v_{i}&=\frac{1}{A}\alpha_{i}\,f_{i}^{2}\partial_{t},\quad\text{if}\,f_{i}^{\prime}=0.\end{split} \tag{2.34}\]
The condition that \((M_{0},g)\) smoothly extends to the boundary, possibly with conical singularities of \(g\) along the invariant divisors, is that each pair \(v_{i},v_{i+1}\) be a basis of the lattice \(\Lambda\), i.e. that each pair be related to the next one by an element of the group \(GL(2,\mathbb{Z})\) of \(2\times 2\) matrices with integer coefficients and determinant equal to \(\pm 1\), i.e.
\[\begin{pmatrix}v_{i-1}\\ v_{i}\end{pmatrix}=\begin{pmatrix}\ell_{i}&-\epsilon_{i}\\ 1&0\end{pmatrix}\begin{pmatrix}v_{i}\\ v_{i+1}\end{pmatrix}, \tag{2.35}\]
hence
\[\ell_{i}v_{i}=v_{i-1}+\epsilon_{i}\,v_{i+1},\quad i=1,\dots,r-1, \tag{2.36}\]
where the \(\ell_{i}\) are integers and \(\epsilon_{i}=\pm 1\), cf. [3, Section 8].
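The chain condition (2.36) indeed guarantees that every consecutive pair remains a \(\mathbb{Z}\)-basis: if \((v_{i-1},v_{i})\) has determinant \(\pm 1\), then so does \((v_{i},v_{i+1})\). A small sketch (Python; the integer data are arbitrary and serve only as an illustration) generates such a chain and checks the determinants:

```python
import numpy as np

def chain(v0, v1, ells, eps):
    """Generate v_2, ..., v_r from (2.36): ell_i v_i = v_{i-1} + eps_i v_{i+1}."""
    vs = [np.array(v0), np.array(v1)]
    for l, e in zip(ells, eps):
        vs.append(e * (l * vs[-1] - vs[-2]))    # uses eps_i^2 = 1
    return vs

# arbitrary example data: v_0, v_1 a basis of Z^2, integers ell_i, signs eps_i
vs = chain([1, 0], [0, 1], ells=[3, -2], eps=[1, -1])
for u, w in zip(vs, vs[1:]):
    det = int(round(np.linalg.det(np.column_stack([u, w]))))
    print(u, w, det)                            # det is always +1 or -1
```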
As already mentioned, these conditions turn out to be quite restrictive, in particular impose that \(r\) cannot exceed 3. For each value 1, 2 or 3 of \(r\), the only toric Hermitian ALF gravitational instantons are then as follows, cf. Theorem A-Theorem 8.2 in [3]:
1. The _self-dual Taub-NUT instanton_, i.e. the Euclidean self-dual Taub-NUT on \(\mathbb{R}^{4}\), with the orientation opposite to the one induced by its hyperkahler structure. Its piecewise affine function is \(f(z)=2n+|z|\) and its generating function is \(U(\rho,z)=2n\log\rho^{2}+U_{0}(\rho,z)\).
2. (i) The _Taub-bolt instanton_, discovered by D. Page in 1978, whose piecewise affine function is \(f(z)=3b+\frac{1}{2}|z+b|+\frac{1}{2}|z-b|\), \(b=\frac{3}{4}\,|n|\). (ii) The _Euclidean Kerr metrics_, discovered by R. Kerr in 1963, with \(f(z)=m+\frac{1}{2}(1-\frac{a}{b})|z+b|+\frac{1}{2}(1+\frac{a}{b})|z-b|\), \(0<|a|<b=(m^{2}+a^{2})^{\frac{1}{2}}\). (iii) The _Euclidean Schwarzschild metric_, discovered by K. Schwarzschild in 1916, with \(f(z)=m+\frac{1}{2}|z+m|+\frac{1}{2}|z-m|\), which can be viewed as a particular case of the Euclidean Kerr metric, with \(a=0\) and \(b=m\).
3. The 1-parameter family of _Chen-Teo instantons_, discovered by Yu Chen and Edward Teo in 2011, cf. [5], with (2.37) \[f(z)=\frac{1}{2}\left(1-p^{\frac{3}{2}}-q^{\frac{3}{2}}+q\,|z+q^{\frac{1}{2}}- q|+|z|+p\,|z-p^{\frac{1}{2}}+p|\right),\] (2.38) \[0<p<1,\quad 0<q<1,\quad p+q=1,\] (2.39) \[f_{1}=pq^{\frac{1}{2}},\quad f_{2}=pq,\quad f_{3}=p^{\frac{1}{2}}q.\] In contrast with the previous cases, the Chen-Teo instantons are not the Euclidean form of Lorentzian spaces, and their anti-self-dual Weyl tensor \(W^{-}\) is _not_ degenerate, as shown in [1].
More generally, we shall consider smooth completions of the metric \(g\) given by (2.4) admitting conical singularities along the invariant divisors, as described by Theorem 7.5 in [3, Section 7]. As shown in [3, Section 7], this can be done for the whole _Kerr-Taub-NUT_ family, introduced by G. W. Gibbons and M. J. Perry in 1980, which includes the instantons mentioned above when \(r=2\).
This also concerns the _Chen-Teo \(4\)-parameter family_ introduced in [6], which we shall explore in the next Section.
### The self-dual Eguchi-Hanson metric
The _Eguchi-Hanson metric_ was discovered in 1978 by Tohru Eguchi and Andrew J. Hanson in [9], and also, independently, by Nigel Hitchin [14]; it is also a member of the Gibbons-Hawking family of hyperkahler metrics [10]. Like the Taub-NUT metric quoted above, also a member of the Gibbons-Hawking family, the Eguchi-Hanson metric is of type \(O^{+}D^{-}\) with respect to the orientation determined by the hyperkahler structure, meaning that \(W^{+}\equiv 0\), while \(W^{-}\) is degenerate, but non-zero. With respect to
Figure 1. _The piecewise affine function of the Chen–Teo metric._
Figure 2. _The Chen–Teo moment polytope in the \(x,y\)-plane, with respect to the \(\mathbb{Z}\)-basis \(v_{1}=-p(\partial_{x_{3}}+F_{1}\partial_{t})\), with \(F_{0}=0\), where \(x=-p(y+F_{1}\,x_{1}),y=\mu+\frac{1}{2A}(p^{\frac{3}{2}}-q^{\frac{3}{2}}-p+q)\), and \(2A=1-p^{\frac{3}{2}}-q^{\frac{3}{2}}\). The slope of the edge between \(V_{2}\) and \(V_{3}\) is \(-1\) for any value of the parameter \(p\)._
the opposite orientation it is then of type \(D^{+}O^{-}\) and will then be called _the self-dual Eguchi-Hanson metric_. This can be written in Harmark form (2.4), with \(\rho=(r^{2}-b^{2})^{\frac{1}{2}}\sin\theta\), \(z=r\cos\theta\), \(V=\frac{r}{r^{2}-b^{2}}\), \(F=-\cos\theta\), \(e^{2\nu}=\frac{r}{r^{2}-b^{2}\cos^{2}\theta}\). It can be shown that the simple eigenvalue of \(W^{+}\) is \(\lambda_{+}=\frac{2b^{2}}{r^{3}}\) and the conformal Kahler metric is then conveniently chosen to be \(g_{K}=\frac{1}{r^{2}}g\), whose Kahler form is then \(\omega_{K}=-\frac{dr}{r^{2}}\wedge(dt+\cos\theta\,dx_{3})-\frac{1}{r}\sin\theta\,d\theta\wedge dx_{3}\), so that the moment \(x_{1}\) of \(\partial_{t}\) is equal to \(\frac{1}{r}\) and \(k=2b^{2}\). Unlike the self-dual Taub-NUT metric, the self-dual Eguchi-Hanson metric is ALE, not ALF, but its generating function, \(U_{EH}\), is nevertheless of the same type (2.9) as the generating functions of the gravitational instantons considered in this note, namely
\[\begin{split} U_{EH}(\rho,z)&=\frac{1}{2}U_{0}( \rho,z+b)+\frac{1}{2}U_{0}(\rho,z-b)\\ &=d_{1}-\frac{1}{2}(z+b)\log\frac{d_{1}+z+b}{d_{1}-z-b}+d_{2}- \frac{1}{2}(z-b)\log\frac{d_{2}+z-b}{d_{2}-z+b},\end{split} \tag{2.40}\]
with \(d_{1}=(\rho^{2}+(z+b)^{2})^{\frac{1}{2}}\) and \(d_{2}=(\rho^{2}+(z-b)^{2})^{\frac{1}{2}}\), and the corresponding piecewise affine function is then:
\[f_{EH}(z)=\frac{1}{2}|z+b|+\frac{1}{2}|z-b|. \tag{2.41}\]
It may be observed that the positive constant \(A\) appearing in the general expression (2.1) is here equal to \(0\) and that the identity \(k=2A\) is here no longer valid, showing again that the self-dual Eguchi-Hanson metric does not belong to the family of gravitational instantons considered in this paper. It may however be viewed as a limit, as already observed by Don Page in [17], cf. also Paragraph 7.2 in [3]. In the current setting, this can be seen by considering the following one-parameter family of metrics, encoded by their piecewise affine functions of the form
\[f_{A}(z)=A+\frac{1}{2}|z+b|+\frac{1}{2}|z-b|, \tag{2.42}\]
normalized by the condition \(A+b=1\), cf. Remark 2.1, with \(A,b\geq 0\); notice that \(f_{1}=f_{2}=1\), and that most metrics in this family have conical singularities along invariant divisors, of angles \(2\pi\alpha_{i}\), \(i=0,1,2\). The vector fields attached to the corresponding polytopes, cf. (2.33)-(2.34), are then \(v_{0}=-\alpha_{0}(\partial_{x_{3}}+F_{0}\,\partial_{t})\), \(v_{1}=\alpha_{1}\frac{1}{A}\partial_{t}\), \(v_{2}=\alpha_{2}(\partial_{x_{3}}+F_{2}\,\partial_{t})\), and the regularity condition is then: \(\begin{pmatrix}v_{0}\\ v_{1}\end{pmatrix}=\begin{pmatrix}\ell&-\epsilon\\ 1&0\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}\), for some integer \(\ell\) and \(\epsilon=\pm 1\); we then have: \(\alpha_{0}=\epsilon\alpha_{2}\), hence \(\epsilon=1\), \(\alpha_{0}=\alpha_{2}\), and we can actually assume \(\alpha_{0}=\alpha_{2}=1\), and \(\ell\alpha=A(F_{2}-F_{0})\), by setting \(\alpha_{1}=\alpha\), hence, by (2.26), \(\ell\alpha=2(A^{2}-b^{2})=2(A-b)=2(2A-1)\), where we can assume that \(\ell\) is equal to \(1\), \(0\) or \(-1\). When \(A\) runs in the open interval \((0,1)\), the corresponding metric is smooth in the following three cases: \((A=\frac{3}{4},b=\frac{1}{4},\ell=1)\), \((A=\frac{1}{2},b=\frac{1}{2},\ell=0)\) and \((A=\frac{1}{4},b=\frac{3}{4},\ell=-1)\), corresponding to the "positive" Taub-bolt metric, the Schwarzschild metric and the "negative" Taub-bolt metric respectively 1.
Footnote 1: The ”positive” and the ”negative” Taub-bolt metrics are actually the same metric on the same manifold, namely the complex projective plane \(\mathbb{CP}^{2}\) with a deleted point, with however opposite orientations, hence two different conformal Kähler structures: the ”positive” Taub-bolt has the natural orientation of the tautological line bundle \(\mathcal{O}(-1)\) over \(\mathbb{CP}^{1}\), the ”negative” one the natural orientation of the dual line bundle \(\mathcal{O}(1)\). Similarly, the hyperkähler Eguchi-Hanson metric lives on the oriented manifold \(\mathcal{O}(-2)\), while the self-dual Eguchi-Hanson lives on the dual line bundle \(\mathcal{O}(2)\).
When \(A\in(0,\frac{1}{2})\) we can take \(\ell=-1\), the topology is that of the negative Taub-bolt metric (the total space of \(\mathcal{O}(1)\)), the angle \(4\pi(1-2A)\) goes from \(0\) when \(A\to\frac{1}{2}\)
to \(4\pi\) when \(A\to 0\), hence the metric tends to the pull-back, from \(\mathcal{O}(2)\) to \(\mathcal{O}(1)\), of the self-dual Eguchi-Hanson metric, with a conical singularity of angle \(4\pi\). When \(A\in(\frac{1}{2},1)\), we have \(\ell=1\), the topology is that of the positive Taub-bolt metric (the total space of \(\mathcal{O}(-1)\)), again the angle \(4\pi(2A-1)\) goes from \(0\) when \(A\to\frac{1}{2}\) to \(4\pi\) when \(A\to 1\). The limit for \(A=1\) is the Taub-NUT metric on \(\mathbb{R}^{4}\), which is generated by the function \(f(z)=1+|z|\).
There is a symmetry around \(A=\frac{1}{2}\): the metrics for \(A=\frac{1}{2}\pm a\) are the same with the orientation reversed, up to scale. So it may seem curious that the limits for \(A=0\) and \(A=1\) are the self-dual Eguchi-Hanson metric and the Taub-NUT metric. This apparent contradiction is resolved by understanding that these are limits at different scales: the Taub-NUT metric is obtained when \(A\to 1\) by shrinking the 2-sphere to a point, and by rescaling there is a bubble which is \(\mathcal{O}(-1)\) with the double cover of the Eguchi-Hanson metric. This is precisely what we see on the other side \(A\to 0\), with the opposite orientation.
Finally notice the change of topology and of orientation at \((A=\frac{1}{2},\ell=0,b=\frac{1}{2})\), encoding the (Riemannian) Schwarzschild metric, which lives on the product \(S^{2}\times\mathbb{R}^{2}\), with its natural two orientations.
## 3. **The Chen-Teo family**
The Chen-Teo 4-parameter family is actually relevant to the general treatment of the preceding section, i.e. it is included in, and probably coincides with, the family of toric Hermitian ALF gravitational instantons with \(r=3\), when the \(z\)-axis admits 3 angular points, \(z_{1}<z_{2}<z_{3}\).
The convex piecewise affine function \(f\) has then the following general form:
\[f(z)=A+\frac{1}{2}(1-p)\,|z-z_{1}|+\frac{1}{2}(p+q)\,|z-z_{2}|+\frac{1}{2}(1-q) \,|z-z_{3}|, \tag{3.1}\]
where
\[-1<-p<q<1, \tag{3.2}\]
are the slopes of \(f\), on the open intervals \((-\infty,z_{1})\), \((z_{1},z_{2})\), \((z_{2},z_{3})\), \((z_{3},\infty)\) respectively. The pair \((p,q)\) then belongs to the open domain of \(\mathbb{R}^{2}\) defined by:
\[-1<p<1,\qquad-1<q<1,\qquad p+q>0. \tag{3.3}\]
We denote \(f_{1}:=f(z_{1})\), \(f_{2}:=f(z_{2})\), \(f_{3}:=f(z_{3})\), and, in addition to \(p,q\), we introduce two positive parameters \(a,b\) by
\[a:=f_{1}^{2}/f_{2}^{2},\qquad b:=f_{3}^{2}/f_{2}^{2}. \tag{3.4}\]
Alternatively:
\[\sqrt{a}-1=p\,\frac{(z_{2}-z_{1})}{f_{2}},\qquad\sqrt{b}-1=q\,\frac{(z_{3}-z_{ 2})}{f_{2}}. \tag{3.5}\]
Then \(a>1\) if \(p>0\), \(a<1\) if \(p<0\) and \(a=1\) if \(p=0\); similarly, \(b>1\) if \(q>0\), \(b<1\) if \(q<0\) and \(b=1\) if \(q=0\), and:
\[\lim_{p\to 0}\frac{(a-1)}{p}=\frac{2(z_{2}-z_{1})}{f_{2}},\qquad\lim_{q\to 0 }\frac{(b-1)}{q}=\frac{2(z_{3}-z_{2})}{f_{2}}. \tag{3.6}\]
Notice that the parameters \(a,b\), as well as the parameters \(p,q\), are insensitive to the transform described in Remark 2.1.
From (3.1), we get: \(f_{2}=A+\frac{f_{2}}{2}\big{(}\frac{(\sqrt{a}-1)}{p}(1-p)+\frac{(\sqrt{b}-1)} {q}(1-q)\big{)}\), hence:
\[A=f_{2}\,\frac{\big{(}p+q-\sqrt{a}\,q(1-p)-\sqrt{b}\,p(1-q)\big{)}}{2pq}=\frac {1}{2}\big{(}f_{2}(\sqrt{a}+\sqrt{b})-(z_{3}-z_{1})\big{)}. \tag{3.7}\]
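A short symbolic check (sympy; our own verification) that the two expressions for \(A\) in (3.7) agree, once \(z_{3}-z_{1}\) is eliminated by means of (3.5):

```python
import sympy as sp

p, q, f2, a, b = sp.symbols('p q f_2 a b', positive=True)

# first expression in (3.7)
A_first = f2 * (p + q - sp.sqrt(a) * q * (1 - p) - sp.sqrt(b) * p * (1 - q)) / (2 * p * q)
# second expression in (3.7), with z_3 - z_1 = f_2*((sqrt(a)-1)/p + (sqrt(b)-1)/q) from (3.5)
A_second = sp.Rational(1, 2) * (f2 * (sp.sqrt(a) + sp.sqrt(b))
                                - f2 * ((sp.sqrt(a) - 1) / p + (sp.sqrt(b) - 1) / q))
print(sp.simplify(A_first - A_second))   # 0
```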
**Remark 3.1**.: For further use, it will be convenient to "normalize" the convex piecewise affine function \(f(z)\), via the transform described in Remark 2.1, in order that \(z_{2}=0\) and \(f_{2}=1\), hence \(f_{1}=\sqrt{a}\), \(f_{3}=\sqrt{b}\). The convex piecewise affine function \(f(z)\) is then given by (3.1), with:
\[A=\frac{\sqrt{a}+\sqrt{b}}{2}-\frac{1}{2}\big{(}\frac{\sqrt{a}-1}{p}+\frac{ \sqrt{b}-1}{q}\big{)},\quad z_{1}=\frac{1-\sqrt{a}}{p},\ z_{2}=0,\ z_{3}=\frac{ \sqrt{b}-1}{q}. \tag{3.8}\]
### Regularity
In order to test the regularity of these metrics, we introduce the angles \(2\pi\alpha_{0}\), \(2\pi\alpha_{1}\), \(2\pi\alpha_{2}\), \(2\pi\alpha_{3}\), attached to each divisor, where the \(\alpha_{i}\) are all positive, and we consider the corresponding Killing vector fields, when \(p\neq 0\), \(q\neq 0\):
\[\begin{split} v_{0}&=-\alpha_{0}\big{(}\frac{ \partial}{\partial x_{3}}+F_{0}\frac{\partial}{\partial t}\big{)},\quad v_{1} =-p\alpha_{1}\big{(}\frac{\partial}{\partial x_{3}}+F_{1}\frac{\partial}{ \partial t}\big{)},\\ v_{2}&=q\alpha_{2}\big{(}\frac{\partial}{\partial x _{3}}+F_{2}\frac{\partial}{\partial t}\big{)},\quad v_{3}=\alpha_{3}\big{(} \frac{\partial}{\partial x_{3}}+F_{3}\frac{\partial}{\partial t}\big{)},\end{split} \tag{3.9}\]
where, for \(i=0,1,2,3\), \(F_{i}\) denotes the (constant) value of \(F\) in the interval \((z_{i},z_{i+1})\) on the axis \(\rho=0\), cf. Lemma 7.1 in [3]. The regularity conditions are then:
\[\begin{pmatrix}v_{0}\\ v_{1}\end{pmatrix}=\begin{pmatrix}\ell_{1}&-\epsilon_{1}\\ 1&0\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix},\quad\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\begin{pmatrix}\ell_{2}&-\epsilon_{2}\\ 1&0\end{pmatrix}\begin{pmatrix}v_{2}\\ v_{3}\end{pmatrix} \tag{3.10}\]
where \(\ell_{1},\ell_{2}\) are integers, and \(\epsilon_{1},\epsilon_{2}\) are equal to \(\pm 1\), hence:
\[\ell_{1}v_{1}=v_{0}+\epsilon_{1}v_{2},\qquad\ell_{2}v_{2}=v_{1}+\epsilon_{2}v _{3}, \tag{3.11}\]
or else, in view of (3.9):
\[\ell_{1}\,p\,\alpha_{1}+\epsilon_{1}q\,\alpha_{2}=\alpha_{0}, \tag{3.12}\]
\[p\,\alpha_{1}+\ell_{2}\,q\,\alpha_{2}=\epsilon_{2}\,\alpha_{3}, \tag{3.13}\]
\[\ell_{1}\,p\,\alpha_{1}\,F_{1}+\epsilon_{1}q\,\alpha_{2}\,F_{2}=\alpha_{0}\,F_ {0}, \tag{3.14}\]
\[p\,\alpha_{1}\,F_{1}+\ell_{2}\,q\,\alpha_{2}\,F_{2}=\epsilon_{2}\,\alpha_{3}\, F_{3}. \tag{3.15}\]
In view of (3.12), in (3.14) the \(F_{i}\) may be replaced by \(F_{i}+c\) for any constant \(c\), and likewise in (3.15) in view of (3.13). Also recall, cf. [3, Proposition 7.3], that the \(F_{i}\) are related by:
\[\begin{split} F_{1}-F_{0}&=-\frac{f_{1}^{2}}{A}\,\frac {(1-p)}{p}=-\frac{f_{2}^{2}}{A}\,\frac{a\,(1-p)}{p},\\ F_{2}-F_{1}&=\frac{f_{2}^{2}}{A}\,\frac{(p+q)}{pq}, \\ F_{3}-F_{2}&=-\frac{f_{3}^{2}}{A}\,\frac{(1-q)}{q}=- \frac{f_{2}^{2}}{A}\,\frac{b\,(1-q)}{q}.\end{split} \tag{3.16}\]
In particular:
\[A(F_{3}-F_{0})=\frac{f_{2}^{2}}{pq}\big{(}p+q-a\,q(1-p)-b\,p(1-q)\big{)}. \tag{3.17}\]
As observed above, in view of (3.12)-(3.13), (3.14)-(3.15) can be rewritten as:
\[\ell_{1}p(F_{1}-F_{0})\,\alpha_{1}+\epsilon_{1}q(F_{2}-F_{0})\,\alpha_{2}=0, \tag{3.18}\]
\[\epsilon_{2}p(F_{1}-F_{3})\,\alpha_{1}+\epsilon_{2}\ell_{2}q(F_{2}-F_{3})\, \alpha_{2}=0. \tag{3.19}\]
Since \(\alpha_{1}\) and \(\alpha_{2}\) are both positive, it follows that
\[(F_{2}-F_{0})(F_{1}-F_{3})=\epsilon_{1}\ell_{1}\ell_{2}(F_{1}-F_{0})(F_{2}-F_{3}), \tag{3.20}\]
hence
\[\frac{(F_{2}-F_{0})(F_{1}-F_{3})}{(F_{1}-F_{0})(F_{2}-F_{3})}=\epsilon_{1}\ell_{1} \ell_{2}, \tag{3.21}\]
or, equivalently:
\[\mathbf{n}:=\frac{(F_{3}-F_{0})(F_{1}-F_{2})}{(F_{1}-F_{0})(F_{2}-F_{3})}= \epsilon_{1}\ell_{1}\ell_{2}-1. \tag{3.22}\]
In view of (3.16)-(3.17), \(\mathbf{n}\), defined by (3.22), has the following expression:
\[\mathbf{n}=\frac{(p+q)(p+q-a\,q(1-p)-b\,p(1-q))}{a\,q(1-p)\,b\,p(1-q)}, \tag{3.23}\]
and will be called the _normalized total NUT-charge_, cf. [6, III.B]. By (3.22), \(\mathbf{n}\) is then an _integer_, whenever the metric is regular. Notice that \(\mathbf{n}=0\) if and only if \(F_{3}-F_{0}=0\), i.e. if and only if the metric is AF.
**Remark 3.2**.: Notice that (3.23) can be rewritten as
\[\mathbf{n}=\frac{(p+q)}{a\,b\,(1-p)(1-q)}\left(a+b-\frac{(a-1)}{p}-\frac{(b-1) }{q}\right). \tag{3.24}\]
It follows from (3.23) and (3.5) that \(\mathbf{n}\) is well-defined at \(p=0\) or \(q=0\) and that the quantity \(a+b-\frac{(a-1)}{p}-\frac{(b-1)}{q}\) has the sign of \(\mathbf{n}\). In particular, a regular metric is AF if and only if the parameters \(p,q,a,b\) are related by \(a+b-\frac{(a-1)}{p}-\frac{(b-1)}{q}=0\).
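The identities above are purely algebraic and can be verified symbolically. The following sketch (an editorial illustration in Python/SymPy, not part of the original text) checks that the closed form (3.23) agrees with the definition (3.22) computed from the differences (3.16), and that it coincides with the alternative expression (3.24).

```python
import sympy as sp

p, q, a, b, A, f2 = sp.symbols('p q a b A f_2')

# F-differences from (3.16)
dF10 = -f2**2/A * a*(1 - p)/p          # F_1 - F_0
dF21 =  f2**2/A * (p + q)/(p*q)        # F_2 - F_1
dF32 = -f2**2/A * b*(1 - q)/q          # F_3 - F_2
dF30 = dF10 + dF21 + dF32              # F_3 - F_0, cf. (3.17)

n_def    = dF30*(-dF21)/(dF10*(-dF32))                                             # (3.22)
n_closed = (p + q)*(p + q - a*q*(1 - p) - b*p*(1 - q))/(a*q*(1 - p)*b*p*(1 - q))   # (3.23)
n_alt    = (p + q)/(a*b*(1 - p)*(1 - q))*(a + b - (a - 1)/p - (b - 1)/q)           # (3.24)

print(sp.simplify(n_def - n_closed), sp.simplify(n_closed - n_alt))   # expected: 0 0
```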
From (3.12)-(3.13)-(3.14)-(3.15), we infer:
\[\alpha_{1}=\frac{1}{\ell_{1}p}\frac{(F_{0}-F_{2})}{(F_{1}-F_{2})}\,\alpha_{0} =\frac{1}{\epsilon_{2}p}\frac{(F_{3}-F_{2})}{(F_{1}-F_{2})}\,\alpha_{3}, \tag{3.25}\]
\[\alpha_{2}=\frac{1}{\epsilon_{1}q}\frac{(F_{0}-F_{1})}{(F_{2}-F_{1})}\,\alpha_ {0}=\frac{1}{\epsilon_{2}\ell_{2}q}\frac{(F_{3}-F_{1})}{(F_{2}-F_{1})}\,\alpha _{3}, \tag{3.26}\]
\[\alpha_{3}=\frac{\epsilon_{2}}{\ell_{1}}\frac{(F_{0}-F_{2})}{(F_{3}-F_{2})}\, \alpha_{0}=\frac{\epsilon_{2}\ell_{2}}{\epsilon_{1}}\frac{(F_{0}-F_{1})}{(F_{ 3}-F_{1})}\,\alpha_{0}. \tag{3.27}\]
In view of (3.16), we then get:
\[\alpha_{1}=\epsilon_{2}\frac{b\,(1-q)}{(p+q)}\,\alpha_{3},\qquad\alpha_{2}= \epsilon_{1}\frac{a\,(1-p)}{(p+q)}\,\alpha_{0}, \tag{3.28}\]
from which we infer:
\[\epsilon_{1}=\epsilon_{2}=1. \tag{3.29}\]
It then follows that
\[\mathbf{n}=\ell_{1}\ell_{2}-1, \tag{3.30}\]
\[v_{2}=\ell_{1}v_{1}-v_{0},\quad v_{3}=\ell_{2}v_{2}-v_{1}=\mathbf{n}\,v_{1}- \ell_{2}v_{0}, \tag{3.31}\]
so that \(v_{0}\wedge v_{3}=\mathbf{n}\,v_{0}\wedge v_{1}\), hence:
\[\mathbf{n}=\det{(v_{0},v_{3})}, \tag{3.32}\]
since the pair \((v_{0},v_{1})\) is a basis of the lattice \(\Lambda\). From (3.25)-(3.26)-(3.27) and (3.16) we easily infer that the integers \(\ell_{1}\), \(\ell_{2}\) can be rewritten as:
\[\ell_{1}=\frac{(p+q-a\,q(1-p))}{b\,p(1-q)}\,\frac{\alpha_{0}}{\alpha_{3}}= \left(1+\frac{a\,q(1-p)}{p+q}\,\mathbf{n}\right)\,\frac{\alpha_{0}}{\alpha_{3}}, \tag{3.33}\]
\[\ell_{2}=\frac{(p+q-b\,p(1-q))}{a\,q(1-p)}\,\frac{\alpha_{3}}{\alpha_{0}}= \left(1+\frac{b\,p(1-q)}{p+q}\,\mathbf{n}\right)\,\frac{\alpha_{3}}{\alpha_{0}}. \tag{3.34}\]
From (3.28) and (3.29), the conical parameters \(\alpha_{1}\), \(\alpha_{2}\) are given by
\[\alpha_{1}=\frac{b(1-q)}{p+q}\,\alpha_{3},\qquad\alpha_{2}=\frac{a\,(1-p)}{p+q} \,\alpha_{0}, \tag{3.35}\]
while the relations (3.12)-(3.13)-(3.18)-(3.19) are expressed by:
\[\ell_{1}p\alpha_{1}+q\alpha_{2}=\alpha_{0}, \tag{3.36}\]
\[p\alpha_{1}+\ell_{2}q\alpha_{2}=\alpha_{3}, \tag{3.37}\]
\[-\ell_{1}a\,p(1-p)\,\alpha_{1}+(p+q-a\,q(1-p))\,\alpha_{2}=0, \tag{3.38}\]
\[(p+q-b\,p(1-q))\,\alpha_{1}-\ell_{2}b\,q(1-q)\,\alpha_{2}=0. \tag{3.39}\]
By using the expressions of \(\alpha_{1},\alpha_{2}\) given by (3.35), it is easily checked that these relations are all satisfied.
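For completeness, this verification can be carried out symbolically. The following SymPy sketch (an editorial addition, not part of the original text) substitutes (3.33)-(3.35) into (3.36)-(3.39) and also checks the second equalities in (3.33)-(3.34).

```python
import sympy as sp

p, q, a, b, al0, al3 = sp.symbols('p q a b alpha_0 alpha_3')

n   = (p + q)*(p + q - a*q*(1 - p) - b*p*(1 - q))/(a*q*(1 - p)*b*p*(1 - q))   # (3.23)
l1  = (p + q - a*q*(1 - p))/(b*p*(1 - q)) * al0/al3                            # (3.33)
l2  = (p + q - b*p*(1 - q))/(a*q*(1 - p)) * al3/al0                            # (3.34)
al1 = b*(1 - q)/(p + q) * al3                                                  # (3.35)
al2 = a*(1 - p)/(p + q) * al0

checks = [
    l1*p*al1 + q*al2 - al0,                                  # (3.36)
    p*al1 + l2*q*al2 - al3,                                  # (3.37)
    -l1*a*p*(1 - p)*al1 + (p + q - a*q*(1 - p))*al2,         # (3.38)
    (p + q - b*p*(1 - q))*al1 - l2*b*q*(1 - q)*al2,          # (3.39)
    l1 - (1 + a*q*(1 - p)/(p + q)*n)*al0/al3,                # second equality in (3.33)
    l2 - (1 + b*p*(1 - q)/(p + q)*n)*al3/al0,                # second equality in (3.34)
]
print([sp.simplify(c) for c in checks])   # expected: [0, 0, 0, 0, 0, 0]
```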
So far, we assumed that \(pq\neq 0\). In view of (3.6), the cases when \(p=0,q>0\) or \(q=0,p>0\) are then obtained by continuity. When \(p\) tends to \(0\), then \(q>0\), since \(p+q>0\), and
\[\ell_{1}=(1+\mathbf{n})\,\frac{\alpha_{0}}{\alpha_{3}},\qquad\ell_{2}=\frac{ \alpha_{3}}{\alpha_{0}}, \tag{3.40}\]
\[\alpha_{1}=\frac{b\,(1-q)}{q}\,\alpha_{3},\qquad\alpha_{2}=\frac{1}{q}\,\alpha _{0}, \tag{3.41}\]
and
\[\mathbf{n}=\frac{(1+q)}{b(1-q)}-\frac{q}{b(1-q)}\,\lim_{p\to 0}\frac{(a-1)}{p}-1. \tag{3.42}\]
Similarly, when \(q\) tends to \(0\), then \(p>0\) and
\[\ell_{1}=\frac{\alpha_{0}}{\alpha_{3}},\qquad\ell_{2}=(1+\mathbf{n})\,\frac{ \alpha_{3}}{\alpha_{0}}, \tag{3.43}\]
\[\alpha_{1}=\frac{1}{p}\,\alpha_{3},\qquad\alpha_{2}=\frac{a\,(1-p)}{p}\, \alpha_{0} \tag{3.44}\]
and
\[\mathbf{n}=\frac{(1+p)}{a(1-p)}-\frac{p}{a(1-p)}\,\lim_{q\to 0}\frac{(b-1)}{q}-1. \tag{3.45}\]
Recall that a metric of the Chen-Teo 4-parameter family is said to be _regular_ if it can be smoothly compactified along the invariant divisors, \(D_{i}\), encoded by the edges \(E_{i}\) of the momentum polytope, \(i=0,\ldots,r\), with a suitable choice of conical singularities of angles \(2\pi\alpha_{i}\) along each \(D_{i}\). In view of the above, this happens if and only if we can choose \(\alpha_{0},\alpha_{1},\alpha_{2},\alpha_{3}\), all positive, satisfying the conditions (3.36)-(3.37)-(3.38)-(3.39), hence, equivalently, the conditions (3.33)-(3.34)-(3.35), in fact (3.33)-(3.34) only, since \(\alpha_{1}\) and \(\alpha_{2}\) are then defined by (3.35). We first observe that, in this case, it follows from (3.35) that \(\alpha_{1}\) and \(\alpha_{2}\) are completely determined by \(\alpha_{0}\) and \(\alpha_{3}\), as \(p+q>0\); moreover, only the quotient \(\alpha_{0}/\alpha_{3}\) is relevant, so that we can arrange that, say, \(\alpha_{0}=1\). This being understood, we can formulate the following statement:
**Theorem 3.1**.: _Let \((M,g)\) be an element of the 4-parameter Chen-Teo family, of parameter \(p,q,a,b\); let \(\mathbf{n}\) be the total NUT-charge of \(g\):_
\((1)\)_\((M,g)\) is regular if and only if \(\mathbf{n}\) is an integer._
\((2)\) _If \((M,g)\) is regular, then the boundary at infinity is diffeomorphic to \(L\), where \(L\) is: (i) a lens space of type \(\ell/\mathbf{n}\), where \(\ell\) is a factor of \(\mathbf{n}+1\), if \(\mathbf{n}\neq-1\); (ii) the sphere \(S^{3}\), if \(\mathbf{n}=-1\); (iii) \(S^{1}\times S^{2}\), if \(\mathbf{n}=0\)._
Proof.: (i) We already know that \(\mathbf{n}\) is an integer whenever \(g\) is regular. For the converse, in view of the above, we simply have to show that if \(\mathbf{n}\) is an integer there always exist \(\alpha_{0},\alpha_{3}\) positive, in fact only \(\alpha_{3}>0\) if we choose \(\alpha_{0}=1\), satisfying the conditions (3.33)-(3.34), where \(\ell_{1},\ell_{2}\) is some pair of integers such that \(\mathbf{n}=\ell_{1}\ell_{2}-1\). From (3.23), we infer that
\[\left(1+\frac{aq(1-p)}{p+q}\mathbf{n}\right)\left(1+\frac{bp(1-q)}{p+q} \mathbf{n}\right)=\ell_{1}\ell_{2}=\mathbf{n}+1, \tag{3.46}\]
\[\left(1+\frac{aq(1-p)}{p+q}\mathbf{n}\right)=\frac{p+q-aq(1-p)}{bp(1-q)}, \tag{3.47}\]
and
\[\left(1+\frac{bp(1-q)}{p+q}\mathbf{n}\right)=\frac{p+q-bp(1-q)}{aq(1-p)}. \tag{3.48}\]
If \(\mathbf{n}\geq 0\), hence \(\ell_{1}\ell_{2}>0\), it follows from (3.46) that either \(\left(1+\frac{aq(1-p)}{p+q}\mathbf{n}\right)\) and \(\left(1+\frac{bp(1-q)}{p+q}\mathbf{n}\right)\) are both positive or both negative; the second case is in fact excluded, due to the constraints on the parameters \(p,q,a,b\): indeed, since \(\mathbf{n}\geq 0\), if \(p>0,q>0\), then \(\left(1+\frac{aq(1-p)}{p+q}\mathbf{n}\right)\) and \(\left(1+\frac{bp(1-q)}{p+q}\mathbf{n}\right)\) are clearly both positive, and this is still the case if \(p\geq 0,q<0\), because of (3.47), or if \(p<0,q\geq 0\), because of (3.48). Thus, \(\ell_{1}\), \(\ell_{2}\) are both positive, and \(\alpha_{3}\) is then defined by (3.33)-(3.34).
If \(\mathbf{n}<-1\), hence \(\ell_{1}\ell_{2}<0\), then either \(\left(1+\frac{aq(1-p)}{p+q}\mathbf{n}\right)>0\) and \(\left(1+\frac{bp(1-q)}{p+q}\mathbf{n}\right)<0\) or vice versa. In the former case, we can choose \(\ell_{1}>0\), \(\ell_{2}<0\); in the latter case, choose instead \(\ell_{1}<0\), \(\ell_{2}>0\), and, in both cases, define \(\alpha_{3}\) by \(\alpha_{3}=\frac{p+q-aq(1-p)}{\ell_{1}bp(1-q)}=\frac{\ell_{2}aq(1-p)}{p+q-bp(1-q)}\).
If \(\mathbf{n}=-1\), then \(\ell_{1}\ell_{2}=0\), so that either \(\ell_{1}=0\) or \(\ell_{2}=0\) or both. In the former case, we can define \(\alpha_{3}=\frac{\ell_{2}(p+q)}{p+q-bp(1-q)}\), where \(\ell_{2}\) may be any integer with the same sign as \(p+q-bp(1-q)\), and likewise if \(\ell_{2}=0\). The most interesting case is when \(\ell_{1}=\ell_{2}=0\), i.e. when \(p+q=aq(1-p)=bp(1-q)\); then, we can choose \(\alpha_{3}=\alpha_{0}=1\), \(a=\frac{p+q}{q(1-p)}\), \(b=\frac{p+q}{p(1-q)}\), \(\alpha_{1}=\frac{1}{p}\), \(\alpha_{2}=\frac{1}{q}\).
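The degenerate case \(\ell_{1}=\ell_{2}=0\) singled out above is easy to check directly; a small SymPy sketch (an editorial addition, not part of the original proof):

```python
import sympy as sp

p, q = sp.symbols('p q')
a = (p + q)/(q*(1 - p))
b = (p + q)/(p*(1 - q))

n   = (p + q)*(p + q - a*q*(1 - p) - b*p*(1 - q))/(a*q*(1 - p)*b*p*(1 - q))   # (3.23)
al1 = b*(1 - q)/(p + q)      # (3.35) with alpha_3 = 1
al2 = a*(1 - p)/(p + q)      # (3.35) with alpha_0 = 1

print(sp.simplify(n), sp.simplify(al1 - 1/p), sp.simplify(al2 - 1/q))   # expected: -1 0 0
```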
(ii) If \((M,g)\) is regular, we know by (3.31) that \(v_{3}=\mathbf{n}v_{1}-\ell_{2}v_{0}\). According to Proposition 4.1 in [3], the function \(z+i\rho\) identifies the interior of the moment polytope \(P\), equipped with the complex structure induced by \(g\), with the Poincaré upper half-plane. At infinity, the topology of \((M,g)\) is then \(\mathbb{R}\times L\), where \(L\) is obtained, from the product \([0,1]\times\mathbb{T}^{2}\), by identifying the circle \(\{0\}\times S^{1}\) with the circle \(\{1\}\times S^{1}\) via the rotation \(2i\pi\frac{\ell_{2}}{\mathbf{n}}\), where \(\{0\}\times S^{1}\) encodes the orbit of \(v_{0}\), around \(E_{0}\), and \(\{1\}\times S^{1}\) the orbit of \(v_{3}=\mathbf{n}v_{1}-\ell_{2}v_{0}\), around \(E_{3}\), cf. [13], [18] and Figure 3.
### The case when the metric is smooth
If \(\alpha_{0}=\alpha_{1}=\alpha_{2}=\alpha_{3}=1\), i.e. if \((M,g)\) is a gravitational instanton, the system (3.36)-(3.37)-(3.38)-(3.39) becomes:
\[\ell_{1}p+q=1,\qquad p+\ell_{2}q=1, \tag{3.49}\]
\[p+q-a(1-p)=0,\quad p+q-b(1-q)=0. \tag{3.50}\]
By (3.49), \(p+q=1-(\ell_{1}-1)p=1-(\ell_{2}-1)q\). From (3.50), we then infer
\[\frac{a-1}{p}=\frac{2-\ell_{1}}{1-p},\qquad\frac{b-1}{q}=\frac{2-\ell_{2}}{1-q}. \tag{3.51}\]
It follows that \(\ell_{1}<2\) and \(\ell_{2}<2\). From (3.49), we infer that \(\ell_{1}p=1-q>0\) and \(\ell_{2}q=1-p>0\). If \(p>0\) and \(q>0\), hence \(\ell_{1}=\ell_{2}=1\), we thus get \(\mathbf{n}=0\), \(p+q=1\), \(a=\frac{1}{q}\), \(b=\frac{1}{p}\), which characterizes the Chen-Teo instanton, cf. (2.37)-(2.38)-(2.39). If \(p>0\) and \(q<0\), then \(\ell_{1}>0\), hence \(\ell_{1}=1\), so that \(p+q=1\) and \(q=\ell_{2}q\), which is impossible, since \(\ell_{2}q>0\). Similarly, we cannot have \(p<0\) and \(q>0\). We thus recover the fact, already established in [3, Section 7], that the only toric Hermitian ALF gravitational instantons with 3 angular points are the Chen-Teo gravitational instantons.
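The Chen-Teo case recovered above can be confirmed with a few lines of SymPy (an editorial illustration, not part of the original text): with \(p+q=1\), \(a=1/q\), \(b=1/p\), the relations (3.50) hold and the normalized NUT-charge vanishes.

```python
import sympy as sp

p = sp.symbols('p')
q = 1 - p                    # p + q = 1
a, b = 1/q, 1/p              # Chen-Teo instanton parameters

n = (p + q)*(p + q - a*q*(1 - p) - b*p*(1 - q))/(a*q*(1 - p)*b*p*(1 - q))   # (3.23)

# (3.50) with l1 = l2 = 1, and vanishing of the normalized NUT-charge
print(sp.simplify(p + q - a*(1 - p)), sp.simplify(p + q - b*(1 - q)), sp.simplify(n))
# expected: 0 0 0
```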
### Some particular cases in the general ALF case
If \(\alpha_{0}=\alpha_{3}=1\), then, by (3.36)-(3.37), we get
\[\ell_{1}p\alpha_{1}+q\alpha_{2}=1,\qquad p\alpha_{1}+\ell_{2}q\alpha_{2}=1, \tag{3.52}\]
while, by (3.35), we have:
\[a=\frac{(p+q)\alpha_{2}}{1-p},\qquad b=\frac{(p+q)\alpha_{1}}{1-q}. \tag{3.53}\]
From (3.33)-(3.34), we also infer:
\[\ell_{1}=\frac{1-q\alpha_{2}}{p\alpha_{1}}=1+q\alpha_{2}\mathbf{n},\quad\ell_ {2}=\frac{1-p\alpha_{1}}{q\alpha_{2}}=1+p\alpha_{1}\mathbf{n}, \tag{3.54}\]
\[\mathbf{n}=\frac{\ell_{1}-1}{q\alpha_{2}}=\frac{\ell_{2}-1}{p\alpha_{1}}= \frac{1-p\alpha_{1}-q\alpha_{2}}{pq\alpha_{1}\alpha_{2}}, \tag{3.55}\]
Figure 3. _Lens space at infinity, obtained ”by attaching two solid tori \(S^{1}\times D^{2}\) together by a diffeomorphism \(S^{1}\times\partial D^{2}\to S^{1}\times\partial D^{2}\) sending a meridian \(\{x\}\times\partial D^{2}\) to a circle of slope \(\ell/\mathbf{n}\)”, cf. [13]. The disk \(D_{v_{0}}\), resp. \(D_{v_{3}}\), is formed by the orbits of the Killing vector field \(v_{0}\), resp. \(v_{3}\). The red half-circle is the hyperbolic geodesic in the Poincaré upper half-plane relating \(-R\) to \(R\) on the real axis, and \(R\) tends to infinity._
and
\[\frac{a-1}{p}=\frac{1-\ell_{1}\alpha_{1}-\alpha_{2}}{1-p},\qquad\frac{b-1}{q}= \frac{1+\alpha_{1}-\ell_{2}\alpha_{2}}{1-q}. \tag{3.56}\]
**Particular case 1.** We first consider the case when, in addition to \(\alpha_{0}=\alpha_{3}=1\), we suppose that \(\alpha_{2}=1\), and we then put: \(\alpha_{1}=\alpha\) (similar developments can be done, by simply swapping \(p\) and \(q\) if we suppose instead that \(\alpha_{1}=1\) and \(\alpha_{2}=\alpha\)). We then have:
\[\ell_{1}p\alpha+q=1,\qquad p\alpha+\ell_{2}q=1, \tag{3.57}\]
and:
\[a=\frac{p+q}{1-p},\qquad b=\frac{(p+q)\alpha}{1-q}. \tag{3.58}\]
We then have:
\[\ell_{1}=1+q\mathbf{n}=\frac{1-q}{p\alpha},\qquad\ell_{2}=1+p\alpha\mathbf{n} =\frac{1-p\alpha}{q}, \tag{3.59}\]
\[\mathbf{n}=\frac{\ell_{1}-1}{q}=\frac{\ell_{2}-1}{p\alpha}=\frac{1-p\alpha-q }{pq\alpha}, \tag{3.60}\]
and
\[\frac{a-1}{p}=\frac{2-\ell_{1}\alpha}{1-p},\qquad\frac{b-1}{q}=\frac{1+\alpha -\ell_{2}}{1-q}. \tag{3.61}\]
Interesting 1-parameter families are obtained by taking \(q=0\), from which we infer: \(p>0\), hence \(\frac{1}{2}<p<1\) -- since we have then \(a>1\); from (3.57) we also get: \(p\alpha=1\), hence \(1<\alpha<2\), and \(\ell_{1}=1\), hence \(\mathbf{n}=\ell_{2}-1\); we also infer: \(a=\frac{p}{1-p}\), \(b=1\), \(\frac{a-1}{p}=\frac{2-\alpha}{1-p}\) and \(\frac{b-1}{q}=1+\alpha-\ell_{2}\). For any \(\mathbf{n}=\ell_{2}-1\), we thus get a 1-parameter family of regular metrics parametrised either by \(p\in(\frac{1}{2},1)\) or, equivalently, by the angle \(2\pi\alpha\in(2\pi,4\pi)\).
When \(p\) tends to 1, i. e. when \(\alpha\) tends to 1, for any \(\ell_{2}\), \(a\) tends to \(+\infty\), \(b=1\) and \(\frac{\sqrt{b}-1}{q}\) tends to \(\frac{2-\ell_{2}}{2}\); in view of Remark 3.1, the metric then tends to the metric encoded by the piecewise affine function:
\[f^{\alpha=1}(z)=1-\frac{(2-\ell_{2})}{4}+\frac{1}{2}|z+\frac{(2-\ell_{2})}{4} |+\frac{1}{2}|z-\frac{(2-\ell_{2})}{4}|. \tag{3.62}\]
When \(p\) tends to \(\frac{1}{2}\), i. e. when \(\alpha\) tends to 2, for any \(\ell_{2}\), \(a=1\), implying that \(z_{1}=z_{2}\) and \(\frac{\sqrt{a}-1}{p}=0\), and \(\frac{\sqrt{b}-1}{q}\) tends to \(\frac{3-\ell_{2}}{2}\). In view of Remark 3.1 again, the metric then tends to the metric encoded by the piecewise affine function:
\[f^{\alpha=2}(z)=1-\frac{(3-\ell_{2})}{4}+\frac{1}{2}|z+\frac{(3-\ell_{2})}{4} |+\frac{1}{2}|z-\frac{(3-\ell_{2})}{4}|. \tag{3.63}\]
This limit when the angle goes to \(4\pi\) corresponds to the process described in [3, SS9] where the \(S^{2}\) with the conical singularity disappears at the limit \(4\pi\) with a bubble which should be the 2-cover of the self-dual Eguchi-Hanson metric (see the family of section 2.3 for an example of this phenomenon).
By successively considering the particular cases when \(\ell_{2}=2,\mathbf{n}=1\), \(\ell_{2}=0,\mathbf{n}=-1\), \(\ell_{2}=-1,\mathbf{n}=-2\) and the AF case \(\ell_{2}=1,\mathbf{n}=0\), we thus get the following 1-parameter families.
(i) \(\ell_{2}=2,\mathbf{n}=1\): \(f^{\alpha=1}(z)=1+|z|\), which encodes the self-dual Taub-NUT gravitational instanton, and \(f^{\alpha=2}(z)=\frac{3}{4}+\frac{1}{2}|z+\frac{1}{4}|+\frac{1}{2}|z-\frac{1} {4}|\), which encodes the positive Taub-bolt metric.
(ii) \(\ell_{2}=0,\mathbf{n}=-1\): \(f^{\alpha=1}(z)=\frac{1}{2}+\frac{1}{2}|z+\frac{1}{2}|+\frac{1}{2}|z-\frac{1}{2}|\), which encodes the Schwarzschild gravitational instanton, and \(f^{\alpha=2}(z)=\frac{1}{4}+\frac{1}{2}|z+\frac{3}{4}|+\frac{1}{2}|z-\frac{3}{4}|\), which encodes the negative Taub-bolt metric.
(iii) \(\ell_{2}=-1,\mathbf{n}=-2\): \(f^{\alpha=1}(z)=\frac{1}{4}+\frac{1}{2}|z+\frac{3}{4}|+\frac{1}{2}|z-\frac{3}{4}|\), which encodes the negative Taub-bolt gravitational instanton, and \(f^{\alpha=2}(z)=\frac{1}{2}|z+1|+\frac{1}{2}|z-1|\), which encodes the self-dual Eguchi-Hanson metric.
(iv) \(\ell_{2}=1,\mathbf{n}=0\) (this is an AF case): \(f^{\alpha=1}(z)=\frac{3}{4}+\frac{1}{2}|z+\frac{1}{4}|+\frac{1}{2}|z-\frac{1}{4}|\), which encodes the positive Taub-bolt gravitational instanton, and \(f^{\alpha=2}(z)=\frac{1}{2}+\frac{1}{2}|z+\frac{1}{2}|+\frac{1}{2}|z-\frac{1}{2}|\), which encodes the Schwarzschild metric.
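These four specializations of (3.62)-(3.63) can be verified mechanically; the following SymPy sketch (an editorial addition, not part of the original text) compares both formulas with the functions listed in (i)-(iv).

```python
import sympy as sp

z = sp.symbols('z', real=True)

def f_a1(l):   # (3.62)
    s = sp.Rational(2 - l, 4)
    return 1 - s + sp.Abs(z + s)/2 + sp.Abs(z - s)/2

def f_a2(l):   # (3.63)
    s = sp.Rational(3 - l, 4)
    return 1 - s + sp.Abs(z + s)/2 + sp.Abs(z - s)/2

targets = {   # the pairs (f^{alpha=1}, f^{alpha=2}) listed in cases (i)-(iv)
    2:  (1 + sp.Abs(z),
         sp.Rational(3, 4) + sp.Abs(z + sp.Rational(1, 4))/2 + sp.Abs(z - sp.Rational(1, 4))/2),
    0:  (sp.Rational(1, 2) + sp.Abs(z + sp.Rational(1, 2))/2 + sp.Abs(z - sp.Rational(1, 2))/2,
         sp.Rational(1, 4) + sp.Abs(z + sp.Rational(3, 4))/2 + sp.Abs(z - sp.Rational(3, 4))/2),
    -1: (sp.Rational(1, 4) + sp.Abs(z + sp.Rational(3, 4))/2 + sp.Abs(z - sp.Rational(3, 4))/2,
         sp.Abs(z + 1)/2 + sp.Abs(z - 1)/2),
    1:  (sp.Rational(3, 4) + sp.Abs(z + sp.Rational(1, 4))/2 + sp.Abs(z - sp.Rational(1, 4))/2,
         sp.Rational(1, 2) + sp.Abs(z + sp.Rational(1, 2))/2 + sp.Abs(z - sp.Rational(1, 2))/2),
}
print(all(sp.simplify(f_a1(l) - t1) == 0 and sp.simplify(f_a2(l) - t2) == 0
          for l, (t1, t2) in targets.items()))   # expected: True
```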
### The AF case
As mentioned above, the normalized total NUT-charge \(\mathbf{n}\) is equal to zero if and only if the metric is AF. We then have \(\ell_{1}\ell_{2}=1\), while it follows from (3.33)-(3.34) that \(\ell_{1}=\ell_{2}^{-1}=\frac{\alpha_{0}}{\alpha_{3}}\). Since \(\ell_{1}\) and \(\ell_{2}\) are both positive integers, we eventually infer that:
\[\ell_{1}=\ell_{2}=1, \tag{3.64}\]
so that
\[\alpha_{0}=\alpha_{3}. \tag{3.65}\]
The conditions (3.36)-(3.37)-(3.38)-(3.39) then become:
\[p\alpha_{1}+q\alpha_{2}=\alpha_{0}=\alpha_{3}, \tag{3.66}\]
\[-a\,p(1-p)\,\alpha_{1}+(p+q-a\,q(1-p))\,\alpha_{2}=0, \tag{3.67}\]
\[(p+q-b\,p(1-q))\,\alpha_{1}-b\,q(1-q)\,\alpha_{2}=0. \tag{3.68}\]
Without loss of generality, we can suppose that \(\alpha_{0}=\alpha_{3}=1\). We thus get:
\[p\alpha_{1}+q\alpha_{2}=1, \tag{3.69}\]
as well as
\[a=\frac{(p+q)}{1-p}\,\alpha_{2},\qquad b=\frac{(p+q)}{1-q}\,\alpha_{1}. \tag{3.70}\]
### Particular case 2
An interesting case is when \(\alpha_{1}=\alpha_{2}=:\alpha>0\). This happens if and only if
\[a=\frac{1}{1-p},\qquad b=\frac{1}{1-q}, \tag{3.71}\]
and then
\[\alpha=\frac{1}{p+q}. \tag{3.72}\]
If \(p+q=1\), i.e. if \(\alpha=1\), we thus recover the Chen-Teo instanton. If, however, \(p+q\) tends to \(2\), i.e. if both \(p\) and \(q\) tend to \(1\), then \(\alpha\) tends to \(\frac{1}{2}\), while the normalised piecewise affine function, cf. Remark 3.1, tends to \(f(z)=1+|z|\); we thus obtain a quotient by \(\mathbb{Z}/2\mathbb{Z}\) of the self-dual Taub-NUT space. Finally, if \(p+q\) tends to \(0\), i.e. \(p\) and \(q\) both tend to \(0\) and \(\alpha\) then tends to \(+\infty\), then \(a\) and \(b\) both tend to \(1\), \(\frac{\sqrt{a}-1}{p}\) and \(\frac{\sqrt{b}-1}{q}\) both tend to \(\frac{1}{2}\), and the piecewise affine function tends to \(f_{EH}(z)=\frac{1}{2}|z+\frac{1}{2}|+\frac{1}{2}|z-\frac{1}{2}|\), which encodes the self-dual Eguchi-Hanson metric, cf. Paragraph 2.3.
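The computations behind Particular case 2 can be confirmed symbolically; a short SymPy sketch (an editorial addition, not part of the original text):

```python
import sympy as sp

p, q = sp.symbols('p q')
a, b  = 1/(1 - p), 1/(1 - q)     # (3.71)
alpha = 1/(p + q)                # (3.72)

al1 = b*(1 - q)/(p + q)          # alpha_1 from (3.35) with alpha_3 = 1
al2 = a*(1 - p)/(p + q)          # alpha_2 from (3.35) with alpha_0 = 1
n   = (p + q)*(p + q - a*q*(1 - p) - b*p*(1 - q))/(a*q*(1 - p)*b*p*(1 - q))

print(sp.simplify(al1 - alpha), sp.simplify(al2 - alpha),
      sp.simplify(p*al1 + q*al2 - 1), sp.simplify(n))   # expected: 0 0 0 0
```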
**Particular case 3.** An interesting case is with \(\alpha_{0}=\alpha_{2}=\alpha_{3}=1\), and we then put: \(\alpha_{1}=:\alpha\). We thus get:
\[a=\frac{p+q}{1-p},\qquad b=\frac{p+q}{p} \tag{3.73}\]
and
\[\alpha=\frac{1-q}{p}. \tag{3.74}\]
Interesting 1-parameter families are obtained by fixing the parameter \(q\) (this actually amounts to fixing the asymptotic behavior of the metric). Then \(\frac{1-q}{2}<p<1\) (the first inequality coming from \(a>1\)). We get a family of AF examples which are smooth except for one conical singularity along an \(S^{2}\) with angle \(2\pi\alpha\in(2\pi(1-q),4\pi)\); \(q\) being fixed, this family is parametrised either by \(p\), or by \(\alpha\), or, better, by \(\tau:=\alpha-1\), so that: \(-q<\tau<1\). In view of Remark 2.1, for each value of \(\tau\) it is easy to check that the corresponding metric is encoded by the following piecewise affine function
\[\begin{split} f^{q,\tau}(z)=&\frac{(1+q\,\tau)^{ \frac{1}{2}}}{2q(1-q)}\Big{(}(1+q\tau)^{\frac{1}{2}}-q(q+\tau)^{\frac{1}{2}}-( 1-q)^{\frac{3}{2}}\Big{)}\\ &+\frac{(q+\tau)}{2(1+\tau)}\,|z+\frac{(1-\tau^{2})}{(q+\tau)^{ \frac{1}{2}}\big{(}(1+q\,\tau)^{\frac{1}{2}}+(q+\tau)^{\frac{1}{2}}\big{)}}| \\ &+\frac{(1+q\,\tau)}{2(1+\tau)}\,|z|\\ &+\frac{(1-q)}{2}\,|z-\frac{(1+\tau)}{(1-q)^{\frac{1}{2}}\big{(} (1+q\,\tau)^{\frac{1}{2}}+(1-q)^{\frac{1}{2}}\big{)}}|.\end{split} \tag{3.75}\]
When \(\tau=0\), i.e. \(\alpha=1\) and \(p=1-q\), the corresponding metric is smooth: it is the Chen-Teo gravitational instanton of parameter \(q,p=1-q\), whose piecewise affine function is
\[f^{\tau=0}(z)= \frac{1}{2pq}\big{(}1-p^{\frac{3}{2}}-q^{\frac{3}{2}}\big{)}+ \frac{q}{2}\,|z+\frac{1}{q^{\frac{1}{2}}(1+q^{\frac{1}{2}})}|+\frac{1}{2}|z|+ \frac{p}{2}\,|z-\frac{1}{p^{\frac{1}{2}}(1+p^{\frac{1}{2}})}|. \tag{3.76}\]
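The reduction of (3.75) to (3.76) at \(\tau=0\) can be verified symbolically; the following SymPy sketch (an editorial addition, not part of the original text) performs this check with \(p=1-q\).

```python
import sympy as sp

z = sp.symbols('z', real=True)
q = sp.symbols('q', positive=True)
tau = 0
p = 1 - q                                   # the case tau = 0 corresponds to p = 1 - q

# f^{q,tau}(z) from (3.75)
c0 = sp.sqrt(1 + q*tau)/(2*q*(1 - q)) * (sp.sqrt(1 + q*tau) - q*sp.sqrt(q + tau) - (1 - q)**sp.Rational(3, 2))
f_gen = (c0
         + (q + tau)/(2*(1 + tau))*sp.Abs(z + (1 - tau**2)/(sp.sqrt(q + tau)*(sp.sqrt(1 + q*tau) + sp.sqrt(q + tau))))
         + (1 + q*tau)/(2*(1 + tau))*sp.Abs(z)
         + (1 - q)/2*sp.Abs(z - (1 + tau)/(sp.sqrt(1 - q)*(sp.sqrt(1 + q*tau) + sp.sqrt(1 - q)))))

# f^{tau=0}(z) from (3.76)
f_ct = (1/(2*p*q)*(1 - p**sp.Rational(3, 2) - q**sp.Rational(3, 2))
        + q/2*sp.Abs(z + 1/(sp.sqrt(q)*(1 + sp.sqrt(q))))
        + sp.Abs(z)/2
        + p/2*sp.Abs(z - 1/(sp.sqrt(p)*(1 + sp.sqrt(p)))))

print(sp.simplify(f_gen - f_ct))   # expected: 0
```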
If \(\tau=-q\), i.e. \(\alpha=1-q\), we get
\[\begin{split} f^{\tau=-q}(z)=\frac{(1+q)^{\frac{1}{2}}}{2q} \Big{(}(1+q)^{\frac{1}{2}}-(1-q)\Big{)}+\frac{(1-q)}{2}\,|z|+\frac{(1-q)}{2} \,|z-\frac{1}{1+(1+q)^{\frac{1}{2}}}|.\end{split} \tag{3.77}\]
If \(\tau=1\), i.e. \(\alpha=2\), we get:
\[\begin{split} f^{\tau=1}(z)=&\frac{(1+q)^{\frac{1} {2}}}{2q}\Big{(}(1+q)^{\frac{1}{2}}-(1-q)^{\frac{1}{2}}\Big{)}+\frac{(1+q)}{2 }\,|z|\\ &+\frac{(1-q)}{2}\,|z-\frac{2}{(1-q)^{\frac{1}{2}}\Big{(}(1+q)^{ \frac{1}{2}}+(1-q)^{\frac{1}{2}}\Big{)}}|.\end{split} \tag{3.78}\]
The limit for the angle \(4\pi\) is again obtained by blowing down the \(S^{2}\) to a point and is a Kerr metric. The limit for the angle \(2\pi(1-q)\) is a Kerr-Taub-bolt metric with a conical singularity: it changes the topology at infinity because there is a bubble at infinity. The special case \(q=0\) was already studied in section 3.3, case (iv). |
2304.08238 | Gradient estimate for solutions of the equation $Δ_p v+av^{q}=0$ on
a complete Riemannian manifold | In this paper, we use Nash-Moser iteration method to study the local and
global behaviours of positive solutions to the nonlinear elliptic equation
$\Delta_pv +av^{q}=0$ defined on a complete Riemannian manifolds $(M,g)$ where
$p>1$, $a$ and $q$ are constants. Under some assumptions on $a$, $p$ and $q$,
we derive gradient estimates and Liouville type theorems for such positive
solutions. | Jie He, Youde Wang, Guodong Wei | 2023-04-17T12:59:31Z | http://arxiv.org/abs/2304.08238v5 | Gradient estimate for solutions of the equation \(\Delta_{p}v+av^{q}=0\) on a complete Riemannian manifold
###### Abstract.
In this paper, we use the Nash-Moser iteration method to study the local and global behaviours of positive solutions to the nonlinear elliptic equation \(\Delta_{p}v+av^{q}=0\) defined on a complete Riemannian manifold \((M,g)\), where \(p>1\), \(a\) and \(q\) are constants. Under some assumptions on \(a\), \(p\) and \(q\), we derive gradient estimates and Liouville type theorems for such positive solutions.
Key words and phrases:non-linear elliptic equation, gradient estimate, \(p\)-Laplace
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Proof of main theorem
* 3.1 Estimate for the linearisation operator of \(p\)-Laplace operator
* 3.2 Deducing the main integral inequality
* 3.3 \(L^{\beta}\) bound of gradient in \(3R/4\) radius ball
* 3.4 Moser iteration
## 1. Introduction
Gradient estimates are a basic and powerful technique in the study of partial differential equations on Riemannian manifolds. For instance, one can use gradient estimates to derive Harnack inequalities, to deduce Liouville type theorems, to study the geometry of manifolds, etc. Many mathematicians have paid attention to this topic (see, for example, [14, 16, 24, 25, 26, 27] and the references therein).
In this paper we are concerned with the following equation on a complete Riemannian manifold \((M,g)\),
\[\Delta_{p}v+av^{q}=0, \tag{1.1}\]
where \(p>1\), \(a\), \(q\in\mathbb{R}\) are constants and \(\Delta_{p}(v)=\operatorname{div}(|\nabla v|^{p-2}\nabla v)\) is the \(p\)-Laplace operator.
When \(q\neq p-1\) and \(a>0\), the constant \(a\) can be absorbed by a dilation transformation, then equation (1.1) reduces to the classical Lane-Emden-Fowler equation
\[\Delta_{p}v+v^{q}=0, \tag{1.2}\]
which has been widely studied in the literature (see for example,[1, 2, 3, 4, 9, 15, 22]). In particular, Serrin-Zou in [22] showed that if
\[1<p<n,\quad\text{and}\quad 0<q<p^{*}=\frac{np}{n-p}-1,\]
then equation (1.2) defined on \(\mathbb{R}^{n}\) admits no nonnegative nontrivial solution.
Next, we focus our attention on equation (1.1) defined on a Riemannian manifold. When \(p=2\) and \(q=2^{*}\), equation (1.1) has a deep relationship with the Yamabe problem (see [19, 20, 21]). When \(M\) is the two-sphere, equation (1.1) is also closely related to the stationary solutions of Euler's equation on \(\mathbb{S}^{2}\) (see, for example, [6, 7]).
Now, we will focus on the Liouville type results for equation (1.1) on Riemannian manifolds. When \(a=0\), equation (1.1) becomes the \(p\)-Laplace equation
\[\Delta_{p}v=0,\quad p>1. \tag{1.3}\]
The celebrated Cheng-Yau gradient estimate for harmonic functions shows that, when \(p=2\), any solution to (1.3) bounded from above or below is a constant (cf. [5]), provided the Ricci curvature of the manifold is nonnegative. Kotschwar-Ni (see [13]) established gradient estimates for positive solutions to (1.3) for any \(p\) under the assumption that the sectional curvature is bounded from below. Later, Wang-Zhang ([25]) first applied the Nash-Moser iteration technique to gradient estimates and proved that any positive solution of (1.3) is constant provided the Ricci curvature of the manifold is nonnegative. Wang-Zhang's result only assumes a lower bound on the Ricci curvature of \(M\); hence it generalizes Yau's result ([26]) for \(p=2\) to any \(p>1\) and substantially improves Kotschwar-Ni's results.
When \(a=1\), Gidas and Spruck ([9]) proved that any \(C^{2}\) non-negative solution of (1.1) on a complete Riemannian manifold with non-negative Ricci curvature is identically zero when \(p=2\), \(1\leq q<2^{*}\). However, there is no extension of Gidas and Spruck's result to general \(p\). Instead of a curvature condition, Grigor'yan-Sun ([10]) imposed a condition on the volume growth of geodesic balls in \(M\) and obtained a Liouville theorem. In the case \(p=2\), they showed that if
\[\operatorname{vol}\left(B(x_{0},r)\right)\leq Cr^{\frac{2q}{q-1}}(\ln r)^{ \frac{1}{q-1}},\]
then equation (1.1) has no nontrivial non-negative solutions on \(M\). Later, Sun ([23]) extended these results to the general case \(p>1\).
Motivated by Wang-Zhang's method, Zhao-Yang [27] studied the weighted \(p\)-Laplacian Lichnerowicz equation
\[\Delta_{p,f}u+au^{\sigma}=0\]
on manifolds with \(m\)-Bakry-Emery Ricci curvature bounded from below. They also obtained a gradient estimate similar to Wang-Zhang's result; however, their results do not cover Wang-Zhang's results in the case where \(f\) is a constant and \(a=0\).
Very recently, the second and third named authors (see [17]) proved, in the case where \(p=2\) and \(a\) is a positive constant, that if \((M,g)\) has non-negative Ricci curvature, then for any
\[q\in\left(-\infty,\quad\frac{n+1}{n-1}+\frac{2}{\sqrt{n(n-2)}}\right),\]
equation (1.1) admits no positive solutions. Motivated by Wang-Zhang's results, they used Nash-Moser iteration to deduce a gradient estimate and then obtained the Liouville property from the gradient
estimate. In this paper, we consider the Liouville property for the equation (1.1) for a general \(p>1\) on a Riemannian manifold.
**Theorem 1.1**.: _Let \((M,g)\) be an \(n\)-dimensional (\(n>2\)) complete manifold with \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g\), where \(\kappa\) is a non-negative constant. Assume that \(v\) is a positive solution to equation (1.1) on the geodesic ball \(B(o,R)\subset M\). If the constants \(a,q\) and \(p>1\) satisfy one of the following two conditions,_
\[a\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)\geq 0; \tag{1.4}\]
\[p-1<q<\frac{n+3}{n-1}(p-1). \tag{1.5}\]
_Then there holds_
\[\sup_{B_{\frac{R}{2}}(o)}\frac{|\nabla v|^{2}}{v^{2}}\leq c(n,p,q)\frac{(1+ \sqrt{\kappa}R)^{2}}{R^{2}}.\]
A direct corollary of Theorem 1.1 is:
**Corollary 1.2**.: _When \(a=0\), we can derive Wang-Zhang's gradient estimates (see [25]) from the case (1) in Theorem 1.1._
By carefully analyzing the conditions (1.4) and (1.5) in Theorem 1.1, the following result holds.
**Corollary 1.3**.: _Let \((M,g)\) be an \(n\)-dimensional (\(n>2\)) complete manifold with \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g\), where \(\kappa\) is a non-negative constant. Assume that \(v\) is a positive solution to equation (1.1) on the geodesic ball \(B(o,R)\subset M\). If_
\[a>0\quad\text{ and }\quad q<\frac{n+3}{n-1}(p-1),\]
_or_
\[a<0\quad\text{ and }\quad q>p-1,\]
_then_
\[\sup_{B_{\frac{R}{2}}(o)}\frac{|\nabla v|^{2}}{v^{2}}\leq c(n,p,q)\frac{(1+ \sqrt{\kappa}R)^{2}}{R^{2}}.\]
When the manifold \((M,g)\) has non-negative Ricci curvature, we can obtain the corresponding Liouville property of the equation (1.1).
**Theorem 1.4**.: _Let \((M,g)\) be a complete non-compact Riemannian manifold with non-negative Ricci curvature. If \(a,\ p\) and \(q\) satisfy one of the conditions given in Theorem 1.1, then equation (1.1) admits no positive solutions._
**Remark 1**.: _When \(a>0\) and \(p=2\), by Theorem 1.4, we deduce that for_
\[q\in\left(-\infty,\quad\frac{n+3}{n-1}\right),\]
equation (1.1) has no positive solutions. Since_
\[\frac{n+3}{n-1}>\frac{n+1}{n-1}+\frac{2}{\sqrt{n(n-1)}},\]
_Theorem 1.4 improved Wang-Wei's main results (see [17])._
**Theorem 1.5**.: _Let \((M,g)\) be a complete non-compact Riemannian manifold with \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g,\) where \(\kappa\) is a non-negative constant. Suppose \(u\) is a positive solution of equation (1.1) with the constants \(a,p\) and \(q\) satisfy (1.4) or (1.5). Fix \(x_{0}\in M\), then for any \(x\in M\), there holds_
\[u(x_{0})e^{-c(n,p,q)\sqrt{\kappa}d(x,x_{0})}\leq u(x)\leq u(x_{0})e^{c(n,p,q) \sqrt{\kappa}d(x,x_{0})},\]
_where \(d(x_{0},x)\) is the geodesic distance between \(x_{0}\) and \(x\)._
We notice that just one week ago, Guangyue Huang, Qi Guo, and Lujun Guo posted a preprint on arXiv (see [12]) in which they also study gradient estimates for equation (1.1). We point out here that their results are quite different from ours. First, the results in their Theorem 1 and Theorem 3 are weaker than ours, since in our gradient estimate and Liouville type theorem the range of \(q\) is larger; moreover, their Theorem 1 needs a restriction on \(p\) and their Theorem 3 does not have a detailed proof. Second, somewhat surprisingly, their gradient estimate does not completely cover Wang-Zhang's result.
The rest of our paper is organized as follows. In section 2, we will give a meticulous estimate of \(\mathcal{L}\left(|\nabla\log v|^{2\alpha}\right)\) (see (2.3) for the explicit definition of the operator \(\mathcal{L}\)) and recall Saloff-Coste's Sobolev embedding theorem. In section 3, we carefully use the Moser iteration to provide the proofs of the main results in this paper.
## 2. Preliminaries
Throughout this paper, we denote by \((M,g)\) an \(n\)-dimensional Riemannian manifold and by \(\nabla\) the corresponding Levi-Civita connection. For any function \(\varphi\in C^{1}(M)\), we denote by \(\nabla\varphi\in\Gamma(T^{*}M)\) the 1-form defined by \(\nabla\varphi(X)=\nabla_{X}\varphi\). We denote the volume form by \(\operatorname{vol}=\sqrt{\det(g_{ij})}\,dx_{1}\wedge\ldots\wedge dx_{n}\), where \((x_{1},\ldots,x_{n})\) are local coordinates, and for simplicity we may omit the volume form in integrals over \(M\).
The \(p\)-Laplace operator is defined by
\[\Delta_{p}u=\operatorname{div}\left(|\nabla u|^{p-2}\nabla u\right).\]
The solution of \(p\)-Laplace equation \(\Delta_{p}u=0\) is the critical point of the energy functional
\[E(u)=\int_{M}|\nabla u|^{p}.\]
**Definition 2.1**.: \(v\) is said to be a (weak) solution of equation (1.1), if \(v\in C^{1}(M)\cap W^{1,p}_{loc}(M)\) and for all \(\psi\in W^{1,p}_{0}(M)\), we have
\[-\int_{M}|\nabla v|^{p-2}\langle\nabla v,\nabla\psi\rangle+\int_{M}av^{q}\psi =0.\]
Next, we recall Saloff-Coste's Sobolev inequality (see [18, Theorem 3.1]), which shall play a key role in our proof of the main theorem.
**Lemma 2.2** ([18]).: _Let \((M,g)\) be a complete manifold with \(Ric\geq-(n-1)\kappa\). For \(n>2\), there exists a positive constant \(C_{n}\) depending only on \(n\), such that for all \(B\subset M\) of radius R and volume \(V\) we have for \(f\in C_{0}^{\infty}(B)\)_
\[\|f\|_{L^{\frac{2n}{n-2}}}^{2}\leq e^{C_{n}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}} R^{2}\left(\int|\nabla f|^{2}+R^{-2}f^{2}\right).\]
By a logarithmic transformation \(u=-(p-1)\log v\), equation (1.1) becomes
\[\Delta_{p}u-|\nabla u|^{p}-be^{cu}=0, \tag{2.1}\]
where
\[b=a(p-1)^{p-1},\quad c=\frac{p-q-1}{p-1}. \tag{2.2}\]
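The bookkeeping in (2.1)-(2.2) can be sanity-checked in one dimension. The following SymPy sketch (an editorial illustration; the values of \(p\), \(q\), \(a\) and the test function are arbitrary choices, not taken from the paper) verifies the identity \(\Delta_{p}v+av^{q}=-(p-1)^{-(p-1)}v^{p-1}\big(\Delta_{p}u-|u^{\prime}|^{p}-be^{cu}\big)\) on an interval where \(u^{\prime}>0\), which is equivalent to the substitution stated above.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p, q, a = sp.Rational(5, 2), sp.Rational(1, 2), 2        # arbitrary sample values
b, c = a*(p - 1)**(p - 1), (p - q - 1)/(p - 1)           # as in (2.2)

u = x**2 + x + 1                 # test function with u' > 0 on x > 0 (so v' < 0)
v = sp.exp(-u/(p - 1))           # i.e. u = -(p-1) log v

Dp_u = sp.diff(sp.diff(u, x)**(p - 1), x)                # |u'|^{p-2} u' = (u')^{p-1} since u' > 0
Dp_v = sp.diff(-(-sp.diff(v, x))**(p - 1), x)            # |v'|^{p-2} v' = -(-v')^{p-1} since v' < 0

lhs = Dp_v + a*v**q                                      # left-hand side of (1.1)
rhs = -(p - 1)**(-(p - 1))*v**(p - 1)*(Dp_u - sp.diff(u, x)**p - b*sp.exp(c*u))

print(sp.simplify(sp.powsimp(lhs - rhs, force=True)))    # should reduce to 0
print((lhs - rhs).subs(x, 1.3).evalf())                  # numerical spot check: ~ 0
```

The sign assumptions on \(u^{\prime}\) are only used to remove the absolute values in the one-dimensional \(p\)-Laplacians.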
Now we consider the linearisation operator \(\mathcal{L}\) of \(p\)-Laplace operator:
\[\mathcal{L}(\psi)=\mathrm{div}\left(f^{p/2-1}A(\nabla\psi)\right), \tag{2.3}\]
where
\[f=|\nabla u|^{2}, \tag{2.4}\]
and
\[A(\nabla\psi)=\nabla\psi+(p-2)f^{-1}\langle\nabla\psi,\nabla u\rangle\nabla u. \tag{2.5}\]
We first derive a useful expression for \(\mathcal{L}(f^{\alpha})\) for any \(\alpha\geq 1\).
**Lemma 2.3**.: _For any \(\alpha\geq 1\), we have_
\[\begin{split}\mathcal{L}(f^{\alpha})=&\alpha\left( \alpha+\frac{p}{2}-2\right)f^{\alpha+\frac{p}{2}-3}|\nabla f|^{2}+2\alpha f^{ \alpha+\frac{p}{2}-2}\left(|\nabla\nabla u|^{2}+\mathrm{Ric}(\nabla u,\nabla u )\right)\\ &+\alpha(p-2)(\alpha-1)f^{\alpha+\frac{p}{2}-4}\langle\nabla f, \nabla u\rangle^{2}+2\alpha f^{\alpha-1}\langle\nabla\Delta_{p}u,\nabla u \rangle.\end{split} \tag{2.6}\]
Proof.: By the definition of \(A\) in (2.5), we have
\[A\Big{(}\nabla(f^{\alpha})\Big{)}=\alpha f^{\alpha-1}\nabla f+\alpha(p-2)f^{ \alpha-2}\langle\nabla f,\nabla u\rangle\nabla u=\alpha f^{\alpha-1}A(\nabla f),\]
then it follows
\[\mathcal{L}(f^{\alpha})=\alpha\mathrm{div}\Big{(}f^{\alpha-1}f^{\frac{p}{2}-1} A(\nabla f)\Big{)}=\alpha\Big{\langle}\nabla(f^{\alpha-1}),f^{\frac{p}{2}-1}A( \nabla f)\Big{\rangle}+\alpha f^{\alpha-1}\mathcal{L}(f). \tag{2.7}\]
Direction computation shows that
\[\alpha\Big{\langle}\nabla(f^{\alpha-1}),f^{\frac{p}{2}-1}A(\nabla f)\Big{\rangle}= \Big{\langle}\alpha(\alpha-1)f^{\alpha-2}\nabla f,f^{\frac{p}{2}-1}\nabla f +(p-2)f^{\frac{p}{2}-2}\langle\nabla f,\nabla u\rangle\nabla u\Big{\rangle}, \tag{2.8}\]
and
\[\begin{split}\alpha f^{\alpha-1}\mathcal{L}(f)=&\alpha f^{\alpha-1}\Big{(}\left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}|\nabla f|^{2}+f^{\frac{p}{2}-1}\Delta f+(p-2)\left(\frac{p}{2}-2\right)f^{\frac{p}{2}-3}\langle\nabla f,\nabla u\rangle^{2}\\ &+(p-2)f^{\frac{p}{2}-2}\langle\nabla\langle\nabla f,\nabla u\rangle,\nabla u\rangle+(p-2)f^{\frac{p}{2}-2}\langle\nabla f,\nabla u\rangle\Delta u\Big{)}.\end{split} \tag{2.9}\]
Combining (2.8) and (2.9) together, we obtain
\[\begin{split}\mathcal{L}(f^{\alpha})=&\alpha\left( \alpha+\frac{p}{2}-2\right)f^{\alpha+\frac{p}{2}-3}|\nabla f|^{2}+\alpha f^{ \alpha+\frac{p}{2}-2}\Delta f\\ &+\alpha(p-2)\left(\alpha+\frac{p}{2}-3\right)f^{\alpha+\frac{p} {2}-4}\langle\nabla f,\nabla u\rangle^{2}\\ &+\alpha(p-2)f^{\alpha+\frac{p}{2}-3}\langle\nabla\langle\nabla f,\nabla u\rangle,\nabla u\rangle+\alpha(p-2)f^{\alpha+\frac{p}{2}-3}\langle \nabla f,\nabla u\rangle\Delta u.\end{split} \tag{2.10}\]
On the other hand, by the definition of the \(p\)-Laplacian, we have
\[\begin{split}\langle\nabla\Delta_{p}u,\nabla u\rangle=&\left(\frac{p}{2}-1\right)\left(\frac{p}{2}-2\right)f^{\frac{p}{2}-3}\langle\nabla f,\nabla u\rangle^{2}+\left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}\langle\nabla\langle\nabla f,\nabla u\rangle,\nabla u\rangle\\ &+\left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}\langle\nabla f,\nabla u\rangle\Delta u+f^{\frac{p}{2}-1}\langle\nabla\Delta u,\nabla u\rangle.\end{split}\]
That is to say, the last term in (2.10) can be written as
\[\begin{split}\alpha(p-2)f^{\alpha+\frac{p}{2}-3}\langle\nabla f,\nabla u\rangle\Delta u=& 2\alpha f^{\alpha-1}\langle\nabla\Delta_{p}u,\nabla u\rangle-2\alpha f^{\alpha+\frac{p}{2}-2}\langle\nabla\Delta u,\nabla u\rangle\\ &-\alpha(p-2)\left(\frac{p}{2}-2\right)f^{\alpha+\frac{p}{2}-4}\langle\nabla f,\nabla u\rangle^{2}\\ &-\alpha(p-2)f^{\alpha+\frac{p}{2}-3}\langle\nabla\langle\nabla f,\nabla u\rangle,\nabla u\rangle.\end{split} \tag{2.11}\]
By (2.11) and the following Bochner formula
\[\frac{1}{2}\Delta f=|\nabla\nabla u|^{2}+\operatorname{Ric}(\nabla u,\nabla u )+\langle\nabla\Delta u,\nabla u\rangle,\]
we have
\[\begin{split}\mathcal{L}(f^{\alpha})=&\alpha\left( \alpha+\frac{p}{2}-2\right)f^{\alpha+\frac{p}{2}-3}|\nabla f|^{2}+2\alpha f^{ \alpha+\frac{p}{2}-2}\left(|\nabla\nabla u|^{2}+\operatorname{Ric}(\nabla u, \nabla u)\right)\\ &+\alpha(p-2)(\alpha-1)f^{\alpha+\frac{p}{2}-4}\langle\nabla f, \nabla u\rangle^{2}+2\alpha f^{\alpha-1}\langle\nabla\Delta_{p}u,\nabla u \rangle.\end{split}\]
## 3. Proof of main theorem
### Estimate for the linearisation operator of \(p\)-Laplace operator
We first prove an pointwise estimate for \(\mathcal{L}(f^{\alpha})\).
**Lemma 3.1**.: _Let \(u\) be a solution of equation (2.1) and \(f=|\nabla u|^{2}\) as we defined in (2.4). We denote \(a_{1}=\left|p-\frac{2(p-1)}{n-1}\right|\), then we have_
1. _If_ \[a\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)\geq 0,\] _then we have_ (3.1) \[\mathcal{L}(f^{\alpha})\geq \frac{2\alpha f^{\alpha+\frac{p}{2}}}{n-1}-2(n-1)\alpha\kappa f^{ \alpha+\frac{p}{2}-1}-\alpha a_{1}f^{\alpha+\frac{p}{2}-\frac{3}{2}}|\nabla f|.\]
2. _If_ (3.2) \[\delta_{n,p,q}=\frac{1}{n-1}-\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)^{2} \frac{(2\alpha-1)(n-1)+p-1}{4(2\alpha-1)}>0,\]
_then we have_
\[\mathcal{L}(f^{\alpha})\geq 2\alpha\delta_{n,p,q}f^{\alpha+\frac{p}{2}}-2\alpha(n-1)\kappa f^{\alpha+\frac{p}{2}-1}-\alpha a_{1}f^{\alpha+\frac{p}{2}-\frac{3}{2}}|\nabla f|.\]
Proof.: Let \(\{e_{1},e_{2},\ldots,e_{n}\}\) be an orthonormal frame of \(TM\) on a domain with \(f\neq 0\) such that \(e_{1}=\frac{\nabla u}{|\nabla u|}\). We have \(u_{1}=f^{1/2}\) and
\[u_{11}=\frac{1}{2}f^{-1/2}f_{1}=\frac{1}{2}f^{-1}\langle\nabla u,\nabla f\rangle \tag{3.3}\]
If we express the \(p\)-Lapalce in terms of \(f\), we have (see also [13, 25])
\[\Delta_{p}u=f^{\frac{p}{2}}+be^{cu}= f^{\frac{p}{2}-1}\left((p-1)u_{11}+\sum_{i=2}^{n}u_{ii}\right) \tag{3.4}\]
Dividing both sides of (3.4) by \(f^{\frac{p}{2}-1}\), we obtain:
\[(p-1)u_{11}+\sum_{i=2}^{n}u_{ii}=f+be^{cu}f^{1-\frac{p}{2}}. \tag{3.5}\]
Using the fact \(u_{1}=f^{1/2}\) again yields
\[|\nabla f|^{2}/f=4\sum_{i=1}^{n}u_{1i}^{2}\geq 4u_{11}^{2}. \tag{3.6}\]
By Cauchy inequality, we arrive at
\[|\nabla\nabla u|^{2}\geq u_{11}^{2}+\sum_{i=2}u_{ii}^{2}\geq u_{11}^{2}+\frac{1}{n-1} \left(\sum_{i=2}u_{ii}\right)^{2}. \tag{3.7}\]
Making use of the equation (2.1), we have
\[\langle\nabla\Delta_{p}u,\nabla u\rangle=pf^{\frac{p}{2}}u_{11}+bce^{cu}f. \tag{3.8}\]
Substituting (3.3), (3.6), (3.7) and (3.8) into (2.6), we have
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L}(f^{\alpha})\geq& 2\left(\alpha+\frac{p}{2}-2\right)u_{11}^{2}+u_{11}^{2}+\frac{1}{n-1}\left(\sum_{i=2}^{n}u_{ii}\right)^{2}+\operatorname{Ric}(\nabla u,\nabla u)\\ &+2(p-2)(\alpha-1)u_{11}^{2}+f^{1-\frac{p}{2}}\left(pf^{\frac{p}{2}}u_{11}+bce^{cu}f\right).\end{split} \tag{3.9}\]
It follows from (3.5) that
\[\begin{split}\left(\sum_{i=2}u_{ii}\right)^{2}=&\left( f+be^{cu}f^{1-\frac{p}{2}}-(p-1)u_{11}\right)^{2}\\ =& f^{2}+\left(be^{cu}f^{1-\frac{p}{2}}-(p-1)u_{11} \right)^{2}+2be^{cu}f^{2-\frac{p}{2}}-2f(p-1)u_{11}.\end{split}\]
Substituting the above inequality into (3.9), we have
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L} \left(f^{\alpha}\right)\geq&(p-1)(2\alpha-1)u_{11}^{2}-(n-1) \kappa f+\left(p-\frac{2(p-1)}{n-1}\right)fu_{11}+\frac{f^{2}}{n-1}\\ &+b\left(c+\frac{2}{n-1}\right)e^{cu}f^{2-\frac{p}{2}}+\frac{1}{n -1}\left(be^{cu}f^{1-\frac{p}{2}}-(p-1)u_{11}\right)^{2}.\end{split} \tag{3.10}\]
If we denote \(a_{1}=\left|p-\frac{2(p-1)}{n-1}\right|\), by (3.3) we have
\[2\left(p-\frac{2(p-1)}{n-1}\right)fu_{11}\geq-a_{1}f^{\frac{1}{2}}|\nabla f|. \tag{3.11}\]
Substituting \(a_{1}\) into (3.10), we have
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L}(f^{\alpha})\geq&(p-1)(2\alpha-1)u_{11}^{2}-(n-1)\kappa f-\frac{a_{1}}{2}f^{\frac{1}{2}}|\nabla f|+\frac{f^{2}}{n-1}\\ &+b\left(c+\frac{2}{n-1}\right)e^{cu}f^{2-\frac{p}{2}}+\frac{1}{n-1}\left(be^{cu}f^{1-\frac{p}{2}}-(p-1)u_{11}\right)^{2}.\end{split} \tag{3.12}\]
**Case I:**
\[a\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)\geq 0.\]
Under this condition we have
\[be^{cu}f\left(c+\frac{2}{n-1}\right)=a(p-1)^{p-1}e^{cu}f\left(\frac{n+1}{n-1}- \frac{q}{p-1}\right)\geq 0.\]
Omitting some non-negative terms in (3.12) we obtain
\[\mathcal{L}(f^{\alpha})\geq 2\alpha f^{\alpha+\frac{p}{2}-2}\left(\frac{f^{2}}{n-1}-(n-1) \kappa f-\frac{a_{1}}{2}f^{\frac{1}{2}}|\nabla f|\right),\]
which is just the inequality (3.1).
Now we expand the square of the last term in (3.12) and collect the terms, we obtain
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L} (f^{\alpha})\geq&(p-1)\left(2\alpha-1+\frac{p-1}{n-1}\right)u_{ 11}^{2}-(n-1)\kappa f\\ &+b\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)e^{cu}f^{2-\frac{p} {2}}-\frac{a_{1}}{2}f^{\frac{1}{2}}|\nabla f|+\frac{f^{2}}{n-1}\\ &+\frac{1}{n-1}\left(b^{2}e^{2cu}f^{2-p}-2(p-1)be^{cu}f^{1-\frac{ p}{2}}u_{11}\right).\end{split} \tag{3.13}\]
**Case II :**
\[\frac{1}{n-1}-\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)^{2}\frac{(2\alpha-1) (n-1)+p-1}{4(2\alpha-1)}>0.\]
In this case, we infer from \(a^{2}-2ab\geq-b^{2}\) that
\[\begin{split}&(p-1)\left(2\alpha-1+\frac{p-1}{n-1}\right)u_{11}^{2} -2\frac{(p-1)}{n-1}be^{cu}f^{1-\frac{p}{2}}u_{11}\\ \geq&-\frac{(p-1)b^{2}e^{2cu}f^{2-p}}{((2\alpha-1)(n -1)+p-1)(n-1)}.\end{split} \tag{3.14}\]
Combining (3.13) and (3.14), we get
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L}(f^{\alpha})\geq&\frac{(2\alpha-1)b^{2}e^{2cu}f^{2-p}}{(2\alpha-1)(n-1)+p-1}-(n-1)\kappa f+\left(p-\frac{2(p-1)}{n-1}\right)fu_{11}\\ &+b\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)e^{cu}f^{2-\frac{p}{2}}+\frac{f^{2}}{n-1}.\end{split} \tag{3.15}\]
Making use of \(a^{2}+2ab\geq-b^{2}\) again,
\[\begin{split}&\frac{(2\alpha-1)b^{2}e^{2cu}f^{2-p}}{(2\alpha-1 )(n-1)+p-1}+b\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)e^{cu}f^{2-\frac{p}{2}} \\ \geq&-\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)^{2} \frac{(2\alpha-1)(n-1)+p-1}{4(2\alpha-1)}f^{2}\end{split} \tag{3.16}\]
Substituting (3.16) into (3.15), we arrive at
\[\begin{split}\frac{f^{2-\alpha-\frac{p}{2}}}{2\alpha}\mathcal{L}(f^{\alpha})\geq&\left(\frac{1}{n-1}-\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)^{2}\frac{(2\alpha-1)(n-1)+p-1}{4(2\alpha-1)}\right)f^{2}\\ &-(n-1)\kappa f-\frac{a_{1}}{2}f^{\frac{1}{2}}|\nabla f|.\end{split} \tag{3.17}\]
i.e.
\[\mathcal{L}(f^{\alpha})\geq \delta_{n,p,q,\alpha}\alpha f^{\alpha+\frac{p}{2}}-2\alpha(n-1) \kappa f^{\alpha+\frac{p}{2}-1}-a_{1}\alpha f^{\alpha+\frac{p}{2}-\frac{3}{2 }}|\nabla f|,\]
where
\[\delta_{n,p,q,\alpha}=2\left(\frac{1}{n-1}-\left(\frac{n+1}{n-1}-\frac{q}{p-1} \right)^{2}\frac{(2\alpha-1)(n-1)+p-1}{4(2\alpha-1)}\right)>0.\]
If \(q\) satisfies
\[p-1<q<\frac{n+3}{n-1}(p-1),\]
then \(\delta_{n,p,q,\alpha}\) has a positive lower bound as \(\alpha\to\infty\).
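The limiting behaviour of \(\delta_{n,p,q,\alpha}\) as \(\alpha\to\infty\), and the resulting range of \(q\), can be checked symbolically; a short SymPy sketch (an editorial addition, not part of the original text):

```python
import sympy as sp

n, p, q, alpha = sp.symbols('n p q alpha')
X = (n + 1)/(n - 1) - q/(p - 1)
delta = 2*(1/(n - 1) - X**2*((2*alpha - 1)*(n - 1) + p - 1)/(4*(2*alpha - 1)))   # cf. (3.17)

delta_inf = sp.limit(delta, alpha, sp.oo)
print(sp.simplify(delta_inf - (2/(n - 1) - X**2*(n - 1)/2)))   # expected: 0

# delta_inf > 0 iff |q/(p-1) - (n+1)/(n-1)| < 2/(n-1), i.e. p-1 < q < (n+3)(p-1)/(n-1)
print(sp.solve(sp.Eq(delta_inf, 0), q))   # expected roots: p-1 and (n+3)(p-1)/(n-1) (up to form)
```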
### Deducing the main integral inequality
We denote
\[\beta_{n,p,q,\alpha}=\begin{cases}\frac{2}{n-1},&\text{ if }a\left(\frac{n+1}{n-1}- \frac{q}{p-1}\right)\geq 0;\\ \delta_{n,p,q,\alpha},&\text{ if }\quad\delta_{n,p,q,\alpha}>0.\end{cases} \tag{3.18}\]
If one of the conditions in Lemma 3.1 establishes, we have
\[\mathcal{L}(f^{\alpha})\geq\beta_{n,p,q,\alpha}\alpha f^{\alpha+\frac{p}{2}}- 2\alpha(n-1)\kappa f^{\alpha+\frac{p}{2}-1}-a_{1}\alpha f^{\alpha+\frac{p}{2 }-\frac{3}{2}}|\nabla f|. \tag{3.19}\]
Now we choose a geodesic ball \(\Omega=B_{R}(o)\subset M\). If we choose the test function \(\psi=f^{t}\eta^{2}\), where \(\eta\in C_{0}^{\infty}(\Omega,\mathbb{R})\) is to be determined, it follows from (3.19) that
\[-\int_{\Omega}\langle f^{p/2-1}\nabla f^{\alpha}+(p-2)f^{p/2-2} \langle\nabla f^{\alpha},\nabla u\rangle\nabla u,\nabla\psi\rangle\] \[\geq \beta_{n,p,q,\alpha}\alpha\int_{\Omega}f^{\alpha+\frac{p}{2}+t} \eta^{2}-2(n-1)\alpha\kappa\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}-a_{ 1}\alpha\int_{\Omega}f^{\alpha+\frac{p-3}{2}+t}|\nabla f|\eta^{2},\]
i.e.
\[-\int_{\Omega}2\eta\alpha f^{\alpha+\frac{p}{2}+t-2}\langle \nabla f,\nabla\eta\rangle+2\alpha\eta(p-2)f^{\alpha+\frac{p}{2}+t-3}\langle \nabla f,\nabla u\rangle\langle\nabla u,\nabla\eta\rangle\] \[\geq \beta_{n,p,q,\alpha}\alpha\int_{\Omega}f^{\alpha+\frac{p}{2}+t} \eta^{2}-2(n-1)\alpha\kappa\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}-a_{ 1}\alpha\int_{\Omega}f^{\alpha+\frac{p-3}{2}+t}|\nabla f|\eta^{2}. \tag{3.20}\]
If we denote \(a_{2}=\min\{1,p-1\}\), then we have
\[|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla f,\nabla u\rangle^{2}\geq a_{2}| \nabla f|^{2}, \tag{3.21}\]
and
\[\langle\nabla f,\nabla\eta\rangle+(p-2)f^{-1}\langle\nabla f,\nabla u\rangle \langle\nabla u,\nabla\eta\rangle\geq-(p-1)|\nabla f||\nabla\eta| \tag{3.22}\]
Applying (3.21) and (3.22) to (3.20) and dividing both sides by \(\alpha\), we have
\[\begin{split}&\beta_{n,p,q,\alpha}\int_{\Omega}f^{\alpha+\frac{p}{2 }+t}\eta^{2}+\int_{\Omega}a_{2}tf^{\alpha+\frac{p}{2}+t-3}|\nabla f|^{2}\eta^{ 2}\\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1} \eta^{2}+a_{1}\int_{\Omega}f^{\alpha+\frac{p-3}{2}+t}|\nabla f|\eta^{2}+2(p-1 )\int_{\Omega}f^{\alpha+\frac{p}{2}+t-2}|\nabla f||\nabla\eta|\eta.\end{split} \tag{3.23}\]
Using Cauchy-inequality, we have
\[\begin{split} a_{1}f^{\alpha+\frac{p-3}{2}+t}|\nabla f|\eta^{2} \leq&\frac{a_{2}t}{4}f^{\alpha+\frac{p}{2}+t-3}|\nabla f|^{2}\eta ^{2}+\frac{a_{1}^{2}}{a_{2}t}f^{\alpha+\frac{p}{2}+t}\eta^{2};\\ 2(p-1)f^{\alpha+\frac{p}{2}+t-2}|\nabla f||\nabla\eta|\eta\leq& \frac{a_{2}t}{4}f^{\alpha+\frac{p}{2}+t-3}|\nabla f|^{2}\eta^{2}+\frac{4(p-1)^{ 2}}{a_{2}t}f^{\alpha+\frac{p}{2}+t-1}|\nabla\eta|^{2}.\end{split} \tag{3.24}\]
Now we choose \(t\) large enough such that
\[\frac{a_{1}^{2}}{a_{2}t}\leq\frac{1}{2}\beta_{n,p,q}. \tag{3.25}\]
It follows from (3.23), (3.24) and (3.25) that
\[\begin{split}&\frac{1}{2}\beta_{n,p,q,\alpha}\int_{\Omega}f^{\alpha+\frac{p}{2}+t}\eta^{2}+\frac{a_{2}t}{2}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-3}|\nabla f|^{2}\eta^{2}\\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}+\frac{4(p-1)^{2}}{a_{2}t}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}|\nabla\eta|^{2}.\end{split} \tag{3.26}\]
On the other hand, we have
\[\begin{split}\left|\nabla\left(f^{\frac{\alpha+t-1}{2}+\frac{p}{4}} \eta\right)\right|^{2}\leq&\left|\nabla f^{\frac{\alpha+t-1}{2}+ \frac{p}{4}}\right|^{2}\eta^{2}+f^{\alpha+t-1+\frac{p}{2}}|\nabla\eta|^{2}\\ =&\frac{16f^{\alpha+t+\frac{p}{2}-3}}{(2\alpha+2t+p- 2)^{2}}|\nabla f|^{2}\eta^{2}+f^{\alpha+t-1+\frac{p}{2}}|\nabla\eta|^{2}.\end{split} \tag{3.27}\]
Substituting (3.27) into (3.26) gives
\[\begin{split}&\frac{\beta_{n,p,q,\alpha}}{2}\int_{\Omega}f^{ \alpha+\frac{p}{2}+t}\eta^{2}+\frac{8a_{2}t}{(2\alpha+2t+p-2)^{2}}\int_{\Omega }\left|\nabla\left(f^{\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right)\right|^{2} \\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\alpha+t+\frac{p}{2}- 1}\eta^{2}+\frac{4(p-1)^{2}}{a_{2}t}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}| \nabla\eta|^{2}\\ &+\frac{8a_{2}t}{(2\alpha+2t+p-2)^{2}}\int_{\Omega}f^{\alpha+t+ \frac{p}{2}-1}|\nabla\eta|^{2}.\end{split} \tag{3.28}\]
Now we choose \(a_{3},a_{4}\) depending on \(n,p,q,\alpha\) such that
\[\frac{a_{3}}{t}\leq\frac{8a_{2}t}{(2\alpha+2t+p-2)^{2}},\quad\text{and}\quad \frac{8a_{2}t}{(2\alpha+2t+p-2)^{2}}+\frac{4(p-1)^{2}}{a_{2}t}\leq\frac{a_{4}} {t}. \tag{3.29}\]
For example, we can choose \(a_{3}=\frac{8a_{2}t_{0}^{2}}{(2\alpha+2t_{0}+p-2)^{2}},a_{4}=2a_{2}+\frac{4(p- 1)^{2}}{a_{2}}\).
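That these choices satisfy (3.29) for all \(t\geq t_{0}\) can be spot-checked numerically; the following sketch (an editorial addition with arbitrary sample values) does so on a grid.

```python
import numpy as np

# arbitrary sample values (not from the paper): alpha > 1, p > 1, a2 = min(1, p-1), t0 > 0
alpha, p, t0 = 3.0, 2.5, 5.0
a2 = min(1.0, p - 1.0)
a3 = 8*a2*t0**2/(2*alpha + 2*t0 + p - 2)**2
a4 = 2*a2 + 4*(p - 1)**2/a2

t = np.linspace(t0, 100*t0, 2000)
ok1 = np.all(a3/t <= 8*a2*t/(2*alpha + 2*t + p - 2)**2 + 1e-12)
ok2 = np.all(8*a2*t/(2*alpha + 2*t + p - 2)**2 + 4*(p - 1)**2/(a2*t) <= a4/t + 1e-12)
print(ok1, ok2)   # expected: True True
```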
It follows from (3.28) and (3.29) that
\[\begin{split}&\frac{\beta_{n,p,q,\alpha}}{2}\int_{\Omega}f^{ \alpha+\frac{p}{2}+t}\eta^{2}+\frac{a_{3}}{t}\int_{\Omega}\left|\nabla\left(f^ {\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right)\right|^{2}\\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\alpha+t+\frac{p}{2}- 1}\eta^{2}+\frac{a_{4}}{t}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\left|\nabla \eta\right|^{2}.\end{split} \tag{3.30}\]
Saloff's Sobolev inequality implies
\[e^{-C_{n}(1+\sqrt{\kappa}R)}V^{\frac{2}{n}}R^{-2}\left\|f^{\frac{\alpha+t-1}{2 }+\frac{p}{4}}\eta\right\|_{L^{\frac{2n}{n-2}}(\Omega)}^{2}\leq\int_{\Omega} \left|\nabla\left(f^{\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right)\right|^{2}+R ^{-2}\int_{\Omega}f^{\alpha+t+\frac{p}{2}-1}\eta^{2},\]
we obtain
\[\begin{split}&\frac{\beta_{n,p,q,\alpha}}{2}\int_{\Omega}f^{ \alpha+\frac{p}{2}+t}\eta^{2}+\frac{a_{3}}{t}e^{-C_{n}(1+\sqrt{\kappa}R)}V^{ \frac{2}{n}}R^{-2}\left\|f^{\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right\|_{L^ {\frac{2n}{n-2}}}^{2}\\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\alpha+t+\frac{p}{2}- 1}\eta^{2}+\frac{a_{4}}{t}\int_{\Omega}f^{\alpha+t+\frac{p}{2}-1}|\nabla\eta|^ {2}+\frac{a_{3}}{t}\int_{\Omega}R^{-2}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}. \end{split} \tag{3.31}\]
Now we choose \(t_{0}=c_{n,p,q,\alpha}(1+\sqrt{\kappa}R)\), where \(c_{n,p,q,\alpha}=\max\left\{C_{n},\frac{2a_{1}^{2}}{a_{2}\beta_{n,p,q,\alpha}}\right\}\), and we choose \(t\) such that \(t\geq t_{0}\). Since
\[2(n-1)\kappa R^{2}\leq\frac{2(n-1)}{c_{n,p,q,\alpha}^{2}}t_{0}^{2}\quad\text{ and}\quad\frac{a_{3}}{t}\leq\frac{a_{3}}{c_{n,p,q,\alpha}},\]
there exists \(a_{5}=a_{5}(n,p,q,\alpha)>0\) such that
\[2(n-1)\kappa R^{2}+\frac{a_{3}}{t}\leq a_{5}t_{0}^{2}=a_{5}c_{n,p,q,\alpha}^{2} \left(1+\sqrt{\kappa}R\right)^{2}. \tag{3.32}\]
It follows from (3.31) and (3.32) that
\[\begin{split}&\frac{\beta_{n,p,q,\alpha}}{2}\int_{\Omega}f^{\alpha+ \frac{p}{2}+t}\eta^{2}+\frac{a_{3}}{t}e^{-t_{0}}V^{\frac{2}{n}}R^{-2}\left\|f^{ \frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right\|_{L^{\frac{2n}{n-2}}}^{2}\\ \leq& a_{5}t_{0}^{2}R^{-2}\int_{\Omega}f^{\alpha+ \frac{p}{2}+t-1}\eta^{2}+\frac{a_{4}}{t}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1 }|\nabla\eta|^{2}.\end{split} \tag{3.33}\]
### \(L^{\beta}\) bound of gradient in \(3R/4\) radius ball
We first prove the following lemma.
**Lemma 3.2**.: _Let \(\beta=\left(\alpha+t_{0}+\frac{p}{2}-1\right)\frac{n}{n-2}\), then there exists \(a_{8}=a_{8}(n,p,q)>0\) such that_
\[\|f\|_{L^{\beta}(B_{3R/4}(o))}\leq a_{8}V^{\frac{1}{\beta}}\frac{t_{0}^{2}}{R^ {2}}, \tag{3.34}\]
_where \(V\) is the volume of geodesic ball \(B_{R}(o)\)._
Proof.: By (3.33), if
\[f\geq\frac{4a_{5}t_{0}^{2}}{\beta_{n,p,q,\alpha}R^{2}},\]
then we have
\[a_{5}t_{0}^{2}R^{-2}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}\leq\frac{ \beta_{n,p,q,\alpha}}{4}\int_{\Omega}f^{\alpha+\frac{p}{2}+t}\eta^{2}.\]
If we denote \(\Omega_{1}=\{f\geq\frac{4a_{5}t_{0}^{2}}{\beta_{n,p,q,\alpha}R^{2}}\}\), then we can decompose \(\Omega=\Omega_{1}\cup\Omega_{2}\) into two regions, where \(\Omega_{2}\) is the complement of \(\Omega_{1}\) in \(\Omega\). We have
\[a_{5}t_{0}^{2}R^{-2}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}\leq\frac{2a_{5}t_{0}^{2}}{R^{2}}\left(\frac{4a_{5}t_{0}^{2}}{\beta_{n,p,q,\alpha}R^{2}}\right)^{\alpha+\frac{p}{2}+t-1}V+\frac{\beta_{n,p,q,\alpha}}{4}\int_{\Omega}f^{\alpha+\frac{p}{2}+t}\eta^{2}, \tag{3.35}\]
where \(V\) is the volume of \(B_{R}(o)\). Choosing \(t=t_{0}\), it follows from (3.33) and (3.35) that
\[\begin{split}&\frac{\beta_{n,p,q,\alpha}}{4}\int_{\Omega}f^{ \alpha+\frac{p}{2}+t}\eta^{2}+\frac{a_{3}}{t_{0}}e^{-t_{0}}V^{\frac{2}{n}}R^{- 2}\left\|f^{\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right\|_{L^{\frac{2n}{n-2}}} ^{2}\\ \leq&\frac{2a_{5}t_{0}^{2}}{R^{2}}\left(\frac{4a_{5} t_{0}^{2}}{\beta_{n,p,q,\alpha}R^{2}}\right)^{\alpha+\frac{p}{2}+t-1}V+\frac{a_{4}}{t_ {0}}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}|\nabla\eta|^{2}.\end{split} \tag{3.36}\]
We can choose \(\eta_{1}\in C_{0}^{\infty}(B_{R}(o))\) satisfying
\[\begin{cases}0\leq\eta_{1}\leq 1,\quad\eta_{1}\equiv 1\text{ in }B_{\frac{3R}{4}}(o);\\ |\nabla\eta_{1}|\leq\frac{C(n)}{R},\end{cases}\]
and \(\eta=\eta_{1}^{\alpha+\frac{p}{2}+t_{0}}\), then we have
\[a_{4}R^{2}|\nabla\eta|^{2}\leq a_{4}C^{2}(n)\left(t_{0}+\frac{p}{2}+\alpha \right)^{2}\eta^{\frac{2\alpha+2t_{0}+p-2}{\alpha+p/2+t_{0}}}\leq a_{6}t_{0}^{ 2}\eta^{\frac{2\alpha+p+2t_{0}-2}{\alpha+p/2+t_{0}}}. \tag{3.37}\]
By Holder inequality and Young inequality, we have
\[\begin{split}\frac{a_{4}}{t_{0}}\int_{\Omega}f^{\frac{p}{2}+\alpha+t _{0}-1}|\nabla\eta|^{2}\leq&\frac{a_{6}t_{0}}{R^{2}}\int_{\Omega}f^ {\frac{p}{2}+\alpha+t_{0}-1}\eta^{\frac{2\alpha+p+2t_{0}-2}{\alpha+p/2+t_{0}}} \\ \leq&\frac{a_{6}t_{0}}{R^{2}}\left(\int_{\Omega}f^{ \alpha+t_{0}+\frac{p}{2}}\eta^{2}\right)^{\frac{\alpha+p/2+t_{0}-1}{\alpha+p/2 +t_{0}}}V^{\frac{1}{\alpha+t_{0}+p/2}}\\ \leq&\frac{\beta_{n,p,q,\alpha}}{4}\left[\int_{ \Omega}f^{\alpha+t_{0}+\frac{p}{2}}\eta^{2}+\left(\frac{4a_{6}t_{0}}{\beta_{n, p,q,\alpha}R^{2}}\right)^{\alpha+t_{0}+p/2}V\right].\end{split} \tag{3.38}\]
It follows from (3.36) and (3.38) that
\[\begin{split}&(\int_{\Omega}f^{\frac{n(p/2+\alpha+t_{0}-1)}{n-2 }}\eta^{\frac{2n}{n-2}})^{\frac{n-2}{n}}\\ \leq&\frac{t_{0}}{a_{3}}e^{t_{0}}V^{1-\frac{2}{n}}R^ {2}\left[\frac{2a_{5}t_{0}^{2}}{R^{2}}\left(\frac{4a_{5}t_{0}^{2}}{\beta_{n,p,q,\alpha}R^{2}}\right)^{t_{0}+\frac{p}{2}+\alpha-1}+\frac{a_{6}t_{0}^{2}}{R^ {2}}\left(\frac{4a_{6}t_{0}}{\beta_{n,p,q,\alpha}R^{2}}\right)^{\alpha+t_{0}+ \frac{p}{2}-1}\right]\\ \leq& a_{7}e^{t_{0}}V^{1-\frac{2}{n}}t_{0}^{3}\left( \frac{t_{0}^{2}}{R^{2}}\right)^{\alpha+t_{0}+\frac{p}{2}-1},\end{split} \tag{3.39}\]
where \(a_{7}\) depending only on \(n,p,q\) is defined by
\[a_{7}=\frac{2a_{5}}{a_{3}}\left(\frac{4a_{5}}{\beta_{n,p,q,\alpha}}\right)^{ \alpha+t_{0}+\frac{p}{2}-1}+\frac{a_{6}}{a_{3}}\left(\frac{4a_{6}}{\beta_{n,p, q,\alpha}t_{0}}\right)^{\alpha+t_{0}+\frac{p}{2}-1}.\]
Taking \(\frac{1}{\alpha+t_{0}+p/2-1}\) power on both sides of (3.39) gives
\[\left\|f\eta^{\frac{2}{\alpha+t_{0}+p/2-1}}\right\|_{L^{\beta}(\Omega)}\leq a _{7}^{\frac{1}{\alpha+t_{0}+p/2-1}}V^{1/\beta}t_{0}^{\frac{3}{\alpha+t_{0}+p/2 -1}}\frac{t_{0}^{2}}{R^{2}}\leq a_{8}V^{\frac{1}{\beta}}\frac{t_{0}^{2}}{R^{2}}, \tag{3.40}\]
where
\[a_{8}=a_{7}^{\frac{1}{\alpha+t_{0}+p/2-1}}t_{0}^{\frac{3}{\alpha+t_{0}+p/2-1}}.\]
Since \(\eta\equiv 1\) in \(B_{3R/4}\), we obtain that
\[\|f\|_{L^{\beta}(B_{3R/4}(o))}\leq a_{8}V^{\frac{1}{\beta}}\frac{t_{0}^{2}}{R^ {2}}.\]
We need to point out that \(a_{8}\) is bounded as \(\alpha\to\infty\) since \(a_{7},a_{3}\) depend on a power of \(\alpha\).
### Moser iteration
We omit the first term in (3.33) and obtain
\[\frac{a_{3}}{t}e^{-t_{0}}V^{\frac{2}{n}}R^{-2}\left\|f^{\frac{\alpha+t-1}{2}+\frac{p}{4}}\eta\right\|_{L^{\frac{2n}{n-2}}}^{2}\leq a_{5}t_{0}^{2}R^{-2}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}\eta^{2}+\frac{a_{4}}{t}\int_{\Omega}f^{\alpha+\frac{p}{2}+t-1}|\nabla\eta|^{2}. \tag{3.41}\]
We denote \(r_{k}=\frac{R}{2}+\frac{R}{4^{k}}\) and \(\Omega_{k}=B_{r_{k}}(o)\). We choose \(\eta_{k}\in C^{\infty}(\Omega_{k})\) satisfying
\[\begin{cases}0\leq\eta_{k}\leq 1,\quad\eta_{k}\equiv 1\text{ in }B_{r_{k+1}}(o);\\ |\nabla\eta_{k}|\leq\frac{C4^{k}}{R},\end{cases} \tag{3.42}\]
and substituting \(\eta\) by \(\eta_{k}\) in (3.41), we arrive at
\[\begin{split} a_{3}e^{-t_{0}}V^{\frac{2}{n}}\left\|f^{\frac{\alpha+t- 1}{2}+\frac{p}{4}}\eta_{k}\right\|_{L^{\frac{2n}{n-2}}(\Omega_{k})}^{2}\leq& a_{5}t_{0}^{2}t\int_{\Omega_{k}}f^{\alpha+\frac{p}{2}+t-1}\eta_{k}^{2}+a_{ 4}R^{2}\int_{\Omega_{k}}f^{\alpha+\frac{p}{2}+t-1}\left|\nabla\eta_{k}\right|^ {2}\\ \leq&\left(a_{5}t_{0}^{2}t+C^{2}16^{k}\right)\int_{ \Omega_{k}}f^{\alpha+\frac{p}{2}+t-1}.\end{split} \tag{3.43}\]
Now we choose \(\beta_{1}=\beta,\beta_{k+1}=\frac{n\beta_{k}}{n-2}\) and let \(t=t_{k}\) such that \(t_{k}+\frac{p}{2}+\alpha-1=\beta_{k}\), it follows that
\[a_{3}\left(\int_{\Omega_{k}}f^{\beta_{k+1}}\eta_{k}^{\frac{2n}{n-2}}\right)^{ \frac{n-2}{n}}\leq e^{t_{0}}V^{-\frac{2}{n}}\left(a_{5}t_{0}^{2}\left(t_{0}+ \frac{p}{2}+\alpha-1\right)\left(\frac{n}{n-2}\right)^{k}+C^{2}16^{k}\right) \int_{\Omega_{k}}f^{\beta_{k}}. \tag{3.44}\]
Since \(\frac{n}{n-2}<16,\forall n>2\), if we denote \(a_{9}=\max\left\{a_{5}t_{0}^{2}\left(\alpha+t_{0}+\frac{p}{2}-1\right),C^{2}\right\}\), then we have
\[a_{3}\left(\int_{\Omega_{k}}f^{\beta_{k+1}}\eta_{k}^{\frac{2n}{n-2}}\right)^{ \frac{n-2}{n}}\leq 2a_{9}e^{t_{0}}V^{-\frac{2}{n}}16^{k}\int_{\Omega_{k}}f^{\beta_{k}}. \tag{3.45}\]
Taking power of \(\frac{1}{\beta_{k}}\) on both sides of (3.45), we obtain
\[\left\|f\right\|_{L^{\beta_{k+1}}(\Omega_{k+1})}\leq \left(2a_{9}e^{t_{0}}V^{-\frac{2}{n}}\right)^{\frac{1}{\beta_{k}} }16^{\frac{k}{\beta_{k}}}\left\|f\right\|_{L^{\beta_{k}}(\Omega_{k})}. \tag{3.46}\]
Since
\[\sum_{k=1}^{\infty}\frac{1}{\beta_{k}}=\frac{\frac{1}{\beta_{1}}}{1-\frac{n-2 }{n}}=\frac{n}{2\beta_{1}},\quad\sum_{k=1}^{\infty}\frac{k}{\beta_{k}}<\infty,\]
we have
\[\left\|f\right\|_{L^{\infty}(B_{R/2}(o))}\leq a_{10}V^{-\frac{1}{\beta}}\|f\|_{L^{\beta}(B_{3R/4}(o))}, \tag{3.47}\]
where
\[a_{10}=\left(2a_{9}e^{t_{0}}\right)^{\frac{n}{2\beta_{1}}}16^{\sum_{k=1}^{ \infty}\frac{k}{\beta_{k}}}.\]
We need to point out that \(a_{10}\) is uniformly bounded for any \(t_{0}\). By (3.34), we obtain
\[\left\|f\right\|_{L^{\infty}(B_{R/2}(o))}\leq a_{11}\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}, \tag{3.48}\]
where \(a_{11}=a_{10}a_{8}c_{n,p,q,\alpha}^{2}\).
Combining Sections 3.2, 3.3 and 3.4, we obtain
**Lemma 3.3**.: _Let \((M,g)\) be a complete Riemannian manifold with \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\), where \(\kappa\) is a non-negative constant. For any \(\alpha>1\), if \(a,q\) and \(p>1\) satisfy one of the following conditions,_
1. \[a\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)\geq 0,\]
2. \[\delta_{n,p,q,\alpha}=\frac{1}{n-1}-\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right) ^{2}\frac{(2\alpha-1)(n-1)+p-1}{4(2\alpha-1)}>0,\]
_then for any positive solution \(v\) of \(\Delta_{p}v+av^{q}=0\) on a geodesic ball \(B_{R}(o)\subset M\), we have_
\[\sup_{B_{\frac{R}{2}}(o)}\frac{|\nabla v|^{2}}{v^{2}}\leq c(n,p,q,\alpha)\frac{( 1+\sqrt{\kappa}R)^{2}}{R^{2}}.\]
**Proof of Theorem 1.1.** The condition
\[p-1<q<\frac{n+3}{n-1}(p-1)\]
is equivalent to
\[\left|\frac{q}{p-1}-\frac{n+1}{n-1}\right|<\frac{2}{n-1}.\]
Since
\[\lim_{\alpha\to\infty}2\sqrt{\frac{2\alpha-1}{(n-1)((2\alpha-1)(n-1)+p-1)}}= \frac{2}{n-1},\]
for any \(q\) satisfying
\[p-1<q<\frac{n+3}{n-1}(p-1)\]
we can choose \(\alpha=\alpha(n,p,q)\) large enough such that
\[\left|\frac{q}{p-1}-\frac{n+1}{n-1}\right|<2\sqrt{\frac{2\alpha-1}{(n-1)((2 \alpha-1)(n-1)+p-1)}}. \tag{3.49}\]
If we choose such an \(\alpha=\alpha(n,p,q)\), inequality (3.49) implies \(\delta_{n,p,q,\alpha}>0\); then, by Lemma 3.3, Theorem 1.1 is established.
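For completeness, the implication used above is a direct rearrangement of the definition of \(\delta_{n,p,q,\alpha}\) in Lemma 3.3:
\[\delta_{n,p,q,\alpha}>0\iff\left(\frac{n+1}{n-1}-\frac{q}{p-1}\right)^{2}<\frac{4(2\alpha-1)}{(n-1)\big((2\alpha-1)(n-1)+p-1\big)}\iff\left|\frac{q}{p-1}-\frac{n+1}{n-1}\right|<2\sqrt{\frac{2\alpha-1}{(n-1)((2\alpha-1)(n-1)+p-1)}}.\]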
**Proof of Corollary 1.3.** When \(a>0\), the union of the range of \(q\) in condition (1.4) and the range of \(q\) in (1.5) is
\[q<\frac{n+3}{n-1}(p-1).\]
When \(a<0\), the union of the range of \(q\) in condition (1.4) and (1.5) is
\[q>p-1.\]
So, by Theorem 1.1, Corollary 1.3 is established.
**Proof of Theorem 1.4.** Since \((M,g)\) has non-negative Ricci curvature, we can choose \(\kappa=0\), and we infer from Theorem 1.1 that
\[\sup_{B_{R/2}(o)}\frac{|\nabla v|}{v}\leq c(n,p,q)\frac{1}{R}. \tag{3.50}\]
Letting \(R\to\infty\) in (3.50), we obtain
\[\nabla v=0,\]
thus \(v\) is a constant and \(\Delta_{p}v=0\), which contradicts (1.1) since \(v\) is positive.
**Proof of Theorem 1.5.** For any \(p\in M\), by Theorem 1.1, we have
\[|\nabla u(p)|\leq\sup_{B_{\frac{R}{2}}(p)}|\nabla u|\leq c(n,p,q)\frac{1+\sqrt{\kappa}R}{R}. \tag{3.51}\]
Letting \(R\to\infty\), we obtain that
\[|\nabla u(p)|\leq c(n,p,q)\sqrt{\kappa},\quad\forall p\in M.\]
Fix \(x_{0}\in M\). For any \(x\in M\), choose a minimizing geodesic \(\gamma(t)\) connecting \(x_{0}\) and \(x\):
\[\gamma:[0,d]\to M,\quad\gamma(0)=x_{0},\quad\gamma(d)=x,\]
where \(d=\mathrm{dist}(x,x_{0})\) is the distance between \(x_{0}\) and \(x\). So we have
\[u(x)-u(x_{0})=\int_{0}^{d}\frac{d}{dt}u\circ\gamma(t)dt. \tag{3.52}\]
Since
\[\left|\frac{d}{dt}u\circ\gamma(t)\right|\leq|\nabla u||\gamma^{\prime}(t)|\leq c(n,p,q)\sqrt{\kappa}, \tag{3.53}\]
it follows from (3.52) and (3.53) that
\[u(x_{0})-c(n,p,q)\sqrt{\kappa}\leq u(x)\leq u(x_{0})+c(n,p,q)\sqrt{\kappa} \tag{3.54}\]
Since \(u=-(p-1)\ln v\), we can derive the required inequality. Thus we finish the proof by (3.54).
|
2307.09476 | Overthinking the Truth: Understanding how Language Models Process False
Demonstrations | Modern language models can imitate complex patterns through few-shot
learning, enabling them to complete challenging tasks without fine-tuning.
However, imitation can also lead models to reproduce inaccuracies or harmful
content if present in the context. We study harmful imitation through the lens
of a model's internal representations, and identify two related phenomena:
"overthinking" and "false induction heads". The first phenomenon, overthinking,
appears when we decode predictions from intermediate layers, given correct vs.
incorrect few-shot demonstrations. At early layers, both demonstrations induce
similar model behavior, but the behavior diverges sharply at some "critical
layer", after which the accuracy given incorrect demonstrations progressively
decreases. The second phenomenon, false induction heads, are a possible
mechanistic cause of overthinking: these are heads in late layers that attend
to and copy false information from previous demonstrations, and whose ablation
reduces overthinking. Beyond scientific understanding, our results suggest that
studying intermediate model computations could be a promising avenue for
understanding and guarding against harmful model behaviors. | Danny Halawi, Jean-Stanislas Denain, Jacob Steinhardt | 2023-07-18T17:56:50Z | http://arxiv.org/abs/2307.09476v3 | # Overthinking the Truth: Understanding how Language Models Process False Demonstrations
###### Abstract
Modern language models can imitate complex patterns through few-shot learning, enabling them to complete challenging tasks without fine-tuning. However, imitation can also lead models to reproduce inaccuracies or harmful content if present in the context. We study harmful imitation through the lens of a model's internal representations, and identify two related phenomena: _overthinking_ and _false induction heads_. The first phenomenon, overthinking, appears when we decode predictions from intermediate layers, given correct vs. incorrect few-shot demonstrations. At early layers, both demonstrations induce similar model behavior, but the behavior diverges sharply at some "critical layer", after which the accuracy given incorrect demonstrations progressively decreases. The second phenomenon, false induction heads, are a possible mechanistic cause of overthinking: these are heads in late layers that attend to and copy false information from previous demonstrations, and whose ablation reduces overthinking. Beyond scientific understanding, our results suggest that studying intermediate model computations could be a promising avenue for understanding and guarding against harmful model behaviors.2
Footnote 2: All code needed to reproduce our results can be found at [https://github.com/dannyallover/overthinking_the_truth](https://github.com/dannyallover/overthinking_the_truth)
## 1 Introduction
A key behavior of modern language models is context-following: large-scale transformer models are able to infer and imitate the patterns in their prompt (Brown et al., 2020). At its best, this allows language models to perform well on benchmarks without the need for fine-tuning (Rae et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022; Srivastava et al., 2022). This has led researchers to study how context affects few-shot performance (Min et al., 2022; Kim et al., 2022; Xie et al., 2021; Zhao et al., 2021) as well as the internal mechanisms that produce it (Olsson et al., 2022).
However, context-following can also lead to incorrect, toxic, or unsafe model outputs (Rong, 2021). For example, if an inexperienced programmer prompts Codex with poorly written or vulnerable code, the model is more likely to produce poorly written or vulnerable code completions (Jones and Steinhardt, 2022; Perry et al., 2022). Intuitively, the issue is that context-following learns too much--in addition to inferring the overall intent of the in-context task (what code a user is trying to write), it also learns the pattern of user errors and reproduces it, similar to how gradient-based learning algorithms reproduce label errors in their predictions (Sambasivan et al., 2021).
In this work, we seek to better understand harmful context-following. Since models often perform well zero-shot, we conjecture that when presented with a harmful context, the model _knows_ the right answer, but imitates and _says_ the wrong answer (Meng et al., 2022). This led us to study how incorrect imitations emerge over the course of the model's processing, and to look for the model components that cause them.
To investigate this, we set up a contrast task, where models are provided either correct or incorrect labels for few-shot classification (Figure 1, left). We study the difference between these two settings by decoding from successively later layers of the residual stream (Nostalgebraist, 2020) (Figure 1, center). Intuitively, this allows us to decode the model's intermediate predictions as it iteratively builds its final output, and to determine which stages of computation propagate the incorrect labels.
We find that correct and incorrect demonstrations yield similar accuracy at early stages of computation, until some "critical layer" at which they sharply diverge. After the critical layer, performance improves given correct demonstrations but drops given incorrect demonstrations. In particular, when demonstrations are incorrect, the neural network "overthinks" (Kaya et al., 2018): stopping the model early increases its accuracy.
We localize overthinking to specific attention heads that attend to and reproduce previous incorrect demonstrations, analogous to the "induction heads" identified in Olsson et al. (2022). These heads are concentrated in the later layers of the model (after the critical layer), perhaps because they attend to complex features (the correctness of an example) that are not present in earlier layers. Removing 5 such heads (1% of heads) reduced the accuracy gap between correct and incorrect prompts by an average of 38.3% over 14 datasets, with negligible effects on the performance given correct prompts (Figure 1, right).
In summary, we found that harmful context-following only appears late in a model's computation, and identified specific attention heads that contribute to these incorrect imitations. More generally, our findings suggest that benign and harmful model behaviors are often processed differently. Indeed, follow-up work (Belrose et al., 2023) has used and extended our insights to detect prompt injection attacks (Perez and Ribeiro, 2022). To proactively understand and reduce harmful model behaviors, researchers should continue to build tools to understand their intermediate computations.
## 2 Related Work
Our work is related to Min et al. (2022), Kim et al. (2022), and Wei et al. (2023), who examine the role of inaccurate demonstrations on model accuracy. Min et al. (2022, figure 4) find that for the pre-trained model GPT-J, the correctness of demonstrations has a large effect on classification accuracy. These works measure the input-output behavior of models on misleading prompts, whereas our work investigates model internals: early-exiting allows us to study how the model builds its representations, and our ablations make it possible to understand the role of specific attention heads.
This high-level perspective matches that of recent work in _mechanistic interpretability_ (Cammarata et al., 2021; Geiger et al., 2021; Elhage et al., 2021), which analyzes model internals to reverse engineer the algorithms learned by the network. Mechanistic interpretability techniques have previously been used to study behaviors such as modular arithmetic (Nanda et al., 2023), or factual recall (Meng et al., 2022, 2022). However, we take a less "bottom-up" approach than most mechanistic interpretability work: we focus on the role of layers and attention heads, rather than lower-level components such as individual neurons or key, query and value vectors. Moreover, mechanistic interpretability techniques are typically applied to small scale, synthetic tasks, such as indirect object identification (Wang et al., 2022). In contrast, we study model behavior across a variety of more realistic tasks, including sentiment analysis, natural language inference, and topic classification.
Figure 1: **Left:** Given a prompt of incorrect demonstrations, language models are more likely to output incorrect labels. **Center:** When demonstrations are incorrect, zeroing out the later layers increases the classification accuracy, here on Financial-Phrasebank. **Right:** We identify 5 attention heads and remove them from the model: this reduces the effect of incorrect demonstrations by 32.6% on Financial-Phrasebank, without decreasing the accuracy given correct demonstrations.
The literature on early-exiting and overthinking (Kaya et al., 2018; Panda et al., 2015; Teerapittayanon et al., 2017; Figurnov et al., 2017; Hou et al., 2020; Liu et al., 2020; Xin et al., 2020; Zhou et al., 2020; Zhu, 2021; Schuster et al., 2022) also investigates decoding from intermediate layers. These works focus on using early-exiting to improve inference speed, although Mehra et al. (2022) also study the accuracy under distribution shift. In contrast, we use early exiting to scientifically understand the intermediate steps of the model's computation. Moreover, most early exiting methods modify the training process to allow for early exit, or train additional probes to decode intermediate states. In contrast, we use the logit lens (Nostalgebraist, 2020), which does not require any extra training to decode answers from internal representations.
## 3 Preliminaries: Few-shot Learning with False Demonstrations
We begin by introducing the setting we study: few-shot learning for classification, given demonstrations with correct or incorrect labels. Incorrect demonstrations consistently reduce classification performance, which is the phenomenon that we aim to study in this work.
**Few-shot learning.** We consider autoregressive transformer language models, which produce a conditional probability distribution \(p(t_{n+1}\mid t_{1},...,t_{n})\) over the next token \(t_{n+1}\) given previous tokens. We focus on few-shot learning (Brown et al., 2020) for classification tasks: given a task instruction \(u\), we sample \(k\) demonstrations (input-label pairs) from the task dataset, denoted \((x_{1},y_{1}),...,(x_{k},y_{k})\). To query the model on a new input \(x\), we use the predictive distribution \(p(y\mid u,x_{1},y_{1},...,x_{k},y_{k},x)\).
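As a concrete illustration, a minimal sketch (not the authors' code) of how such a prompt is assembled and scored is shown below; the prompt template and the `next_token_logprobs` wrapper are hypothetical placeholders for whatever format and language model the reader uses.

```python
from typing import Callable, Dict, List, Tuple

def build_prompt(instruction: str, demos: List[Tuple[str, str]], query: str) -> str:
    """Concatenate the task instruction, k (input, label) demonstrations, and the query input."""
    lines = [instruction]
    for x, y in demos:
        lines.append(f"Review: {x}\nSentiment: {y}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

def classify(prompt: str,
             labels: List[str],
             next_token_logprobs: Callable[[str, List[str]], Dict[str, float]]) -> str:
    """Pick the label whose verbalizer gets the highest next-token log-probability p(y | u, x_1, y_1, ..., x_k, y_k, x)."""
    scores = next_token_logprobs(prompt, labels)  # e.g. {" Positive": -0.3, " Negative": -1.8}
    return max(scores, key=scores.get)
```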
**Datasets and models.** We consider fourteen text classification datasets: SST-2 (Socher et al., 2013), Poem Sentiment (Sheng and Uthus, 2020), Financial Phrasebank (Malo et al., 2014), Ethos (Mollas et al., 2020), TweetEval-Hate, -Atheism, and -Feminist (Barbieri et al., 2020), Medical Questions Pairs (McCreery et al., 2020), MRPC (Wang et al., 2019), SICK (Marelli et al., 2014), RTE (Wang et al., 2019), AGNews (Zhang et al., 2015), TREC (Voorhees and Tice, 2000), and DBpedia (Zhang et al., 2015). We used the same prompt formats as in Min et al. (2022) and Zhao et al. (2021) (Table 7, 6). For SST-2 we use the 15 prompt formats in Zhao et al. (Table 8). We also considered a toy dataset, Unnatural, that extends a task in Rong (2021). In Unnatural, demonstrations are of the form "[object]: [label]" and the labels are "plant/vegetable", "sport", and "animal". We evaluated 3 autoregressive language models: GPT-J-6B (Wang and Komatsuzaki, 2021), GPT2-XL-1.5B (Radford et al., 2019), and GPT-NeoX-20B (Black et al., 2022).
Figure 2: GPT-J behavior in the permuted labels setting (3.1). **Left:** The difference in accuracy between correct and incorrect prompts increases with the number of demonstrations. **Right:** As the number of false demonstrations increases, the model chooses the permuted label \(\sigma(\text{class}(x))\) more often than the other labels, rather than making random errors.
**Evaluation metrics.** Given our focus on classification tasks, we are interested in how often the model assigns higher probability to the true label than to any other label. However, model predictions can be very unstable with respect to small prompt perturbations (Gao et al., 2021). To mitigate this variability, we measure the _calibrated_ classification accuracy (Zhao et al., 2021). Concretely, for a 2-class classification task, we measure how often the correct label has a higher probability than its median probability over the dataset. Assuming the dataset is balanced, which we enforce by sampling demonstration labels with equal probability, this step has been shown to improve performance and reduce variability across prompts. Calibration for multi-class tasks follows a similar procedure, detailed in Appendix A.1.
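A small sketch of this calibration step for the 2-class case, under the assumed data layout described in the comments (it is not the authors' implementation), is:

```python
import numpy as np

def calibrated_accuracy_binary(probs: np.ndarray, true_labels: np.ndarray) -> float:
    """probs[i, c] = model probability of class c for example i (2 classes).
    An example counts as correct when the probability of its true class exceeds the
    median probability that the model assigns to that class over the evaluation set."""
    medians = np.median(probs, axis=0)                       # per-class median probability
    p_true = probs[np.arange(len(true_labels)), true_labels]
    return float(np.mean(p_true > medians[true_labels]))
```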
### False demonstration labels decrease accuracy
We first set up our contrast task and confirm that the models we study exhibit false context-following behavior. Concretely, we compare the performance of models when the demonstration labels are all correct, i.e. \(y_{i}=\text{class}(x_{i})\), and when they are all incorrect, i.e. \(y_{i}=\sigma(\text{class}(x_{i}))\), for a cyclic permutation \(\sigma\) over the set of classes (Figure 1, left). In particular, inputs from the same class are always assigned the same (possibly incorrect) label within each prompt. Because all few-shot labels are chosen according to a permutation of the classes, we call this the permuted labels setting.
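A minimal sketch of how such permuted-label demonstrations can be generated (assumed data format; not the original pipeline) is:

```python
import random
from typing import List, Optional, Tuple

def permute_demo_labels(demos: List[Tuple[str, int]],
                        n_classes: int,
                        shift: Optional[int] = None) -> List[Tuple[str, int]]:
    """demos: list of (input_text, class_index).  Replaces every label by sigma(class(x))
    for a fixed, non-identity cyclic permutation sigma, so inputs of the same class
    always receive the same incorrect label within a prompt."""
    if shift is None:
        shift = random.randint(1, n_classes - 1)
    return [(x, (c + shift) % n_classes) for x, c in demos]
```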
For each model and dataset, we sample 1000 sequences each containing \(k\) demonstrations and evaluate the model's calibrated accuracy. We sample different demonstrations \((x_{i},y_{i})\) and label permutations \(\sigma\) for every sequence, and vary \(k\) from \(0\) to \(40\) (from \(0\) to \(20\) for GPT2-XL, due to its smaller context size).
Figure 2 (left) shows the difference between GPT-J's calibrated accuracy given correct and incorrect prompts as the number of demonstrations increases (see Figure 17 for GPT2-XL and GPT-NeoX). As expected, incorrect demonstrations lead to worse performance, and the accuracy gap tends to increase with \(k\) for most datasets. These results are in agreement with Min et al. (2022), who found that incorrect demonstrations decreased GPT-J's performance on classification tasks (Min et al., Figure 4).
Models could lose accuracy by copying the incorrect label, or by becoming confused and choosing random labels. To confirm it is the former, we also measure which labels the model chooses for tasks with more than 2 labels. Specifically, we measure the _permuted score_: how often the model chooses the permuted label \(\sigma(\text{class}(x))\) over the other labels. For each dataset, a random classifier would have a permuted score of \(\frac{1}{\#\text{labels}}\). To make the results comparable across datasets, we divide the permuted scores by this random baseline. Figure 2 (right) shows these normalized permuted scores for GPT-J on the 9 multi-class datasets in our collection, as well as the average across datasets. The permuted score increases steadily and reaches twice its initial value after 40 demonstrations.
Figure 3: GPT-J early-exit classification accuracies across 6 task categories, given accurate and inaccurate demonstrations (here in the permuted labels setting). Plots are grouped by task type: sentiment analysis (a-b), hate speech detection (c), paraphrase detection (d), natural language inference (e), topic classification (f-g), and a toy task (h). Given incorrect demonstrations, zeroing out all transformer blocks after layer 16 outperforms running the entire model.
### Random and partially correct labels lead to lower accuracy than correct labels
In the previous subsection, we presented a particular kind of misleading prompt, in which all demonstration labels are chosen according to a permutation of the classes. To study other kinds of misleading prompts, we consider variations on this setup: prompts in which half the demonstrations have correct labels and half have permuted labels (_half correct labels_), and prompts where each demonstration label is chosen at random (_random labels_). These prompts also lead to worse classification accuracy compared to true demonstrations: the accuracy gap at \(k=40\) is \(0.15\) for random labels and \(0.12\) for half correct labels, which is around half the value for permuted labels (\(0.28\)).
## 4 Zeroing Later Layers Improves Accuracy
In this section, to study false context-following, we decode model predictions directly from intermediate layers. This allows us to evaluate the model's performance midway through processing the inputs. On incorrect demonstrations, we find that the model performs _better_ midway through processing, especially for GPT-J, and investigate this phenomenon in detail.
**Intermediate layer predictions: the logit lens.** Given an autoregressive transformer language model with \(L\) layers, we decode next-token probabilities for each intermediate layer, using the "logit lens" method (Nostalgebraist, 2020). Intuitively, these intermediate distributions represent model predictions after \(\ell\in\{1,...,L\}\) layers of processing.
In more detail, let \(h_{\ell}^{(i)}\in\mathbb{R}^{d}\) denote the hidden state of token \(t_{i}\) at layer \(\ell\), i.e. the sum of everything up to layer \(\ell\) in the residual stream. For a sequence of tokens \(t_{1},...,t_{n}\in V\), the logits of the full model's predictive distribution \(p(t_{n+1}\mid t_{1},...,t_{n})\) are given by
\[[\text{logit}_{1},...,\text{logit}_{|V|}]=W_{U}\cdot\text{LayerNorm}(h_{L}^{(n )}),\]
where LayerNorm is the pre-unembedding layer normalization, and \(W_{U}\in\mathbb{R}^{|V|\times d}\) is the unembedding matrix. The logit lens mimics this operation, but replaces \(h_{L}\) with an intermediate hidden state \(h_{\ell}\). This yields the intermediate layer distribution \(p_{\ell}(t_{n+1}\mid t_{1},...,t_{n})\), defined as
\[[\text{logit}_{1}^{\ell},...,\text{logit}_{|V|}^{\ell}]=W_{U}\cdot\text{LayerNorm }(h_{\ell}^{(n)}).\]
This provides a measurement of what predictions the model represents at layer \(\ell\), without the need to train a new decoding matrix. It can therefore be interpreted as a form of early exiting (Panda et al., 2015; Teerapittayanon et al., 2017; Figurnov et al., 2017).
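In code, the readout is a single matrix product. The sketch below assumes the caller has already extracted the residual-stream vector, the final LayerNorm module, and the unembedding matrix from their model (the attribute paths differ between GPT-2, GPT-J, and GPT-NeoX, so they are left as inputs here):

```python
import torch

@torch.no_grad()
def logit_lens(hidden_state: torch.Tensor,   # h_l at the last token position, shape (d,)
               final_ln: torch.nn.Module,    # the pre-unembedding LayerNorm
               unembed: torch.Tensor         # W_U, shape (|V|, d)
               ) -> torch.Tensor:
    """Return the intermediate-layer logits W_U . LayerNorm(h_l)."""
    return unembed @ final_ln(hidden_state)
```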
We compute the intermediate layer distributions \(p_{\ell}\) for the same three models as before, and measure the corresponding calibrated accuracies on the fifteen datasets from Section 3. Figure 4 shows the average accuracy over the fourteen non-toy datasets as a function of \(\ell\), given demonstrations with correct labels, permuted labels, random labels, half correct labels, as well as no demonstrations.
Figure 4: Average calibrated accuracy across 14 tasks for GPT2-XL (a), GPT-J (b), and GPT-NeoX (c). Early-exiting outperforms running the entire model when the demonstrations contain permuted, random, or half correct labels.
**Accurate and incorrect demonstrations sharply diverge at "critical layers".** Given correct demonstrations, the accuracy tends to increase with layer depth. With permuted or random labels, the accuracy follows a similar trend at early layers, but then diverges and decreases at the later layers. This trend is consistent across individual datasets (Figures 3, 8 and 10).
Moreover, for each model, the accuracies for correct and incorrect prompts diverge at the same layers across almost all datasets: we call these the _critical layers_. For example, for GPT-J, the accuracies diverge between layers 13 and 14 for all but two datasets (Figure 9)3. We observe similar results for GPT-NeoX with layers 10 to 13 and for GPT2-XL with layers 20 to 24 (Figures 8 and 10).
Footnote 3: We formalize this by measuring the layer at which the accuracy gap first reaches half of its final value.
**Early-exiting improves classification performance given incorrect demonstrations.** Given incorrect demonstrations, decoding from earlier layers performs _better_ than decoding from the final layer. For example, for GPT-J, using \(p_{16}\) (the first \(16\) layers) achieves a better accuracy than the full model on all but one dataset (Figures 3, 4b). For GPT2-XL and GPT-NeoX, the intermediate predictions \(p_{30}\) and \(p_{32}\) also outperform the full model for most datasets, although the magnitude of the effect is smaller (Figures 4a, 4c). Finally, early exiting also helps for other misleading prompts: our results were qualitatively similar given random labels and half correct labels (see Figure 4 and 11-13).
**Ablating only attention heads improves accuracy further.** We hypothesize that correct and incorrect demonstrations diverge at the critical layers because the correctness of each demonstration is only encoded after these layers. This would imply that overthinking is caused by the late _attention_ layers, which attend back to the late layers of previous demonstrations. To test this, we zero out only the attention heads (and not the MLPs) in late layers of the model. For GPT2-XL and GPT-J, where overthinking is most pronounced, we find that ablating just the attention heads has a similar effect to ablating the entire layer, whereas ablating just MLPs has a much smaller effect (Table 1). Since removing only late attention heads recovers almost the full effect of early-exiting, we conclude that these late heads, more than MLPs, are responsible for overthinking. This motivates understanding the attention heads in detail, which we turn to next.
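A sketch of this kind of head ablation using forward hooks is given below. It assumes a HuggingFace-style decoder whose layers expose an attention sub-module; `get_attention_module` is a hypothetical accessor, since the attribute path differs between GPT-2, GPT-J, and GPT-NeoX.

```python
import torch

def zero_attention_after(layers, cutoff: int):
    """Register hooks that zero the attention-block output of every layer index >= cutoff,
    leaving the MLPs untouched.  Returns the hook handles (call .remove() to restore)."""
    def hook(module, inputs, output):
        # HF attention modules typically return a tuple; zero the hidden-state part.
        if isinstance(output, tuple):
            return (torch.zeros_like(output[0]),) + output[1:]
        return torch.zeros_like(output)

    handles = []
    for idx, layer in enumerate(layers):
        if idx >= cutoff:
            attn = get_attention_module(layer)  # hypothetical accessor, model-dependent
            handles.append(attn.register_forward_hook(hook))
    return handles
```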
## 5 Zooming into attention heads
Previously, we found that the gap between true and false demonstrations is predominantly due to attention heads in the later layers of the model. This suggests that false context-following is due to heads attending to complex features in previous demonstrations. In this section, we look for particular heads that are responsible for this context-following behavior.
Drawing from Olsson et al. (2022), we hypothesize that there are _false induction heads_ that attend to false labels in similar past demonstrations, and make the model more likely to output them. For example, for the input "beet" in Figure 5, the right-most head attends consistently to the previous incorrect demonstrations of the token "sport".
| Model | Permuted: Full Model | Permuted: Late Heads | Permuted: Late MLP | Permuted: Late Layers | Correct: Full Model | Correct: Late Heads | Correct: Late MLP | Correct: Late Layers |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT2-XL | \(41.97_{2.94}\) | \(\mathbf{46.09_{2.94}}\) | \(42.88_{2.90}\) | \(44.63_{2.96}\) | \(54.19_{2.97}\) | \(\mathbf{54.09_{2.94}}\) | \(52.47_{2.98}\) | \(53.68_{2.95}\) |
| GPT-J | \(37.42_{2.88}\) | \(47.58_{2.88}\) | \(37.97_{2.93}\) | \(\mathbf{47.72_{2.88}}\) | \(65.54_{2.80}\) | \(64.46_{2.73}\) | \(\mathbf{65.84_{2.76}}\) | \(64.00_{2.78}\) |
| GPT-NeoX | \(45.19_{2.89}\) | \(44.44_{2.91}\) | \(\mathbf{44.78_{2.91}}\) | \(\mathbf{46.06_{2.93}}\) | \(61.68_{2.77}\) | \(60.86_{2.81}\) | \(56.78_{2.63}\) | \(\mathbf{62.15_{2.80}}\) |

Table 1: Average calibrated accuracy on correct and incorrect labels when running the full model, zeroing out late layers, zeroing out late attention heads (but not MLPs), and zeroing out late MLPs (but not attention heads). We ablate after layer 16 for GPT-J, 30 for GPT2-XL, and 32 for GPT-NeoX. The best and second best ablated accuracy are bolded and underlined respectively. We find that ablating late attention heads and ablating late layers have similar performance: this suggests that late attention heads play an especially important role in overthinking.
More formally, we introduce three properties that make a head a false induction head. First, it should be (1) _label-attending_, i.e. concentrate its attention on labels in the previous demonstrations. Second, it should be (2) _class-sensitive_, meaning it attends specifically to labels that follow inputs from the same class (e.g "tomato", "garlic" and "kale" in Figure 5). Finally, it should be (3) _label-promoting_, meaning it increases the probability of the labels it attends to.
To identify false induction heads, we define a score that quantifies how label-attending and class-sensitive an attention head is (we will return to the label-promoting property at the end of this section). For a sequence of demonstrations \((x_{i},y_{i})\) and a final input \(x\), the **prefix-matching score** (PM\({}^{h}\)) of a head \(h\) is:
\[\text{PM}^{h}=\sum_{i=1}^{n}\text{Att}^{h}(x,y_{i})\cdot\mathbf{1}_{\text{class }(x)=\text{class}(x_{i})}-\frac{1}{\#\text{labels}-1}\sum_{i=1}^{n}\text{Att} ^{h}(x,y_{i})\cdot\mathbf{1}_{\text{class}(x)\neq\text{class}(x_{i})}.\]
This score is high when the head attends strongly to the labels following inputs from \(\text{class}(x)\) (first term), and low when the head attends to the labels following other inputs (second term). We compute the prefix-matching score of each head by averaging over incorrect prompts on the Unnatural dataset, and plot the distribution of PM scores across each layer (Figure 6). For all models, the scores remain low at early layers, then increase around the critical layers that we identified in Section 4. This lends correlational support to our hypothesis that false induction heads cause false context-following.
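A short sketch of this score for a single prompt (assumed data layout, not the released code): `attn_to_label[i]` is the attention mass the head places, from the final query position, on the label token of demonstration \(i\); `demo_classes[i]` is that demonstration's class; `query_class` is \(\text{class}(x)\).

```python
import numpy as np

def prefix_matching_score(attn_to_label: np.ndarray,
                          demo_classes: np.ndarray,
                          query_class: int,
                          n_labels: int) -> float:
    """PM^h = attention to same-class labels minus (1/(#labels-1)) * attention to other labels."""
    same = demo_classes == query_class
    return float(attn_to_label[same].sum()
                 - attn_to_label[~same].sum() / (n_labels - 1))
```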
**Ablating false induction heads.** However, we are interested in causal evidence. Therefore, we check whether removing false induction heads reduces false context-following. We select the 5 heads from GPT-J with the highest PM scores, and ablate them by setting their values to zero. We evaluate the resulting lesioned model on all 14 datasets, comparing its layerwise performance to the original model's. As a control baseline, we also perform the same analysis for 5 heads selected at random.
Our ablations significantly increase accuracy given incorrect demonstrations: they reduce the gap between correct and incorrect prompts by an average of \(38.3\%\), with only a small loss in accuracy for correct demonstrations (Table 2). In contrast, ablating random heads barely improves the accuracy given false demonstrations, and sometimes even increases the size of the accuracy gap. These results suggest that false induction heads cause a significant fraction of the false context-following behavior. In addition, since false induction heads were identified using only the toy Unnatural dataset but affect context-following on all datasets, this implies their behavior generalizes across tasks.
**Verifying that our heads are label-promoting.** So far, we have identified label-attending and class-sensitive heads and shown that they contribute to false context-following behavior. To test our initial hypothesis, we next check that they are also label-promoting, i.e. that they increase the probability of the false labels they attend to. We therefore study the outputs of our heads to understand how they affect the residual stream, focusing here on the Unnatural dataset.
We follow the methodology in Wang et al. (2022) to apply the logit lens to each head individually, by applying layer normalization followed by the unembedding matrix to its outputs. This tells us how much the head increases or decreases the intermediate logits of each token. For every head, we define its _false label promoting score_ as the difference between the logit increases of the permuted and correct labels. A high score means that the head greatly increase the probability of the permuted label, whereas a score of zero means that it promotes the correct and permuted labels equally.
Figure 5: Examples of attention patterns on incorrect demonstrations from the toy Unnatural dataset, for heads that are label-attending but not class-sensitive (Left), heads that are class-sensitive but not label-attending (Center), and heads that are both label-attending and class-sensitive (Right).
Our 5 heads have an average false label promoting score of \(6.5\): they increase the permuted label logit by \(6.5\) more than the correct label on average. In contrast, when sampling 100 sets of 5 random heads, we find an average score of \(-0.04\), with a standard deviation of \(0.41\). These results confirm that our label-attending and class-sensitive heads are indeed false induction heads.
In summary, our results validate our hypothesis at the beginning of this section: we found a small number of false induction heads in the later layers that contribute to false context-following, by attending to false labels in past demonstrations, and increasing their probability.
## 6 Discussion
In this paper, we studied why language models imitate incorrect demonstrations in their context. By extracting predictions from intermediate model layers, we showed that models _overthink_: given incorrect prompts, the final layers hurt its performance. We then identified a small number of _false induction heads_ that attend to and reproduce false information from past demonstrations, and showed via a lesion study that they contribute to incorrect imitation.
**How does the logit lens compare to probing?** Our work, especially Section 4, relies heavily on the "logit lens" (Nostalgebraist, 2020). We find it useful to think of this method in comparison to probing.
If a layer has a high probing accuracy, this means that the correct answer can be decoded from the hidden states. However, this is often a low bar to clear, especially when the classification task is easy and the hidden states are high-dimensional (Hewitt and Liang, 2019). In contrast, if a layer has a high logit lens accuracy, this shows that it encodes correct answers along a direction in the residual stream that the model subsequently decodes from, which is more meaningful. In particular, it implies a high probing accuracy, but the reverse is not necessarily true.
One intermediate between probing and zeroing out later layers is the tuned lens (Belrose et al., 2023): instead of training a new probe for each classification task or directly using the final layer's decoding matrix, Belrose et al. train a single universal "translator matrix" for each layer on a language modelling dataset such as the Pile (Gao et al., 2020). Inspired by our work, Belrose et al. applied the tuned lens to our setup, observing overthinking for additional models such as Pythia-12B.
**Semantically unrelated labels.** One hypothesis about the permuted labels setting is that the model simply learns a relabelling of the classes, and is not sensitive to the substance of the incorrect labels. If this were true, we would observe the same logit lens predictions for permuted labels and for semantically unrelated labels (Wei et al., 2023), i.e. labels that have no relation to the task. However, this is not the case: for SST-2, we tried replacing the demonstration labels "Positive" and "Negative" by "A" and "B", and measured the logit lens accuracies in this new setting given incorrect demonstrations (see Figure 8(o)). While we observe overthinking for related as well as unrelated labels, early-exiting achieves higher than random accuracy for SST-2, but not for its variant. This shows that the ground-truth of demonstration labels is an important factor in our results.
Figure 6: Sum of prefix-matching scores for GPT2-XL (a), GPT-J (b), and GPT-NeoX (c) on the toy Unnatural dataset. The prefix-matching scores increase where the accuracy gap (averaged over tasks) between accurate and inaccurate demonstrations emerges.
**Realism of our setting.** While we find consistent results across 14 datasets, our experiments are restricted to a specific setting: text classification with a large number of incorrect few-shot examples. Nevertheless, we believe that the permuted labels setting captures important properties of realistic failure modes. Indeed, humans often err in consistent, systematic ways. For example, an inexperienced coder might consistently use the wrong method name, thereby permuting the method names in their prompts to a code completion model.
Moreover, our findings provide valuable information to understand misleading prompts beyond the permuted labels setting. Indeed, Belrose et al. (2023) drew inspiration from our work to detect another failure of large models: "prompt injection" (Branch et al., 2022). We ran a preliminary analysis of the intermediate predictions in this setting, and found that injected prompts, like incorrect demonstrations, exhibit overthinking (see Figure 18).
**Ablations on true prefix.** Surprisingly, we find that even with correct demonstrations, models have a tendency to overthink. When removing late layers and late attention in GPT2-XL, we observed a net benefit in performance. Furthermore, early exiting at the critical layer improves performance on a majority of datasets across all models. This signifies a potential misalignment between the pretraining objective and the downstream few-shot task, which is an interesting direction for future study.
**Limitations and future work.** Our head ablations do not fully remove the accuracy gap between correct and incorrect demonstrations. This could be because we did not identify some of the model components that cause false context-following. However, there is another possibility: if an attention head's outputs are on average far from zero, zeroing out that head takes the intermediate states off-distribution, which can decrease overall performance. Thus, one promising future direction would be to replace head outputs by their average value, as in Nanda et al. (2023).
Our work relates to mechanistic interpretability, which seeks to reverse engineer model behaviors from a bottom-up understanding of low-level components. In contrast, we embrace a more top-down strategy, extracting predictions from entire layers. This shift not only saves compute and time, but also allows us to scrutinize model behavior on more realistic tasks. Our results suggest that aberrant and normal model behaviors are often processed differently, so more comprehensively measuring model internals could help us to understand and fix a broad variety of unwanted behaviors.
| Dataset | Heads | Permuted: \(\Delta\) TP (\(\uparrow\)) | Permuted: \(\Delta\) Gap (\(\uparrow\)) | Half Permuted: \(\Delta\) TP (\(\uparrow\)) | Half Permuted: \(\Delta\) Gap (\(\uparrow\)) | Random: \(\Delta\) TP (\(\uparrow\)) | Random: \(\Delta\) Gap (\(\uparrow\)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Poem-Sentiment | top | \(1.67_{0.03}\) | \(\mathbf{30.76_{0.39}}\) | \(2.43_{0.05}\) | \(\mathbf{66.36_{0.21}}\) | \(1.63_{0.03}\) | \(\mathbf{38.97_{0.29}}\) |
| Poem-Sentiment | random | \(1.47_{0.02}\) | \(4.68_{0.09}\) | \(1.27_{0.02}\) | \(17.40_{0.13}\) | \(0.37_{0.01}\) | \(-17.08_{0.24}\) |
| Ethos | top | \(-6.00_{0.14}\) | \(\mathbf{20.90_{0.44}}\) | \(-4.20_{0.11}\) | \(-5.21_{0.07}\) | \(-3.20_{0.08}\) | \(-1.19_{0.01}\) |
| Ethos | random | \(-3.00_{0.08}\) | \(5.97_{0.15}\) | \(0.60_{0.02}\) | \(7.29_{0.09}\) | \(1.40_{0.04}\) | \(-2.38_{0.03}\) |
| MRPC | top | \(-5.70_{0.04}\) | \(\mathbf{62.20_{0.12}}\) | \(-1.20_{0.01}\) | \(\mathbf{7.69_{0.01}}\) | \(0.00_{0.00}\) | \(\mathbf{115.79_{0.04}}\) |
| MRPC | random | \(-3.50_{0.03}\) | \(23.17_{0.09}\) | \(-1.00_{0.01}\) | \(-38.46_{0.09}\) | \(0.60_{0.00}\) | \(47.37_{0.06}\) |
| SICK | top | \(-3.63_{0.05}\) | \(\mathbf{15.29_{0.33}}\) | \(-9.43_{0.13}\) | \(\mathbf{-19.68_{0.28}}\) | \(-6.20_{0.08}\) | \(\mathbf{10.97_{0.15}}\) |
| SICK | random | \(2.27_{0.04}\) | \(-2.82_{0.07}\) | \(-1.80_{0.03}\) | \(-10.99_{0.15}\) | \(0.13_{0.00}\) | \(-0.51_{0.02}\) |
| AGNews | top | \(2.40_{0.11}\) | \(\mathbf{32.34_{0.39}}\) | \(-0.80_{0.04}\) | \(\mathbf{46.59_{0.24}}\) | \(-1.30_{0.07}\) | \(\mathbf{33.77_{0.33}}\) |
| AGNews | random | \(2.70_{0.12}\) | \(-11.06_{0.21}\) | \(-1.10_{0.05}\) | \(9.09_{0.07}\) | \(-1.50_{0.08}\) | \(6.49_{0.09}\) |
| Average | top | \(-1.32_{0.01}\) | \(\mathbf{38.38_{0.29}}\) | \(-1.53_{0.01}\) | \(\mathbf{15.06_{0.04}}\) | \(-2.84_{0.02}\) | \(\mathbf{13.11_{0.14}}\) |
| Average | random | \(-1.83_{0.05}\) | \(-23.14_{0.02}\) | \(-2.07_{0.04}\) | \(-13.11_{0.12}\) | \(-2.36_{0.05}\) | \(-9.81_{0.10}\) |

Table 2: Ablating false induction heads recovers a significant fraction of the accuracy gap between correct and incorrect prompts, without hurting performance given correct demonstrations. We show the percent reduction in the accuracy gap (“Gap”) and absolute change in correct prompt performance (“TP”) when ablating the 5 false induction heads chosen using the Unnatural dataset (“top”) or 5 random heads (“random”). We bold gap reductions when they are greater for our heads than for the random heads. We show results for one dataset in each task category; full results are in Table 4.
#### Acknowledgements
Thanks to Erik Jones, Collin Burns, Nora Belrose, Lisa Dunlap, Alex Pan and our anonymous reviewers for helpful comments and feedback. JSD is supported by the NSF Division of Mathematical Sciences Grant No. 2031899.
|
2306.01303 | DistilXLSR: A Light Weight Cross-Lingual Speech Representation Model | Multilingual self-supervised speech representation models have greatly
enhanced the speech recognition performance for low-resource languages, and the
compression of these huge models has also become a crucial prerequisite for
their industrial application. In this paper, we propose DistilXLSR, a distilled
cross-lingual speech representation model. By randomly shuffling the phonemes
of existing speech, we reduce the linguistic information and distill
cross-lingual models using only English data. We also design a layer-jumping
initialization method to fully leverage the teacher's pre-trained weights.
Experiments on 2 kinds of teacher models and 15 low-resource languages show
that our method can reduce the parameters by 50% while maintaining
cross-lingual representation ability. Our method is proven to be generalizable
to various languages/teacher models and has the potential to improve the
cross-lingual performance of the English pre-trained models. | Haoyu Wang, Siyuan Wang, Wei-Qiang Zhang, Jinfeng Bai | 2023-06-02T07:03:06Z | http://arxiv.org/abs/2306.01303v1 | # DistilXLSR: A Light Weight Cross-Lingual Speech Representation Model
###### Abstract
Multilingual self-supervised speech representation models have greatly enhanced the speech recognition performance for low-resource languages, and the compression of these huge models has also become a crucial prerequisite for their industrial application. In this paper, we propose DistilXLSR, a distilled cross-lingual speech representation model. By randomly shuffling the phonemes of existing speech, we reduce the linguistic information and distill cross-lingual models using only English data. We also design a layer-jumping initialization method to fully leverage the teacher's pre-trained weights. Experiments on 2 kinds of teacher models and 15 low-resource languages show that our method can reduce the parameters by 50% while maintaining cross-lingual representation ability. Our method is proven to be generalizable to various languages/teacher models and has the potential to improve the cross-lingual performance of the English pre-trained models.
Haoyu Wang\({}^{1}\), Siyuan Wang\({}^{1}\), Wei-Qiang Zhang\({}^{1}\), Jinfeng Bai\({}^{2}\)\({}^{1}\)Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
\({}^{2}\)TAL Education, Beijing 100084, China
[email protected], [email protected]
**Index Terms**: Knowledge Distillation, Low-resource Speech Recognition, Representation Learning
## 1 Introduction
Self-supervised pre-trained models have made many significant breakthroughs in low-resource speech recognition. By learning from a large amount of multilingual unlabeled data, these self-supervised pre-trained models can provide cross-lingual phoneme-level representations for almost any language. Models fine-tuned from multilingual pre-trained models can achieve satisfactory word error rates (WER) with extremely limited or even no speech data [1, 2, 3, 4].
However, these multilingual pre-trained models, represented by XLS-R and XLSR53, typically have hundreds of millions of parameters, which is an obstacle to their application on mobile devices such as laptops and smartphones. Considering the excellent performance of these models in low-resource speech recognition, a compressed multilingual speech representation model is of undoubted importance to the industrial application of speech recognition in minority languages.
Model pruning is an efficient method to reduce the parameters of the pre-trained models. The lottery ticket hypothesis assumes that a sparse subnetwork can be extracted from a dense network without sacrificing the performance [5, 6]. PARP proposes a feasible measure to discover the subnetwork from self-supervised speech representation models by alternating pruning and fine-tuning [7]. Similarly, by alternate quantization and pruning, Wang et al. successfully remove 50% of the parameters from the Wav2vec 2.0 model and quantize it down to a 4-bit precision [8]. Although these pruning-based compression methods retain most of the performance, their acceleration still requires the support of specific hardware devices.
Knowledge distillation is a hardware-friendly way to transfer the representation ability to a compact student that can be used in normal computing devices. DistilHuBERT compresses a 12-layer Hubert-based model to get a 2-layer student model and appreciably reduces the model size [9]. FitHuBERT designs a thin but deep student model to improve the representation ability of the student model and achieves better performance with fewer parameters than the DistilHuBERT model [10].
Compared to the Hubert base model used in previous studies, the distillation of cross-lingual speech representation models faces new challenges. First, it is difficult to obtain training data for low-resource languages, collecting and formatting data from multiple languages also requires time and effort. To address this challenge, we found inspiration in the RNN-transducer (RNN-T) domain adaptation problem. Zhao et al. proposed a data splicing method, which randomly selects speech segments from existing data to generate new training utterances [11]. This method can adapt a pre-trained RNN-T model to new domains with negligible cost. For different languages, the phonotactics of a sentence is one of the most important features [12]. Therefore, we want to distill the multilingual pre-trained models using only unlabeled English data by randomly selecting phonemes from existing utterances.
Second, the parameters of large pre-trained models such as XLSR-53 have more complex interrelationships, which is a barrier for the learning of the student. Therefore, we design a layer-jumping initialization method to better exploit the pre-trained parameters and retain the inter-layer similarity of the teacher.
In this paper, we propose DistilXLSR, a compact multilingual speech representation model1. We verify the effectiveness of our method on XLS-R and XLSR-53. Experiments on 15 low-resource languages prove that our method can maintain most of the performance and achieves comparable performance with multilingual distillation.
Footnote 1: Available at [https://github.com/backspace/distilXLRS](https://github.com/backspace/distilXLRS)
## 2 Method
### Wav2vec 2.0 Models
XLS-R and XLSR-53 are two of the most commonly used multilingual pre-trained models, and both can be considered as multilingual versions of the Wav2vec 2.0 model. Wav2vec 2.0 models are composed of a CNN feature extractor and a multi-layer
transformer encoder [13]. For the XLS-R and XLSR-53 models, the feature extractors have 6 CNN layers and the encoders have 24 layers. These models are trained through the contrastive prediction coding (CPC) task, where the future frames ought to be distinguished from some randomly sampled distractors. Through CPC training, the outputs of each transformer layer, or the hidden states, will contain higher-level information about the input audio.
Our distilled model also has a similar structure to the XLS-R and the XLSR-53 models. Considering that an excessively large difference in size may have a negative effect on distillation [14], we decide to use a 12-layer transformer encoder which leads to around a 50% reduction in the number of parameters.
### Distillation Objective
Typically, in knowledge distillation, the student model tries to learn from the teacher model by mimicking the teacher's behavior. For transformer-based wav2vec 2.0 teachers, the students usually learn from the hidden states, the attention score, or the logits of the CPC task. Some previous works [14, 15] and our preliminary experiments show that minimizing the mean square error (MSE) of the hidden states and using multi-task distillation to learn from different depths will lead the students to the best performance. Formally speaking, let \(H\) be the hidden states of the teacher model and \(\hat{H}\) be those of the student model, the distillation loss is computed as follows:
\[l_{\text{disit}}(\hat{H},H)=\sum_{(i,j)\in S}l_{\text{MSE}}(h_{i},\hat{h}_{j}) \tag{1}\]
Each tuple \((i,j)\) in \(S\) denotes a teacher-student layer pair, where \(h_{i}\) and \(\hat{h}_{j}\) are the hidden states from layer \(i\) of the teacher and layer \(j\) of the student, respectively.
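A minimal PyTorch sketch of this multi-layer objective (not the authors' training code; the layer pairing is passed in explicitly) is:

```python
import torch.nn.functional as F

def distillation_loss(teacher_hidden, student_hidden, pairs):
    """teacher_hidden / student_hidden: lists of per-layer hidden states, each of shape
    (batch, time, dim); pairs: iterable of (teacher_layer, student_layer) indices in S.
    Returns the summed layer-wise MSE of Eq. (1)."""
    return sum(F.mse_loss(student_hidden[s], teacher_hidden[t]) for t, s in pairs)
```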
### Layer-Jumping Initialization
Speech signal contains a lot of information. Emotion, prosody, and semantic information can all be encoded in an utterance. In pre-trained speech representation models, it is widely regarded that hidden states from different layers contain different kinds of information [9, 16]. Phoneme-level semantic information is usually contained in the last few layers. As a result, learning from these layers helps to achieve better performance in speech recognition tasks.
In previous works, the student models usually load weights from the lower teacher layers (e.g., the first two transformer layers, depending on the number of layers of the student model) or are simply trained from scratch [9]. However, we assume that this may not be appropriate for larger pre-trained models such as XLS-R or XLSR-53. Fig 4a and 4b show the Centered Kernel Alignment (CKA) inter-layer similarity [17] of the wav2vec 2.0 base model and the XLSR-53 model, respectively. The CKA similarity, which is based on the inner product of the hidden states, shows that the last few layers of the XLSR-53 model are more different from the previous ones, and the relationship between these layers is more complex.
It makes intuitive sense that directly loading the weights of the last few layers would help deal with such complexity. As a result, we propose the layer-jumping initialization method, where the teacher layers are selected at intervals when initializing the student, to take full advantage of the pre-trained parameters. Formally speaking, the parameters \(\hat{\theta}_{s}^{i}\) of student layer \(i\) are initialized as
\[\hat{\theta}_{s}^{i}=\theta_{t}^{2i}, \tag{2}\]
where \(\hat{\theta}_{s}^{i}\) are the parameters of student layer \(i\) and \(\theta_{t}^{2i}\) are those of teacher layer \(2i\). Due to the layer drop strategy, in which transformer layers are randomly dropped during pre-training, the teacher models are actually robust to such deletion of layers.
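A sketch of this initialization (assuming the student and teacher transformer layers are exposed as lists of `nn.Module`s with identical per-layer shapes) is:

```python
def layer_jumping_init(student_layers, teacher_layers):
    """Eq. (2) uses 1-based layer numbers: student layer i <- teacher layer 2i.
    With 0-based Python lists, student index i therefore copies teacher index 2*i + 1,
    so a 12-layer student reuses every second layer of a 24-layer teacher."""
    for i, s_layer in enumerate(student_layers):
        s_layer.load_state_dict(teacher_layers[2 * i + 1].state_dict())
```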
### Data Splicing
Using only English data to distill cross-lingual pre-trained models can help to fully utilize the large English datasets. Moreover, pre-training can also benefit from similar techniques and the cross-lingual representation ability of English pre-trained models can be improved.
To reduce the language-dependent information in an English speech utterance, we randomly shuffle the syllables in the utterances. The reason for choosing syllables rather than phonemes as the basic unit in our data splicing method is to keep the phoneme context coherent and to avoid sequences of consecutive consonants, which are rare in human languages.
We train a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) to align the audio with the phoneme sequences. We add syllable-separating symbols to all the pronunciations in the lexicon and tag the utterances with syllable-level timestamps. During training, the syllables in an utterance are shuffled and spliced into a new utterance with less language-dependent information. Fig 1 provides an overview of our method.
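A simplified sketch of the splicing step (assumed data format: a 1-D waveform plus per-syllable time stamps from the aligner; not the actual pipeline) is:

```python
import random
import numpy as np

def splice_utterance(wav: np.ndarray, syllable_times, sample_rate: int = 16000) -> np.ndarray:
    """wav: 1-D waveform; syllable_times: list of (start_sec, end_sec) per syllable.
    Shuffles the syllable segments and concatenates them into a new training utterance."""
    segments = [wav[int(s * sample_rate):int(e * sample_rate)] for s, e in syllable_times]
    random.shuffle(segments)
    return np.concatenate(segments)
```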
## 3 Experiments
**Datasets**. For distillation, we only use the Librispeech English dataset [18]; for fine-tuning, we select 15 languages from the MATERIAL2, Babel [19] and Common Voice datasets [20].
Footnote 2: [https://www.iarpa.gov/index.php/research-programs/material](https://www.iarpa.gov/index.php/research-programs/material)
Table 1 shows the details of our datasets. For the languages from MATERIAL and Babel, the datasets are provided by the OpenASR21 challenge3, which is a track of the NIST Open Speech Analytic Technologies (OpenSAT) evaluations. A 10-hour training set and a 10-hour development set are provided for each of the languages, consisting mainly of telephone conversations. 10 languages from the MATERIAL and Babel datasets are used for fine-tuning. To compare the result between data splicing and real-world multilingual distillation, we also select 5 additional languages to ensure that the multilingual distillation and fine-tuning sets do not overlap.
Footnote 3: [https://sat.nist.gov/openasr21](https://sat.nist.gov/openasr21)
Common Voice is a cloud-sourced multilingual dataset with clearer speech quality. Considering that the MATERIAL and Babel datasets contain mainly African and Asian languages, we select 5 European languages from the Common Voice dataset, and randomly sample a 5-hour subset to simulate a low-resource scenario for each language. All the training audio is resampled to 16KHz.
Figure 1: An overview of our method. Syllables in existing utterances are shuffled to get training data with less language-dependent information.
**Splicing Setup**. The GMM-HMM used to generate the timestamps is trained using Kaldi's Librispeech recipe4, and we use the tri6b model for alignment. The syllable boundaries are generated from a syllabified CMU dictionary5. During training, 37.5% of the utterances are randomly shuffled.
Footnote 4: [https://github.com/kaldi-asr/kaldi/blob/master/egs/librispecech/s5/run.sh](https://github.com/kaldi-asr/kaldi/blob/master/egs/librispecech/s5/run.sh)
Footnote 5: [http://webdocs.cs.ualberta.ca/](http://webdocs.cs.ualberta.ca/) kondrak/cmudict.html
**Distillation Setup**. The proposed DistilXLSR model consists of a 6-layer CNN feature extractor and a 12-layer transformer encoder. The transformer encoder is initialized by layer-jumping initialization according to Eq. 2. We also apply the masked speech denoising strategy according to WavLM where 15% of the utterances are mixed with another one in the same batch [21]. The distillation is performed on an RTX 3090 GPU for 200k updates and around 37 hours with a batch size of 6 utterances and a learning rate of 2.0e-4.
**Fine-tuning Setup**. We fine-tune the models using the Fairseq toolkit following the experimental settings of Zhao et.al [1]. For each language, we add a linear layer on the top and optimize the model using the Connectionist Temporal Classification (CTC) loss. Parameters are updated every 8 steps and the model is trained for 20k updates and around 5 hours. The learning rate is set to 1.0e-4 with a tri-stage rate schedule, where the learning rate increases linearly to the set value for the first 2k updates, holds constant for the next 8k updates, and decreases linearly to 0 for the remaining updates. The batch size is set to 1.28M samples, while 55% of the frames and 25% of the channels of the CNN features are masked.
## 4 Results
### Comparing With Teacher Models
Table 2 shows the performance of our proposed model on 15 low-resource languages. Using the teacher models as benchmarks, we do not find significant gaps in the performance across languages and teacher models, demonstrating the generalizability of our approach. Our proposed models achieve lower word error rates on the Common Voice dataset, and the degradation compared to the teacher models is relatively small. In an extremely low-resource setting, where only a 5-hour training set is available for each Common Voice language, the WER of the proposed model is only 2.5% higher than the XLSR-53 teacher model, while the average error rate increases by 4.18% in absolute terms.
The degradation is more obvious in the Babel and MATERIAL datasets. As mentioned above, the Babel and MATERIAL datasets consist mainly of telephone conversations at a sample rate of 8KHz, and the signal-to-noise ratio (SNR) is much lower than that of the Common Voice dataset. Due to the lower complexity and fewer parameters, the compressed models are more prone to underfitting and more sensitive to the noise in the data. Although we have applied some data augmentation strategies such as masked speech denoising, the degradation is still present. For the XLS-R teacher, on the 10 languages of the Babel and MATERIAL datasets, the average WER is 47.06%, which is 5.3% lower than the student model.
Despite this, our model still retains the cross-lingual representation ability. Figure 2 provides a visualization of the WERs on the Babel and MATERIAL datasets of 4 different pre-trained models. The results show that our model can achieve comparable or even better performance than the w2v-EN-60k and HuBERT-EN-60k models, which are pre-trained on a 60,000-hour English dataset. The experiments on 15 languages demonstrate the cross-lingual representation ability of the proposed models, even though they are trained on 960h of English data. Our model requires only 1 GPU for training with 50% fewer parameters, demonstrating the trade-off between training cost, computation, and performance. 6
Footnote 6: Our preliminary experiments also show that DistilXLSR significantly outperforms E2E or hybrid models trained with the same amount of labeled data. Detailed results can be found on our GitHub page.
### Ablation Studies
#### 4.2.1 The Effectiveness of Data Splicing
Figure 3(a) compares the performance of 4 models distilled from the XLSR-53 teacher. For all 6 low-resource languages, the application of data splicing reduces the word error rates, especially for Kurmanji-Kurdish, where models with data splicing outperform the multilingual distillation model while the model without data splicing does not. This phenomenon demonstrates the effectiveness of data splicing. Moreover, using a larger
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Split** & **Source** & **Languages** \\ \hline \multirow{6}{*}{Fine-tune} & MATERIAL & Tamil (ta), Farsi (fa) \\ \cline{2-3} & Common & Basque (eu), Dutch (nl), \\ & Voice & Greek (el), \\ & & Interlingua (ia), Polish (pl) \\ \cline{2-3} & Babel & Amharic (am), \\ & & Cantonese (yue), \\ & & Georgian (ka), Guarani (gn), \\ & & Kurmanji-Kurdish (ku), \\ & & Mongolian (mn), Pashto (ps), \\ & & Swahili (sw), Tagalog (tl) \\ \hline \multirow{6}{*}{Distillation} & MATERIAL & Farsi (fa), Somali (so) \\ \cline{2-3} & Babel & Amharic (am), Georgian (ka), \\ \cline{1-1} & & Guarani (gn), Javanese (jv), \\ \cline{1-1} & & Kazakh (kk), Mongolian (mn), \\ \cline{1-1} & & Pashto (ps), Vietnamese (vi) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The low-resource languages. Besides the fine-tuning languages, we also select 5 languages to compare the result between data splicing and real-world multilingual distillation.
Figure 2: Word error rates of 5 Babel languages of the HuBERT-EN-60k [22], w2v-EN-60k [13], XLSR-53, and the proposed method inferred without language models. Results of the first 2 models are from Zhao et al. [1].
amount of data can bring further improvement, suggesting the possibility of using large-scale unsupervised English datasets, such as the Libri-light [23] or Gigaspeech [24], to improve distillation or even pre-training.
For the multilingual distillation model, Amharic (am) and Somali (so) appear in the training set while Swahili (sw) and Tamil (ta) do not. Kurmanji-Kurdish (ku) and Tagalog (tl) are also absent, but both of them have similar languages from the same language family in the training set (Pashto and Farsi for Kurmanji-Kurdish, and Vietnamese for Tagalog). However, we do not find that the models behave differently in these 6 languages, which shows that the multilingual distillation model, and the proposed data splicing model, have learned the language-independent cross-lingual representation ability from the XLSR-53 teacher.
#### 4.2.2 The effectiveness of Layer-Jumping Initialization
Figure 3(b) compares the performance of the proposed model with continuous (e.g. 0-11) initialization and layer-jumping initialization, where we can observe a significant increase in the WERs for all the languages, proving the importance of fully exploiting the pre-trained weights. Figures 3(c) and 3(d) show the CKA interlayer similarities of these two models. It can be seen that the model with layer-jumping initialization better captures the interlayer similarity of the teacher model. In addition, the layer-jumping initialization allows the proposed model to learn the differences between the 22nd/23rd and the 24th layers, which is unclear without the layer-jumping initialization.
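For reference, an interlayer similarity analysis of this kind can be reproduced with the commonly used linear CKA measure; whether the paper uses the linear or a kernel variant is not stated in this excerpt, so the sketch below is illustrative only.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two layers' representations.

    X, Y: arrays of shape (n_samples, dim) holding hidden states of the same
    inputs at two different layers; features are mean-centred before comparing.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# An L x L matrix of pairwise layer similarities, comparable to the heat maps
# discussed above (hidden_states is a list of per-layer feature matrices):
# sims = [[linear_cka(hidden_states[i], hidden_states[j])
#          for j in range(len(hidden_states))] for i in range(len(hidden_states))]
```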
## 5 Discussions
The performance degradation on the Babel and MATERIAL datasets illustrates the importance of solving the underfitting problem. Structured pruning, although not yet successfully applied to large-scale pre-trained acoustic models, may have the potential to further preserve the performance without dedicated hardware. In addition, it is useful to validate the effectiveness of data splicing on large English datasets. We leave these questions for future work.
## 6 Conclusions
In this paper, we propose a method to distill cross-lingual speech representation model using only English data. Our experiments on 15 low-resource languages show that our proposed model can maintain the cross-lingual representation ability with 50% fewer parameters. Further experiments demonstrate the effectiveness of using layer-jumping initialization and applying data splicing. Our method provides compressed cross-lingual representation models and is also able to improve the cross-lingual performance of the English pre-trained models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{16}{c}{Languages} \\ \cline{2-17} & el & nl & eu & ia & pl & ta & ps & ku & sw & tl & am & gn & ka & mn & fa & Avg. \\ \hline XLSR-53 & 10.7 & 12.4 & 29.5 & 27.1 & 25.5 & 65.5 & 45.5 & 65.1 & 40.5 & 43.9 & 47.7 & 41.2 & 41 & 46.4 & 33.8 & 38.38 \\ \hline S1 & 14.2 & 14.9 & 33.8 & 34.4 & 28.8 & 69.8 & 50.5 & 65.6 & 45.3 & 49.8 & 50.6 & 48.5 & 47.7 & 52.8 & 43 & 43.31 \\ \hline XLS-R & 9.0 & 13.4 & 28.2 & 25.2 & 24.7 & 63 & 43.1 & 61.2 & 37.2 & 41.1 & 41.4 & 38.9 & 38.4 & 43.3 & 32.6 & 36.04 \\ \hline S2 & 13.2 & 14.6 & 29.4 & 34.8 & 28.9 & 67.7 & 49.2 & 67.2 & 43.8 & 48.2 & 48.6 & 46.2 & 45.7 & 50.6 & 40.6 & 41.91 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The word error rate for 15 low-resource languages. Fine-tuning parameters are set according to the best results in OpenASR21. S1 and S2 are distilled from XLSR-53 and XLS-R, respectively.
Figure 3: WERs for ablation studies, inferred with 4-gram LMs. |
2305.18302 | What We Know So Far: Artificial Intelligence in African Healthcare | Healthcare in Africa is a complex issue influenced by many factors including
poverty, lack of infrastructure, and inadequate funding. However, Artificial
intelligence (AI) applied to healthcare, has the potential to transform
healthcare in Africa by improving the accuracy and efficiency of diagnosis,
enabling earlier detection of diseases, and supporting the delivery of
personalized medicine. This paper reviews the current state of how AI
Algorithms can be used to improve diagnostics, treatment, and disease
monitoring, as well as how AI can be used to improve access to healthcare in
Africa as a low-resource setting and discusses some of the critical challenges
and opportunities for its adoption. As such, there is a need for a
well-coordinated effort by the governments, private sector, healthcare
providers, and international organizations to create sustainable AI solutions
that meet the unique needs of the African healthcare system. | Naome Etori, Ebasa Temesgen, Maria Gini | 2023-05-10T19:27:40Z | http://arxiv.org/abs/2305.18302v2 | # What We Know So Far: Artificial Intelligence in African Healthcare
###### Abstract
Healthcare in Africa is a complex issue influenced by many factors including poverty, lack of infrastructure, and inadequate funding. However, Artificial intelligence (AI) applied to healthcare has the potential to transform healthcare in Africa by improving the accuracy and efficiency of diagnosis, enabling earlier detection of diseases, and supporting the delivery of personalized medicine. This paper reviews the current state of how AI algorithms can be used to improve diagnostics, treatment, and disease monitoring, as well as how AI can be used to improve access to healthcare in Africa as a low-resource setting, and discusses some of the critical challenges and opportunities for its adoption. As such, there is a need for a well-coordinated effort by the governments, private sector, healthcare providers, and international organizations to create sustainable AI solutions that meet the unique needs of the African healthcare system.
## Introduction
The application of AI in healthcare dates back to the 1950s. Researchers at the Massachusetts Institute of Technology (MIT) developed “Project MAC”, a program to help analyze and interpret medical data and one of the first examples of AI in healthcare. MAC stood for “machine-aided cognition” [14].
In the 1960s, AI was used to create knowledge-based expert systems, that mimic the decision-making processes of human experts. The "MYCIN" system, developed at Stanford University to aid in diagnosing and treating infectious diseases, was an early example of an expert system in healthcare [13].
During the 1980s and 1990s, AI-based Machine learning algorithms could learn and adapt to new data without being explicitly programmed. This led to the development of programs that could analyze medical images, such as X-rays, to help diagnose diseases and AI algorithms used for diagnosing, identifying, and analyzing public health threats [15].
More recently, improvements in natural language processing (NLP) have made it possible to understand and interpret human language. NLP techniques have been applied in healthcare to extract and analyze data such as electronic health records (EHRs) and medical images, and to assist with tasks such as appointment scheduling and medication management. Increases in computing power, the availability of large amounts of data, and the rapid development of big data analytics have enabled AI applications. Deep learning (DL), a machine learning technology which involves training artificial neural networks (ANN) on large datasets, has been a major driver of recent AI advances. This has significantly impacted modernizing the global healthcare system to improve diagnosis and clinical care accuracy and efficiency. A convolutional neural network (CNN) is a type of DL algorithm that simulates the behavior of interconnected neurons in the human brain [15].
The recent surge in AI healthcare research in Africa indicates the potential of AI to improve patient outcomes and reduce the burden on the healthcare system. This paper discusses AI algorithms (such as expert systems, machine learning, deep learning, natural language processing, and image processing) and how they are applied in the African healthcare systems, including challenges in the application of AI systems in Africa as well as opportunities for AI adoption.
## Problem Definition
There has been a lot of research on AI in healthcare, but there has been minimal discussion about AI in African healthcare. AI researchers have also found a lack of diverse representation in healthcare AI models, particularly for low-resource languages. To fill this gap, we address two main research questions:
**RQ1:**_What role do AI algorithms play in African healthcare systems?_
**RQ2:**_How does a lack of resources impact AI implementation in Africa?_
Our goal was to understand and explain the current integration and implementation of AI in African healthcare systems to inspire future AI research in African Healthcare.
## Related Work
Many researchers have shown that AI has the potential to improve patient care and lower healthcare costs. Our review of the literature yielded the following topics for discussion.
### Recent AI Advances in Healthcare in Developed Countries
AI has made significant advancements in the healthcare industry in developed countries in recent years. This is because AI algorithms can handle and learn from the vast amounts of healthcare data generated, owing to advances in AI technologies. Deep learning, in particular, has shaped how we view AI tools today and is the source of the recent advancement in AI applications [1].
Sophisticated AI algorithms have been applied in diagnosis and treatment planning to analyze medical images, such as X-rays and CT scans, to help doctors diagnose diseases [12, 13, 14] and plan treatment. For example, [15, 16, 17, 18] show how an AI algorithm was able to detect lung cancer from CT scans with a very high degree of accuracy, comparable to that of an experienced radiologist.
Big data has created opportunities for AI-based machine learning (ML) algorithms, enabling them to learn from previous data to make accurate predictions about new data. This can assist healthcare providers in prioritizing their resources and intervening before a patient's condition deteriorates [2, 16, 17, 18].
Pharmaceutical companies use AI algorithms to identify new candidates for drug development and analyze the effects of different drugs [1, 19, 20, 21, 22, 23, 24].
AI-powered virtual assistants help triage patients and provide them with information about their condition and treatment options [1, 18].
The use of clinical decision support [16, 17] and medication management [19] has improved patient care while reducing the workload for healthcare professionals in hospitals.
### Recent AI Advances in African Healthcare
AI has enormous potential to help in the prediction and prevention of outbreaks of infectious diseases. A study using data from the World Health Organization (WHO) [20] examined the relationship between malaria incidence and various climate variables, found that specific patterns in the data were indicative of increased malaria risk, and used machine learning algorithms to create a model for predicting malaria incidence based on these patterns. Research conducted by [1] used past Ebola outbreak data to train an ML algorithm to predict the outcomes of Ebola patients with a high degree of accuracy, which could be used in future outbreaks to help guide clinical decision-making and improve patient outcomes.
AI can improve the accuracy of diagnoses and treatment recommendations in Africa. Due to a lack of resources for optimal healthcare provision, a shortage of trained physicians, and underfunded public healthcare facilities, ML models can be used to analyze medical images and provide diagnoses for chronic diseases such as cancer. [1] shows the application and implementation phases of oncological AI tools in Africa using patient cohorts in Africa.
### Status of AI in African Healthcare system
The application of AI in the African healthcare system is still in its early stages. However, there is growing interest and investment in the implementation of AI to improve different aspects of healthcare delivery in Africa. Nevertheless, the implementation of AI in Africa confronts numerous challenges [14]. Many African countries, for example, still have limited access to reliable electricity and to high-speed internet, making it challenging to implement and use AI systems. The successful implementation of AI in health care depends on the availability and dependability of the infrastructure [19].
For many years, African healthcare systems have suffered from man-made issues in institutional, human-resource, financial, technical, and political development [10]. Hence, most African countries cannot meet the fundamental requirements for effective healthcare systems. Ineffective service integration is linked to poor governance and human resource challenges [21] (Marais and Petersen, 2015). The lack of basic access to healthcare makes it even harder to implement AI solutions [20].
Additionally, the scarcity of trained professionals with expertise in AI in Africa is a major concern; hence, many healthcare systems are struggling to meet the rising demand for services while also facing significant shortages of trained health workers and essential medicines [12, 13]. As a result, most African countries rely heavily on developed nations for AI technologies to solve critical healthcare challenges due to inadequate funding [1].
Ethical considerations around the use of AI in healthcare in Africa, including issues related to data privacy and the potential for biased decision-making, could contribute to slow AI implementation in the African region [1].
### AI Gaps in Africa: Key Challenges
African healthcare systems are lagging in implementing AI mainly due to a lack of resources. Implementing AI in healthcare can be expensive, and securing funding for such initiatives can be challenging in most African countries. Ensuring that AI models are sustainable in the long term is also an important consideration [15].
* There is often a lack of high-quality data available for training machine learning models due to poor recordkeeping and inadequate infrastructure for data collection and storage. For example, doctors and nurses still use handwritten notes when seeing patients. This can make it difficult to develop accurate and effective AI systems for the African region. Prior studies have demonstrated disparate performance of AI models by race, especially on African datasets. The training datasets may not be representative of the African population. If the datasets used to train the models are not diverse or representative enough, the models may not be able to generalize well to new, unseen data from the African population [2, 1, 19, 20].
* Furthermore, the AI models may not have been specifically designed to handle the unique challenges present in African datasets. For example, African datasets may contain a greater variety of languages, dialects, and accents, and the models may not have been trained on data that reflects this linguistic diversity. In addition, there may be other cultural and societal factors that are unique to Africa that the models have not been designed to handle. According to estimates, 17% of the world's languages, many of which are spoken in Africa, are "low-resource languages" in the digital realm [10].
* Despite these challenges, there have been some efforts to improve healthcare in Africa. For example, some African governments have increased funding for healthcare, and there have been efforts to train more healthcare professionals and build new healthcare facilities, as many developing countries are attempting to move more towards universal healthcare coverage [1]. In addition, some international organizations and charities have provided assistance to boost the delivery of healthcare in Africa [12, 1].
## Methodology
This paper reviews the current state of AI in healthcare in Africa, including the challenges and opportunities for its adoption. A comprehensive search of the literature was conducted using several databases. The survey reviewed 30 journal papers obtained electronically through five scientific databases (Google Scholar, Scopus, IEEE, PubMed, and Science Direct) searched using three sets of keywords: (1) Artificial Intelligence in Africa (2) Artificial Intelligence in Healthcare (3) African Healthcare Systems. We limited our search to articles that made algorithmic contributions and addressed diseases in their applications. We disqualified papers that only provided opinions, theories, surveys, or datasets. This yielded a total of 12 papers.
## Findings
### RQ1: Role of AI Algorithms on African Healthcare
Based on our findings, different algorithms have been used to address different healthcare problems, such as expert systems, machine learning, and deep learning. Although expert systems have been extensively researched globally, little research has been conducted on their application in African healthcare systems. Expert systems can be beneficial, especially where a scarcity of trained physicians and medical personnel exists, as is usually the case in African countries. Expert systems can help African healthcare systems diagnose patients and select treatment plans without extensively trained medical personnel, where a decision must be made quickly to save lives [13]. An example is where an expert system has been combined with fuzzy logic to improve the diagnosis of chronic conditions like STDs, HIV/AIDS, cholera, abdominal pain, and diabetes in a decision support application in South Africa [14, 15, 16, 17].
Natural language processing (NLP) has been used in developing medical chatbots to diagnose patients in the early stages of disease or to use social media data for surveillance and monitoring of infectious disease outbreaks [18, 19]. The use of NLP in African healthcare is still in its infancy. [10] conducted mental health research following the outbreak of coronavirus (COVID-19), using Twitter data from Nigeria and South Africa. [18] demonstrated that Likita, a chatbot, could be used to diagnose common ailments and improve healthcare delivery in Africa.
Deep learning (DL) can process large amounts of data, such as images, and could potentially aid medical workers in decision-making, for example by analyzing X-ray images for multiple diseases. For instance, [19] used X-ray images to classify pneumonia with a validation accuracy of 93.73%. [20] also used X-ray data to diagnose early-stage tuberculosis using a DL algorithm and achieved 99% accuracy. [10] used three pre-trained DL models (Faster R-CNN, single-shot multi-box detector (SSD), and RetinaNet) for the microscopic diagnosis of malaria parasites in thick blood smears; malaria is one of the deadliest diseases in Sub-Saharan Africa. The results showed that Faster R-CNN had higher accuracy than the other two models used in the study, with an average precision of over 0.94. Hence, this approach can improve the accuracy and efficiency of malaria diagnosis compared to traditional methods.
ML models are used to predict and classify chronic diseases in Africa. They are gaining popularity for their simplicity and the availability of data in a few African countries. Table 1 summarizes some of the research on applying AI to address different diseases in African countries.
### RQ2: How does the lack of resources impact AI implementation in Africa?
African healthcare systems suffer from a lack of access to data, which is often a key ingredient in the development and training of AI systems, due to inadequate resources. A lack of infrastructure and investment in data collection and storage has made it difficult for organizations to obtain the data they need to develop and train AI systems [2, 1, 19, 20].
AI in African healthcare has generally been used in disease mapping, such as for HIV. ML techniques identify HIV predictors for screening in sub-Saharan Africa [17, 18], and have also been used for malaria prediction (Nkiruka, Prasad, and Clement, 2021). Even though African healthcare systems strive to implement AI to assist with healthcare operations, it is still difficult to respond to public health emergencies such as disease outbreaks, resulting in increased mortality and morbidity.
## Discussion
AI has been applied in various healthcare settings in Africa, as shown in Table 1. Hence, it has the potential to significantly improve Human Immunodeficiency Virus (HIV) care in Africa by providing more accurate diagnoses, predicting patient outcomes, and optimizing treatment programs. For example, (Bauer and Schedl, 2019) used DL to improve the accuracy of HIV diagnoses in sub-Saharan Africa, and (Mutai et al., 2021) applied ML approaches for building models to identify HIV predictors as well as to predict persons at high risk of infection; the XGBoost algorithm significantly improved the identification of HIV positivity, with mean F1 scores of 90% and 92% for males and females, respectively. Also, (Balzer et al., 2020) used ML to identify persons at high risk of HIV acquisition in rural Kenya and Uganda, and the results show that ML improved efficiency by 78%.
Malaria has been (and still is) a major public health crisis in Africa, with an estimated 435 million cases and 1.3 million deaths yearly (WHO, 2021). Experts say malaria slows economic growth in Africa by up to 1.3 percent per year. AI can be used in the fight against malaria in Africa; for example, (Taconet et al., 2021) used a data-driven and interpretable ML modeling algorithm (Random Forest) to explore malaria vector biting rates in rural Burkina Faso, and the results identified several aspects of the bio-ecology of the main malaria vectors. (Masinde, 2020) used a decision tree algorithm and climate data, producing accuracy results of 99%.
Furthermore, AI has the potential to significantly improve the response to Ebola outbreaks in Africa by providing more accurate and timely predictions, faster diagnoses, and more efficient use of resources; for example, Bayesian ML models were used to enable new in vitro leads (Anantpadma et al., 2019; Lane et al., 2019).
Research has highlighted the potential benefits of using AI to improve cancer diagnosis and treatment in Africa. For example, in predicting colorectal cancer recurrence and patient survival, an artificial neural network (ANN), a supervised ML approach, scored the highest AUC-ROC for recurrence (87%) and survival (82%) in South Africa (Achilonu et al., 2021). Random forest (ML) models have been used in breast cancer risk prediction among African women in Nigeria by (Macaulay et al., 2021); the Chi-Square selected features gave the best performance with 98.33% accuracy, 100% sensitivity, 96.55% specificity, and 98% AUC. Also, (Ahishakiye et al., 2020) predicted cervical cancer based on risk factors using an Ensemble Learning
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Disease** & **Country** & **Algorithm** & **Dataset** & **Accuracy** & **Reference** \\ \hline HIV/AIDS & S.Saharan.A & XGBoost & PHA & 90\% males, 92\% female & (Mutai et al., 2021) \\ \hline HIV/AIDS & Kenya, Uganda & - & Data from 16 communities & 78\% & (Balzer et al., 2020) \\ \hline Malaria & Burkina Faso & Random forest & georeferenced raster (SPOT) & 84\% & (Taconet et al., 2021) \\ \hline Ebola & W.Africa & Central Bayesian & 842 molecules & 83\% & (Anantpadma et al., 2019) \\ \hline Colorectal Cancer & S.Africa & Artificial NN & WDGMC CRC & 87.0\% & (Achilonu et al., 2021) \\ \hline Breast cancer & Nigeria & Random Forest & Surgical data (LASUTH) & 96.67\% & (Macaulay et al., 2021) \\ \hline Cervical cancer & African countries & Ensemble learning & SMOTE & 87.21\% & (Ahishakiye et al., 2020) \\ \hline COVID-19 & African Countries & Ensemble & (WHO) database & MAD = 0.0073, MSE = 0.0002, R2 & (Ibrahim et al., 2023) \\ \hline Diabetes & Nigeria & Binarized Naïve Bayes (BNB) & Collected (nairaland) & 87.08\% & (Oyebode and Orji, 2019) \\ \hline Coronary Heart Disease & South Africa & Naïve Bayes, SVM and Decision Tree & S.Africa-KEEL & higher than 70\% & (Gonsalves et al., 2019) \\ \hline Obstetric Fistula & Tanzania & Logistic Regression & Collected from CCBRT, Dar-es-Salaam & 86\% & (Fihavango et al., 2021) \\ \hline Malaria & Sub-Saharan & Decision Tree & (WHO) datasets & 75.10\% & (Masinde, 2020) \\ \hline \end{tabular}
\end{table}
Table 1: Application of AI and Machine Learning in Africa healthcare System
model with an accuracy of 87.21%.
Since the COVID-19 pandemic, the demand for the use of AI in the African healthcare sector to aid in the pandemic response has increased. Many African countries have been conducting studies in this area; for example, [22] conducted a study to predict daily COVID-19 cases spreading across the north, south, east, west, and central Africa regions and countries using ML, including an artificial neural network (ANN), an adaptive neuro-fuzzy inference system (ANFIS), and a support vector machine (SVM), and the results show high prediction accuracy.
According to the International Diabetes Federation (IDF), 24 million adults (ages 20-79) had diabetes, accounting for 416,000 deaths in the IDF Africa Region in 2021. AI can potentially improve diabetes care in Africa by improving access to care and providing more personalized treatment options. For example, using social media data and ML, [23] detected factors responsible for diabetes prevalence in Nigeria.
Management and treatment of coronary heart disease (CHD) in African healthcare systems have been a huge challenge. Recently, there has been growing recognition of the potential for AI to improve CHD treatment and care, for example by using historical medical data and ML models such as Naive Bayes, SVM, and Decision Tree (DT) to predict CHD [10].
WHO estimates that more than 2 million young women live with untreated obstetric fistula (OF) in Asia and sub-Saharan Africa. There has been limited research on the use of AI in obstetric fistula management and treatment in Africa. For example, [24] used data mining techniques to predict obstetric fistula in Tanzania.
This survey did not cover the entire African continent. Nevertheless, the results are promising, and the outlook for AI tools and algorithms to improve healthcare across the African continent is encouraging.
## Conclusions
AI has the potential to predict and control diseases, expand and augment service delivery, and address several lingering social inequities by improving the accuracy and efficiency of diagnosis, enabling earlier detection of diseases, and supporting the delivery of personalized medicine. However, several challenges must be addressed to realize its full potential, including the lack of infrastructure, limited access to data, and the need for regulatory frameworks. African countries must build the necessary infrastructure to support technological advancements. This can include investing in high-speed internet, data centers, and cybersecurity systems. Governments can help promote the development and adoption of AI by establishing clear AI regulatory standards. The private sector, healthcare providers, and partnerships with international stakeholders can design sustainable AI solutions that meet the unique needs of the African healthcare system.
## Future Work
AI and machine learning are critical to addressing healthcare inadequacies in African countries. AI models may perform poorly on African datasets. To improve the performance of AI models on such datasets, it will be necessary to create more diverse and representative datasets, as well as to design models that are specifically tailored to deal with the unique challenges presented in African data.
|
2304.00664 | What You See is Not What You Get: The Role of Email Presentation in
Phishing Susceptibility | Phishing is one of the most prevalent social engineering attacks that targets
both organizations and individuals. It is crucial to understand how email
presentation impacts users' reactions to phishing attacks. We speculated that
the device and email presentation may play a role, and, in particular, that how
links are shown might influence susceptibility. Collaborating with the IT
Services unit of a large organization doing a phishing training exercise, we
conducted a study to explore the effects of the device and the presentation of
links. Our findings indicate that mobile device and computer users were equally
likely to click on unmasked links, however mobile device users were more likely
to click on masked links compared to computer users. These findings suggest
that link presentation plays a significant role in users' susceptibility to
phishing attacks. | Sijie Zhuo, Robert Biddle, Lucas Betts, Nalin Asanka Gamagedara Arachchilage, Yun Sing Koh, Danielle Lottridge, Giovanni Russello | 2023-04-03T00:30:41Z | http://arxiv.org/abs/2304.00664v1 | # What You See is Not What You Get:
###### Abstract
Phishing is one of the most prevalent social engineering attacks that targets both organizations and individuals. It is crucial to understand how email presentation impacts users' reactions to phishing attacks. We speculated that the device and email presentation may play a role, and, in particular, that how links are shown might influence susceptibility. Collaborating with the IT Services unit of a large organization doing a phishing training exercise, we conducted a study to explore the effects of the device and the presentation of links. Our findings indicate that mobile device and computer users were equally likely to click on unmasked links, however mobile device users were more likely to click on masked links compared to computer users. These findings suggest that link presentation plays a significant role in users' susceptibility to phishing attacks.
## 1 Introduction
Phishing attacks have become a common method of cyberattack in recent years. Even with the existence of phishing filters and countermeasures, the end users are still the last line of defense against such attacks. Once a phishing email passes all the technical defenses and reaches the users' inbox, the users themselves have to make the right judgment. Users' phishing susceptibility can be influenced by many factors, including their level of education, knowledge and experience. Their ability to detect phishing emails and making good decisions is also influenced by their awareness, mood, beliefs and involvement in reading the email. The characteristics of the email can also influence users' decision-making [14, 43, 25].
Various human-centered solutions have been developed to help reduce users' phishing susceptibility. Phishing training is one of the most common approaches to educating about phishing and how to prevent it. Different training tools have been developed to help users identify phishing cues, including text-based materials, video-based training [37], and game-based training solutions [32, 40]. There is no doubt that phishing training can reduce users' susceptibility to phishing [5, 22], but periodic follow-up training is then necessary to help them maintain the level of knowledge and awareness [18, 29]. In practice, when checking emails, users usually focus on the email content and treat the emails as legitimate by default. Only when some cues in the email make them feel suspicious, do users change their mindset to validate the legitimacy of the email [39]. When reading emails, what the user sees is how their email client renders the email. We speculate that email clients also play a role in influencing users' phishing susceptibility.
The device used for checking emails may impact the email's presentation and how users interact with it. One common difference between computers and smartphones involves links. Phishers usually manipulate the phishing link to look trustworthy; they often hide the link behind a button or text to ensure their phishy URL is less visible to the user. On computers, users can easily view the landing page URL of an embedded link by hovering over it. However, on smartphones, users have to tap and hold the link to view the URL. This interaction is less familiar and more complicated for users, making them more likely to click on phishing links directly, and thus take the first step in being phished. Further, the limited physical size of smartphones leads to certain design choices in email clients, such as hiding certain details of the email (e.g. the sender's email address and utility function buttons), which may bias users' judgment. These issues suggest that users may be more susceptible to phishing when using mobile devices.
In this paper, we report on a study conducted with the
IT service unit of a large organization while it conducted a regular phishing training exercise. The study had two main aims: a) to investigate whether the use of different devices can influence users' tendency of clicking on phishing links; b) to study whether the visual presentation of phishing links influences users clicking behavior and thus their susceptibility to phishing. To the best of our knowledge, we are the first paper that explores these topics on a large scale.
The rest of this paper is organized as follows. In Section 2, we provide relevant background on the influence of UI design on users' behavior, and the differences in UI design of existing email clients. In Section 3, we explain our methodology, and how our study was conducted as part of a large-scale phishing training exercise. In Section 4, we present the results of our study and our analyses. Then in Section 5, we discuss our findings and suggest the implications. Lastly, in Section 6, we present our conclusions and suggest future directions.
## 2 Related Work
This section first recalls evidence from the research literature on how user interface (UI) design can influence user behaviors, and how design in email clients can affect security risks. Then we discuss the differences between computers and mobile devices for checking email, including the differences in the devices' physical characteristics, interaction methods, and users' security awareness. Finally, we give an overview of past research on the impact of phishing link presentation on user behavior and susceptibility to phishing.
### Influence of UI design
Many studies have shown that UI design can influence users' behavior and choices [2, 4, 30, 23, 31]. For instance, Schneider et al.'s research [31] shows that UI designers can nudge users into making certain choices unconsciously through the designs of interfaces. These nudging techniques have been used widely in commercial websites to attract customers. For example, one can increase the attractiveness of an option by placing it next to an unattractive option.
UI designs not only influence users' selection of options, but also their perceptions and attitudes toward the interfaces. Both Anwar et al.'s study [4] and Rendell et al.'s study [30] show that an appropriate selection of images, styles, and color schemes will encourage users' engagement with the website. In the example of online shopping websites, this would greatly increase users' purchase intentions and revisit intentions. In addition, Stojmenovic et al. show that visual appeal influences the perception of security [34]. Users are more likely to trust a website that is beautifully designed.
The impact of UI design on users can differ from individual to individual. Alves et al.'s review [2] shows that users with different personalities have different preferences for UI designs and aesthetics, and these differences can affect their task performance and information-seeking. It has been found that users tend to be more efficient when the interface is designed to match their personality [20].
The impact of UI design can also apply to email clients. Users' security awareness and ability to detect phishing emails can be improved by manipulating the UI elements in the email client. Anderson's study [3] explored the design of phishing warnings and showed that users gradually ignore warning messages after several occurrences. The use of polymorphic warning messages can slow down the forming of such habituation, which can help users stay alert longer. Petelka et al. [27] modified the embedded links in emails to help users pay more attention to the landing page URL. In their study, the most effective approach was to deactivate the link, and force the users to interact with the raw URL in the hover box to reach the landing page. These studies demonstrate the potential of improving users' security performance through changes in email client design.
### Computers vs. mobile devices
In recent years, the ubiquity of smartphones and tablets has led to a rising trend of using them for checking emails [9]. Due to the differences in functionality and characteristics, the interaction mode is different when using different devices. For instance, users interact with smartphones by directly tapping and swiping on the screen. In contrast, when using a computer, the interaction is done by controlling the cursor or typing keys on the keyboard. Besides, smartphones have smaller screens than computers, meaning smartphones can only display a limited amount of content at one time. When designing email clients for mobile devices, designers need to carefully choose important content to be displayed on the small screen, meanwhile supporting utility components for interaction and usability. This sometimes leads to hiding some details, such as the sender's full email address. From the security perspective, hiding the email address means that users would have one fewer cue which might help them identify malicious emails, and thus would lead to higher phishing susceptibility.
Moreover, the mobility of smartphones means that they are often used during casual situations or outdoor environments, which usually involve more distractions than using a computer. User performance can worsen in a distracting environment, since users have to spend additional working memory to suppress the distractions so that they can focus on their tasks [6]. When checking emails, less cognitive effort and attention spent on the email may lead to careless behaviors and a lower chance of detecting phishing emails, thus under higher risk of falling for phishing [25, 43].
Research has found that users' mindsets when using different devices are different. Breitinger et al.'s study [11] shows that users are more concerned about protecting their smartphones from attackers getting physical access, so they tend
to focus more on password security and messaging security when using their phones. Whereas for their computers, they tend to focus more on secure detection and virus scanners to protect their data and privacy. Further, users tend to have lower information security awareness (ISA) when using smartphones than computers [8, 11, 13, 21, 24]. Many users install security software on their computer, but only about one-third of users may consider it as necessary for their smartphone [11].
Most phishing-related training primarily focuses on computer users, including training materials and solutions. Thus, educated users usually have the mindset of how to deal with emails when using their computer, such as hovering over links before clicking. However, there is a large population of users who check their email using their smartphones or tablets. Due to the lack of specific training targeting mobile device users, these users could have less willingness and knowledge of how to view the URL of the link without opening the link.
### Visual presentation of phishing links
There has been some research on phishing email visual presentation. For example, users are more likely to trust emails with professional-looking visual presentation, including the presence of logos, copyright statements and other seemingly authentic cues [10, 41]. The same phishing message may display differently across clients because email clients have different ways of presenting messages. Some email clients have settings that disallow downloading of images, and in this case, logos are not shown to users. The presentation of an email in a desktop email client may be different from a mobile device email client because of the screen size.
The phishing link is one of the most important parts of phishing mail. For attackers, users clicking on the phishing link can be considered as an indicator of the potential success of phishing, and the link may allow the attacker to identify the email address of the user. The attractiveness of the phishing link plays an essential part in the success of the attack. There have been studies that focus on identifying techniques that have been used to manipulate phishing URLs to make them look legitimate to users [1, 15], and studies on developing tools to help users pay attention to links [36]. Attackers will often not display the full phishing URL directly in the email. The link is usually masked with text or buttons, so unless users hover over the link on a desktop or tap and hold the link on a smartphone, they will not see the actual phishing URL.
## 3 Method
In this section, we first introduce the research questions and our hypotheses. Then we explain our study setting, the materials used, and the data collection process.
### Research Questions and Hypotheses
Our study aims to answer two research questions. The first one is **RQ1: is user phishing susceptibility influenced by the devices they use when checking emails?** Our hypothesis is _H1: Users are more likely to click on phishing links when using email clients on mobile devices than on computers._
The second research question is **RQ2: Does the presentation of email phishing links influence users' phishing susceptibility?** We hypothesize that _H2: Users are more likely to click on phishing links when masked with buttons or hypertext than when shown as URLs._
Both buttons and hypertext are common in websites and emails. Both can be designed to look more visually attractive to the user than raw URLs. But in general, there is more flexibility in designing the visual presentation of a button than hypertext, hence users may be more likely to distinguish a button from the surrounding non-link content than hypertext. Also, buttons have more affordance for pushing and clicking, which may further motivate users to click. Hence, we formed an additional last hypothesis: _H3: Users are more likely to click on phishing links if they are hidden with HTML buttons, compared with being hidden with hypertext_.
Our data is categorical, indicating counts of users opening the email, clicking on links, and on which devices. We therefore use chi-squared tests to determine the significance of differences, and Pearson's phi (\(\phi\)) coefficient for effect size; our tests have one degree of freedom, so a value of 0.1 is considered a small effect size, 0.3 is moderate, and 0.5 is large [17]. Our hypotheses are directional, and with the 2x2 structure of the contingencies we can use the chi-squared residuals to test whether any difference is in the direction we hypothesize.
### Study Setting
In order to understand how users realistically interact with phishing emails, we need to study how users react to real emails received via their real email addresses. To do this, we collaborated with a large organization that regularly conducts cybersecurity training and phishing simulations to educate employees. The organization's IT services unit sends a simulated phishing email to all their employees, where those clicking on a simulated phishing link are directed to a webpage with training material about phishing.
We collaborated with the IT services unit to conduct our study within their training exercise. We collaborated with the organization's cybersecurity team and scheduled the training for November 2022, in which they sent out one phishing email to each staff member in the organization. We helped the team design the phishing email templates and the landing pages. At the end of the campaign, we received anonymous data from the cybersecurity team for further analysis. Our study protocol was reviewed and approved by our Human Participants Ethics Committee and by the organization's Chief
Information Security Officer.
Before the actual campaign, we pilot tested the phishing simulation with 12 members of the organization's cybersecurity team. We then tested within the whole IT service unit of 374 users to test with a larger, more diverse group. All this was to pilot test our process, and we do not present any data for these steps in the results. The main campaign was sent to over 12,000 employee email addresses in the organization.
### Study Materials
To study the second research question, we crafted three variations of a phishing email using the same sender address, subject, and main text. The email claims that the user has waiting email messages that must be checked and released from an online system. This mimics notifications sometimes sent in this organization. The organization uses the NIST Phish Scale [33] in phishing training exercises to assess the detection difficulty of phishing emails, which limits our choices of phishing emails and difficulty. For this campaign, we were asked by the IT service unit of the organization to use a phishing email template of moderate detection difficulty.
The three versions differ only in the visual presentation of the phishing link, as follows:
* Version 1 (raw URL): Display the real (simulated) phishing link "[https://foxypdf.nzs.net](https://foxypdf.nzs.net)" as clickable in the email. Note that we have slightly modified the domain name of the URL to keep the organization anonymized.
* Version 2 (button with text): Display the phishing link as a button, with the text "Release held emails". In HTML, this is done by placing a button tag inside a hyperlink tag.
* Version 3 (hypertext): Display as the clickable anchor text "Release held emails", instead of the real URL.
In Figure 1, we show how the three versions of the email appeared. In all three versions, the header details were the same. The sender was: **Security Gateway Notification <[email protected]>**, and the subject line: was **You have new on hold emails**. The actual URL and landing page were the same. The landing page itself, if users did open it, explained that this was a phishing simulation, and led to phishing education material.
In the campaign, we separated the users randomly into three groups, and members of each group received one of the variations of the phishing email.
### Data Collection
Working with the organization's cybersecurity team, we managed the simulated phishing email and collected data, using the open-source phishing framework GoPhish1. The GoPhish installation was installed and managed by the cybersecurity team.
Footnote 1: GoPhish, Open-Source Phishing Framework by Jordan Wright: [https://getgophish.com/](https://getgophish.com/)
Each email address used with GoPhish is given a unique anonymous ID. GoPhish then records three types of events: email sent, email opened, and link clicked. The email opened event data is collected by attaching a tracking image in the email that requires access to the GoPhish server, and the tracking image can be related to the tracking ID. The link clicked event is generated by the user clicking on the link in the email (via the raw URL, or button, or anchor text) that contacts the GoPhish server, and again can be related to the tracking ID. For each of these open and click events, GoPhish is able to record the timestamp of the event, the IP address, and the user-agent string. The user-agent string can then be used to identify the application, operating system, vendor, and/or version of the application that sends the user-agent request [12].
The data produced by GoPhish was first pre-processed to remove non-user actions. Due to the complexities in the email
Figure 1: The three campaign emails: raw URL, button, and anchor text
transferring system and the use of different email systems, some open/click attempts are generated before reaching the recipient's inbox, which creates additional noise in the data. Some email clients (such as Gmail and Apple Mail) route their emails through proxy servers or caching servers before the users receive the emails, which can generate open/click events that are not performed by the user. We filtered this noise by checking the IP address associated with the events, and removed the ones that came from hosting service companies. If the Autonomous System Number (ASN) of the IP address belongs to the organization of the email client or some hosting service companies instead of the user's local network provider, we determined it is likely that this event was not actually the result of the user activity.
Both the open and clicked events recorded by GoPhish include user-agent data, but the click event will indicate the client software, typically a web browser, used to display the landing page. For our analysis, we therefore used the user-agent string of the open event when classifying the platform used both to open the email, and to click the link: typically an email client.
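A minimal sketch of this pre-processing is shown below. The ASN lookup helper, the list of hosting organizations, and the user-agent keywords are hypothetical placeholders for illustration, not the exact rules used in the study.

```python
HOSTING_ORGS = {"google", "apple", "microsoft", "amazon", "cloudflare"}  # example proxy/prefetch providers
MOBILE_HINTS = ("android", "iphone", "ipad", "mobile")
DESKTOP_HINTS = ("windows nt", "macintosh", "x11; linux")

def is_user_event(ip, lookup_asn_org):
    """Keep an open/click event only if its source IP does not belong to a
    hosting or caching provider. `lookup_asn_org` is a hypothetical helper
    mapping an IP address to the organization that owns its ASN."""
    org = lookup_asn_org(ip).lower()
    return not any(name in org for name in HOSTING_ORGS)

def classify_platform(user_agent):
    """Classify the device used to open the email from the user-agent string
    of the open event (click events report the browser instead)."""
    ua = user_agent.lower()
    if any(hint in ua for hint in MOBILE_HINTS):
        return "mobile"
    if any(hint in ua for hint in DESKTOP_HINTS):
        return "computer"
    return "unclassified"  # e.g. the generic "Mozilla/5.0" reported by Apple Mail
```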
The image tracking system used by GoPhish to determine email openings is not perfect. As mentioned earlier, some email clients allow users to block images within emails, either optionally or by default. As also mentioned above, email systems (laudably) include measures to obfuscate email access in other ways. Moreover, user-agent strings are not always reliable, and are sometimes made vague for privacy and security reasons. We acknowledge that these issues constitute limitations in our study. However, we have no reason to believe that users who block images or are impacted by other security measures would be differently distributed across our three email version groups. We therefore propose that while our overall numbers may be affected, the differences in our conditions are worthy of notice.
## 4 Results
### Campaign Overview
The campaign started on a workday Monday at noon, and emails were sent to 12,639 email addresses. We received data until Wednesday afternoon, for a duration of just over two days. The data showed that between two and three thousand users opened the email. This number is much lower than the number of emails sent for several reasons. While the organization has several thousand current full-time employees, it provides email addresses to many part-time staff, and even maintains email addresses for former staff, and many of these people likely never read their emails. It is also possible some users may have seen the subject line and simply ignored the email as irrelevant. Moreover, we only received data for about two days, so there will have been users who opened the email after we received the data. Since most users who open email do so within the first few hours after receiving [26], we would only expect a small number of activities after two days, so we believe our results were not overly affected. This bounded timeline is utilized by the IT security team in order to scope the workload of the IT support teams who receive reported phishing emails and questions from employees.
We first inspected the data for unusual activity, as we expect usual user behavior to be at most a few clicks from a single email client. Thus we removed 27 users from the dataset who performed more than 20 interactions (including both email open activities and link click activities), and 22 users who clicked on the phishing link from multiple email clients; both these made it difficult to determine a single device and sequence of behavior. We also became uncertain of presentation issues relating to "web mail" clients, i.e., email clients running in a web browser. In particular, we learned that the organization configured their web mail service's default setting to disallow the automatic downloading of images, which prevented valid counts of email opening. We therefore decided to focus on email-specific application software, and filtered out results where email was opened within web browsers.
After filtering, the data showed 2,375 users who opened our email, about 20% of the total number sent. Of these, 285 clicked on the link, so about 12% of those who opened the email. These users generated 3,921 open actions, so many opened the email more than once, and there were 304 click actions, which means a few people clicked more than once using the same platform.
### Device Use
By processing the user-agent string of each event, we extracted the platform, operating system and email client used to perform the event. We grouped our data based on the device used for viewing the email: computers, mobile devices, and unclassified. The unclassified category mainly contains users who use Apple Mail and other email clients where it is not possible to identify the platform based on the user-agent string. Due to Apple Mail's mail privacy protection mechanism, the device used to open the email is not identified 2. From our observation, all platforms (Mac, iPad or iPhone) use the same user-agent at the time of writing ("Mozilla/5.0"). The distribution of the groups is shown in Table 1. The overall total count is slightly larger than the number of users who opened the email; this is because some users can open the email using multiple devices and email clients, but since these users at most clicked on one type of email client, our data is not affected.
Footnote 2: [https://www.apple.com/legal/privacy/data/en/mail-privacy-protection/](https://www.apple.com/legal/privacy/data/en/mail-privacy-protection/)
#### 4.2.1 H1 Testing
The data allows us to test Hypothesis H1: _Users are more likely to click on phishing links when using mobile devices
than on computers._
A chi-squared test was performed to examine if there was a connection between users link clicking behaviors and the device used for checking emails. For this chi-squared test, we removed the unclassified group because of the uncertainty of which kind of device was used. The result was significant, \(\chi^{2}(1,N=1352)=5.02,p=.025\) (Pearson residual: \(+1.4\)). Here, and below, we give the residual for clicks in the first stated condition - mobile devices, in this case. The positive value shows that mobile devices had more than the expected number of clicks were the categories the same. We thus conclude H1 is supported, and user behavior is significantly different when using different devices for checking emails. However, we calculated the phi coefficient for effect size as \(\phi=.06\), showing only a very small effect, so we conclude that the difference in clicking behavior attributable to the device used is quite minor.
These results are also shown in the first row of Table 2.
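As an illustration, the H1 test can be reproduced from the Table 1 counts (135 of 726 mobile-device openers clicked versus 88 of 626 computer openers). Note that SciPy applies Yates' continuity correction to 2x2 tables by default, so it is disabled here to match the reported statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[135, 591],    # mobile:   clicked, not clicked
                  [ 88, 538]])   # computer: clicked, not clicked
chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())
residuals = (table - expected) / np.sqrt(expected)   # Pearson residuals
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, phi = {phi:.2f}")      # ~5.02, ~0.025, ~0.06
print(f"residual (mobile, clicked) = {residuals[0, 0]:+.1f}")  # ~ +1.4
```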
We further carried out tests to explore whether the use of different devices might have an influence on users' clicking behavior. We compared the data for mobile devices and for computers, looking at each link presentation group. For the raw URL, the result was: \(\chi^{2}(1,N=396)<.01,p=.93\) (Pearson residual: \(-0.1\)), so no significant difference. In other words, when the raw URL is presented, the device does not matter.
For the button group, the result was: \(\chi^{2}(1,N=371)=5.39,p=.02,\phi=.12\) (Pearson residual: \(+1.3\)), and similarly for the hypertext group, it was: \(\chi^{2}(1,N=585)=5.98,p=.01,\phi=.10\) (Pearson residual: \(+1.6\)). These results broadly align with our findings above: it is link presentation, more than device type, that drives the differences in behavior. These results are also shown in Table 2.
We observed that for the raw URL group, out of the 163 users who opened the email using a computer email client, 8.0% of them clicked, and for those who opened it with mobile devices, out of 233 users, 7.7% clicked; these two click rates are reasonably close. However, we observed a difference when the phishing link is hidden from the user. For the button group, 10.9% out of the 138 computer users clicked on the link, whereas for mobile device users, 20.2% out of 233 users clicked. A similar difference can be observed for the hypertext group, out of the 325 computer users, 18.5% of them clicked on the link, and out of the 260 mobile device users, 26.9% of them clicked. These results add to our findings above, that link presentation interacts with device type to further shape users' behavior.
### Link Presentation
We further categorized our data into three groups based on the specific phishing email users received. During the campaign, we also observed that for the button version, the Windows Outlook application would display the button as hypertext. From the user's perspective, they are seeing the hypertext version, the same as if they had been assigned to the hypertext group. Therefore, we decided to move the Windows Outlook application data from the button group to the hypertext group. Table 3 shows the number of users who opened and clicked on the phishing link using computers and mobile devices in each condition. Based on these data, we performed a series of chi-squared tests, including tests between devices within each group, and tests of users' clicking behavior between groups, as shown in Table 4.
#### 4.3.1 H2 Testing
Hypothesis H2 stated: _Users are more likely to click on phishing links when masked with buttons or hypertext than when shown as URLs._
To test H2, we combined the button group and hypertext group into a combined masked text group, and compared it with the raw URL group, across all platforms, as shown in Table 5. We then performed a chi-squared test, and also calculated the effect size, with result: \(\chi^{2}(1,N=2614)=34.8,p<.001,\phi=.12\) (Pearson residual: \(+3.1\)). H2 is thus supported, and with a small to moderate effect size. We therefore conclude that there is a difference in user behavior when seeing a raw link vs. a link concealed by a button or hypertext.
The interpretation of the effect size of \(.12\) is small to moderate, but we note that the difference is between the raw link
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline & **Computer** & **Mobile** & **Unclassified** & **Sum** \\ \hline
**Users clicked** & 88 & 135 & 62 & 285 \\ \hline
**Users not clicked** & 538 & 591 & 1200 & 2329 \\ \hline
**Sum** & 626 & 726 & 1262 & 2614 \\ \hline \end{tabular}
\end{table}
Table 1: Distribution of the campaign users’ clicking actions using different devices
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline
**Condition** & \(\chi^{2}(1)\) **p-value** & **Effect size \(\phi\)** & **Pearson residual** \\ \hline
All click behavior & .025 & 0.06 & +1.4 \\ \hline
Raw URL click behavior & .93 & \(<.01\) & \(-0.1\) \\ \hline
Button click behavior & .02 & .12 & +1.3 \\ \hline
Hypertext click behavior & .01 & .10 & +1.6 \\ \hline \end{tabular}
\end{table}
Table 2: Click behavior for mobile devices vs. computers: chi-squared test results; residual shown for mobile device clicks
case of 42 clicked vs. 742 non-clicked (5% of total) and the masked link case of 152 vs. 877 (14% of total). In practice, this could be a substantial difference in phishing susceptibility.
We also performed a chi-squared test comparing each of the two masked text variations, the button version and the hypertext version, with the raw URL version. Both gave significant results. For the button variation, \(\chi^{2}(1,N=1585)=17.4,p<.001,\phi=.10\) (Pearson residual: \(+2.8\)), and for the hypertext version, \(\chi^{2}(1,N=1813)=41.27,p<.001,\phi=.15\) (Pearson residual: \(+4.0\)).
#### 4.3.2 H3 Testing
Hypothesis H3 stated: _Users are more likely to click on phishing links if they are hidden with HTML buttons, compared with being hidden with hypertext._
To test H3, we compared the hypertext group data with the button group, across all platforms. We then performed a chi-squared test, and also calculated the effect size, with result: \(\chi^{2}(1,N=1830)=5.2,p=.02,\phi=.05\) (Pearson residual: \(+1.3\)). The result is significant, but the residuals show the opposite to what we hypothesized. Users are slightly more likely to click on the hypertext version compared with the button version. H3 is thus not supported.
The above results are also shown in Table 4.
### Summary
As shown in Table 6, of the three hypotheses we proposed, all were significant at an alpha of .05, but only H2 had a non-negligible effect size. Users are more likely to click on the phishing link if it is displayed as a button or masked as hypertext, compared with the raw phishing URL. We observed an important difference in users' behavior when the phishing link is visually obscured. If the link is obvious, users' behavior is consistent regardless of the device. Whereas if the link is masked with a button or hypertext, users are more likely to click on the link when using mobile devices than when using computers. Of these two visual techniques, we hypothesized that the button version would make users more likely to click than the hypertext version, because of its affordance and visual appearance. However, in the study we observed a significant but opposite and very small effect.
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline
**Conditions** & \(\chi^{2}(1)\) **p-value** & **Effect size \(\phi\)** & **Pearson residual for clicks** \\ \hline
Masked vs. Raw URL & \(<.001\) & .12 & +3.1 \\ \hline
Button vs. Raw URL & \(<.001\) & .10 & +2.8 \\ \hline
Hypertext vs. Raw URL & \(<.001\) & .15 & +4.0 \\ \hline
Hypertext vs. Button & .02 & .05 & +1.4 \\ \hline \end{tabular}
\end{table}
Table 4: Click behavior over all platforms: chi-squared test results; residual shown for first condition shown
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline & **Computer** & **Mobile** & **Unclassified** & **Sum** \\ \hline
**Raw URL clicked** & 13 & 18 & 11 & 42 \\ \hline
**Raw URL not clicked** & 150 & 215 & 377 & 742 \\ \hline
**Button clicked** & 15 & 47 & 29 & 91 \\ \hline
**Button not clicked** & 123 & 186 & 401 & 710 \\ \hline
**Hypertext clicked** & 60 & 70 & 22 & 152 \\ \hline
**Hypertext not clicked** & 265 & 190 & 422 & 877 \\ \hline
**Sum** & 626 & 726 & 1262 & 2614 \\ \hline \end{tabular}
\end{table}
Table 3: Distribution of the campaign users’ clicking actions using different devices in each link presentation condition
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline & **Raw URL** & **Masked URL** & **Sum** \\ \hline
**clicked** & 42 & 243 & 285 \\ \hline
**not clicked** & 742 & 1587 & 2329 \\ \hline
**Sum** & 784 & 1830 & 2614 \\ \hline \end{tabular}
\end{table}
Table 5: Contingency table for raw URL vs. masked text
## 5 Discussion
As pointed out in Steves et al.'s study [33], "not all cues are created equally". Different phishing cues have different levels of detection difficulty and effect. For instance, displaying phishing links as raw URLs is a very obvious cue that could help users determine that the email is phishing. Our study shows that regardless of the device people use for checking emails, they are less likely to click when seeing the raw URL version. The ease of identifying such a cue is partly because users are able to see the raw phishing link without any interaction with the interface (e.g., hovering). The same phishing cue can be much more difficult to identify if it is masked with a button or hypertext. When no obvious cues are visible, users may process the email normally as if it were legitimate. Only when they see some cue, such as a link with an unfamiliar URL, might they become suspicious and change their mindset to be careful about potential phishing attacks. Relying on the extra step of users hovering over the link is therefore a danger, and our study found that more users clicked on the link when it was displayed as a button or hypertext. The process whereby suspicion is raised can be explained by the concept of the human mind as a dual process system [19], which has been applied in the domain of phishing susceptibility [16, 35, 38, 42]. This theory proposes that our mind has two reasoning systems: one is fast, effortless and based on heuristics, and the other is slow and requires systematic reasoning. When users see the raw phishing URL, their heuristic system is more likely to quickly raise a red flag and intuitively sense that something is wrong. This then triggers the systematic reasoning system to examine the email with more awareness, so they are less likely to click on the URL. Without such visibility, as Kahneman explains, the conclusion is often WYSIATI: "what you see is all there is".
The use of different devices makes the problem of phishing presentation more complex. Our study reveals that mobile device users were more likely to click on the link than computer users. This is in line with the literature [24, 8, 11, 21]. Moreover, our findings add nuance to the literature: phishing link presentation matters -- mobile users were not more susceptible for the raw URL but were for the masked URL. This suggests that the dangers of masked presentation are exacerbated for mobile users. Factors such as screen size, the interaction mechanics of viewing the full URL, and the external environment may contribute.
Detection of phishing cues may vary among individual users, as their prior knowledge and experience influence their identification of cues. Across organizations, users will have varying knowledge about URLs and web domains. This type of knowledge will shape users' ability to determine the legitimacy of links.
It was anticipated that the button presentation would result in a higher number of clicks due to its affordance and visual design. However, we found that the hypertext presentation of the link elicited more clicks. The legitimate security notification sent by the organization also uses hypertext links for releasing quarantined emails, which may make hyperlinks more familiar and trustworthy. It is possible that buttons are typically utilized for actions that affect a current web page, while hyperlinks are used for redirecting users to a different web page. Since users are not expecting to release quarantined emails within the email itself, they may be less likely to expect a button compared to a hyperlink that redirects them to a website. It is also possible that the click rates of phishing buttons and hyperlinks may be context-specific and depend on the topic of the phishing message.
The nature of the masking text may impact susceptibility. In our main study, we used 'Release held emails'. In our pilot study, we tested a different approach: we masked the phishing URL as a legitimate-looking URL from an internal domain that was familiar to all staff in the organization. We compared this approach with the raw URL version. This was piloted with 374 members of the IT service unit who were randomly divided into two groups for each variation. We observed a large difference in the number of user clicks between the two groups. The raw URL version resulted in a 3.8% click rate (3 clicks out of 79 users who opened the email), whereas the masked URL version resulted in a 19.6% click rate (11 clicks out of 56 users who opened the email). These users were from the IT service unit and were expected to have more experience in dealing with phishing emails, yet we still observed a substantial difference in the click rate. Our pilot study suggests that masking with a legitimate URL would be dangerous and would persuade users to click on the link. We chose not to include this 'legitimate URL' variation in the main campaign as we focused on generic phishing. Crafting links around internal legitimate domains that the users trust and are familiar with would be considered tailored phishing, which is outside the scope of the current paper. In addition, both we and the IT service unit thought this variation was too difficult for the general population. We will consider this in future campaigns.
The presentation of the same phishing email can vary depending on the email client used. We observed that the button (created using a button tag inside a hyperlink tag) is displayed as normal hypertext in the Windows Outlook client. The Windows Outlook client can display buttons correctly when they are built through other methods, such as modifying the CSS style of the hyperlink. If hyperlinks are more effective in attracting clicks than buttons, as we found in our study, this means that Windows Outlook users may be more susceptible than users of other clients, as they may see hyperlinks rather than buttons.
Masked links increase susceptibility, and the display of those masked links differs by device and by platform. When computer web mail users hover over the link, the landing page URL appears in the bottom left corner of the browser. In contrast, when computer application users hover, the landing page URL generally would be displayed next to the mouse cursor.
In some email clients, such as Outlook, the URL also appears on the bottom left corner of the application. The placement of URLs may impact the readability of the URL and cause a difference in the click rate. Moreover, hyperlink tags in HTML can be configured to include a tooltip with customizable content. While the tooltip attribute generally has no effect on the presentation of links in email client applications, it can mimic a hover box for web clients when a link is hovered. Attackers can exploit this technique to display the "real landing page URL" next to the phishing link, thereby tricking web client users into believing the link is genuine. In an extreme case, a web client may display three different URLs when hovering over a single link: the link is visually displayed as one link, then the tooltip displays a second link, and the browser displays the real URL (could be a third URL) at the bottom left corner.
Users' susceptibility to phishing can be reduced when the raw URL is easily visible. However, presenting raw URLs sacrifices usability for security. HTML rendering can make the text easier to understand, and can make the purpose of links clear, but it can also make text and links more deceptive. In particular, allowing links to be concealed is a gift to attackers.
Our research was not able to record hovering activities. It would be interesting to assess whether the users hovered over the link or not and how this impacts susceptibility. Others have used eye trackers to examine where people look when reading emails [28]. Unfortunately, eye-trackers are not yet feasible for the scale of organization-level field studies.
The design elements in email clients deserve more attention in future phishing research. For example, email service providers such as Gmail 3 hide the sender's email address when a user has replied to them. Given that most phishing emails are sent by unknown senders, highlighting the difference between known and unknown senders in the email client could serve as a helpful cue for identifying phishing emails. In light of the results of our study, a potential solution could be to display the raw URLs when the sender is unknown. Future research could manipulate design elements in email clients to demonstrate the potential in helping users protect themselves from phishing.
Footnote 3: [https://support.google.com/mail/answer/1311182?hl=en](https://support.google.com/mail/answer/1311182?hl=en)
### Limitations
The findings of our study may seem obvious, but our study provides evidence using a large-scale phishing campaign to validate that users are more susceptible to phishing when using mobile devices, and when seeing phishing links as masked text. This represents a danger that suggests a need for specific training or changes to email clients.
Our study has a number of limitations. First, the GoPhish framework uses invisible tracking images to determine email opening. If the email client does not allow the automatic downloading of pictures, this tracking method will fail. Therefore, we expect that we would have had more open attempts than are shown in our data. In our analysis, we discarded all link clicks that did not map to any open event. In reality, these click events were potentially legitimate user actions, but since we cannot identify the corresponding open attempt, we cannot determine the user-agent associated with the action. Hence this portion of the data is lost in our study.
Second, the classification of email clients and devices relies on the user-agent string, which is not always reliable. For instance, for Apple Mail users, by default, the user-agent associated with their actions would be "Mozilla/5.0" across all devices, and the users can manually select which user-agent to be used when sending messages. We are pleased to see the benefit of using such an approach to protect users' privacy, but at the same time, it creates extra difficulties for us in verifying the applications and devices.
Third, in our study, we assume that none of the emails are forwarded. We acknowledge that there will be a small group of users who may automatically forward their emails to a different account, but we are unable to separate these users from the rest. For these users, the sender of the forwarded email will be the users themselves, which could potentially influence their behavior because users may pay less attention to the original sender of the email. We assume that these users would be evenly distributed in the three groups, so our results would not be affected.
Fourth, we should note that the URL used in our study was not specially crafted to resemble one that the users would recognize and trust. Instead, we chose one that attackers adopted or hijacked for general use. Domain names for phishing are now used and abandoned frequently, typically less than 24 hours, to avoid blocklisting [7]. We therefore suggest our URL is realistic for many attacks, though not representative of specialized and targeted campaigns.
Finally, in our study, we moved the Windows Outlook application users from the button version to the hypertext version because the Outlook Windows application consistently displays the phishing URL as hypertext in both groups. To ensure that this migration of data will not affect the findings we present, we also performed the same test with Windows Outlook application users being removed from all conditions and recomputed our tests to compare the results. As shown in Table 7, the results are very similar.
Further investigations into this area will aid in the development of more robust solutions that can aid users in identifying and avoiding phishing scams. We encourage researchers to replicate our study and build upon the knowledge gathered to contribute to the ongoing efforts of improving online security.
## 6 Conclusions
In this paper we presented the results of a phishing campaign study conducted in a large organization. Our study focuses on the visual presentation of emails, and how it influences users' behavior. Our result shows that users are somewhat more susceptible to phishing when using mobile devices, compared to using computers. More importantly, we found that masking phishing links with buttons or hypertext has a greater impact in persuading users to click on the phishing link, as opposed to displaying the raw URL. With these masked URL conditions, mobile users were more susceptible compared to computer users. Our study identified novel link presentation variables that contribute to users' susceptibility to phishing. We recommend that future research further explore the presentation of phishing emails and how email clients can be designed to assist them in protecting themselves.
## Acknowledgments
T.B.A.
|
2305.17031 | Calibration method for complex permittivity measurements using s-SNOM
combining multiple tapping harmonics | Scattering-type scanning near-field optical microscopy (s-SNOM) enables
sub-diffraction spectroscopy, featuring high sensitivity to small spatial
permittivity variations of the sample surface. However, due to the near-field
probe-sample interaction, the quantitative extraction of the complex
permittivity leads to a computationally demanding inverse problem, requiring
further approximation of the system to an invertible model. Black-box
calibration methods, similar to those applied to microwave vector network
analysers, allow the extraction of the permittivity without detailed
electromagnetic modelling of the probe-sample interaction. These methods,
however, are typically designed for stationary setups. In contrast, the
distance between the sample and the probe tip of the s-SNOM is slowly
modulated, which is required for the lock-in detection used to extract the
near-field interaction buried in the far-field background. Here we propose an
improved calibration method that explicitly takes probe tapping into account.
We validate our method for an s-SNOM operating in a mid-infrared spectral range
by applying it to measurements of silicon microstructures of different but well
characterised doping. | Dario Siebenkotten, Bernd Kaestner, Arne Hoehl, Shuhei Amakawa | 2023-05-26T15:40:04Z | http://arxiv.org/abs/2305.17031v1 | Calibration method for complex permittivity measurements using s-SNOM combining multiple tapping harmonics
###### Abstract
Scattering-type scanning near-field optical microscopy (s-SNOM) enables sub-diffraction spectroscopy, featuring high sensitivity to small spatial permittivity variations of the sample surface. However, due to the near-field probe-sample interaction, the quantitative extraction of the complex permittivity leads to a computationally demanding inverse problem, requiring further approximation of the system to an invertible model. Black-box calibration methods, similar to those applied to microwave vector network analysers, allow the extraction of the permittivity without detailed electromagnetic modelling of the probe-sample interaction. These methods, however, are typically designed for stationary setups. In contrast, the distance between the sample and the probe tip of the s-SNOM is slowly modulated, which is required for the lock-in detection used to extract the near-field interaction buried in the far-field background. Here we propose an improved calibration method that explicitly takes probe tapping into account. We validate our method for an s-SNOM operating in a mid-infrared spectral range by applying it to measurements of silicon microstructures of different but well characterised doping.
## 1 Introduction
Scattering-type scanning near-field optical microscopy (s-SNOM) [1] enables sub-diffraction Fourier transform infrared nanospectroscopy (nano-FTIR) [2, 3] by focussing electromagnetic radiation of a desired spectral range onto the metalised probe of an atomic force microscope (AFM). The probe can be regarded as an optical antenna [4], locally coupling the incident radiation with the sample surface, with a spatial resolution determined by the probe-tip radius. The frequency-domain electric field phasor, \(E_{\mathrm{out}}\), coming out of the s-SNOM system after being scattered by the probe, depends on the permittivity of the sample due to the near-field interaction between the probe tip and the sample. The interferometric nano-FTIR detection scheme allows phase-resolved detection of \(E_{\mathrm{out}}=SE_{\mathrm{in}}\), where \(E_{\mathrm{in}}\) is the field incident on the s-SNOM system, and \(S\) is its complex-valued scattering coefficient [2], in which the complex permittivity of the sample is encoded. The aim of quantitative s-SNOM measurements (as opposed to qualitative imaging) is to extract the spatial distribution of the complex permittivity of the sample under test. This usually necessitates computationally expensive detailed electromagnetic modelling of the probe and the sample, which include assumptions about the shape of the probe and its optical properties [5, 6, 7, 8], and further approximation
of the system to an invertible model [9]. In principle, alternative calibration methods based on black-box models allow the extraction of the permittivity without detailed electromagnetic modelling of the probe-sample interaction. In this sense, the problem is comparable with that of scanning microwave microscopy (SMM), where permittivity measurement techniques in the GHz frequency range have been developed [10, 11, 12, 13] using a vector network analyser (VNA) [14]. In their pioneering work, Guo _et al._ recently applied a similar calibration method to an s-SNOM at THz frequencies [15]. They subsequently extended it to multi-layer samples [16]. However, such black-box calibration methods are typically designed for stationary setups [14]. In contrast, the distance between the probe tip and the sample of the s-SNOM is slowly modulated [17], so that the information about the near-field interaction, buried in the far-field background, can be extracted by high-order lock-in detection. However, methods of dealing with the tapping of the probe within such black-box calibration approaches do not appear to have been reported so far. Here we propose an improved calibration method that explicitly takes the probe tapping into consideration. We validate our method for an s-SNOM operating in the mid-infrared spectral range by applying it to measurements of silicon microstructures of different but well characterised doping.
## 2 Sketch of calibration and envelope-domain reconstruction
The nano-FTIR setup used for the validation is shown in Fig. 1(a). Broadband infrared synchrotron radiation, provided by the electron storage ring Metrology Light Source [18, 3], is used in an asymmetric Michelson interferometer, in which the probe above the sample is placed at the end of one arm, and a movable mirror at the end of the other. When the reference mirror is moved, the optical path difference between the two arms changes, and consequently so does the measured detector voltage. This mechanism is used to interferometrically extract the detector voltage \(V\) as a function of the frequency \(\nu\), or equivalently the wavenumber \(\tilde{\nu}\equiv\nu/c=1/\lambda\), with \(c\) being the speed of light in a vacuum, and \(\lambda\) the wavelength. This is in contrast with narrowband measurement systems, and complicates frequency/wavenumber-resolved measurements somewhat. In a VNA-based SMM, for example, a swept monochromatic signal source is used.
Fig. 1(b) shows the signal flow graph [19, 20] for the black-box "far-field-to-near-field adapter" model [15], in which \(e_{\mathrm{d}}(h,\tilde{\nu})\), \(e_{\mathrm{s}}(h,\tilde{\nu})\), and \(e_{\mathrm{r}}(h,\tilde{\nu})\) are the adapter coefficients that depend, in general, on the distance \(h\) between the probe tip and the sample surface and on the wavenumber \(\tilde{\nu}\). \(e_{\mathrm{d}}(h,\tilde{\nu})\), \(e_{\mathrm{s}}(h,\tilde{\nu})\), and \(e_{\mathrm{r}}(h,\tilde{\nu})\) are called the directivity, source match, and reflection tracking coefficients, respectively, in VNA calibration terminology [14], but these names
Figure 1: (a) Schematic optical beam paths as used in the broadband nano-FTIR measurements. Infrared radiation from the storage ring is split into a reference beam, directed to a movable mirror, and a sample beam. The latter is focussed on the probe tip by optics (not shown) and scattered back, carrying information about the sample. The reference beam and the scattered sample beam then interfere at the detector. (b) A signal flow graph for the adapter that abstracts the far-field-to-near-field conversion via the probe. \(E_{\mathrm{in}}(\tilde{\nu})\) is the travelling wave incident on the s-SNOM system. \(E_{\mathrm{out}}(h,\tilde{\nu})\) is the outgoing (scattered) travelling wave. \(S(h,\tilde{\nu})\) is the complex-valued scattering coefficient of the s-SNOM system. \(\Gamma(\tilde{\nu})\) is the reflection coefficient of the sample under test, assumed to be independent of the probe-tip height \(h\). \(e_{\mathrm{d}}(h,\tilde{\nu})\) represents sample-independent scattering of the incident wave \(E_{\mathrm{in}}(\tilde{\nu})\) by the s-SNOM optics. \(e_{\mathrm{s}}(h,\tilde{\nu})\) represents the reflection by the s-SNOM system of the near field reflected by the sample. \(e_{\mathrm{r}}(h,\tilde{\nu})\) is the total transmission coefficient of the s-SNOM optics.
may as well be forgotten in the s-SNOM context because they may have somewhat different meanings [15]. From Fig. 1(b), we obtain, using the known graph reduction rules [20], the following formula describing the height- and wavenumber-dependent scattering coefficient \(S(h,\tilde{\nu})\):
\[S(h,\tilde{\nu})\equiv\frac{E_{\rm out}(h,\tilde{\nu})}{E_{\rm in}(\tilde{\nu}) }=e_{\rm d}(h,\tilde{\nu})+\frac{e_{\rm r}(h,\tilde{\nu})\Gamma(\tilde{\nu})}{ 1-e_{\rm s}(h,\tilde{\nu})\Gamma(\tilde{\nu})}, \tag{1}\]
where \(\Gamma(\tilde{\nu})\) is the complex reflection coefficient of the sample. \(\Gamma(\tilde{\nu})\) is related to the complex permittivity \(\varepsilon(\tilde{\nu})\) of the sample as follows [21]:
\[\Gamma(\tilde{\nu})=\frac{\varepsilon(\tilde{\nu})-1}{\varepsilon(\tilde{\nu })+1}. \tag{2}\]
Note that \(\Gamma(\tilde{\nu})\) is a sample property and therefore independent of \(h\). The sample reflection coefficient can be determined by using the following formula derived from Eq. (1), if the values of the adapter coefficients are known:
\[\Gamma(\tilde{\nu})=\frac{S(h,\tilde{\nu})-e_{\rm d}(h,\tilde{\nu})}{\left[S( h,\tilde{\nu})-e_{\rm d}(h,\tilde{\nu})\right]e_{\rm s}(h,\tilde{\nu})+e_{ \rm r}(h,\tilde{\nu})}. \tag{3}\]
For a given probe-tip height \(h\), the adapter can be _calibrated_ or equivalently, the coefficients can be determined, by making three (or more) calibration measurements; that is, by measuring the scattering coefficients, \(S^{(m)}(h,\tilde{\nu})\)\((m=1,\;2,\;3)\), of three materials (or _calibration standards_) with known and sufficiently different reflection coefficients, \(\Gamma^{(m)}(\tilde{\nu})\)\((m=1,\;2,\;3)\). Inserting these values in Eq. (1) gives the simultaneous equations that can be numerically solved for \(e_{\rm r}(h,\tilde{\nu})\), \(e_{\rm d}(h,\tilde{\nu})\), and \(e_{\rm s}(h,\tilde{\nu})\).
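As an illustration of this step (not part of the original measurement workflow), note that Eq. (1) is linear in the combined unknowns \((e_{\mathrm{d}},\,e_{\mathrm{r}}-e_{\mathrm{d}}e_{\mathrm{s}},\,e_{\mathrm{s}})\), so the three-standard solve reduces to a \(3\times 3\) linear system for each height and wavenumber. A minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def calibrate(S, Gamma):
    """Solve Eq. (1) for (e_d, e_s, e_r) at a fixed height and wavenumber.

    S: three measured scattering coefficients S^(m); Gamma: the known
    reflection coefficients of the calibration standards.  Eq. (1) rearranges
    to S = x0 + x1*Gamma + x2*Gamma*S with x = (e_d, e_r - e_d*e_s, e_s).
    """
    S = np.asarray(S, dtype=complex)
    Gamma = np.asarray(Gamma, dtype=complex)
    A = np.column_stack([np.ones(3), Gamma, Gamma * S])
    x0, x1, x2 = np.linalg.solve(A, S)
    e_d, e_s = x0, x2
    e_r = x1 + e_d * e_s
    return e_d, e_s, e_r

def reflection(S, e_d, e_s, e_r):
    """Eq. (3): recover the sample reflection coefficient from a measurement."""
    return (S - e_d) / ((S - e_d) * e_s + e_r)
```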
Although the complex scattering coefficient \(S(h,\tilde{\nu})\) as defined in Eq. (1) is a frequency-domain quantity, it can be considered to depend periodically on time because of the _slow_ modulation of the probe-tip height, \(h(t)\), at an angular frequency \(\Omega\), where \(\Omega\) is orders of magnitude smaller than the mid-infrared frequency of \(E_{\rm in}(\tilde{\nu})\) and \(E_{\rm out}(h,\tilde{\nu})\): \(\Omega\ll 2\pi\nu\). Such a slowly time-varying component is called an _envelope_[22]. The time-dependent envelope-domain (i.e., mixed time- and frequency-domain) [22]\(S(h(t),\tilde{\nu})\) contains harmonic components at \(n\Omega\)\((n=0,\;1,\;2,\;\cdots)\) because the scattering depends nonlinearly on the height \(h(t)\). The slowly time-varying scattering coefficient can then be written as a Fourier series:
\[S(h(t),\tilde{\nu})=\sum_{n=-\infty}^{\infty}S_{n}(\tilde{\nu})\exp\left(-{ \rm i}n\Omega t\right), \tag{4}\]
where the complex-valued Fourier coefficients \(S_{n}(\tilde{\nu})\) are independent of time. The detector voltage phasors, \(V_{n}(\tilde{\nu})\), obtained by \(n\)th-order lock-in detection (or demodulation) can be considered to be proportional to \(S_{n}(\tilde{\nu})\), that is, \(V_{n}(\tilde{\nu})\propto S_{n}(\tilde{\nu})\). We will use this fact in the next section to establish the actual calibration equations to be used. Note also that the adapter coefficients in Eq. (3) also vary slowly and periodically in time _in such a way that makes \(\Gamma(\tilde{\nu})\) time-invariant_.
The importance of using height-dependent adapter coefficients for determining the sample reflection coefficient \(\Gamma(\tilde{\nu})\) can be demonstrated by simulations using the finite dipole model [5], which calculates \(S(h,\tilde{\nu})\) quasi-statically for different heights. We assume here that the height measured from the surface is given by
\[h(t)=\hat{h}\left[1+\cos\left(\Omega t\right)\right] \tag{5}\]
over a probe tapping period \(T\equiv 2\pi/\Omega\) at a wavenumber \(\tilde{\nu}\), with \(\hat{h}\) being the amplitude of the probe height modulation. The value of \(\Gamma(\tilde{\nu})\) to be measured by the s-SNOM has been chosen to be non-trivial; that is, different from \(\Gamma^{(m)}(\tilde{\nu})\)\((m=1,\;2,\;3)\) used for calibration. The result of using Eq. (4) and time-dependent adapter coefficients for reconstructing the value of \(\Gamma(\tilde{\nu})\) over one period \(T\) is shown in Fig. 2 (red straight lines). Although all quantities on the right-hand side of Eq. (3) change with time, the resulting \(\Gamma(\tilde{\nu})\) is constant and correct as it should be.
Also shown in Fig. 2 are the cases of including only the terms with \(0\leq n\leq 4\) and with \(n=2\) for both calibration and reconstruction. The \(\Gamma(\tilde{\nu})\) reconstructed from \(0\leq n\leq 4\) (blue curves) shows periodic deviations that grow towards the start and the end of the period. For this case, excellent agreement with the correct value is reached only around the centre of the period (\(t=T/2\)), corresponding to the minimal tip-sample distance. In contrast, the use of only \(n=2\) (because of the assumed use of the second-order lock-in detection) leads to a constant deviation from the correct value (purple straight lines). This illustrates that the use of multiple tapping harmonics, rather than just a single one, for the envelope-domain reconstruction is essential.
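The harmonic-truncation aspect of this behaviour can be illustrated with a short numerical sketch. Below, a simple exponential decay stands in for the finite-dipole-model scattering coefficient (a toy assumption, used only to show how the Fourier coefficients \(S_n\) of Eq. (4) are obtained and how truncating the series changes the envelope-domain reconstruction):

```python
import numpy as np

def S_of_h(h, decay=25e-9):
    """Toy stand-in for S(h): exponential near-field decay plus a background."""
    return 0.2 + np.exp(-h / decay)

h_amp = 50e-9                                     # tapping amplitude \hat{h}
t = np.linspace(0.0, 1.0, 1024, endpoint=False)   # one period, in units of T
h = h_amp * (1.0 + np.cos(2.0 * np.pi * t))       # Eq. (5)
S_t = S_of_h(h)

# Fourier coefficients S_n of Eq. (4); for this even modulation S_n = S_{-n}.
c = np.fft.rfft(S_t) / t.size

def reconstruct(n_max):
    """Envelope-domain reconstruction truncated at harmonic n_max."""
    S_rec = np.full_like(t, c[0].real)
    for n in range(1, n_max + 1):
        S_rec += 2.0 * (c[n] * np.exp(2j * np.pi * n * t)).real
    return S_rec

err_4 = np.max(np.abs(reconstruct(4) - S_t))     # truncation error with n <= 4
err_12 = np.max(np.abs(reconstruct(12) - S_t))   # shrinks as harmonics are added
```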
In the next section, we show that continuous-time (or fine-time-step) tracking of the adapter coefficients over a modulation period \(T\) is, actually, not necessary and establish the calibration equations used.
## 3 Calibration equations
In order to establish a calibration procedure allowing for tip-height modulation, we need to consider the typical number of higher harmonics that can be measured with acceptable signal-to-noise ratio. Here we assume that measurements up to the 4th tapping harmonic are possible. Although the result we saw in Fig. 2 might arouse some concern for using only \(n\leq 4\), it also suggests that the choice of right timing may give a good result. Eq. (4), together with Eq. (1), then reduces to the following approximation:
\[e_{\mathrm{d}}(t,\tilde{\nu})+\frac{e_{\mathrm{r}}(t,\tilde{\nu})\Gamma( \tilde{\nu})}{1-e_{\mathrm{s}}(t,\tilde{\nu})\Gamma(\tilde{\nu})}\approx\sum_{ n=-4}^{4}S_{n}(\tilde{\nu})\exp\left(-\mathrm{i}n\Omega t\right). \tag{6}\]
Note that we used a shorthand notation for the adapter coefficients in Eq. (6), ignoring the fact that they have different functional dependences on \(t\) and \(h(t)\) (cf. Eq. (1)). For air, \(\Gamma(\tilde{\nu})\) is zero in the infrared spectral range. Therefore, according to Eq. (2), \(e_{\mathrm{d}}(t,\tilde{\nu})\) does not depend on \(t\) or \(\tilde{\nu}\). As the tapping amplitude \(\hat{h}\) is kept small compared to the wavelength \(\lambda\), no variation of the scattered signal due to tapping in air is expected. Thus, \(S_{n}^{\mathrm{air}}=0\) for \(n>0\), and \(e_{\mathrm{d}}(t,\tilde{\nu})\) can be identified as
\[e_{\mathrm{d}}\equiv S_{0}^{\mathrm{air}}. \tag{7}\]
Since \(S_{0}^{\mathrm{air}}\) is usually hard to measure directly, we assume that for a tapping amplitude \(\hat{h}\) much larger than the probe tip radius, the electrical near-field has decayed sufficiently at time \(t=t_{\mathrm{max}}\) (\(t=0\) in Eq. (5), for example), when the probe tip is at its maximum distance from the sample surface [23], such that
\[S(t_{\mathrm{max}},\tilde{\nu})=\sum_{n=-4}^{4}S_{n}(\tilde{\nu})\exp\left(- \mathrm{i}n\Omega t_{\mathrm{max}}\right)\approx S_{0}^{\mathrm{air}} \tag{8}\]
holds. This leads to
\[S_{0}(\tilde{\nu})\approx S_{0}^{\mathrm{air}}-2\sum_{n=1}^{4}S_{n}(\tilde{ \nu})\cos\left(n\Omega t_{\mathrm{max}}\right), \tag{9}\]
where we used \(S_{n}(\tilde{\nu})=S_{-n}(\tilde{\nu})\). Then, \(S(t,\tilde{\nu})\), which also is a shorthand notation, can be written as
\[S(t,\tilde{\nu})\approx S_{0}^{\mathrm{air}}+2\sum_{n=1}^{4}\left[\cos\left( n\Omega t\right)-\cos\left(n\Omega t_{\mathrm{max}}\right)\right]S_{n}(\tilde{ \nu}). \tag{10}\]
Defining
\[\hat{S}(t,\tilde{\nu})\equiv 2\sum_{n=1}^{4}\left[\cos\left(n\Omega t\right)- \cos\left(n\Omega t_{\mathrm{max}}\right)\right]S_{n}(\tilde{\nu}) \tag{11}\]
Figure 2: Shown are the reconstructed real and imaginary parts of \(\Gamma\) versus time, plotted over one period \(T\equiv 2\pi/\Omega\) for \(\Gamma=1.32+0.24\mathrm{i}\). Solid lines are the envelope-domain reconstructions, dashed lines the temporal averages.
and combining equations (6), (7), and (10), the term \(S_{0}^{\rm air}\) drops out, and one obtains
\[\frac{e_{\rm r}(t,\tilde{\nu})\Gamma(\tilde{\nu})}{1-e_{\rm s}(t,\tilde{\nu}) \Gamma(\tilde{\nu})}=\hat{S}(t,\tilde{\nu}). \tag{12}\]
In principle, this equation could be used for the adapter calibration, provided the scattering coefficient, \(S(h,\tilde{\nu})\) in Eq. (1), or its transformed version, \(\hat{S}(t,\tilde{\nu})\) in Eq. (11), is measurable. However, unlike in a VNA, the scattering coefficient is not measured in our interferometric measurement setup. What are actually measured instead are the detector voltage phasors, \(V_{n}(\tilde{\nu})\), which are proportional to the Fourier components, \(S_{n}(\tilde{\nu})\), in Eq. (4). Taking the ratio of Eq. (12) at two different times, \(t_{0}\) and \(t_{1}\), during a modulation cycle eliminates the need to determine the proportionality constant.
\[\frac{\left[1-e_{\rm s}(t_{1},\tilde{\nu})\Gamma(\tilde{\nu})\right]e_{\rm r} (t_{0},\tilde{\nu})}{\left[1-e_{\rm s}(t_{0},\tilde{\nu})\Gamma(\tilde{\nu}) \right]e_{\rm r}(t_{1},\tilde{\nu})}=\frac{\hat{S}(t_{0},\tilde{\nu})}{\hat{S} (t_{1},\tilde{\nu})}=\frac{\hat{V}(t_{0},\tilde{\nu})}{\hat{V}(t_{1},\tilde{ \nu})}, \tag{13}\]
where, just as in Eq. (11),
\[\hat{V}(t,\tilde{\nu})\equiv 2\sum_{n=1}^{4}\left[\cos\left(n\Omega t \right)-\cos\left(n\Omega t_{\rm max}\right)\right]V_{n}(\tilde{\nu}). \tag{14}\]
There could actually be slight systematic deviations between calibration measurements. Such systematic deviations could result from slow temporal fluctuations of the light source and from unwanted reflections from the sample surface [5, 24, 25]. Taking a ratio as in Eq. (13), using two time instants of a single measurement, has the added benefit of mitigating the adverse effects of such systematic deviations.
Now \(e_{\rm r}(t_{0},\tilde{\nu})\) and \(e_{\rm r}(t_{1},\tilde{\nu})\) in Eq. (13) can be eliminated by using Eqs. (12) and (14) for, without loss of generality, the third calibration measurement. Thus, we obtain
\[\frac{\left[1-e_{\rm s}(t_{1},\tilde{\nu})\Gamma(\tilde{\nu})\right]\left[1-e _{\rm s}(t_{0},\tilde{\nu})\Gamma^{(3)}(\tilde{\nu})\right]}{\left[1-e_{\rm s }(t_{0},\tilde{\nu})\Gamma(\tilde{\nu})\right]\left[1-e_{\rm s}(t_{1},\tilde{ \nu})\Gamma^{(3)}(\tilde{\nu})\right]}=\frac{\hat{V}(t_{0},\tilde{\nu})\hat{V }^{(3)}(t_{1},\tilde{\nu})}{\hat{V}(t_{1},\tilde{\nu})\hat{V}^{(3)}(t_{0}, \tilde{\nu})}, \tag{15}\]
where \(\hat{V}^{(3)}(t_{0},\tilde{\nu})\) and \(\hat{V}^{(3)}(t_{1},\tilde{\nu})\) are the values of Eq. (14) from the third calibration measurement. The remaining unknowns to be determined in Eq. (15) are \(e_{\rm s}(t_{0},\tilde{\nu})\) and \(e_{\rm s}(t_{1},\tilde{\nu})\). A set of simultaneous equations for determining these can be obtained by inserting the results of the first and the second calibration measurements in Eq. (15) as follows:
\[\left\{\begin{array}{l}\frac{\left[1-e_{\rm s}(t_{1},\tilde{\nu})\Gamma^{(1 )}(\tilde{\nu})\right]\left[1-e_{\rm s}(t_{0},\tilde{\nu})\Gamma^{(3)}( \tilde{\nu})\right]}{\left[1-e_{\rm s}(t_{0},\tilde{\nu})\Gamma^{(1)}(\tilde{ \nu})\right]\left[1-e_{\rm s}(t_{1},\tilde{\nu})\Gamma^{(3)}(\tilde{\nu}) \right]}=\frac{\hat{V}^{(1)}(t_{0},\tilde{\nu})\hat{V}^{(3)}(t_{1},\tilde{\nu })}{\hat{V}^{(1)}(t_{1},\tilde{\nu})\hat{V}^{(3)}(t_{0},\tilde{\nu})}\\ \frac{\left[1-e_{\rm s}(t_{1},\tilde{\nu})\Gamma^{(2)}(\tilde{\nu})\right] \left[1-e_{\rm s}(t_{0},\tilde{\nu})\Gamma^{(3)}(\tilde{\nu})\right]}{\left[1-e _{\rm s}(t_{1},\tilde{\nu})\Gamma^{(3)}(\tilde{\nu})\right]}=\frac{\hat{V}^{( 2)}(t_{0},\tilde{\nu})\hat{V}^{(3)}(t_{1},\tilde{\nu})}{\hat{V}^{(2)}(t_{1}, \tilde{\nu})\hat{V}^{(3)}(t_{0},\tilde{\nu})}\end{array}\right. \tag{16}\]
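For concreteness, a possible numerical treatment of these equations is sketched below in Python (variable names and the choice \(t_{\mathrm{max}}=0\), consistent with Eq. (5), are ours; the analysis reported in the next section was carried out in Wolfram Mathematica). Once \(e_{\mathrm{s}}(t_{0},\tilde{\nu})\) and \(e_{\mathrm{s}}(t_{1},\tilde{\nu})\) are known, Eq. (15) can be inverted algebraically for the unknown \(\Gamma(\tilde{\nu})\).

```python
import numpy as np
from scipy.optimize import fsolve

OMEGA = 2.0 * np.pi   # tapping angular frequency in units where T = 1
T_MAX = 0.0           # time of maximum tip height, cf. Eq. (5)

def V_hat(V_n, t):
    """Eq. (14): envelope-domain signal built from the harmonics V_1..V_4."""
    return 2.0 * sum((np.cos(n * OMEGA * t) - np.cos(n * OMEGA * T_MAX)) * V_n[n - 1]
                     for n in range(1, 5))

def ratio(V_n, V3_n, t0, t1):
    """Right-hand side of Eq. (15) for one measurement and the third standard."""
    return (V_hat(V_n, t0) * V_hat(V3_n, t1)) / (V_hat(V_n, t1) * V_hat(V3_n, t0))

def calibrate_es(V1_n, V2_n, V3_n, G1, G2, G3, t0, t1):
    """Solve Eq. (16) numerically for e_s(t0) and e_s(t1)."""
    R1, R2 = ratio(V1_n, V3_n, t0, t1), ratio(V2_n, V3_n, t0, t1)

    def residual(x):
        es0, es1 = x[0] + 1j * x[1], x[2] + 1j * x[3]
        f1 = (1 - es1 * G1) * (1 - es0 * G3) / ((1 - es0 * G1) * (1 - es1 * G3)) - R1
        f2 = (1 - es1 * G2) * (1 - es0 * G3) / ((1 - es0 * G2) * (1 - es1 * G3)) - R2
        return [f1.real, f1.imag, f2.real, f2.imag]

    sol = fsolve(residual, x0=[0.1, 0.0, 0.1, 0.0])   # arbitrary starting guess
    return sol[0] + 1j * sol[1], sol[2] + 1j * sol[3]

def gamma_of_sample(V_n, V3_n, es0, es1, G3, t0, t1):
    """Invert Eq. (15) for the reflection coefficient of the unknown sample."""
    R = ratio(V_n, V3_n, t0, t1)
    A, B = 1 - es0 * G3, 1 - es1 * G3
    return (R * B - A) / (R * B * es0 - A * es1)
```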
## 4 Results and discussion
In the following, Eq. (16) serves as the set of calibration equations. We illustrate their use with the nano-FTIR setup introduced above (Fig. 1(a)).
To validate the calibration method, we apply it to a commercially available silicon sample, featuring several 2.5-\(\mu\)m wide n-type doped stripes of different doping levels, manufactured and characterised via secondary ion mass spectroscopy by Infineon Technologies [26, 27, 28]. Four differently doped stripes are used for the validation. These will hereafter be referred to as stripes 1 to 4, in the order of increasing doping. The doping concentration of stripe 1 is significantly lower than those of the other three stripes, making it indistinguishable from undoped silicon in the infrared spectral range. Three stripes (1, 2, and 4) are used as calibration standards, while stripe 3 is treated as the unknown sample under test. The permittivity \(\varepsilon(\tilde{\nu})\) of the doped Si is assumed to follow the Drude model for free electrons [29, 30]:
\[\varepsilon(\tilde{\nu})=\varepsilon_{\infty}\left(1-\frac{\tilde{\nu}_{\rm p}^ {2}}{\tilde{\nu}^{2}+{\rm i}\gamma\tilde{\nu}}\right), \tag{17}\]
where \(\varepsilon_{\infty}\approx 11.7\) is the high-frequency relative permittivity of Si [31], \(\gamma\) is the damping rate, and
\[\tilde{\nu}_{\rm p}=\frac{1}{2\pi c}\sqrt{\frac{Ne^{2}}{\varepsilon_{0} \varepsilon_{\infty}m^{*}}}, \tag{18}\]
is the wavenumber corresponding to the plasma frequency, with \(e\) being the elementary charge, \(\varepsilon_{0}\) the vacuum permittivity, \(N\) the free electron density, and \(m^{*}\) the effective electron mass, which approximately equals \(0.26\,m_{\text{e}}\) in Si [31], where \(m_{\text{e}}\) is the electron mass. The damping rate, which is inversely proportional to the electron mobility [29], was determined by using the relation described by Arora _et al._[32] for each of the calibration stripes.
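For reference, Eqs. (17) and (18) are straightforward to evaluate numerically; the sketch below (our own helper names, SI units throughout) computes the Drude permittivity from a nominal electron density and damping rate:

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

EPS_INF = 11.7            # high-frequency relative permittivity of Si
M_EFF = 0.26 * m_e        # effective electron mass in Si

def plasma_wavenumber(N):
    """Eq. (18): plasma wavenumber (m^-1) for electron density N (m^-3)."""
    return np.sqrt(N * e**2 / (epsilon_0 * EPS_INF * M_EFF)) / (2.0 * np.pi * c)

def drude_permittivity(nu, N, gamma):
    """Eq. (17): complex permittivity at wavenumber nu (m^-1), damping gamma (m^-1)."""
    nu_p = plasma_wavenumber(N)
    return EPS_INF * (1.0 - nu_p**2 / (nu**2 + 1j * gamma * nu))

# Example with the nominal values of stripe 3 (Table 1), evaluated at 850 cm^-1;
# the factors of 1e2 and 1e6 convert cm^-1 to m^-1 and cm^-3 to m^-3.
eps = drude_permittivity(850 * 1e2, N=5.5e19 * 1e6, gamma=390 * 1e2)
```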
Fig. 3(a) shows the magnitude of the measured second tapping harmonic component \(V_{2}(\tilde{\nu})\) in Eq. (14) versus the wavenumber \(\tilde{\nu}\) for all four stripes. Fig. 3(b) shows \(|V_{n}(\tilde{\nu})|\) (\(n=1,\ 2,\ 3,\ 4\)) of stripe 3, which serves as the test sample. For each spectrum, ten measurements were recorded consecutively, and the resulting spectra averaged. These averaged spectra are shown in Figs. 3(a) and (b). The topography combined with an optical image at a fixed interferometer position of the test stripe 3 is presented in the inset of (b). All spectral measurements were performed at the centre of each stripe, as indicated by the blue dot for the test stripe. Using all four tapping harmonics, the near-field signal \(\hat{V}(t,\tilde{\nu})\) can be reconstructed in the envelope-domain, and its magnitude is shown in Fig. 3(c) over two probe oscillation periods, \(2T\), for a fixed wavenumber of \(850\,\text{cm}^{-1}\). It changes synchronously with the probe oscillation (black dashed line), assumed here to be given by Eq. (5), with the minimal tip-sample distance corresponding to the maximum near-field intensity. The expected rapid increase in \(|\hat{V}(t,\tilde{\nu})|\) is seen at intermediate distances. However, the near-field signal intensity increases less strongly for smaller distances than expected for the corresponding exponentially growing near-field interaction. This might indicate that the actual tapping is not fully sinusoidal as in Eq. (5). Note, however, that we did not use Eq. (5) in our formulation. Thus, our calibration approach is not affected by the possibility of tapping being non-sinusoidal. From the envelope-domain plot, the two points in time, \(t_{0}\) and \(t_{1}\), needed for the calibration, can be chosen. Here, \(t_{0}\) is chosen to be \(t_{0}=T/2\), and \(t_{1}\) is chosen such that the probe is in downwards motion and the scattering amplitudes still differ significantly among the different stripes (1, 2, and 4); specifically, \(t_{1}=T/4\). The real and imaginary parts of the measured complex permittivity of the test sample are plotted with error bars in Fig. 3(d), together with the nominal permittivity (solid curves) estimated from manufacturer specifications (Table 1) and the Drude model. To validate the accuracy of the calibration, the Drude model Eq. (17) is fitted to the measured permittivity between 540 and 1100 cm\({}^{-1}\), with \(N\) and \(\gamma\) treated as fitting parameters. The best fit is plotted as the dashed black lines in Fig. 3(d).
The uncertainty of each point in the complex permittivity spectrum in Fig. 3(d) was estimated based on the uncertainty of the nominal electron density and measurement uncertainties. An uncertainty of 5% with a uniform distribution was assumed centred around the nominal electron densities of the calibration standards. The uncertainty of each harmonic of the measured data, \(V_{n}(\tilde{\nu})\), was assumed to be normally distributed and determined from the variance of the ten spectra. These uncertainties are propagated to the complex permittivity spectrum by calculating the standard deviation of 1000 repetitions of the calibration with the input spectra and electron densities varied randomly within these limits. To each of these repeated calibrations the Drude model fit was performed individually, and the standard deviations of the extracted electron density and damping rate were calculated from the resulting variation. The software _Wolfram Mathematica_ was used for the calibration and uncertainty propagation [33].
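A schematic of this Monte Carlo propagation is shown below (a Python sketch of the procedure described above; `run_calibration` is a placeholder for the full calibration of one perturbed input set, and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_RUNS = 1000

def propagate_uncertainty(V_mean, V_std_re, V_std_im, N_nominal, run_calibration):
    """Propagate input uncertainties to the extracted permittivity spectrum.

    V_mean:        complex array of the averaged harmonic spectra
    V_std_re/_im:  standard deviations of their real and imaginary parts
    N_nominal:     nominal electron densities of the calibration stripes
    """
    samples = []
    for _ in range(N_RUNS):
        # 5 % uniform spread around the nominal electron densities
        N_pert = [N * rng.uniform(0.95, 1.05) for N in N_nominal]
        # Gaussian perturbation of the measured harmonics
        V_pert = V_mean + rng.normal(0.0, V_std_re) + 1j * rng.normal(0.0, V_std_im)
        samples.append(run_calibration(V_pert, N_pert))
    samples = np.asarray(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```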
For further validation of the model, the other two highly doped stripes (stripes 2 and 4) were also individually analysed as unknown test samples, with the respective other three stripes used as calibration standards. The electron densities and damping rates extracted by fitting the resulting calibrated permittivities compared to their nominal values are presented in Table 1 for stripes 2, 3, and 4 as test samples. For all three stripes, the fitted electron densities and damping rates show very good agreement with the nominal values. It is particularly noteworthy that the damping rate was fitted as an independent parameter, whereas the nominal damping rate was calculated from the nominal electron density. Nevertheless, still very good agreement is achieved. This shows our method is capable of determining the local charge carrier density and mobility simultaneously. The extraction of the permittivity via fitting also reduces the uncertainty in the determined parameters significantly compared to the relatively large uncertainty in the individual permittivity values in Fig. 3(d). It is apparent that for the highest analysed doping (stripe 4), the uncertainty of the damping rate is significantly larger than those for the other two stripes. This can likely be explained by the nearly metallic behaviour of that stripe in the measured wavenumber regime, where large deviations in permittivity only lead to very slight variations in the scattered field.
\begin{table}
\begin{tabular}{c c c c c} Stripe & \(N_{\text{nom}}\) (cm\({}^{-3}\)) & \(N_{\text{meas}}\) (cm\({}^{-3}\)) & \(\gamma_{\text{nom}}\) (cm\({}^{-1}\)) & \(\gamma_{\text{meas}}\) (cm\({}^{-1}\)) \\ \hline
2 & \(1.2\times 10^{19}\) & \((1.3\pm 0.1)\times 10^{19}\) & \(350\) & \(400\pm 20\) \\
3 & \(5.5\times 10^{19}\) & \((5.2\pm 0.2)\times 10^{19}\) & \(390\) & \(350\pm 20\) \\
4 & \(1.5\times 10^{20}\) & \((1.0\pm 0.1)\times 10^{20}\) & \(400\) & \(380\pm 60\) \\ \end{tabular}
\end{table}
Table 1: Results of the Drude model fits to the calibrated permittivities for three different doping densities and comparison to their nominal values. For each test stripe, the other two highly doped stripes and the low-doped stripe 1 were used as calibration standards.
## 5 Conclusion and outlook
In conclusion, we developed a calibration method for nanoscale resolution permittivity measurement using s-SNOM, extending stationary black-box models used in scanning microwave microscopy and the work of Guo _et al._ for s-SNOM in the THz regime [15]. Our method takes probe tapping into account in extracting the time-invariant sample permittivity. To do so, multiple tapping harmonics of the measured detector voltage are used to take the slow temporal variation of the probe height into consideration. A decisive advantage of this calibration method, compared to conventional methods, is that no detailed knowledge or computationally expensive electromagnetic modelling of the probe is required. We validated the method by measuring Si microstructures of different doping levels. We extracted their respective electron densities and damping rates via fitting of the Drude model to the measured permittivities in the infrared spectral range. The extracted parameters showed very good agreement with the nominal values. While the uncertainties of the recovered permittivities are large in our example (Fig. 3(d)), the uncertainties of the extracted electron density and damping rate are significantly smaller (Table 1). In many applications, tunable lasers operating at a single wavelength can be used instead of broadband sources, which should significantly reduce the permittivity uncertainty. Further improvements in calibration accuracy are feasible by using calibration standards that are better characterised, thus known with smaller permittivity uncertainty. Our model extracts the sample permittivity directly, so samples are not limited to those that exhibit Drude-like behaviour. Combined with the nanoscale resolution s-SNOM offers, the proposed method promises both sensitive and quantitative characterisation of electronic nanostructures and quantum devices.
Figure 3: (a) Magnitude spectra of the second tapping harmonic of the detector voltage \(|V_{2}(\tilde{\nu})|\) of all four stripes of different doping. (b) Different tapping harmonics of the test sample (stripe 3). Inset: combined topography and optical image (\(3\times 5\,\text{\mu m}^{2}\)) at a fixed mirror position of the test stripe. The probe position for the measurement is indicated by the blue dot. (c) Envelope-domain reconstruction of the signal amplitude over two tip oscillation periods at a fixed wavenumber of 850 cm\({}^{-1}\) using the 1st to 4th harmonic. The dashed black line shows the distance variation between the probe tip and the sample. The vertical black lines mark the points in time, \(t_{0}\) and \(t_{1}\), used for the calibration. (d) The real and imaginary parts of the permittivity \(\varepsilon(\tilde{\nu})\) of the test sample obtained after the calibration using the other three stripes, together with vertical error bars. Solid curves show expected values according to the Drude model Eq. (17) and manufacturer specifications (Table 1). Dashed curves show the best fit of the Drude permittivity Eq. (17) using the electron density \(N\) and the damping rate \(\gamma\) as the fitting parameters.
## 6 Acknowledgement
We acknowledge fruitful discussions with Georg Gramse, Johannes Hoffmann, and Manuel Marschall.
## 7 Funding
The authors D.S., B.K. and A.H. have received funding from the project "20IND12 ELENA" within the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. B.K. and S.A. also received funding from the Japan Society for the Promotion of Science (Fellowship ID: S19133). S.A. was supported in part by JSPS KAKENHI (22H00217) and the MEXT Initiative to Establish Next-generation Novel Integrated Circuits Centers (X-NICS), Grant Number JPJ011438.
|
2310.14942 | Domain Watermark: Effective and Harmless Dataset Copyright Protection is
Closed at Hand | The prosperity of deep neural networks (DNNs) is largely benefited from
open-source datasets, based on which users can evaluate and improve their
methods. In this paper, we revisit backdoor-based dataset ownership
verification (DOV), which is currently the only feasible approach to protect
the copyright of open-source datasets. We reveal that these methods are
fundamentally harmful given that they could introduce malicious
misclassification behaviors to watermarked DNNs by the adversaries. In this
paper, we design DOV from another perspective by making watermarked models
(trained on the protected dataset) correctly classify some `hard' samples that
will be misclassified by the benign model. Our method is inspired by the
generalization property of DNNs, where we find a \emph{hardly-generalized
domain} for the original dataset (as its \emph{domain watermark}). It can be
easily learned with the protected dataset containing modified samples.
Specifically, we formulate the domain generation as a bi-level optimization and
propose to optimize a set of visually-indistinguishable clean-label modified
data with similar effects to domain-watermarked samples from the
hardly-generalized domain to ensure watermark stealthiness. We also design a
hypothesis-test-guided ownership verification via our domain watermark and
provide the theoretical analyses of our method. Extensive experiments on three
benchmark datasets are conducted, which verify the effectiveness of our method
and its resistance to potential adaptive methods. The code for reproducing main
experiments is available at
\url{https://github.com/JunfengGo/Domain-Watermark}. | Junfeng Guo, Yiming Li, Lixu Wang, Shu-Tao Xia, Heng Huang, Cong Liu, Bo Li | 2023-10-09T11:23:05Z | http://arxiv.org/abs/2310.14942v2 | # Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
###### Abstract
The prosperity of deep neural networks (DNNs) is largely benefited from open-source datasets, based on which users can evaluate and improve their methods. In this paper, we revisit backdoor-based dataset ownership verification (DOV), which is currently the only feasible approach to protect the copyright of open-source datasets. We reveal that these methods are fundamentally harmful given that they could introduce malicious misclassification behaviors to watermarked DNNs by the adversaries. In this paper, we design DOV from another perspective by making watermarked models (trained on the protected dataset) correctly classify some 'hard' samples that will be misclassified by the benign model. Our method is inspired by the generalization property of DNNs, where we find a _hardly-generalized domain_ for the original dataset (as its _domain watermark_). It can be easily learned with the protected dataset containing modified samples. Specifically, we formulate the domain generation as a bi-level optimization and propose to optimize a set of visually-indistinguishable clean-label modified data with similar effects to domain-watermarked samples from the hardly-generalized domain to ensure watermark stealthiness. We also design a hypothesis-test-guided ownership verification via our domain watermark and provide the theoretical analyses of our method. Extensive experiments on three benchmark datasets are conducted, which verify the effectiveness of our method and its resistance to potential adaptive methods. The code for reproducing main experiments is available at [https://github.com/JunfengGo/Domain-Watermark](https://github.com/JunfengGo/Domain-Watermark).
## 1 Introduction
Deep neural networks (DNNs) have been applied to a wide range of domains and have shown human-competitive performance. The great success of DNNs heavily relies on the availability of various open-source datasets (\(e.g.\), CIFAR [1] and ImageNet [2]). With these high-quality datasets, researchers can evaluate and improve their proposed methods. Currently, most of these datasets limit their usage to education or research purposes and prohibit commercial applications without authorization. How to protect their copyrights is of great significance [3; 4; 5; 6].
Currently, there are many classical methods for data protection, such as encryption [7; 8; 9], differential privacy [10; 11; 12], and digital watermarking [13; 14; 15; 16]. However, they are not able to protect the copyrights of open-source datasets since they either hinder the dataset accessibility (\(e.g.\), encryption) or require the information of the training process (\(e.g.\), differential privacy and digital watermarking) of suspicious models that could be trained on it.
To the best of our knowledge, backdoor-based dataset ownership verification (DOV) [3; 4; 5] is currently the only feasible approach to protect them, where defenders exploit backdoor attacks [17; 18; 19] to watermark the original dataset such that they can verify whether a suspicious model is trained on the protected dataset by examining whether it has specific backdoor behaviors. Recently, Li _et al_. [4] first discussed the 'harmlessness' requirement of backdoor-based DOV that the dataset watermark should not introduce new security risks to models trained on the protected dataset and proposed untargeted backdoor watermarks towards harmless verification by making the predictions of watermarked samples dispersible instead of deterministic (as a pre-defined target label).
In this paper, we revisit dataset ownership verification. We argue that backdoor-based dataset watermarks can never achieve truly harmless verification since their fundamental mechanism is making the watermarked model misclassify 'easy' samples (\(i.e.\), backdoor-poisoned samples) that can be correctly predicted by the benign model (as shown in Figure 1). It is with these particular misclassification behaviors that the dataset owners can conduct ownership verification. An intriguing question arises: _Is harmless dataset ownership certification possible to achieve_?
The answer to the aforementioned problem is positive. In this paper, we design dataset ownership verification from another angle, by making the watermarked model correctly classify some 'hard' samples that will be misclassified by the benign model. Accordingly, we can exploit this difference to design ownership verification while not introducing any malicious prediction behaviors to watermarked models that will be deployed by dataset users (as shown in Figure 1). In general, our method is inspired by the generalization property of DNNs, where we intend to find a _hardly-generalized domain_ for the original dataset. It can be easily learned with the protected dataset containing modified samples. Specifically, we formulate the domain generation as a bi-level optimization and leverage a transformation module to generate domain-watermarked samples; we propose to optimize a set of visually-indistinguishable modified data having similar effects to domain-watermarked samples as our _domain watermark_ to ensure the stealthiness of dataset watermarking; and we design a hypothesis-test-guided method to conduct ownership verification via our domain watermark at the end. We also provide theoretical analyses of all stages in our method.
In conclusion, the main contributions of this paper are four-fold: **1)** We revisit dataset ownership verification (DOV) and reveal the harmful drawback of methods based on backdoor attacks. **2)** We explore the DOV problem from another perspective, based on which we design a truly harmless DOV method via domain watermark. To the best of our knowledge, this is the first non-backdoor-based DOV method. Our work makes dataset ownership verification an independent research field instead of a sub-field of backdoor attacks. **3)** We discuss how to design the domain watermark and provide its theoretical foundations. **4)** We conduct experiments on benchmark datasets, verifying the effectiveness of our method and its resistance to potential adaptive methods.
Figure 1: The main pipeline of dataset ownership verification with backdoor-based dataset watermarks and our domain watermark, where BW Sample represents existing backdoor-watermarked sample while DW Sample represents our proposed domain-watermarked sample. Existing backdoor-based methods make the watermarked model (\(i.e.\), the backdoored DNN) misclassify ‘easy’ samples that can be correctly predicted by the benign model and therefore the verification is harmful. In contrast, our ownership verification is harmless since we make the watermarked model correctly predict ‘hard’ samples that are misclassified by the benign model.
## 2 Related Work
### Backdoor Attacks
Backdoor attack2[17; 23; 24] is a training-phase threat of DNNs, where the adversary intends to implant a _backdoor_ (\(i.e.\), the latent connection between the adversary-specified trigger pattern and the target label) into the victim model by maliciously manipulating a few training samples. The backdoored DNNs behave normally while their predictions will be maliciously changed to the target label whenever the testing samples contain the trigger pattern. In general, existing backdoor attacks can be divided into two main categories based on the property of the target label, as follows:
Footnote 2: In this paper, we focus on poison-only backdoor attacks, where the adversaries can only modify a few training samples to implant backdoors. Only these attacks can be used as the dataset watermark for ownership verification. Attacks with more requirements (\(e.g.\), control model training) [20; 21; 22] are out of our scope.
**Poisoned-Label Backdoor Attacks.** In these attacks, the target label of poisoned samples is different from their ground-truth labels. This is the most classical attack paradigm, and it makes hidden backdoors easier to implant. For example, BadNets [17] is the first backdoor attack, where the adversaries randomly modify a few samples from the original dataset by attaching a pre-defined trigger patch to their images and changing their labels to the target label. These modified samples (dubbed _poisoned samples_), together with the remaining benign samples, are packed as the _poisoned dataset_ that is released to victim users for training; after that, Chen _et al_. [25] improved the stealthiness of BadNets by introducing trigger transparency; Nguyen _et al_. [26] proposed a more stealthy backdoor attack whose trigger patterns were designed via image-warping; recently, Li _et al_. [4] proposed the first untargeted (poisoned-label) backdoor attack (\(i.e.\), UBW-P) for dataset ownership verification.
**Clean-Label Backdoor Attacks.** In these attacks, the target label of poisoned samples is consistent with their ground-truth labels. Accordingly, these attacks are more stealthy, compared to the poisoned-label ones. However, they usually suffer from low effectiveness, especially on datasets with a high image resolution or many classes, due to the _antagonistic effects_ of 'robust features' related to the target class contained in poisoned samples [27]. The label-consistent attack is the first clean-label attack, where the adversaries introduce untargeted adversarial perturbations before adding trigger patterns; after that, a more effective attack (\(i.e.\), Sleeper Agent [28]) was proposed, which crafts clean-label poisoned samples via bi-level optimization; recently, Li _et al_. [4] proposed UBW-C, which generates poisoned samples that lead to untargeted misclassifications in attacked DNNs.
### Data Protection
**Classical Data Protection.** Data protection is a classical and important research direction, aiming to prevent unauthorized data usage or protect data privacy. Currently, existing classical data protection can be roughly divided into three main categories, including **(1)** encryption, **(2)** digital watermarking, and **(3)** privacy protection. Specifically, encryption [29; 7; 8] encrypts the whole or parts of the protected data so that only authorized users who hold a secret key for decryption can use it; Digital watermarking [30; 31; 32] embeds an owner-specified pattern to the protected data to claim the ownership; Privacy protection focuses on preventing the leakage of sensitive information of the data in both empirical [33; 34; 35] and certified manners [10; 36; 12]. However, these traditional approaches are not feasible to protect the copyright of open-source datasets since they either hinder the dataset accessibility or require the information of the training process that will not be disclosed.
**Dataset Ownership Verification.** Dataset ownership verification (DOV) is an emerging topic in data protection, aiming to verify whether a given suspicious model is trained on the protected dataset. To the best of our knowledge, this is currently the only feasible method to protect the copyright of open-source datasets. Specifically, it intends to implant specific prediction (towards verification samples) behaviors in models trained on the protected dataset while not reducing their performance on benign samples. Dataset owners can conduct ownership verification by examining whether the suspicious model has these behaviors. Currently, all DOV methods [3; 4; 5] exploit backdoor attacks to watermark the unprotected benign dataset. For example, [3] adopted poisoned-label backdoor attacks while [5] adopted clean-label ones for dataset watermarking. Recently, Li _et al_. [4] first discussed the 'harmlessness' requirement of DOV that the dataset watermark should not introduce new security risks to models trained on the protected dataset and proposed the untargeted backdoor watermarks. However, there is still no definition of harmlessness and backdoor-based DOV methods
can never achieve truly harmless verification for they introduce backdoor threats. How to design a harmless DOV method is still an important open question.
## 3 Domain Watermark
### Preliminaries
**Threat Model.** Following existing works in dataset ownership verification [3; 4; 5], we assume that the defenders (\(i.e.\), dataset owners) can only watermark the _benign dataset_ to generate the _protected dataset_. They will release the protected dataset instead of the original benign dataset for copyright protection. Given a third-party suspicious model that may be trained on the protected dataset without authorization, we consider the _black-box setting_ where defenders have no information about other training configurations (\(e.g.\), loss function and model architecture) of the model and can only access it to obtain predicted probability vectors via its model API.
**The Main Pipeline of Dataset Watermark.** Let \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) denote the benign training dataset. We consider an image classification task with \(K\) classes, \(i.e.\), \(\mathbf{x}_{i}\in\mathcal{X}=[0,1]^{C\times W\times H}\) represents the image with \(y_{i}\in\mathcal{Y}=\{1,\cdots,K\}\) as its label. Instead of releasing \(\mathcal{D}\) directly, the dataset owner will generate and release its watermarked version (\(i.e.\), \(\mathcal{D}_{w}\)). Specifically, \(\mathcal{D}_{w}=\mathcal{D}_{m}\cup\mathcal{D}_{b}\), where \(\mathcal{D}_{m}\) consists of the modified version of samples from a small selected subset \(\mathcal{D}_{s}\) of \(\mathcal{D}\) (\(i.e.\), \(\mathcal{D}_{s}\subset\mathcal{D}\)) and \(\mathcal{D}_{b}\) contains the remaining benign samples (\(i.e.\), \(\mathcal{D}_{b}=\mathcal{D}-\mathcal{D}_{s}\)). The \(\mathcal{D}_{m}\) is generated by the defender-specified image generator \(G_{x}:\mathcal{X}\rightarrow\mathcal{X}\) and the label generator \(G_{y}:\mathcal{Y}\rightarrow\mathcal{Y}\), \(i.e.\), \(\mathcal{D}_{m}=\{(G_{x}(\mathbf{x}),G_{y}(y))|(\mathbf{x},y)\in\mathcal{D}_{s}\}\). For example, \(G_{x}=(\mathbf{1}-\mathbf{m})\odot\mathbf{t}+\mathbf{m}\odot\mathbf{x}\) and \(G_{y}=y_{t}\) in BadNets [17], where \(\mathbf{m}\in\{0,1\}^{C\times W\times H}\) is the trigger mask, \(\mathbf{t}\in[0,1]^{C\times W\times H}\) is the trigger pattern, \(\odot\) denotes the element-wise product, and \(y_{t}\) is the target label. In particular, \(\gamma\triangleq\frac{|\mathcal{D}_{m}|}{|\mathcal{D}_{w}|}\) is called the _watermarking rate_. All models trained on the protected dataset \(\mathcal{D}_{w}\) will have special prediction behaviors on \(G_{x}(\mathbf{x})\) for ownership verification. Specifically, let \(C:\mathcal{X}\rightarrow\mathcal{Y}\) denote a third-party suspicious model that could be trained on the protected dataset; existing backdoor-based methods will examine whether it conducted unauthorized training by testing whether \(C(G_{x}(\mathbf{x}))=y_{t}\). Since \(y_{t}\neq y\) in most cases, these backdoor-based watermarks are harmful.
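For concreteness, the watermarking pipeline described above can be sketched in a few lines of NumPy; the array shapes, the patch trigger, and the helper names below are illustrative assumptions rather than the actual implementation or the settings used in our experiments.

```python
import numpy as np

def badnets_generator(x, mask, trigger, y_target):
    # G_x = (1 - m) ⊙ t + m ⊙ x and G_y = y_t, following the notation above:
    # pixels where mask == 0 are overwritten by the trigger pattern.
    return (1.0 - mask) * trigger + mask * x, y_target

def build_watermarked_dataset(images, labels, mask, trigger, y_target, rate, seed=0):
    # Replace a random fraction `rate` (the watermarking rate γ) of the benign
    # dataset D with modified samples D_m and keep the remaining D_b untouched.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images_w, labels_w = images.copy(), labels.copy()
    for i in idx:
        images_w[i], labels_w[i] = badnets_generator(images[i], mask, trigger, y_target)
    return images_w, labels_w, idx

# Toy usage with made-up shapes: 100 RGB images of size 32x32 in [0,1], 10 classes.
images = np.random.rand(100, 3, 32, 32)
labels = np.random.randint(0, 10, size=100)
mask = np.ones((3, 32, 32)); mask[:, -4:, -4:] = 0.0   # 4x4 corner patch location
trigger = np.ones((3, 32, 32))                          # white patch
images_w, labels_w, idx = build_watermarked_dataset(images, labels, mask, trigger,
                                                    y_target=0, rate=0.1)
```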
### Problem Formulation
As described in previous sections, existing backdoor-inspired dataset ownership verification (DOV) methods [3; 4; 5] would cause malicious misclassification on watermarked samples to all models trained on the protected dataset, therefore they are harmful. _This limitation of backdoor-based DOV methods cannot be eliminated_ because their inherent mechanism is to lead the watermarked model to have particular mispredicted behaviors for verification, although the misclassification could be random and less harmful [18]. In this paper, we intend to _design a truly harmless DOV method so that the watermarked models will correctly classify watermarked samples_. Before we formally define the studied problem, we first provide the definition of harmful degree of a DOV method.
**Definition 1** (Harmful and Relatively Harmful Degree).: _Let \(\hat{\mathcal{D}}=\{(\mathbf{\hat{x}}_{i},y_{i})\}_{i=1}^{N}\) indicate a set of watermarked samples used for ownership verification of a DOV method, where \(\mathbf{\hat{x}}_{i}\) is the verification sample with \(y_{i}\in\mathcal{Y}\) as its ground-truth label (instead of its given label). Let \(\hat{C}\) and \(C\) represent classifiers trained on the protected and unprotected datasets, respectively. The harmful degree is \(H\triangleq\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}\{\hat{C}(\mathbf{\hat{x}}_{i})\neq y_{i}\}\) and the relatively harmful degree is \(\hat{H}\triangleq\frac{1}{N}\left(\sum_{i=1}^{N}\mathbb{I}\{\hat{C}(\mathbf{\hat{x}}_{i})\neq y_{i}\}-\sum_{i=1}^{N}\mathbb{I}\{C(\mathbf{\hat{x}}_{i})\neq y_{i}\}\right)\), where \(\mathbb{I}\{\cdot\}\) is the indicator function._
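Operationally, both quantities in Definition 1 are simple error-rate comparisons on the verification set; a minimal sketch (with hypothetical prediction arrays) is the following.

```python
import numpy as np

def harmful_degrees(pred_protected, pred_benign, y_true):
    # H: error rate of the model trained on the protected dataset over the
    # verification samples; H_rel: that error rate minus the error rate of the
    # benign model on the same samples (Definition 1).
    y_true = np.asarray(y_true)
    err_protected = np.mean(np.asarray(pred_protected) != y_true)
    err_benign = np.mean(np.asarray(pred_benign) != y_true)
    return err_protected, err_protected - err_benign

# Hypothetical predictions on N = 5 verification samples with ground-truth labels y_true.
H, H_rel = harmful_degrees([0, 1, 2, 2, 3], [0, 0, 2, 1, 3], [0, 1, 2, 1, 3])
print(H, H_rel)  # 0.2 and 0.0 for this toy example
```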
To design a harmless DOV method, we intend to make watermarked DNNs correctly classify some 'hard' samples that will be misclassified by the model trained on the unprotected benign dataset. Inspired by the generalization property of DNNs, we intend to find a _hardly-generalized domain_ of the benign dataset, which can be easily learned with the protected dataset containing the modified samples. In this paper, we call this watermarking method the _domain watermark_, defined as follows.
**Definition 2** (Domain Watermark).: _Given a benign dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), let \(C:\mathcal{X}\rightarrow\mathcal{Y}\) denote a model trained on \(\mathcal{D}\). Assume that \(G_{d}\) denotes a domain generator such that \(G_{d}(\mathbf{x}_{i})\) has the same ground-truth label as \(\mathbf{x}_{i}\) but belongs to a hardly-generalized domain, \(i.e.\), \(\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}\mathbb{I}\{C(\mathbf{x}_{i})=y_{i}\}\gg\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}\mathbb{I}\{C(G_{d}(\mathbf{x}_{i}))=y_{i}\}\). We intend to find a watermarked version of \(\mathcal{D}\) (i.e., \(\mathcal{D}_{d}\)) with watermarking rate \(\gamma\), such that the watermarked model \(\hat{C}\) trained on it has two properties:_ **(1)**_\(\frac{1}{N}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}\mathbb{I}\{\hat{C}(\mathbf{x}_{i})=y_{i}\}\geq\beta\) and_ **(2)**_\(\frac{1}{N}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}(\mathbb{I}\{\hat{C}(\mathbf{x}_{i})=y_{i}\}-\mathbb{I}\{\hat{C}(G_{d}(\mathbf{x}_{i}))=y_{i}\})\leq\tau\), where \(\beta,\tau\in[0,1]\) are given parameters. In this paper, \(\mathcal{D}_{d}\) is defined as the domain watermark of the benign dataset \(\mathcal{D}\)._
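The two conditions of Definition 2 can be checked empirically once a candidate protected dataset and a model trained on it are available; the sketch below assumes a black-box `model_predict` function and a domain generator `G_d` (both hypothetical names), with \(\beta\) and \(\tau\) chosen by the defender.

```python
import numpy as np

def satisfies_domain_watermark(model_predict, G_d, X, y, beta, tau):
    # Condition (1): benign accuracy of the watermarked model is at least beta.
    # Condition (2): its accuracy drop on the hardly-generalized domain G_d(X)
    # is at most tau.
    y = np.asarray(y)
    acc_benign = np.mean(model_predict(X) == y)
    acc_domain = np.mean(model_predict(G_d(X)) == y)
    return bool(acc_benign >= beta and (acc_benign - acc_domain) <= tau)

# Degenerate toy usage: a constant classifier and a trivial domain map.
X = np.random.rand(20, 8)
y = np.zeros(20, dtype=int)
print(satisfies_domain_watermark(lambda X: np.zeros(len(X), dtype=int),
                                 lambda X: 1.0 - X, X, y, beta=0.9, tau=0.1))
```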
### Generating the Hardly-Generalized Domain
As illustrated in Definition 2, finding a hardly-generalized target domain \(\mathcal{T}\) (with domain generator \(G_{d}\)) of the source domain \(\mathcal{S}\) is the first step of our domain watermark. To guide the construction of the domain \(\mathcal{T}\), we have the following Lemma 1.
**Lemma 1** (Generalization Bound [37]).: _The bound of expected risk on a given target domain \(\mathcal{T}\) is negatively associated with mutual information between features for source \(\mathcal{S}\) and target \(\mathcal{T}\) domains:_
\[\mathcal{R}_{\mathcal{T}}(f)\leq\mathcal{R}_{\mathcal{S}}(f)-4I(\mathbf{z};\mathbf{\hat{z}})+4H(Y)+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(p(\mathbf{z}),p(\mathbf{\hat{z}})), \tag{1}\]
_where \(\mathcal{R}_{\mathcal{T}}(f)=\mathbb{E}_{(\mathbf{\hat{x}},y)\sim\mathcal{T}}\left[\mathbb{I}\{C(\mathbf{\hat{x}})\neq y\}\right]\), \(\mathcal{R}_{\mathcal{S}}(f)=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{S}}\left[\mathbb{I}\{C(\mathbf{x})\neq y\}\right]\). \(I(\mathbf{z};\mathbf{\hat{z}})\) is the mutual information between features from \(\mathcal{S}\) and \(\mathcal{T}\). \(d_{\mathcal{H}\Delta\mathcal{H}}(p(\mathbf{z}),p(\mathbf{\hat{z}}))\) is the \(\mathcal{H}\Delta\mathcal{H}\)-divergence measuring the divergence of the feature marginal distributions of the two domains, and \(H(\cdot)\) is the entropy._
Lemma 1 reveals the upper bound of the generalization performance on \(\mathcal{T}\). Since \(d_{\mathcal{H}\Delta\mathcal{H}}(\cdot)\) is intractable and hard to directly optimize, and since [37] shows that \(I(\cdot)\) alone is enough for generalization across domains, _we propose to increase the expected risk on \(\mathcal{T}\) by minimizing \(I(\mathbf{z};\mathbf{\hat{z}})\)_.
Specifically, we formulate the design of the target domain \(\mathcal{T}\) (with the domain generator \(G_{d}(\cdot;\mathbf{\theta})\)) as a bi-level optimization, as follows:
\[\min_{\mathbf{\theta}}\mathbb{E}_{p(\mathbf{z},\mathbf{\hat{z}})}\ \left[I(\mathbf{z}(\mathbf{w^{*}});\mathbf{\hat{z}}(\mathbf{ \theta},\mathbf{w^{*}}))+\lambda_{1}\mathcal{L}_{c}(\mathbf{z}(\mathbf{w^{*}}),\mathbf{\hat{z} }(\mathbf{\theta},\mathbf{w^{*}}))\right], \tag{2}\] \[s.t.\ \mathbf{w^{*}}=\arg\min_{\mathbf{w}}\left[\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathcal{L}(f(G_{d}(\mathbf{x};\mathbf{\theta});\mathbf{w}),y)+\mathcal{ L}(f(\mathbf{x};\mathbf{w}),y)\right]-\lambda_{2}\mathbb{E}_{p(\mathbf{z},\mathbf{\hat{z}})}[I( \mathbf{z}(\mathbf{w});\mathbf{\hat{z}}(\mathbf{w}))]\right],\]
where \(\lambda_{1}\), \(\lambda_{2}\) are two positive hyper-parameters, and \(\mathcal{L}(\cdot)\) is the loss function (\(e.g.\), cross entropy).
Following previous works [37] in domain adaption and generalization, we propose to optimize the upper bound approximation for \(I(\mathbf{z};\mathbf{\hat{z}})\) instead of itself and leverage a transformation module consisting of multiple convolutional operations as \(G_{d}(\cdot;\mathbf{\theta})\) to generate the domain-watermarked image \(\mathbf{\hat{x}}\). Specifically, we aim to craft \(\mathbf{\hat{x}}\) via minimizing the upper bound approximation for mutual information [38] between \(\mathbf{x}\in\mathcal{D}\) and \(\mathbf{\hat{x}}\) in the latent feature space \(\mathbb{Z}\):
\[I(\mathbf{z};\mathbf{\hat{z}})=\mathbb{E}_{p(\mathbf{z},\mathbf{\hat{z}})}\left[\text{log}\ \frac{p(\mathbf{\hat{z}}|\mathbf{z})}{p(\mathbf{\hat{z}})}\right]\leq\mathbb{E}_{p(\mathbf{z},\mathbf{\hat{z}})}[\text{log}\ p(\mathbf{\hat{z}}|\mathbf{z})]-\mathbb{E}_{p(\mathbf{z})p(\bm {\hat{z}})}[\text{log}\ p(\mathbf{\hat{z}}|\mathbf{z})], \tag{3}\]
where \(\mathbf{z}\) and \(\mathbf{\hat{z}}\) are the latent vectors obtained by passing \(\mathbf{x}\) and \(\mathbf{\hat{x}}\) through \(f(\cdot;\mathbf{w})\)'s feature extractor. \(\mathcal{L}_{c}(\cdot)\) is the class-conditional maximum mean discrepancy (MMD) computed on the latent space \(\mathbb{Z}\) and proposed to limit the potential semantic information distortion between \(\mathbf{x}\) and \(\mathbf{\hat{x}}\), as follows:
\[\mathcal{L}_{c}(z,\hat{z})=\frac{1}{K}\sum_{j=1}^{K}\left(||\frac{1}{n_{s}^{j}} \sum_{i=1}^{n_{s}^{j}}\phi(\mathbf{z_{i}^{j}})-\frac{1}{n_{t}^{j}}\sum_{i=1}^{n_{t} ^{j}}\phi(\mathbf{\hat{z}_{i}^{j}})||^{2}\right), \tag{4}\]
where \(n_{s}^{j}\), \(n_{t}^{j}\) represent the numbers of \(\mathbf{x}\) and \(\mathbf{\hat{x}}\) from class \(j\), and \(\phi(\cdot)\) is the feature map induced by the kernel.
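For reference, the class-conditional MMD of Eq. (4) can be evaluated with the usual kernel trick, so the feature map \(\phi\) never needs to be written explicitly; the following NumPy sketch uses an RBF kernel whose bandwidth is an illustrative choice rather than the value used in our experiments.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def class_conditional_mmd(z_src, y_src, z_tgt, y_tgt, sigma=1.0):
    # Average over classes of ||mean φ(z^j) - mean φ(ẑ^j)||^2, expanded with
    # kernels as in Eq. (4) (biased estimator).
    vals = []
    for j in np.intersect1d(np.unique(y_src), np.unique(y_tgt)):
        Zs, Zt = z_src[y_src == j], z_tgt[y_tgt == j]
        vals.append(rbf_kernel(Zs, Zs, sigma).mean()
                    - 2.0 * rbf_kernel(Zs, Zt, sigma).mean()
                    + rbf_kernel(Zt, Zt, sigma).mean())
    return float(np.mean(vals))

# Toy latents: two classes, 16-dimensional features.
z_src = np.random.randn(40, 16); y_src = np.random.randint(0, 2, 40)
z_tgt = np.random.randn(30, 16) + 0.5; y_tgt = np.random.randint(0, 2, 30)
print(class_conditional_mmd(z_src, y_src, z_tgt, y_tgt))
```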
The configurations, parameter selections, and model architectures are included in Appendix A.
### Generating the Protected Dataset
Once we obtain the hardly-generalized domain generator \(G_{d}\) with the method proposed in Section 3.3, the next step is to generate the protected dataset based on it. Before we present its technical details, we first deliver some insight into the impact of data quantity on the domain watermark.
**Theorem 1** (Data Quantity Impact).: _Suppose in PAC Bayesian [39], for a target domain \(\mathcal{T}\) and a source domain \(\mathcal{S}\), any set of voters (candidate models) \(\mathcal{H}\), any prior \(\pi\) over \(\mathcal{H}\) before any training, any \(\xi\in(0,1]\), any \(c>0\), with a probability at least \(1-\xi\) over the choices of \(S\sim\mathcal{S}^{n_{s}}\) and \(T\sim\mathcal{T}_{\mathcal{X}}^{n_{t}}\), for the posterior \(f\) over \(\mathcal{H}\) after the joint training on \(S\) and \(T\), we have_
\[\mathcal{R}_{\mathcal{T}}\left(f\right) \leq\frac{c}{2(1-e^{-c})}\widehat{\mathcal{R}}_{T}(f)+\frac{c}{1- e^{-c}}\beta_{\infty}(\mathcal{T}\|\mathcal{S})\widehat{\mathcal{R}}_{ \mathcal{S}}(f)+\Omega \tag{5}\] \[+\frac{1}{1-e^{-c}}\left(\frac{1}{n_{t}}+\frac{\beta_{\infty}( \mathcal{T}\|\mathcal{S})}{n_{s}}\right)\left(2\mathrm{KL}(f\|\pi)+\ln\frac{2} {\xi}\right),\]
_where \(\widehat{\mathcal{R}}_{T}(f)\) and \(\widehat{\mathcal{R}}_{\mathcal{S}}(f)\) are the target and source empirical risks measured over target and source datasets \(T\) and \(S\), respectively. \(\Omega\) is a constant and \(\mathrm{KL}(\cdot)\) is the Kullback-Leibler divergence. \(\beta_{\infty}(\mathcal{T}\|\mathcal{S})\) is a measurement of discrepancy between \(\mathcal{T}\) and \(\mathcal{S}\) defined as_
\[\beta_{\infty}(\mathcal{T}\|\mathcal{S})=\sup_{(\mathbf{x},y)\in\mathrm{SUPP}( \mathcal{S})}\left(\frac{\mathcal{P}_{(\mathbf{x},y)\in\mathcal{T}}}{\mathcal{P}_ {(\mathbf{x},y)\in\mathcal{S}}}\right)\geq 1, \tag{6}\]
_where \(\mathrm{SUPP}(\mathcal{S})\) denotes the support of \(\mathcal{S}\). When \(\mathcal{S}\) and \(\mathcal{T}\) are identical, \(\beta_{\infty}(\mathcal{T}\|\mathcal{S})=1\)._
Theorem 1 reveals the upper bound of \(\mathcal{R}_{\mathcal{T}}\left(f\right)\) is negatively associated with the number of samples for source and target domains (\({\it i.e.}\), \(n_{t}\) and \(n_{s}\)). Assuming \(n_{t}\) is fixed, increasing \(n_{s}\) can still increase generalization on the target domain. As such, it is possible to combine some domain-watermarked samples with benign samples to achieve target domain generalization. Its proof is in Appendix B.
In general, the most straightforward method to generate our domain watermark for the protected dataset is to _randomly select a few samples \((\mathbf{x},y)\) from the original dataset \(\mathcal{D}\) and replace them with their domain-watermarked version_\((G_{d}(\mathbf{x}),y)\). However, as we will show in the experiments, the domain-watermarked image is usually significantly different from its original version. Accordingly, the adversaries may notice watermarked samples and try to remove them to bypass our defense. To ensure the stealthiness of our domain watermark, we propose to _optimize a set of visually-indistinguishable modified data \(\{(\mathbf{x}_{i}^{\prime},y_{i})|\mathbf{x}_{i}^{\prime}=\mathbf{x}_{i}+\mathbf{\delta}_{i}\}\) having similar effects to domain-watermarked samples_. This is also a bi-level optimization problem, as follows.
\[\min_{\mathbf{\delta}\subset\mathcal{B}}\ \left[\mathbb{E}_{(\mathbf{\hat{x}},y) \sim\mathcal{T}}[\mathcal{L}\left(f(\mathbf{\hat{x}};\mathbf{w}(\mathbf{\delta})),y) ]-\lambda_{3}\min\left\{\mathbb{E}_{(\overline{\mathbf{x}},y)\sim\overline{ \mathcal{T}}}[\mathcal{L}\left(f(\overline{\mathbf{x}};\mathbf{w}(\mathbf{\delta})),y)], \lambda_{4}\right\}\right], \tag{7}\] \[s.t.\ \mathbf{w}(\mathbf{\delta})=\arg\min_{\mathbf{w}}\left[\frac{1}{| \mathcal{D}_{s}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}_{s}}\mathcal{L}\left(f (\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{w}),y_{i}\right)+\frac{1}{|\mathcal{D}_{b}|} \sum_{(\mathbf{x}_{j},y_{j})\in\mathcal{D}_{b}}\mathcal{L}\left(f(\mathbf{x}_{j};\bm {w}),y_{j}\right)\right],\]
where \(\mathbb{E}_{(\overline{\mathbf{x}},y)\sim\overline{\mathcal{T}}}[\mathcal{L} \left(f(\overline{\mathbf{x}};\mathbf{w}(\mathbf{\delta})),y\right)]\) represents the expected risk for the watermarked model on other unseen domains (\({\it i.e.}\),\(\overline{\mathcal{T}}\)) and \(\mathcal{B}=\{\mathbf{\delta}:||\mathbf{\delta}||_{\infty}\leq\epsilon\}\) where \(\epsilon\) is a visibility-related hyper-parameter.
The second term in Eq.(7) prevents the watermarked model from achieving a generalization performance on other unseen domains similar to that on the target domain \(\mathcal{T}\), so as to preserve the uniqueness of \(\mathcal{T}\) for verification purposes. We introduce two parameters \(\lambda_{3}\) and \(\lambda_{4}\) to prevent the second term from dominating the optimization procedure. \(\lambda_{4}\) is set as \(\mathbb{E}_{(\overline{\mathbf{x}},y)\sim\overline{\mathcal{T}}}[\mathcal{L}(f(\overline{\mathbf{x}};\mathbf{w}^{\star}),y)]\), where \(\mathbf{w}^{\star}\) is obtained by training on the original dataset \(\mathcal{D}\). Please find more optimization details in Appendix C.
In particular, our domain watermark is clean-label, \(i.e.\), unlike most backdoor-based methods, we do not modify the labels of the modified samples. As such, it is more stealthy.
## 4 Dataset Ownership Verification via Domain Watermark
In this section, we introduce how to conduct dataset ownership verification via our domain watermark. The overview of the entire procedure is shown in Figure 2.
As described in Section 3.2, models trained on our protected dataset (with domain watermark) can correctly classify some domain-watermarked samples while other benign models cannot. Accordingly, given a suspicious third-party model \(f\), the defenders can verify whether it was trained on the protected dataset by examining whether the model has similar prediction behaviors on benign samples and their domain-watermarked version. _The model is regarded as trained on the protected
dataset if it has similar behaviors_. To verify it, we design a hypothesis-test-guided method following previous works [3; 4], as follows.
**Proposition 1**.: _Suppose \(f(\mathbf{x})\) is the posterior probability of \(\mathbf{x}\) predicted by the suspicious model. Let variable \(\mathbf{X}\) denote the benign image and variable \(\mathbf{X}^{\prime}\) its domain-watermarked version (\(i.e.\), \(\mathbf{X}^{\prime}=G_{d}(\mathbf{X})\)), while variables \(P_{b}=f(\mathbf{X})_{Y}\) and \(P_{d}=f(\mathbf{X}^{\prime})_{Y}\) indicate the predicted probability on the ground-truth label \(Y\) of \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\), respectively. Given the null hypothesis \(H_{0}:P_{b}=P_{d}+\tau\) (\(H_{1}:P_{b}<P_{d}+\tau\)), where the hyper-parameter \(\tau\in[0,1]\), we claim that the suspicious model is trained on the protected dataset (with \(\tau\)-certainty) if and only if \(H_{0}\) is rejected._
In practice, we randomly sample \(m\) different benign samples to conduct the pairwise T-test [40] and calculate its p-value. The null hypothesis \(H_{0}\) is rejected if the p-value is smaller than the significance level \(\alpha\). Besides, we also calculate the _confidence score_\(\Delta P=P_{b}-P_{d}\) to represent the verification confidence. _The smaller the \(\Delta P\), the more confident the verification_.
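In code, the verification step of Proposition 1 reduces to a one-sided paired t-test on the two probability vectors; a minimal sketch using SciPy is given below, where the margin \(\tau\), the significance level \(\alpha\), and the toy numbers are the defender's (here, hypothetical) choices.

```python
import numpy as np
from scipy import stats

def verify_ownership(p_benign, p_domain, tau=0.2, alpha=0.01):
    # Test H0: P_b = P_d + tau against H1: P_b < P_d + tau on m paired samples.
    # Rejecting H0 (small p-value) supports the claim that the suspicious model
    # was trained on the protected dataset.
    p_benign, p_domain = np.asarray(p_benign), np.asarray(p_domain)
    _, p_value = stats.ttest_rel(p_benign, p_domain + tau, alternative='less')
    delta_p = float(np.mean(p_benign - p_domain))  # confidence score ΔP
    return p_value < alpha, float(p_value), delta_p

# Hypothetical probabilities on the ground-truth label for m = 6 benign images
# and their domain-watermarked versions.
print(verify_ownership([0.95, 0.90, 0.92, 0.97, 0.91, 0.94],
                       [0.88, 0.85, 0.90, 0.93, 0.87, 0.90]))
```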
**Theorem 2**.: _Let \(f(\mathbf{x})\) be the posterior probability of \(\mathbf{x}\) predicted by the suspicious model, let variable \(\mathbf{X}\) denote the benign sample with label \(Y\), and let variable \(\mathbf{X}^{\prime}\) be the domain-watermarked version of \(\mathbf{X}\). Assume that \(P_{b}\triangleq f(\mathbf{X})_{Y}>\eta\). We claim that dataset owners can reject the null hypothesis \(H_{0}\) at the significance level \(\alpha\), if the verification success rate (VSR) \(V\) of \(f\) satisfies that_
\[\sqrt{m-1}\cdot(V-\eta+\tau)-t_{\alpha}\cdot\sqrt{V-V^{2}}>0, \tag{8}\]
_where \(t_{\alpha}\) is the \(\alpha\)-quantile of the t-distribution with \((m-1)\) degrees of freedom and \(m\) is the sample size._
In general, Theorem 2 indicates that our dataset verification can succeed if the VSR of the suspicious model \(f\) is higher than a threshold (which is not necessarily 100%). In particular, the assumption of Theorem 2 can be easily satisfied by using benign samples that can be correctly classified with high confidence. Its proof is included in Appendix D.
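Numerically, the condition in Eq. (8) yields a minimal VSR above which the verification is guaranteed to succeed; the helper below interprets \(t_{\alpha}\) as the usual (upper) critical value of the t-distribution, which is an interpretation choice on our part, and the parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def verification_succeeds(V, m, alpha=0.01, eta=0.9, tau=0.2):
    # Check sqrt(m-1) * (V - eta + tau) - t_alpha * sqrt(V - V^2) > 0, cf. Eq. (8).
    t_alpha = stats.t.ppf(1.0 - alpha, df=m - 1)
    return bool(np.sqrt(m - 1) * (V - eta + tau) - t_alpha * np.sqrt(V - V**2) > 0)

# With m = 30 queries, eta = 0.9 and tau = 0.2, the threshold sits between 0.7 and 0.9.
for V in (0.5, 0.7, 0.9):
    print(V, verification_succeeds(V, m=30))
```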
## 5 Experiments
In this section, we conduct experiments on CIFAR-10 [1] and Tiny-ImageNet [41] with VGG [42] and ResNet [43], respectively. Results on STL-10 [44] are in Appendix F.
### The Performance of Domain Watermark
**Settings.** We select seven baseline methods, including three clean-label backdoor watermarks (\(i.e.\), Label-Consistent, Sleeper Agent, and UBW-C) and four poisoned-label watermarks (\(i.e.\), BadNets, Blended, WaNet, and UBW-P). Following the previous work [4], we set the watermarking rate \(\gamma=0.1\) and the perturbation constraint \(\epsilon=16/255\) in all cases, and adopt the same watermark patterns and parameters. Examples of the samples used by different watermarks are shown in Figure 3. For our method, we set \(\lambda_{3}=0.3\). We implement all baseline methods based on BackdoorBox [45]. Each result is averaged over five runs. Please find more details in Appendix E.

Figure 2: The workflow of dataset ownership verification via our domain watermark. In the first step, we generate domain-watermarked (DW) samples in a hardly-generalized domain of the benign dataset; in the second step, we optimize a set of visually-indistinguishable modified samples that have similar effects to domain-watermarked samples, and release those modified samples together with the remaining benign samples instead of the original dataset for copyright protection; in the third step, we identify whether a given third-party model is trained on our protected dataset by testing whether it has similar prediction behaviors on benign images and their DW versions.
**Evaluation Metrics.** We adopt benign accuracy (BA) and verification success rate (VSR) to verify the effectiveness of dataset watermarks. Specifically, the VSR is defined as the percentage of verification samples that are classified as the assigned label (\(i.e.\), the target label for baselines and the ground-truth label for our method) by watermarked DNNs. We exploit the harmful degree (\(H\in[0,1]\)) and the relatively harmful degree (\(\hat{H}\in[-1,1]\)) to measure watermark harmfulness. In general, the larger the BA and VSR while the smaller the \(H\) and \(\hat{H}\), the better the dataset watermark.
**Results.** As shown in Table 1, the benign accuracy of our domain watermark is higher than that of clean-label backdoor watermarks in most cases, especially on the Tiny-ImageNet dataset. In particular, only our method is harmless. For example, both \(H\) and \(\hat{H}\) are 0.7 smaller than those of all baseline methods on the CIFAR-10 dataset. Besides, as we will show in the next subsection, the VSR of our method is sufficiently high for correct ownership verification, although it is smaller than that of backdoor-based watermarks (especially on complicated datasets). The VSRs of benign models with our domain-watermarked samples on CIFAR-10 and Tiny-ImageNet are merely 13% and 6%, respectively. This mild potential limitation is because our VSR is restricted by the BA of watermarked models. It is an unavoidable sacrifice for harmlessness.
'Independent-M'), and **3)** unauthorized dataset training (dubbed 'Malicious'). In the first case, we used domain-watermarked samples to query the suspicious model trained with modified samples from another domain; In the second case, we test the benign model with our domain-watermarked samples; In the last case, we test the domain-watermarked model with corresponding domain-watermarked samples. Notice that only the last case should be regarded as having unauthorized dataset use. More setting detail are described in Appendix G.
**Evaluation Metrics.** Following the settings in [4], we use \(\Delta P\in[-1,1]\) and \(p\)-value \(\in[0,1]\) for the evaluation. For the first two independent scenarios, a large \(\Delta P\) and \(p\)-value are expected. In contrast, for the third scenario, the smaller \(\Delta P\) and \(p\)-value, the better the verification.
**Results.** As shown in Table 2, our method can achieve accurate verification in all cases. Specifically, our approach can identify the unauthorized dataset usage with high confidence (\(i.e.\), \(\Delta P\approx 0\) and \(p\)-value \(\ll\) 0.01), while not misjudging when there is no unauthorized dataset utilization (\(i.e.\), \(\Delta P\gg 0\) and \(p\)-value \(\gg\) 0.05). Especially on the CIFAR-10 dataset (with high VSR), the \(p\)-values of the independent cases are already 1, while that of the malicious scenario is more than 50 orders of magnitude smaller than what a correct verification requires. These results verify the effectiveness of our dataset ownership verification.
### Discussions
#### 5.3.1 Ablation Studies
We hereby discuss the effects of two key hyper-parameters involved in our method (\(i.e.\), \(\epsilon\) and \(\gamma\)). Please find more experiments regarding other parameters and detailed settings in Appendix I.
**Effects of Perturbation Budget \(\epsilon\).** We study its effects on both CIFAR-10 and Tiny-ImageNet datasets. As shown in Figure 5, the VSR increases with the increase of \(\epsilon\). In contrast, the BA remains almost stable with different \(\epsilon\). However, increasing \(\epsilon\) would also reduce the invisibility of modified samples. Defenders should assign it based on their specific needs.
**Effects of watermarking Rate \(\gamma\).** As shown in Figure 6, similar to the phenomena of \(\epsilon\), the VSR increases with the increase of \(\gamma\) while the BA remains almost unchanged on both datasets. In particular, even with a low watermarking rate (\(e.g.\), 1%), our method can still have a promising VSR. These results verify the effectiveness of our domain watermark again.
#### 5.3.2 The Resistance to Potential Adaptive Methods
We notice that the adversaries may try to detect or even remove our domain watermark based on existing methods in practice. In this section, we discuss whether our method is resistant to them.
Due to space limitations, following the previous work [4], we only evaluate the robustness of our domain watermark under fine-tuning [46] and model pruning [47] in the main manuscript. As shown in Figure 6, fine-tuning has minor effects on both the VSR and the BA of our method. Our method is also resistant to model pruning, since the BA decreases together with the VSR. We have also evaluated our domain watermark against other representative adaptive methods. Please find more setting details and results in Appendix J.
#### 5.3.3 A Closer Look at the Effectiveness of our Method
In this section, we intend to further explore the mechanisms behind the effectiveness of our domain watermark. Specifically, we adopt t-SNE [48] to visualize the feature distribution of different types of samples generated by the benign model and its domain-watermarked version. As shown in Figure 8, under the benign model the domain-watermarked samples stay away (with a normalized distance of 1.84) from the benign samples of their ground-truth label (_i.e._, '0'), although they still cluster together. In contrast, these domain-watermarked samples lie close (with a normalized distance of 0.40) to benign samples of the same class under the watermarked model. These phenomena are consistent with the predictive behaviors of the two models and can partly explain the mechanism of our domain watermark. We will further explore it in future work.
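A rough recipe for reproducing this kind of inspection is sketched below; note that normalizing the centroid distance by the mean intra-class spread is our own convention here, chosen only for illustration, and the feature arrays are hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE

def normalized_centroid_distance(feats_dw, feats_benign, seed=0):
    # Embed domain-watermarked and benign features of one class with t-SNE and
    # compare their centroids, normalized by the benign intra-class spread.
    emb = TSNE(n_components=2, random_state=seed).fit_transform(
        np.vstack([feats_dw, feats_benign]))
    emb_dw, emb_b = emb[:len(feats_dw)], emb[len(feats_dw):]
    spread = np.mean(np.linalg.norm(emb_b - emb_b.mean(axis=0), axis=1))
    return float(np.linalg.norm(emb_dw.mean(axis=0) - emb_b.mean(axis=0)) / spread)

# Hypothetical 64-dimensional features extracted for class '0' from one model.
feats_dw = np.random.randn(50, 64) + 2.0
feats_benign = np.random.randn(200, 64)
print(normalized_centroid_distance(feats_dw, feats_benign))
```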
## 6 Conclusion
In this paper, we revisited the dataset ownership verification (DOV). We revealed the harmful nature of existing backdoor-based methods because their principle is making watermarked models misclassify 'easy' samples. To design a genuinely harmless DOV method, we proposed the domain watermark by leading watermarked DNNs to correctly classify some defender-specified 'hard' samples. We provided the theoretical analyses of our domain watermark and its corresponding ownership verification. We also verified its effectiveness on benchmark datasets. As the first non-backdoor-based method, our method can provide new angles and understanding to the design of dataset ownership verification to facilitate trustworthy dataset sharing.
## Acknowledgments
Junfeng Guo and Heng Huang were partially supported by NSF IIS 1838627, 1837956, 1956002, 2211492, CNS 2213701, CCF 2217003, and DBI 2225775. Cong Liu was supported by the National Science Foundation under Grants CNS Career 2230968, CPS 2230969, CNS 2300525, CNS 2343653, and CNS 2312397.
|
2305.01647 | Wilson loops and defect RG flows in ABJM | We continue our study of renormalization group (RG) flows on Wilson loop
defects in ABJM theory, which we have initiated in arXiv:2211.16501. We
generalize that analysis by including non-supersymmetric fixed points and RG
trajectories. To this end, we first determine the ``ordinary",
non-supersymmetric Wilson loops, which turn out to be two and to include an
R-symmetry preserving coupling to the scalar fields of the theory, contrary to
their four-dimensional counterpart defined solely in terms of the gauge field
holonomy. We then deform these operators by turning on bosonic and/or fermionic
couplings, which trigger an elaborate, multi-dimensional network of possible RG
trajectories connecting a large spectrum of fixed points classified in terms of
the amount (possibly zero) of supersymmetry and R-symmetry preserved. The
$\beta$-functions are computed to leading order in the ABJM coupling but
exactly in the deformation parameters, using an auxiliary one-dimensional
theory on the defect and a dimensional regularization scheme. A striking result
is the different behavior of the two ordinary Wilson loops, of which one turns
out to be a UV unstable point while the other is IR stable. The same is true
for the two 1/2 BPS Wilson loops. We interpret our results from a defect CFT
(dCFT) point of view, computing the anomalous dimensions of the operators
associated to the deformations and establishing appropriate g-theorems. In
particular, the fermionic unstable fixed point is associated to a dCFT which is
not reflection positive. | Luigi Castiglioni, Silvia Penati, Marcia Tenser, Diego Trancanelli | 2023-05-02T17:59:08Z | http://arxiv.org/abs/2305.01647v2 | # Wilson loops and defect RG flows in ABJM
###### Abstract
We continue our study of renormalization group (RG) flows on Wilson loop defects in ABJM theory, which we have initiated in arXiv:2211.16501. We generalize that analysis by including non-supersymmetric fixed points and RG trajectories. To this end, we first determine the "ordinary", non-supersymmetric Wilson loops, which turn out to be two and to include an R-symmetry preserving coupling to the scalar fields of the theory, contrary to their four-dimensional counterpart defined solely in terms of the gauge field holonomy. We then deform these operators by turning on bosonic and/or fermionic couplings, which trigger an elaborate, multi-dimensional network of possible RG trajectories connecting a large spectrum of fixed points classified in terms of the amount (possibly zero) of supersymmetry and R-symmetry preserved. The \(\beta\)-functions are computed to leading order in the ABJM coupling but exactly in the deformation parameters, using an auxiliary one-dimensional theory on the defect and a dimensional regularization scheme. A striking result is the different behavior of the two ordinary Wilson loops, of which one turns out to be a UV unstable point while the other is IR stable. The same is true for the two \(1/2\) BPS Wilson loops. We interpret our results from a defect CFT (dCFT) point of view, computing the anomalous dimensions of the operators associated to the deformations and establishing appropriate g-theorems. In particular, the fermionic unstable fixed point is associated to a dCFT which is not reflection positive.
Keywords: Chern-Simons theories, Wilson, 't Hooft and Polyakov loops, Renormalization Group
+
Footnote †: institutetext: Department of Physics, University of Wisconsin, Madison, WI 53706, USA
## 1 Introduction
Three-dimensional supersymmetric Chern-Simons-matter theories, like ABJ(M) [1; 2] and other \(\mathcal{N}\geq 2\) quiver theories [3; 4; 5; 6], have a rich moduli space of BPS Wilson loops discovered over the years in, for example, [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23].1 Such Wilson loops typically come in families related by a certain number of parameters that allow one to interpolate continuously among representatives preserving varying amounts of supercharges of the theory.
Footnote 1: See [24] for a fairly recent review.
These interpolations can be studied from the point of view of RG flows on defects, as initiated recently in [23] for ABJM, building on similar analyses done in four dimensions in [25] and subsequent literature, see for example [26; 27; 28; 29; 30; 31; 32; 33]. The main idea is to start from
a certain loop operator - a UV fixed point - and deform it with a marginally relevant parameter, that either leads to another loop operator - an IR fixed point - or to infinity along runaway directions.
The most studied and best understood example is the interpolation, initially proposed in [25], between the non-BPS Wilson loop of four-dimensional \(\mathcal{N}=4\) super Yang-Mills (SYM) and the 1/2 BPS operator of that theory. This interpolation is controlled by a single parameter \(\zeta\) that acts as a marginally relevant deformation of the non-BPS operator defined only in terms of the gauge field (for \(\zeta=0\)) and triggers a flow toward the supersymmetric operator [34] which is also coupled to a scalar of the theory (for \(\zeta=1\)).
Performing a similar investigation in the ABJM theory, one expects to find a much richer spectrum of flows, given the larger number of Wilson loops that can be defined. In [23] our focus has been on interpolations and RG trajectories associated to operators in ABJM that always preserve some amount of supersymmetry, at least one supercharge, something we dubbed as 'enriched flows'. Here we continue that analysis by including also fixed points and RG trajectories corresponding to non-supersymmetric operators, in the original spirit of [25].
The first step consists in identifying what are such non-BPS operators in the case of ABJM. Differently from the four-dimensional counterpart, these operators are not just defined in terms of the gauge fields, but they also include an R-symmetry preserving coupling to the scalars. This can be justified as follows. First of all, scalar bilinears have classical dimension 1 in three dimensions and allow for the contraction of the R-symmetry indices to produce the \(SU(4)\) singlet operator \(C_{I}\bar{C}^{I}\) (with \(I=1,\ldots,4\)). It follows that when we perform the renormalization of the gauge field on the defect theory, nothing prevents it from mixing with the singlet. In fact, one discovers that, due to the interaction with the bulk theory, this new effective vertex is produced. It is divergent and can only be cancelled if the defect theory defined by the Wilson loop includes a coupling to the scalars as the one above. It turns out there are two such operators, which we call \(W^{\pm}\), corresponding to the two possible signs in front of \(C_{I}\bar{C}^{I}\). We demonstrate this in section 2.2
Footnote 2: Similar results were found in [35; 36], where a classification of line operators in Chern-Simons theories with matter was obtained.
Having identified the non-BPS, or "ordinary", Wilson loops we then proceed to deforming them in several ways and in computing the associated \(\beta\)-functions at leading order in the ABJM coupling and in the planar limit. We start in section 3 with deforming the scalar couplings only. We restrict these deformations to be diagonal, thus generally given by four independent parameters that allow to either break (completely or in part) the R-symmetry group or to preserve it. We obtain the \(\beta\)-functions associated to these deformations using the same one-dimensional effective approach originally developed in [37; 38] for QCD and then extended to ABJM in [23].
As is well-known, BPS Wilson loops in three-dimensional quiver theories are defined in terms of superconnections including couplings to the fermions of the theory. It is then natural to also consider fermionic deformations, which we do in section 4 for purely fermionic deformations and in section 5 for more generic deformations of both fermions and scalars.
This whole setup allows for a very rich, multi-dimensional space of possible RG flows, which we depict in a series of plots. In these plots we denote fixed points according to their R-symmetry structure: \(SU(4)\) invariants as squares, \(SU(3)\) as triangles and \(SU(2)\) as circles. When they are black no supersymmetry is preserved, whereas when they are colored they preserve some amount of supersymmetry: blue points are 1/6 BPS and red points are 1/2 BPS. There are multiple facets in these series of plots and we leave a detailed presentation for the body of the text.
Nevertheless, in figure 1 we show a glimpse of a schematic representation of some of our results. Along what we depict as a horizontal line we find bosonic flows. In particular, connecting the two \(SU(4)\) symmetric "ordinary" operators \(W^{-}\) and \(W^{+}\) at the end points (\(\blacksquare\)) we have two \(SU(3)\) non-BPS bosonic loops (\(\blacktriangle\)) and/or a bosonic 1/6 BPS operator (\(\blacktriangledown\)), which is \(SU(2)\times SU(2)\) symmetric. Along the vertical direction we represent fermionic flows and among these we find in particular two 1/2 BPS fixed points (\(\blacktriangle\)). In this representation, arrows go from the UV to the IR. We hope the reader keeps this general picture in mind throughout the paper.
One of the most interesting outputs of this web of flows is the qualitatively different behavior between \(W^{+}\) and \(W^{-}\), and between \(W^{+}_{1/2}\) and \(W^{-}_{1/2}\). Though classically they are equivalent (they simply differ by harmless signs in the couplings, which do not affect symmetries), at quantum level they describe completely different defect theories. In fact, while \(W^{-}\) is a UV unstable fixed point, its counterpart \(W^{+}\) is IR stable. Similarly, including fermionic deformations we find that UV instability lies at \(W^{-}_{1/2}\) while IR stability corresponds to \(W^{+}_{1/2}\).
Figure 1: Representation of flows connecting different Wilson loop operators in ABJM theory. Along the horizontal line we have purely bosonic deformations whereas in the vertical direction we turn on fermions. Mixed deformations correspond to compositions of these two.

After unravelling this multi-dimensional space of possible flows, in section 6 we interpret our results from the point of view of a defect CFT. First of all, computing anomalous dimensions at the first non-trivial order, we find that the perturbations are marginally relevant operators for the fermionic \(W^{-}\) and \(W^{-}_{1/2}\) defects, consistently with the direction of the flows. Then, through the evaluation of the defect entropy [39], we manage to test the validity of a g-theorem [29; 39; 40; 41; 42].
A striking result arises, which concerns the g-theorem along the flows connecting \(W^{\pm}_{1/2}\) defect theories with non-BPS bosonic \(SU(3)\) invariant defects. We find that, while \(W^{+}_{1/2}\) satisfies reflection-positivity and the g-theorem [29] holds, the \(W^{-}_{1/2}\) defect does not because of a crucial sign change in the two-point function of its stress-tensor. As a consequence, the RG flow is still irreversible, but the defect entropy is increasing rather than decreasing from the UV unstable to the IR stable fixed points. However, this seems to be consistent with the dual description of defects at strong coupling and the Higgsing procedure used to construct these two operators [43]. In fact, in field theory they arise by Higgsing either with particle or antiparticle modes, while in M-theory \(W^{+}_{1/2}\) is dual to a M2-brane configuration and \(W^{-}_{1/2}\) is dual to an anti-M2 brane. This might explain why one defect is reflection-positive, whereas the other one is not.
We conclude with section 7, where we further discuss our findings and address a few relevant open questions. The technical details of our calculations are reported in four appendices.
As a final remark, we stress that the same caveats of [23] regarding the employed regularization scheme also apply here. All our computations are in fact performed in dimensional regularization, which is alternative to introducing framing and can then be thought as giving results which correspond to framing zero. Restricting to BPS Wilson loops, this is of course not what one obtains from localization, which works at framing one. However, it is well-known that the results from the two different schemes differ simply by an overall phase, the famous framing factor [44; 45; 46] or framing function [47]. Since dimensional regularization breaks conformal invariance, while localization requires and preserves superconformal invariance [48; 49], the framing function has the interpretation of a conformal (or framing) anomaly. It is ultimately responsible for the BPS RG flows that we find at framing zero, thus we conclude that enriched flows are anomaly driven. In the more general case of non-BPS flows, the absence of cohomological equivalence does not guarantee that this is the case.
## 2 Ordinary Wilson loops in ABJM
Following what has been done in \(\mathcal{N}=4\) SYM theory, see for instance [25; 26; 27; 30; 31; 50], here we want to study how to generalize to ABJM theory the idea of interpolating between the non-supersymmetric Wilson loop (WL) given by the holonomy of the gauge field (usually referred to as "ordinary") and BPS loops that include extra couplings to the matter fields of the theory3.
In the \({\cal N}=4\) SYM case, the interpolation takes the form [25]
\[W^{(\zeta)}(C)=\frac{1}{N}\operatorname{Tr}{\cal P}\exp\oint_{C}d\tau\left[\,iA_{ \mu}(x)\dot{x}^{\mu}+\zeta\Phi_{m}(x)\theta^{m}|\dot{x}|\,\right],\qquad\theta^ {m}\theta^{m}=1, \tag{1}\]
where \(m\) runs over the six scalar fields of the theory.
This is a one-parameter family of Wilson loops that connects the ordinary WL at \(\zeta=0\) and the 1/2 BPS Wilson-Maldacena loop [34] at \(\zeta=1\). Since \(\langle W^{(\zeta)}(C)\rangle\) is invariant under \(\zeta\to-\zeta\), one can restrict to \(\zeta\geq 0\).
In [25] it was found perturbatively at first order in the 't Hooft coupling \(\lambda\) that a non-trivial RG flow connecting these two operators exists, which is dictated by the following \(\beta\)-function
\[\beta_{\zeta}=\mu\frac{d\zeta}{d\mu}=-\frac{\lambda}{8\pi^{2}}\zeta(1-\zeta^{ 2})+{\cal O}(\lambda^{2}). \tag{2}\]
Around the \(\zeta=0\) UV fixed point the scalar perturbation \(\Phi_{m}\theta^{m}\) becomes marginally relevant and drives the system towards the 1/2 BPS IR fixed point at \(\zeta=1\).
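As a quick consistency check of this picture (at leading order, treating \(\lambda\) as constant along the flow), equation (2) can be integrated by separation of variables using \(\frac{1}{\zeta(1-\zeta^{2})}=\frac{d}{d\zeta}\ln\frac{\zeta}{\sqrt{1-\zeta^{2}}}\), which gives

\[\frac{\zeta^{2}(\mu)}{1-\zeta^{2}(\mu)}=\frac{\zeta_{0}^{2}}{1-\zeta_{0}^{2}}\left(\frac{\mu_{0}}{\mu}\right)^{\frac{\lambda}{4\pi^{2}}}\,,\]

so that any initial condition \(0<\zeta_{0}<1\) is driven to \(\zeta\to 0\) as \(\mu\to\infty\) and to \(\zeta\to 1\) as \(\mu\to 0\), consistently with \(\zeta=0\) being UV unstable and \(\zeta=1\) IR stable.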
We note that within the one-parameter family (1) the ordinary Wilson loop corresponds to the operator preserving the maximal amount of R-symmetry, that is \(SO(6)\). The \(\zeta\)-deformation breaks \(SO(6)\) to \(SO(5)\). The Polchinski-Sully flow can then be interpreted as connecting the maximally R-symmetric operator to the maximally supersymmetric one.
In order to investigate the existence of an analogous pattern in ABJM theory, we focus on the study of parametric deformations that connect non-BPS with BPS Wilson loops. The logic that we are going to apply is the following: We start with the maximally R-symmetric operator, that is the "ordinary" WL preserving the maximal amount of R-symmetry, we then add marginally relevant, partial R-symmetry breaking deformations and study the corresponding RG flows.
Naively, one would expect the "ordinary" WL in the ABJM theory to be the same as for \({\cal N}=4\) SYM, that is the gauge field holonomy given by (1) with \(\zeta=0\). In fact, \(A_{\mu}\dot{x}^{\mu}\) is invariant under the action of the full R-symmetry group, \(SO(6)\simeq SU(4)\). However, this is not correct, since in this case there are other R-symmetry preserving operators with the same dimension, which necessarily mix with \(A_{\mu}\dot{x}^{\mu}\) and need to be included.
In order to prove that, we make use of the auxiliary one-dimensional method originally proposed for QCD in [37; 38; 51; 52; 53] and suitably generalized to the ABJM case in [23]. As we are going to show, in this approach the appearance of further R-symmetry preserving operators is signaled by the fact that the \(A_{\mu}\dot{x}^{\mu}\) operator itself leads to a non-renormalizable one-dimensional auxiliary theory.
As done in [23] (see also [51; 53]), we take the loop to be supported along the circular curve parametrized as
\[x^{\mu}(\tau)=(0,\cos\tau,\sin\tau)\,,\quad\tau\in[0,2\pi)\,. \tag{3}\]
For the ordinary Wilson loop we introduce the one-dimensional fermionic field \(z(\bar{z})\) in the (anti)fundamental representation of \(U(N)\) and define the Wilson loop vacuum expectation
value (VEV) on a contour \(\mathcal{C}_{12}\) connecting two points \(\tau_{1},\tau_{2}\) as4
Footnote 4: This prescription is briefly reviewed in appendix C. It is easy to realize that for the ordinary Wilson loop the \(\Psi\) auxiliary fermion defined in (C.2) boils down to \(z\).
\[\langle W\rangle=\langle z(\tau_{2})\bar{z}(\tau_{1})\rangle\,, \tag{4}\]
where the correlation function is computed with the effective action
\[S_{eff}=S_{ABJM}+\int d\tau\,\text{Tr}(\bar{z}\mathcal{D}_{\tau}z)\,. \tag{5}\]
Here, the \(\tau\)-integration is along the Wilson loop contour and the covariant derivative is defined as \(\mathcal{D}_{\tau}=\partial_{\tau}+i\mathcal{L}(\tau)\), where \(\mathcal{L}(\tau)\) is the connection of the Wilson operator under investigation. Since we are interested in studying the ordinary Wilson loop, we set \(\mathcal{D}_{\tau}=\partial_{\tau}+iA_{\mu}\dot{x}^{\mu}\).
The renormalization properties of this operator can be studied by looking at the renormalization of the one-dimensional QFT defined by the action in (5). As shown in [23], using the Feynman rules in appendix C, at one loop the \(\bar{z}\dot{z}\) kinetic term and the \(\bar{z}A_{\mu}\dot{x}^{\mu}z\) vertex do not renormalize. Instead, a divergent term arises from the diagram in figure 2, which corresponds to a new \(\bar{z}C_{I}\bar{C}^{I}z\) vertex, not present in the original action (5). Its explicit expression is given by
\[\begin{split}\Gamma^{(2)}&=\int d\tau_{1}\int d\tau_{2}\int d^{d}x\,\big{(}i\bar{z}A_{\mu}\dot{x}^{\mu}z\big{)}(x_{1})\big{(}i\bar{z}A_{\nu}\dot{x}^{\nu}z\big{)}(x_{2})\big{(}A_{\rho}C_{I}\bar{C}^{I}A^{\rho}\big{)}(x)\\ &=\frac{g^{2}N}{8\pi\epsilon}\int d\tau\,\bar{z}C_{I}\bar{C}^{I}z\,.\end{split} \tag{6}\]
The appearance of this UV divergent term spoils renormalizability of the one-dimensional auxiliary theory, unless we include such a new term already at the classical level. This amounts to modifying the covariant derivative in (5) as \(\mathcal{D}_{\tau}=\partial_{\tau}+i\,\big{(}A_{\mu}\dot{x}^{\mu}\mp\frac{2\pi i }{k}|\dot{x}|C_{I}\bar{C}^{I}\big{)}\), which in turn amounts to stating that the correct WLs to consider are
\[W^{\pm}=\frac{1}{N}\,\text{Tr}\,\mathcal{P}\exp\left[-i\oint d\tau\,\big{(}A_{\mu}\dot{x}^{\mu}-\frac{2\pi i}{k}|\dot{x}|(\pm\delta_{I}{}^{J})C_{J}\bar{C}^{I}\big{)}\right]\,. \tag{7}\]
In principle, there are no restrictions on the choice of the sign in front of \(\delta_{I}{}^{J}\). We have denoted the two possible options as \(W^{\pm}\). Since there is no field redefinition in the path integral that can compensate for this sign, \(W^{+}\) and \(W^{-}\) are genuinely different operators. In the next sections we will further discuss the implications associated with the choice of this sign.

Figure 2: The \(\bar{z}C_{I}\bar{C}^{I}z\) vertex arising at one loop. Dashed lines correspond to ABJM scalars while double blue straight lines correspond to the one-dimensional fermion \(z\). Wavy lines are gauge fields.
The scalar couplings in (2.7) preserve the whole \(SU(4)\) R-symmetry group, and thus so do the two \(W^{\pm}\) operators. This implies that in the ABJM theory these are the "ordinary" maximally R-symmetric WLs.
This result is simply the manifestation of the general property according to which any operator preserving the symmetries of the conformal theory at the fixed point gets turned on along the RG flow. We emphasize that in the present case this is triggered by a bulk-defect interaction. In fact, the ABJM vertex \(A_{\mu}C_{I}\bar{C}^{I}A^{\mu}\) plays a central role in constructing the diagram of figure 2. As already mentioned, a similar pattern does not arise in \(\mathcal{N}=4\) SYM where, for dimensional reasons, any scalar coupling in the WL must be linear, thus necessarily breaking \(SO(6)\).
Finally, we mention that \(W^{\pm}\) does not preserve any supersymmetry. In fact, supersymmetry invariance requires the scalar coupling matrix \(M\) of bosonic loops (see its definition in appendix B) to be traceless [22]. This is clearly not the case for these operators, for which the scalar coupling matrix is simply \(M=\pm 1\).
## 3 Bosonic flows
We now begin the study of RG flows which involve fixed points corresponding to the ordinary Wilson loops defined above. We choose to perturb around \(W^{-}\) because, as will become evident, this corresponds to a UV unstable fixed point. As the simplest case, we consider perturbing with a bosonic operator.
Referring to (2.7), we consider a four-parameter deformation of \(W^{-}\)
\[W(\zeta)=\frac{1}{N}\operatorname{Tr}\mathcal{P}\exp\bigg{[}-i\oint d\tau \left(A_{\mu}\dot{x}^{\mu}+\frac{2\pi i}{k}C_{I}\bar{C}^{I}-\frac{2\pi i}{k} \Delta M_{I}{}^{J}(\zeta_{i})C_{J}\bar{C}^{I}\right)\bigg{]}, \tag{3.1}\]
where
\[\Delta M(\zeta_{i})=2\begin{pmatrix}\zeta_{1}&0&0&0\\ 0&\zeta_{2}&0&0\\ 0&0&\zeta_{3}&0\\ 0&0&0&\zeta_{4}\end{pmatrix}\,. \tag{3.2}\]
For generic values of the parameters this perturbation breaks the \(SU(4)\) R-symmetry completely.
According to the calculation presented in appendix C, at one loop the \(\beta\)-functions parameterising this flow are given by
\[\beta_{\zeta_{i}}=\frac{g^{2}N}{2\pi}\zeta_{i}(\zeta_{i}-1)\,,\qquad i=1,2,3,4, \tag{3.3}\]
where \(g^{2}=2\pi/k\). Since the RG equations are decoupled, for the running coupling constants \(\zeta_{i}\) we find
\[\zeta_{i}(\mu)=\frac{1}{1+e^{c_{i}}\mu^{\frac{g^{2}N}{2\pi}}}\,,\qquad i=1,2,3,4, \tag{3.4}\]
where \(c_{i}\) are arbitrary real constants. The fixed points can be classified and denoted as follows:
* \(SU(4)\) invariant fixed points. The origin \(\zeta_{i}=0,\ i=1,2,3,4\), trivially corresponds to the ordinary \(W^{-}\) Wilson loop. Similarly, the point \(\zeta_{i}=1,\ i=1,2,3,4\), corresponds to the ordinary \(W^{+}\) operator. At these two fixed points \(SU(4)\) R-symmetry is restored.
* \(SU(3)\) invariant fixed points. A first set is obtained by setting one \(\zeta_{i}\) equal to zero and the others equal to one. They correspond to four equivalent Wilson loops with \(M=\text{diag}(-1,1,1,1)\) and permutations, which exhibit a \(SU(3)\) R-symmetry invariance and no supersymmetry. A second set is obtained by setting three \(\zeta_{i}\) equal to zero and one equal to one. In this case the corresponding scalar matrix is \(M=\text{diag}(1,-1,-1,-1)\) and permutations. Again, the WL is \(SU(3)\) invariant, but no supersymmetry is preserved.
* \(SU(2)\times SU(2)\) invariant fixed points. These are obtained by setting two \(\zeta_{i}\) equal to zero and two equal to one. They correspond to six equivalent \(1/6\) BPS bosonic Wilson loops defined in appendix B.
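As a quick consistency check of the flow just described, one can verify symbolically that the running couplings (3.4) solve the RG equations \(\mu\,\partial_{\mu}\zeta_{i}=\beta_{\zeta_{i}}\) with \(\beta_{\zeta_{i}}\) as in (3.3). The following minimal Python/sympy sketch does this for a single coupling, with the shorthand \(a\equiv g^{2}N/2\pi\):

```python
# Minimal symbolic check that zeta(mu) = 1/(1 + e^c mu^a) solves
# mu d(zeta)/d(mu) = a zeta (zeta - 1), with a = g^2 N / (2 pi).
import sympy as sp

mu, c, a = sp.symbols('mu c a', positive=True)

zeta = 1/(1 + sp.exp(c)*mu**a)          # candidate running coupling, eq. (3.4)
beta = a*zeta*(zeta - 1)                # one-loop beta-function, eq. (3.3)

print(sp.simplify(mu*sp.diff(zeta, mu) - beta))   # -> 0

z = sp.Symbol('z')
print(sp.solve(a*z*(z - 1), z))         # fixed points of each decoupled flow: [0, 1]
```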
We can study the RG flows among these fixed points by looking at line, surface and volume projections of the four-dimensional parameter space. In all our pictures arrows go from the UV to the IR.
To begin with, we consider turning on only the \(\zeta_{1}\) deformation. This leads to the one-dimensional RG flow depicted in figure 3. Already in this one-dimensional projection the UV instability of the \(W^{-}\) operator emerges clearly. Instead, the point \(\zeta_{1}=1\) corresponding to one of the \(SU(3)\) invariant fixed points appears as attractive along the line.
We now move to a two-dimensional section of the parameter space by turning on also the \(\zeta_{2}\) perturbation. The corresponding flows are presented in figure 4. We see that under the \(\zeta_{2}\) perturbation the \(SU(3)\) fixed point corresponding to the \((1,0)\) point becomes unstable and the RG flow drives the system towards the point \((1,1)\) marked in blue. This is the \(1/6\) BPS bosonic loop corresponding to \(M=\text{diag}(1,1,-1,-1)\).
Going one step further, we turn on for instance \(\zeta_{3}\), still keeping \(\zeta_{4}=0\). The resulting RG flows are now depicted in three dimensions, see figure 5. We find that as soon as we leave the \(\zeta_{3}=0\) plane, the system is driven towards the \(SU(3)\) invariant \((1,1,1)\) point. Moreover, in addition to the \(1/6\) BPS operator at \((1,1,0)\) already marked in figure 4, two other \(1/6\) BPS bosonic loops appear, which correspond to the blue dots at \((1,0,1)\) and
Figure 3: The RG flow along the \(\zeta_{1}\)-line. Arrows go from the UV to the IR. The black square represents \(W^{-}\), whereas the triangle corresponds to a \(SU(3)\) invariant bosonic Wilson loop.
\((0,1,1)\). They are simply R-symmetry rotations of each other. The pattern of the flows in figure 4 is nothing but the projection of the three-dimensional flow on one of the three planes containing the origin.
Finally, we turn on also the \(\zeta_{4}\) deformation, thus disclosing the whole spectrum of fixed points. In particular, the ordinary Wilson loop \(W^{+}\) now appears, which turns out to be an IR stable fixed point in any direction. The other 14 fixed points, eight \(SU(3)\) invariant
Figure 4: The RG flow in the \((\zeta_{1},\zeta_{2})\) plane corresponding to \(\zeta_{3}=\zeta_{4}=0\). The blue dot is the bosonic 1/6 BPS Wilson loop, while the two black triangles correspond to bosonic \(SU(3)\) invariant Wilson loops.
Figure 5: The RG flow in the \((\zeta_{1},\zeta_{2},\zeta_{3})\) space, with \(\zeta_{4}=0\). Blue spheres describe 1/6 BPS bosonic Wilson loops, the black pyramids correspond to \(SU(3)\) invariant operators.
bosonic WLs and six 1/6 BPS bosonic operators, are all saddle points and correspond to the vertices of a four-dimensional hypercube with side 1.
We have considered only the portion of the parameter space within the interval \([0,1]\), because this should correspond to the isolated invariant set [54, 55] of the RG space. Had we considered values outside this interval, we would flow to infinity along runaway directions. This is analogous to what has been found in \(\mathcal{N}=4\) SYM [25].
#### \(SU(2)\times SU(2)\) flows
One interesting subset of deformations corresponds to \(SU(2)\times SU(2)\) preserving \(\Delta M\) matrices. This necessarily restricts the spectrum of fixed points to a subset of \(SU(2)\times SU(2)\) invariant CFTs.
To this end we consider perturbing \(W^{-}\) as in (3.2) with
\[\Delta M(\zeta_{1},\zeta_{2})=2\begin{pmatrix}\zeta_{1}&0&0&0\\ 0&\zeta_{1}&0&0\\ 0&0&\zeta_{2}&0\\ 0&0&0&\zeta_{2}\end{pmatrix}\,. \tag{3.5}\]
Such a deformation gives rise to a two-parameter family of interpolating Wilson loops. The one-loop \(\beta\)-functions and their solutions can be easily read from (3.3) and (3.4) and the corresponding flows are depicted in figure 6. They connect \(W^{-}\) (for \(\zeta_{1}=\zeta_{2}=0\)), \(W^{+}\) (for \(\zeta_{1}=\zeta_{2}=1\)) and the bosonic BPS Wilson loop \(W_{1/6}\) (for \(\zeta_{1}=1\), \(\zeta_{2}=0\) or \(\zeta_{1}=0\), \(\zeta_{2}=1\)). The \(W(\zeta_{1},\zeta_{2})\) operators possess a \(\mathbb{Z}_{2}\) symmetry under the \(\zeta_{1}\leftrightarrow\zeta_{2}\) exchange, which comes from the freedom to exchange the two \(SU(2)\) factors in the R-symmetry group. This symmetry is also manifest in the behavior of the RG trajectories.
We note that although the deformation is not the same and thus the fixed points are different, the flows in figures 4 and 6 are identical. In fact, they both share the same behavior as the RG flow of the \(O(N)\) model (see for instance figures 4 and 7 of [54]).
#### \(SU(4)\) flows
A further interesting subset of flows is the one induced by a single parameter deformation \(\Delta M_{I}^{\phantom{I}J}(\zeta)=2\zeta\,\delta_{I}^{J}\). In the same spirit as \(\mathcal{N}=4\) SYM, this deformation produces a one-parameter family of interpolating Wilson loops, which connects \(W^{-}\) for \(\zeta=0\) and \(W^{+}\) for \(\zeta=1\).
As is evident from figure 7, the ordinary Wilson loop \(W^{-}\) corresponding to a negative scalar coupling \(M=-\mathbb{1}\) is a UV unstable fixed point. On the other hand, the ordinary loop \(W^{+}\) with \(M=+\mathbb{1}\) is an IR stable point. We note that the naive analogue of the ordinary Wilson loop in \(\mathcal{N}=4\) SYM, _i.e._ the Wilson loop corresponding to a pure gauge connection \(A_{\mu}\dot{x}^{\mu}\), is located at point \(\zeta=1/2\). Consistently with our previous findings, it does not correspond to any fixed point, rather it belongs to the RG straight line that connects the two "true" ordinary WLs, \(W^{-}\) and \(W^{+}\).
The main conclusion we can draw from the present investigation is that at quantum level the \(W^{+}\) and \(W^{-}\) operators exhibit a very different behavior under renormalization,
though classically they simply differ by the overall sign in front of the scalar coupling. More evidence of their deeply different nature will emerge in the next sections.
## 4 Fermionic flows
In three-dimensional Chern-Simons-matter theories - ABJM theory included - a more general class of Wilson loops can be considered, which involves coupling to fermions [11; 56]. In this case the connection appearing in the exponent of the Wilson operator is promoted to a supermatrix \({\cal L}\) where, in addition to gauge fields and scalar bilinears in the diagonal elements, fermi fields appear linearly in the off-diagonal entries5.
Footnote 5: For longer quiver gauge theories, this construction can be generalized to build loops which couple to more than two nodes. See for instance [20; 57; 58].
In the same spirit as in the previous section, we can study flows driven by adding fermionic deformations. As a first case, here we will keep the diagonal structure of \({\cal L}\) fixed, _i.e._ gauge fields and scalar bilinears with a fixed structure, and discuss possible flows obtained by simply adding fermionic fields to the off-diagonal entries of \({\cal L}\).
Figure 6: RG flows in the \((\zeta_{1},\zeta_{2})\) plane. The blue dots correspond to the \(SU(2)\times SU(2)\)\(1/6\) BPS bosonic loops fixed points, while the black squares correspond to the ordinary \(SU(4)\) invariant Wilson loops. The white square is the Wilson loop with purely gauge connection \(A_{\mu}\dot{x}^{\mu}\).
Figure 7: The RG flow along the \(\zeta\)-line. The black squares represent \(W^{-}\) and \(W^{+}\) at \(\zeta=0\) and \(\zeta=1\), respectively. The white square at \(\zeta=1/2\) represents the Wilson loop with connection \(A_{\mu}\dot{x}^{\mu}\).
We start with a composite non-BPS bosonic loop with superconnection
\[\mathcal{L}=\begin{pmatrix}A_{\mu}\dot{x}^{\mu}-\frac{2\pi i}{k}|\dot{x}|M_{I} ^{\,\,J}C_{J}\bar{C}^{I}&0\\ 0&\hat{A}_{\mu}\dot{x}^{\mu}-\frac{2\pi i}{k}|\dot{x}|M_{I}^{\,\,J}\bar{C}^{I}C_ {J}\end{pmatrix}, \tag{4.1}\]
where \(A,\hat{A}\) are charged under the two nodes of the ABJM theory and \(M\) is a fixed, otherwise unspecified, scalar coupling matrix. We take the fermionic perturbation to be of the following form
\[\Delta\mathcal{L}_{F}=-ig\begin{pmatrix}0&\eta\left(\chi_{1}\bar{\psi}^{1}+ \chi_{2}\bar{\psi}^{2}+\chi_{3}\bar{\psi}^{3}+\chi_{4}\bar{\psi}^{4}\right)\\ \left(\chi_{1}\psi_{1}+\chi_{2}\psi_{2}+\chi_{3}\psi_{3}+\chi_{4}\psi_{4} \right)\bar{\eta}&0\end{pmatrix}\,, \tag{4.2}\]
where \(\chi_{i},\ i=1,2,3,4\), are four arbitrary real parameters, the bosonic spinorial couplings \(\eta^{\alpha},\bar{\eta}_{\alpha}\) on the circle are defined as in [56]
\[\begin{split}\eta&=ie^{\frac{i}{2}\ell\tau}\left[\cos \left((1-\ell)\tfrac{\pi}{4}\right)-e^{i\tau}\sin\left((1-\ell)\tfrac{\pi}{4} \right)\right]\left(1\ -i\ell e^{-i\tau}\right)\,,\\ \bar{\eta}&=ie^{-\frac{i}{2}\ell\tau}\left[\cos\left((1- \ell)\tfrac{\pi}{4}\right)-e^{-i\tau}\sin\left((1-\ell)\tfrac{\pi}{4}\right) \right]\begin{pmatrix}-i\\ \ell e^{i\tau}\end{pmatrix}\,,\end{split} \tag{4.3}\]
and the spinorial products are always meant to be \(\lambda\bar{\rho}\equiv\lambda^{\alpha}\bar{\rho}_{\alpha}\). The constant parameter \(\ell\) in (4.3) can take only the two values \(\pm 1\). Therefore, we have two branches of fermionic deformations, which differ by the sign of \(\ell\). In principle, the structure of \(\eta,\bar{\eta}\) is totally arbitrary; however, we fix them as in (4.3) in order to generate flows that can reach \(1/2\) BPS operators (see appendix B for their definition).
For arbitrary parameters \(\chi_{i}\), the fermionic deformation in (4.2) completely breaks the R-symmetry group. However, if the scalar matrix \(M\) is originally chosen to preserve a subgroup of the R-symmetry, whatever fermions we add to \(\mathcal{L}\) must preserve the same R-symmetry structure. In fact, if this were not the case, the R-symmetry mismatch between the fermion and scalar couplings would give rise to UV divergent contributions that would necessarily turn on a deformation of \(M\) along the flow. In the auxiliary one-dimensional field approach, this is made manifest by the fact that, as explained in appendix C.2, already at one loop a divergent triangle fermionic diagram with external fields \(\bar{z}C\bar{C}z\) arises, see figure 16, which is proportional to the fermionic deformation. It follows that in order to cancel this divergence we have to add a parametric deformation in the scalar coupling matrix, as well. This general observation has interesting consequences.
First of all, since \(SU(4)\) invariant fermionic couplings do not exist, it is not possible to deform a \(SU(4)\) invariant ordinary WL by adding purely fermionic deformations. A fermionic perturbation of the ordinary \(W^{\pm}\) operators considered above would necessarily require adding also a bosonic perturbation. This kind of perturbations will be discussed in section 5.
Sticking to pure fermionic deformations, the most we can do is to start with a \(SU(3)\) invariant superconnection and add a \(SU(3)\) invariant fermionic coupling. For instance, we can perturb around the bosonic WL corresponding to superconnection (4.1) with scalar coupling matrix \(M=\tilde{\ell}\,\text{diag}(-1,1,1,1)\), with \(\tilde{\ell}=\pm 1\) independent of \(\ell\) (see definition
(B.3)). In order to preserve \(SU(3)\) R-symmetry along the RG flows, it is crucial to choose in (4.2) only one non-vanishing fermion, precisely the one carrying the same R-symmetry index of the scalar bilinear which appears in \(MC\bar{C}\) with opposite sign. Therefore, given \(M=\tilde{\ell}\,\text{diag}(-1,1,1,1)\), we are forced to take \(\chi_{1}\neq 0\) and \(\chi_{2}=\chi_{3}=\chi_{4}=0\) in (4.2). If we were to choose a different scalar coupling matrix, for instance \(M=\tilde{\ell}\,\text{diag}(1,-1,1,1)\), we should perturb with \(\chi_{2}\).
The \(\beta\)-functions for single fermion deformations can be computed following [23]. Turning on the \(\chi_{i}\) deformation and evaluating the corresponding \(\beta\)-function for generic \(\ell\), we find
\[\beta_{\chi_{i}}=\frac{g^{2}N}{2\pi}\ell\left(\chi_{i}^{2}-1\right)\chi_{i}\,. \tag{4.4}\]
The only effect of \(\ell\) on the \(\beta\)-function is an overall sign, which turns out to be relevant.
From (4.4) we see that there is one fixed point at \(\chi_{i}=0\), corresponding to the undeformed operator, _i.e._ the non-BPS bosonic \(SU(3)\) Wilson loop, and two other ones at \(\chi_{i}=\pm 1\). However, since the deformed WL is invariant under \(\chi_{i}\to-\chi_{i}\), without loss of generality we restrict our study to \(\chi_{i}\geq 0\).
In the study of these flows we have two different possibilities corresponding to the sign of \(\ell=\tilde{\ell}\)6. We interpolate between \(SU(3)\) bosonic operators (\(\blacktriangle\)) and \(1/2\) BPS ones (\(\blacktriangle\)), see (B.2). In particular, for \(\ell=1\) the bosonic operator is unstable and flows to the stable \(W_{1/2}^{+}\). In contrast, for \(\ell=-1\) the bosonic operator is stable while \(W_{1/2}^{-}\) is unstable. This pattern is represented in figure 8.
Footnote 6: When \(\ell=-\tilde{\ell}\) the theory is not renormalizable, since divergent contributions in the bosonic sector arise. This means turning on a mixed bosonic-fermionic flow, as we study in section 5.
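This stability pattern can be read off directly from the slope of the \(\beta\)-function (4.4) at its fixed points, since a small perturbation scales as \(\mu^{\partial\beta/\partial\chi}\) and hence dies off towards the IR (\(\mu\to 0\)) when the slope is positive. A minimal sympy sketch, with \(a\equiv g^{2}N/2\pi\), reads:

```python
# Slope of the single-fermion beta-function (4.4) at its fixed points, for both
# signs of ell; a positive slope means the fixed point is IR attractive.
import sympy as sp

chi, a = sp.symbols('chi a', positive=True)
for ell in (+1, -1):
    beta = a*ell*(chi**2 - 1)*chi
    for fp in (0, 1):
        slope = sp.diff(beta, chi).subs(chi, fp)
        print(f"ell = {ell:+d}, chi* = {fp}: d(beta)/d(chi) = {slope}")
# ell = +1: slope -a at chi*=0 (bosonic SU(3) WL, unstable), +2a at chi*=1 (W_{1/2}^+, stable)
# ell = -1: the signs are reversed, so W_{1/2}^- is the unstable point
```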
So far we have considered only single fermion deformations. One might wonder whether a more general pattern exists, where more than one fermion parametric deformation is turned on. The answer is no. In fact, as one can infer from the calculations in appendix C.2, as soon as we turn on a second \(\chi_{j}\) coupling without a suitable accompanying bosonic deformation, the one-dimensional auxiliary theory is not renormalizable. In other words, deforming with more than one fermionic coupling necessarily turns on also a bosonic deformation along the RG flows.
Figure 8: \(SU(3)\) symmetric fermionic flows for (a) \(\ell=1\) and (b) \(\ell=-1\). The black triangles at the origin represent non-BPS bosonic \(SU(3)\) operators, whereas the red triangles are \(1/2\) BPS Wilson loops.
## 5 Mixed flows
As a last step, we consider moving out from the ordinary \(W^{-}\) fixed point by adding a combination of both bosonic (3.2) and fermionic (4.2) deforming operators. In general, the perturbation will depend on eight parameters \(\zeta_{i},\chi_{i}\), with \(i=1,2,3,4\).
The \(\beta\)-functions for the fermionic deformations are the straightforward generalization of the ones evaluated in (4.4). At the order we are working, they are not affected by the presence of bosonic deformations and read
\[\beta_{\chi_{i}}=\frac{g^{2}N}{2\pi}\left(\ell_{1}\chi_{1}^{2}+\ell_{2}\chi_{2}^{2}+\ell_{3}\chi_{3}^{2}+\ell_{4}\chi_{4}^{2}-\ell_{i}\right)\chi_{i}\,,\qquad i=1,2,3,4, \tag{5.1}\]
where \(\ell_{i}=\pm 1\) distinguishes between the two possible branches of fermionic deformations, as described in section 4.
Instead, the \(\beta\)-functions of the scalar couplings \(\zeta_{i}\) get affected by the presence of the fermionic deformations; in fact, they acquire a non-trivial dependence on \(\chi_{i}\). To simplify the discussion, in the following sections we investigate in detail the cases where only one or two fermions appear in the deformation. Turning on more than two fermionic parameters does not add much to the discussion. The results, although more involved, exhibit the same qualitative behavior.
### Bosonic plus one-fermion deformations
We start discussing RG flows driven by the bosonic deformation \(\Delta M(\zeta_{i})\) in (3.2) plus the fermionic one in (4.2) where we set \(\chi_{2}=\chi_{3}=\chi_{4}=0\). As long as we are interested in single fermion perturbations, this choice is completely general, as deformations generated by \(\chi_{2},\chi_{3}\) or \(\chi_{4}\) can be obtained by simply applying an R-symmetry rotation.
The \(\beta\)-function for \(\chi_{1}\) can be easily read from (5.1). For \(\chi_{2}=\chi_{3}=\chi_{4}=0\) it reduces to (4.4) with \(i=1\).
The \(\beta\)-functions for the bosonic deformations are computed in appendix C.2. More precisely, they can be read from (C.17) setting \(\chi_{a}=\chi_{1}\) and \(\chi_{b}=0\)
\[\begin{split}&\beta_{\zeta_{1}}=\frac{g^{2}N}{2\pi}\left(\zeta_{1}-1+\ell\chi_{1}^{2}+\frac{(1-\ell)}{2}\frac{\chi_{1}^{2}}{\zeta_{1}}\right)\zeta_{1}\,,\\ &\beta_{\zeta_{k}}=\frac{g^{2}N}{2\pi}\left(\zeta_{k}-1+\ell\chi_{1}^{2}-\frac{(1+\ell)}{2}\frac{\chi_{1}^{2}}{\zeta_{k}}\right)\zeta_{k}\,,\qquad\qquad k=2,3,4\,.\end{split} \tag{5.2}\]
These \(\beta\)-functions together with \(\beta_{\chi_{1}}\) describe RG flows in a five-dimensional space. As already shown in section 4, the non-trivial dependence of \(\beta_{\chi_{1}}\) on \(\ell\) leads to two qualitatively different classes of flows. Therefore, we discuss the two cases, \(\ell=1\) and \(\ell=-1\), separately.
\(\ell=1\) case. In this case the \(\beta\)-functions reduce to
\[\begin{split}&\beta_{\chi_{1}}=\frac{g^{2}N}{2\pi}(\chi_{1}^{2}-1)\chi_{1}\,,\\ &\beta_{\zeta_{1}}=\frac{g^{2}N}{2\pi}\left(\zeta_{1}-1+\chi_{1}^{2}\right)\zeta_{1}\,,\\ &\beta_{\zeta_{k}}=\frac{g^{2}N}{2\pi}\left(\zeta_{k}(\zeta_{k}-1+\chi_{1}^{2})-\chi_{1}^{2}\right)\,,\qquad\qquad k=2,3,4\,.\end{split} \tag{5.3}\]
The nature of the RG flows and the stability of the fixed points in the five-dimensional space can be understood by plotting the solutions to the \(\beta\)-function equations \(\mu\frac{\partial\Xi}{\partial\mu}=\beta_{\Xi}\), for any coupling \(\Xi=(\chi_{1},\zeta_{1},\zeta_{k})\). The solutions depend on five arbitrary parameters and read explicitly
\[\begin{split}\chi_{1}(\mu)&=\frac{1}{\sqrt{e^{2c_{0}}\mu^{2}+1}}\,,\\ \zeta_{1}(\mu)&=\frac{1}{\sqrt{e^{2c_{0}}\mu^{2}+1}}\left(\operatorname{arctanh}\left(\frac{1}{\sqrt{e^{2c_{0}}\mu^{2}+1}}\right)+c_{1}\right)^{-1}\,,\\ \zeta_{k}(\mu)&=1-\frac{\mu^{2}}{e^{-2c_{0}}+\mu^{2}-c_{k}\sqrt{1+e^{2c_{0}}\mu^{2}}}\,.\end{split} \tag{5.4}\]
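These closed forms can be cross-checked by integrating the system (5.3) numerically and comparing the result with (5.4). The sketch below does this in units where \(g^{2}N/2\pi=1\) (an assumption about the normalization implicit in (5.4)), for arbitrary values of the integration constants:

```python
# Numerical cross-check of the closed-form trajectories (5.4) against the
# beta-functions (5.3), in units where g^2 N / (2 pi) = 1.
import numpy as np
from scipy.integrate import solve_ivp

c0, c1, ck = 0.3, 0.7, -0.4          # arbitrary integration constants

def chi1(mu):  return 1.0/np.sqrt(np.exp(2*c0)*mu**2 + 1.0)
def zeta1(mu): return chi1(mu)/(np.arctanh(chi1(mu)) + c1)
def zetak(mu): return 1.0 - mu**2/(np.exp(-2*c0) + mu**2
                                   - ck*np.sqrt(1.0 + np.exp(2*c0)*mu**2))

def beta(t, y):                       # t = log(mu), y = (chi1, zeta1, zeta_k)
    x, z1, zk = y
    return [(x**2 - 1.0)*x,
            (z1 - 1.0 + x**2)*z1,
            zk*(zk - 1.0 + x**2) - x**2]

mu0, mu1 = 0.5, 3.0
sol = solve_ivp(beta, [np.log(mu0), np.log(mu1)],
                [chi1(mu0), zeta1(mu0), zetak(mu0)], rtol=1e-10, atol=1e-12)

print(sol.y[:, -1])                          # integrated values at mu = mu1
print([chi1(mu1), zeta1(mu1), zetak(mu1)])   # closed-form values at mu = mu1
```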
In order to have a visual description of the RG flows, in figure 9 we provide projections on a few planes where fixed points can be detected. To this end, we note that as long as \(\chi_{1}\neq 0\), which is our case of interest, the last term in \(\beta_{\zeta_{k}}\) prevents us from projecting on planes where \(\zeta_{k}=0\), \(k=2,3,4\). We then choose to project on \(\zeta_{1}=0\) setting either \(\zeta_{2}=\zeta_{3}=\zeta_{4}\equiv\zeta\) (figure 9(a)) or \(\zeta_{2}=1\), \(\zeta_{3}=\zeta_{4}\equiv\zeta\) (figure 9(b)) or \(\zeta_{3}=\zeta_{4}=1\), \(\zeta_{2}\equiv\zeta\) (figure 9(c)).
In figure 9(a) we recognize the \(W^{-}\) unstable point at the origin (\(\blacksquare\)) and the IR stable fixed point \(W^{+}_{1/2}\) (\(\blacktriangle\)). The other two saddle points correspond to non-BPS \(SU(3)\) invariant operators. The point \((\zeta,\chi_{1})=(1,0)\) is a bosonic WL with scalar coupling matrix \(M=\operatorname{diag}(-1,1,1,1)\) (\(\blacktriangle\)), whereas \((\zeta,\chi_{1})=(-1,1)\) corresponds to a fermionic WL with \(M=\operatorname{diag}(-1,-3,-3,-3)\) and non-BPS coupling to \(\psi_{1},\bar{\psi}^{1}\) (still indicated with \(\blacktriangle\)).
In figure 9(b) the origin and the point \((\zeta,\chi_{1})=(1,0)\) correspond to non-BPS SU(3) invariant bosonic WL (\(\blacktriangle\)). The flow still reaches the fermionic \(1/2\) BPS \(W^{+}_{1/2}\) (\(\blacktriangle\)) and a novel non-BPS SU(2) invariant fermionic operator with \(M=\operatorname{diag}(-1,1,-3,-3)\) and \(\psi_{1},\bar{\psi}^{1}\) couplings (\(\blacklozenge\)).
In figure 9(c) the flows connect the 1/6 BPS bosonic WL (\(\bullet\)) at the origin with a non-BPS SU(3) invariant bosonic point (\(\blacktriangle\)), the fermionic 1/2 BPS \(W^{+}_{1/2}\) (\(\blacktriangle\)) and a novel non-BPS \(SU(2)\) invariant fermionic operator with \(M=\text{diag}(-1,-3,1,1)\) and \(\psi_{1},\bar{\psi}^{1}\) couplings (\(\bullet\)).
As a further case, in figure 10 we consider the \(\chi_{1}=1\) section, taking \(\zeta_{1}\) generically non-vanishing and setting \(\zeta_{2}=\zeta_{3}=\zeta_{4}\equiv\zeta\). We see that it exhibits a flow between the non-BPS SU(3) invariant fermionic operator with \(M=\text{diag}(-1,-3,-3,-3)\) (\(\blacktriangle\)) and the 1/2 BPS WL \(W^{+}_{1/2}\) (\(\blacktriangle\)).
\(\ell=-1\) case. A similar analysis can be done for \(\ell=-1\). In this case the \(\beta\)-functions in (4.4, 5.2) reduce to
\[\begin{split}&\beta_{\chi_{1}}=-\frac{g^{2}N}{2\pi}(\chi_{1}^{2}-1) \chi_{1}\,,\\ &\beta_{\zeta_{1}}=\frac{g^{2}N}{2\pi}\left(\zeta_{1}(\zeta_{1}- 1-\chi_{1}^{2})+\chi_{1}^{2}\right)\,,\\ &\beta_{\zeta_{k}}=\frac{g^{2}N}{2\pi}\left(\zeta_{k}-1-\chi_{1}^ {2}\right)\zeta_{k}\,,\qquad\qquad k=2,3,4\,.\end{split} \tag{5.5}\]
and solving the corresponding flow equations we find
\[\begin{split}&\chi_{1}(\mu)=\frac{\mu}{\sqrt{e^{2c_{0}}+\mu^{2}}}\,, \\ &\zeta_{1}(\mu)=1-\frac{\mu}{\sqrt{e^{2c_{0}}+\mu^{2}}}\left( \operatorname{arctanh}\left(\frac{\mu}{\sqrt{e^{2c_{0}}+\mu^{2}}}\right)-c_{1 }\right)^{-1}\,,\\ &\zeta_{k}(\mu)=\frac{1}{e^{-2c_{0}}\mu^{2}+1+c_{k}\mu\sqrt{e^{2 c_{0}}+\mu^{2}}}\,.\end{split} \tag{5.6}\]
The main differences with the previous case are the crucial sign change in \(\beta_{\chi_{1}}\) and, roughly speaking, the exchange of the roles of \(\zeta_{1}\) and \(\zeta_{k}\). In particular, in order to project on planes containing fixed points, this time \(\zeta_{1}=0\) is not a consistent choice, whereas we can project on the \(\zeta_{k}=0\) \((k=2,3,4)\) hyperplanes.
In figure 11(a) we plot the flows in the \((\zeta_{1}\equiv\zeta,\chi_{1})\) plane, setting \(\zeta_{2}=\zeta_{3}=\zeta_{4}=0\), whereas figure 11(b) describes the flows in the \(\chi_{1}=1\) section, taking \(\zeta_{1}\) and \(\zeta_{2}=\zeta_{3}=\zeta_{4}\equiv\zeta\) free. The pattern is similar to the \(\ell=1\) case, the main difference being that the red triangle now corresponds to the \(1/2\) BPS fermionic loop \(W^{-}_{1/2}\). Moreover, the black triangle in the second panel corresponds to a \(SU(3)\) invariant non-BPS fermionic operator with scalar coupling \(M=\text{diag}(1,3,3,3)\).
It is now interesting to compare the qualitative behavior of the RG flows in the \(\ell=1\) and \(\ell=-1\) cases. To this end, in figure 12 we present a three-dimensional picture of the flows in the \((\zeta_{1},\zeta,\chi_{1})\) space, where \(\zeta_{2}=\zeta_{3}=\zeta_{4}\equiv\zeta\), for both cases. The points along the \(\chi_{1}=0\) plane, highlighted in green in both graphs, correspond to bosonic operators. The origin is the repulsive ordinary Wilson loop \(W^{-}\). The two neighbouring points along this plane are non-BPS \(SU(3)\) invariant bosonic operators with \(M=\pm\text{diag}(1,-1,-1,-1)\). The fourth vertex of the green plane, diagonally opposite to the origin, is the attractive (on this plane) ordinary Wilson loop \(W^{+}\).
Leaving the \(\chi_{1}=0\) plane, the flows in the two graphs start to deviate. In figure 12(a) all flows point towards the \(1/2\) BPS Wilson loop \(W^{+}_{1/2}\) (red triangle), confirming that this point is stable in any direction. In figure 12(b) the \(1/2\) BPS Wilson loop (red triangle), now corresponding to \(W^{-}_{1/2}\), is a repulsive point, thus the most stable point is the ordinary Wilson loop \(W^{+}\).
We can understand the different nature of the fixed points in the whole five-dimensional
parameter space by looking at the RG flow solutions (5.4) and (5.6). The most stable point will be the IR fixed point at \(\mu=0\)
\[\lim_{\mu\to 0}\Big{(}\zeta_{1}(\mu),\,\zeta_{k}(\mu),\,\chi_{1}(\mu)\Big{)}= \begin{cases}(0,1,1)\,,&\ell=+1\,.\\ (1,1,0)\,,&\ell=-1\,.\end{cases} \tag{5.7}\]
which corresponds to the \(1/2\) BPS operator \(W_{1/2}^{+}\) for \(\ell=1\) and to the ordinary Wilson loop \(W^{+}\) for \(\ell=-1\). Similarly, the most unstable point will be the UV fixed point at \(\mu\to\infty\)
\[\lim_{\mu\to\infty}\Big{(}\zeta_{1}(\mu),\,\zeta_{k}(\mu),\,\chi_{1}(\mu)\Big{)} =\begin{cases}(0,0,0)\,,&\ell=+1\,.\\ (1,0,1)\,,&\ell=-1\,.\end{cases} \tag{5.8}\]
which corresponds to the ordinary Wilson loop \(W^{-}\) for \(\ell=1\) and to the \(1/2\) BPS operator \(W_{1/2}^{-}\) for \(\ell=-1\). Therefore, the nature of the fixed points enlightened in figure 12 for three-dimensional subspaces remains unchanged in five dimensions.
### Bosonic plus two-fermion deformations
Now we consider deformations that involve two fermions sourced by two pairs of \((\eta,\bar{\eta})\). For concreteness, we specialize to the case where \(\psi_{1}\) and \(\psi_{2}\) deformations are turned on. The corresponding \(\beta\)-functions can be read in (5.1) setting \(\chi_{3}=\chi_{4}=0\), whereas the ones for the bosonic parameters are computed in detail in appendix C. They can be obtained from (C.17) setting \(a=1\) and \(b=2\).
As these results show, we have to distinguish between deformations with \(\ell_{1}=\ell_{2}\) and ones with \(\ell_{1}=-\ell_{2}\). In the first case, the fermionic diagram which contributes
Figure 12: (a) The RG flow with \(\ell=1\) in the \((\zeta_{1},\zeta_{k}=\zeta_{2}=\zeta_{3}=\zeta_{4},\chi_{1})\) space. We can see that in the \(\chi_{1}=0\) plane (no fermions), the most attractive point is the ordinary WL \(W^{+}\). As soon as we turn on fermions, the most attractive point is the \(1/2\) BPS WL with mostly positive scalar coupling matrix. (b) The RG flow with \(\ell=-1\) in the \((\zeta_{1},\zeta_{k}=\zeta_{2}=\zeta_{3}=\zeta_{4},\chi_{1})\) space. We can see that the \(1/2\) BPS Wilson loop with a mostly negative scalar coupling matrix acts as a repulsive fixed point.
to the renormalization of the \(\bar{z}MC\bar{C}z\) scalar vertex gives rise to off-diagonal terms in the scalar coupling matrix \(M\). Instead, a deformation with opposite \(\ell\)'s does not require a non-diagonal scalar coupling matrix. Since in what follows we restrict to the study of diagonal scalar deformations we will only consider the \(\ell_{1}=-\ell_{2}\) case.
Setting for instance \(\ell_{1}=1\) and \(\ell_{2}=-1\), the \(\beta\)-functions are explicitly given by
\[\beta_{\chi_{1}} =\frac{g^{2}N}{2\pi}\bigg{(}\chi_{1}^{2}-\chi_{2}^{2}-1\bigg{)}\chi_{1}\,,\] \[\beta_{\chi_{2}} =\frac{g^{2}N}{2\pi}\bigg{(}\chi_{1}^{2}-\chi_{2}^{2}+1\bigg{)}\chi_{2}\,,\] \[\beta_{\zeta_{1}} =\frac{g^{2}N}{2\pi}\bigg{(}\zeta_{1}-1+(\chi_{1}^{2}-\chi_{2}^{2})\bigg{)}\zeta_{1}\,, \tag{5.9}\] \[\beta_{\zeta_{2}} =\frac{g^{2}N}{2\pi}\left(\zeta_{2}-1+(\chi_{1}^{2}-\chi_{2}^{2})\left(1-\frac{1}{\zeta_{2}}\right)\right)\zeta_{2}\,,\] \[\beta_{\zeta_{c}} =\frac{g^{2}N}{2\pi}\left(\zeta_{c}-1+(\chi_{1}^{2}-\chi_{2}^{2})-\frac{\chi_{1}^{2}}{\zeta_{c}}\right)\zeta_{c}\,,\qquad c=3,4\,.\]
In this case the RG flow occurs in a six-dimensional space; it is therefore quite difficult to visualize. In order to grasp some partial information, we note that at this order the fermionic \(\beta\)-functions do not depend on the bosonic parameters, so one can study their flows independently of the \(\zeta_{i}\), \(i=1,\ldots,4\) behavior. The fermionic flows in the \((\chi_{1},\chi_{2})\) plane are depicted in figure 13, where we focus only on the positive sector. As already stressed, the deformed theory is invariant under \(\chi_{i}\to-\chi_{i}\), therefore the flows are simply mirrored with respect to the two axes. We highlight in orange the boundaries of the region where flows start and end at the fixed points \((1,0)\) and \((0,1)\). Since for particular choices of the \(\zeta_{i}\) the \((1,0)\) point corresponds to \(W_{1/2}^{+}\) while \((0,1)\) corresponds to \(W_{1/2}^{-}\), once more we find that within the isolated invariant set \(W_{1/2}^{+}\) is stable whereas \(W_{1/2}^{-}\) is unstable.
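The structure of these fermionic flows can be verified by linearizing the \((\chi_{1},\chi_{2})\) subsystem of (5.9) around its fixed points; a small sympy sketch (with the overall factor \(g^{2}N/2\pi\) dropped) reads:

```python
# Fixed points and linearized stability of the fermionic subsystem of (5.9),
# with ell_1 = +1, ell_2 = -1 and the overall factor g^2 N / (2 pi) dropped.
import sympy as sp

x1, x2 = sp.symbols('chi1 chi2', real=True)
b1 = (x1**2 - x2**2 - 1)*x1
b2 = (x1**2 - x2**2 + 1)*x2
J = sp.Matrix([b1, b2]).jacobian([x1, x2])

for fp in [(0, 0), (1, 0), (0, 1)]:
    print(fp, J.subs({x1: fp[0], x2: fp[1]}).eigenvals())
# (0, 0): {-1: 1, 1: 1}  -> saddle
# (1, 0): {2: 2}         -> IR attractive (W_{1/2}^+ for suitable zeta_i)
# (0, 1): {-2: 2}        -> IR repulsive  (W_{1/2}^-)
```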
This pattern is also very clear from figure 14, where we plot the flows on the \((\zeta_{i},\chi_{1},\chi_{2})\) subspace for different \(\zeta_{i}\). In orange we highlight the planes corresponding to the orange lines of figure 13. Selecting flows that lie within these two hyperplanes one easily realizes that the upper plane is repulsive whereas the lower one is attractive.
## 6 Defect theory interpretation
The fixed points appearing in the RG flow figures of the previous sections describe defect conformal field theories (dCFTs) living on the corresponding Wilson loops, in interaction with the ABJM bulk theory. BPS fixed points give rise to superconformal defects, whereas non-BPS points correspond to ordinary dCFTs. The RG flows then have the interesting interpretation of flows in the space of one-dimensional defect theories triggered by dynamical interactions with the bulk. From this perspective, they can be interpreted as being driven by marginally relevant perturbations of the defect.
In general, in the presence of deformations driven by a set of local operators \(\hat{d}_{i}\), a defect stress tensor \(T_{D}\) turns on, which is given by the product [29; 59]
\[T_{D}=\beta_{i}\,\hat{d}_{i}\,, \tag{6.1}\]
where \(\beta_{i}\) is the \(\beta\)-function of the coupling associated to the \(i\)-th deformation. The \(T_{D}\) operator affects the ABJM stress tensor conservation law as
\[\nabla_{\mu}T^{\mu\nu}=-\delta_{D}^{(2)}\left(\dot{x}^{\nu}\dot{T}_{D}+n_{i}^{ \nu}D^{i}\right)\,, \tag{6.2}\]
where \(\delta_{D}^{(2)}\) is the Dirac delta localized at the defect, \(x^{\mu}\) is the embedding function describing the defect location (2.3), \(n_{i}^{\mu}\) is a unit vector normal to the defect and \(D^{i}\) is the displacement operator.
From the explicit expression of \(T_{D}\) one can compute physical quantities, as for instance the anomalous dimension of the \(\hat{d}_{i}\) operators, and investigate the validity of a g-theorem [39; 40; 41; 42; 29].
Figure 14: The RG flow (a) in the \((\zeta_{1},\chi_{1},\chi_{2})\) space, (b) in the \((\zeta_{2},\chi_{1},\chi_{2})\) space and (c) in the \((\zeta_{c},\chi_{1},\chi_{2})\) space, \(c=3,4\).
Figure 13: The RG flow in the \((\chi_{1},\chi_{2})\) plane for \(\ell_{1}=+1\) and \(\ell_{2}=-1\). The region between the two curves highlighted in orange delimits flows that start and end at fixed points \((1,0)\) and \((0,1)\).
In fact, recalling that for a set of generic operators \(\hat{O}_{i}\) the matrix of anomalous dimensions is defined as \(\mu\frac{\partial\hat{O}_{i}}{\partial\mu}=-\Delta_{i}^{j}\hat{O}_{j}\), and taking into account that \(T_{D}\) has protected mass dimension equal to one, we can apply \(\mu\frac{\partial}{\partial\mu}\) to both sides of (6.1), obtaining
\[\Delta_{i}^{\;j}=\delta_{i}^{\;j}+\frac{\partial\beta^{j}}{\partial\zeta_{i}} \quad\Longrightarrow\quad\Gamma_{i}^{\;j}=\frac{\partial\beta^{j}}{\partial \zeta_{i}}\,, \tag{6.3}\]
where \(\Gamma_{i}^{\;j}\) is the anomalous dimension matrix and \(\zeta_{i}\) is the coupling associated to \(i\)-th deforming operator.
Furthermore, using the definition (6.1) for the defect stress tensor one can establish a one-dimensional version of the g-theorem, that is the statement that the free energy of a \(\zeta\)-deformed defect, \(\mathrm{g}\equiv\log\langle W(\zeta)\rangle\), should be a monotonically decreasing function along the RG flows. In particular, it should satisfy \(\mathrm{g}_{\mathrm{UV}}>\mathrm{g}_{\mathrm{IR}}\), where \(\mathrm{g}_{\mathrm{UV}}\) and \(\mathrm{g}_{\mathrm{IR}}\) are the values of \(\mathrm{g}\) at the UV and the IR fixed points, respectively. In [29] this has been proven for line defects in any dimension by studying the \(\zeta\)-induced flow of a related observable, which is the defect entropy
\[s(\zeta)=\left(1+\beta_{\zeta}\frac{\partial}{\partial\zeta}\right)\log \langle W(\zeta)\rangle\,, \tag{6.4}\]
that at the fixed points coincides with \(\mathrm{g}\). From the relevant identity [29]
\[\mu\frac{\partial s}{\partial\mu}=-\int d\tau_{1>2}\;\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}\,(1-\cos(\tau_{1}-\tau_{2}))\,, \tag{6.5}\]
which provides the mass scaling of \(s\), one can conclude that the \(\mathrm{g}\)-theorem holds whenever the defect theory is reflection positive (in Euclidean signature) or unitary (in Minkowski), that is whenever \(\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}>0\).
In the rest of this section we will apply eqs. (6.3) and (6.5) to our flows in order to better investigate their nature from a defect theory perspective.
We stress that our computations, despite being perturbative in the Chern-Simons coupling \(g\), are exact in the running coupling constants \(\zeta_{i},\chi_{i}\). Therefore, from the point of view of the one-dimensional theory, the \(\beta\)-functions and the general behavior of the RG flows are reliable at any scale.
### Bosonic defects
We begin by focusing on the bosonic deformations described in section 3. They can be interpreted as deformations of the \(W^{-}\) bosonic dCFT7 that drive the system towards IR fixed points still given by bosonic dCFTs.
Footnote 7: We call “bosonic defects” the one-dimensional theories defined on bosonic WL (no fermion fields turned on), and “fermionic defects” the ones defined on fermionic WLs.
In the simplest case of one-parameter deformations (3.2) with \(\zeta_{1}=\zeta_{2}=\zeta_{3}=\zeta_{4}\equiv\zeta\), the defect stress tensor for the deformed theory is simply given by
\[T_{D}=\beta_{\zeta}\,\hat{d}_{\zeta}=-2g^{2}\beta_{\zeta}C_{I}\bar{C}^{I}\,, \tag{6.6}\]
where \(\beta_{\zeta}\) is the \(\beta\)-function evaluated in (3.3).
In this case the \(SU(4)\) symmetry ensures that the four operators \(C_{1}\bar{C}^{1},\cdots,C_{4}\bar{C}^{4}\) have the same anomalous dimension. The anomalous dimension matrix in (6.3) is then proportional to the identity matrix. Using (3.3) for \(\beta_{\zeta}\), its value at \(\zeta=0\) is \(\Gamma=-\frac{g^{2}N}{2\pi}\ \mathbb{1}\). A negative anomalous dimension signals the fact that around \(W^{-}\) the perturbation is a weakly relevant operator that drives the defect CFT away from the fixed point. On the other hand, evaluating the anomalous dimension matrix for deformations around the \(W^{+}\) fixed point in figure 7, which amounts to evaluating (6.3) at \(\zeta=1\), we find a positive anomalous dimension, thus the perturbation is marginally irrelevant and, consistently with the RG flow behavior, \(W^{+}\) corresponds to an IR stable fixed point.
Moving to two-parameter deformations (3.5) with \(\zeta_{1}=\zeta_{2}(\equiv\zeta_{1})\) and \(\zeta_{3}=\zeta_{4}(\equiv\zeta_{2})\), the preserved \(SU(2)\times SU(2)\) symmetry this time ensures that the \(C_{1}\bar{C}^{1}\) and \(C_{2}\bar{C}^{2}\) deforming operators share the same anomalous dimension \(\gamma_{1}\), and \(C_{3}\bar{C}^{3}\) and \(C_{4}\bar{C}^{4}\) share the same \(\gamma_{2}\), as well. In principle, there might be mixing between the two pairs of operators. However, at the order we are working this is not the case. In fact, the two \(\beta\)-functions that can be read from (3.3) are decoupled, so that the anomalous dimension matrix (6.3) is diagonal, \(\Gamma=\mathrm{diag}(\gamma_{1},\gamma_{1},\gamma_{2},\gamma_{2})\).
Referring to figure 6, we compute the values of \(\gamma_{1}\) and \(\gamma_{2}\) at the four fixed points. At the two black squares, corresponding to \(W^{-}\) (at the origin) and \(W^{+}\), we still find that \(\gamma_{1}(W^{\pm})=\gamma_{2}(W^{\pm})=\pm\frac{g^{2}N}{2\pi}\), in agreement with the \(W^{+}(W^{-})\) fixed point being attractive (repulsive).
If we consider instead the \(1/6\) BPS Wilson loop associated to the scalar coupling matrix \(M=\mathrm{diag}(-1,-1,1,1)\) and corresponding to the upper blue dot in figure 6, the anomalous dimension for the \(\zeta_{1}\) deformation is negative, \(\gamma_{1}=-\frac{g^{2}N}{2\pi}\), whereas for the \(\zeta_{2}\) deformation it is positive, \(\gamma_{2}=\frac{g^{2}N}{2\pi}\). Therefore the \(\zeta_{1}\) deformation of the \(1/6\) BPS Wilson loop is weakly relevant, while the \(\zeta_{2}\) one is weakly irrelevant. A similar pattern arises for the other \(1/6\) BPS Wilson loop associated to \(M=\mathrm{diag}(1,1,-1,-1)\) (lower blue point in figure 6), simply with the signs of \(\gamma_{1}\) and \(\gamma_{2}\) interchanged. This is in agreement with the saddle point behavior and the directions of the flows shown in figure 6.
This analysis can be easily generalized to deformations featured by (3.2) with four different parameters. In that case there is no symmetry constraining the anomalous dimension matrix. However, at the order we are working the \(\beta\)-functions in (3.3) are decoupled and the anomalous dimension matrix turns out to be of the form \(\Gamma=\mathrm{diag}(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})\), with four independent entries. A careful analysis reveals that \(\Gamma\) is negative-definite at the \(W^{-}\) fixed point and positive-definite at \(W^{+}\), whereas at all the other fixed points it does not have a definite sign, thus confirming that they are all saddle points.
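This classification amounts to evaluating the matrix (6.3) for the \(\beta\)-functions (3.3) at the sixteen vertices of the hypercube, which can be done in a few lines; the sketch below sets \(g^{2}N/2\pi=1\):

```python
# Diagonal entries of Gamma_i^j = d(beta^j)/d(zeta_i), eq. (6.3), for the
# decoupled beta-functions (3.3), evaluated at the 16 hypercube vertices
# (units g^2 N / (2 pi) = 1).
import sympy as sp
from itertools import product

z = sp.symbols('zeta1:5')
betas = [zi*(zi - 1) for zi in z]
Gamma = sp.Matrix(betas).jacobian(sp.Matrix(z))   # diagonal, entries 2*zeta_i - 1

for vertex in product((0, 1), repeat=4):
    subs = dict(zip(z, vertex))
    diag = [Gamma[i, i].subs(subs) for i in range(4)]
    print(vertex, diag)
# (0,0,0,0): all -1 -> negative definite (W^-, IR repulsive)
# (1,1,1,1): all +1 -> positive definite (W^+, IR stable)
# mixed vertices: entries of both signs -> saddle points
```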
As anticipated above, we now establish the validity of a g-theorem for the RG flows under consideration. We focus for simplicity on the one-parameter deformation that drives the RG flow between \(W^{-}\) and \(W^{+}\) (figure 7). Defining \(\mathrm{g}_{\mathrm{UV}}=\log\langle W^{-}\rangle\) and \(\mathrm{g}_{\mathrm{IR}}=\log\langle W^{+}\rangle\), we want to check whether \(\mathrm{g}_{\mathrm{UV}}>\mathrm{g}_{\mathrm{IR}}\).
In principle, one could try a direct check by computing perturbatively \(\mathrm{g}_{\mathrm{UV}}\) and \(\mathrm{g}_{\mathrm{IR}}\). However, this would require a three-loop calculation, as up to two loops \(W^{\pm}\) share the same expectation value. Alternatively, we can use the prescription of [29]. If we insert the
explicit expression (6.6) for \(T_{D}\) in the scaling equation (6.5) for the defect entropy, at one loop we find (\(\tau_{12}\equiv\tau_{1}-\tau_{2}\))
\[\mu\frac{\partial s}{\partial\mu}=-4g^{4}\beta_{\zeta}^{2}\int d\tau_{1>2} \,\big{\langle}\!\big{\langle}(C_{I}\bar{C}^{I})(\tau_{1})(C_{J}\bar{C}^{J})( \tau_{2})\big{\rangle}\!\big{\rangle}(1-\cos\tau_{12})=-g^{4}N^{2}\beta_{\zeta }^{2}<0\,. \tag{6.7}\]
This means that the defect entropy decreases monotonically along the flow, leading to the expected result
\[\texttt{g}_{\text{UV}}>\texttt{g}_{\text{IR}}\,. \tag{6.8}\]
This analysis can be generalized straightforwardly to the \(SU(2)\times SU(2)\) and generic bosonic flows, always leading to inequality (6.8). We can then conclude that within the set of bosonic deformations of the form (3.2), the g-theorem is always respected.
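For completeness, the numerical coefficient in (6.7) can be reproduced by elementary manipulations. The sketch below assumes the tree-level scalar propagator (A.3) at \(\epsilon\to 0\), i.e. \(1/(4\pi|x-y|)\), the chord distance \(|x_{12}|=2|\sin(\tau_{12}/2)|\) on the unit circle, and a planar Wick contraction of the two bilinears producing a factor \(4N^{2}\):

```python
# Sketch of the one-loop evaluation of (6.7): the circle integrand collapses to a
# constant and the prefactors assemble into mu ds/dmu = -g^4 N^2 beta_zeta^2.
import sympy as sp

tau, g, N, beta = sp.symbols('tau g N beta', positive=True)

# (1 - cos tau)/(2 |sin(tau/2)|)^2 is identically 1/2 on the circle
integrand = (1 - sp.cos(tau))/(4*sp.sin(tau/2)**2)
print(sp.simplify(integrand.rewrite(sp.exp)))          # -> 1/2

circle = sp.Rational(1, 2)*2*sp.pi**2                  # 1/2 times the area of {2pi > tau1 > tau2 > 0}

# two propagators 1/(4 pi |x_12|), R-symmetry and color factor 4 N^2, prefactor -4 g^4 beta^2
result = -4*g**4*beta**2 * 4*N**2/(16*sp.pi**2) * circle
print(sp.simplify(result))                             # -> -g**4 N**2 beta**2
```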
### Fermionic defects
We now move to the case of fermionic deformations described in section 4. Without loss of generality, we turn on only the \(\chi_{1}\) deformation and consider the flows connecting a \(SU(3)\) invariant bosonic WL (\(\blacktriangle\)) and a \(1/2\) BPS one (\(\blacktriangle\)) (see figure 8). As is clear from this figure, the direction of the flow depends on the sign of \(\ell\) entering the fermionic couplings (4.3), thus if the g-theorem is still at work, we should expect an opposite inequality between g(\(\blacktriangle\)) and g(\(\blacktriangle\)) in the two cases. Moreover, since the sign of \(\ell\) also discriminates between different defects at the fixed points,8 the two cases \(\ell=\pm 1\) have to be studied separately.
Footnote 8: It determines the overall sign of the scalar coupling matrix at the two fixed points and the \((\eta,\bar{\eta})\) couplings at the \(1/2\) BPS fixed point.
\(\ell\!=\!1\) case. In this case the \(SU(3)\) invariant bosonic defect corresponds to the scalar coupling matrix \(M=\text{diag}(-1,1,1,1)\). As is clear from figure 8, it is a UV unstable fixed point under the supermatrix deformation
\[\hat{d}=-g\chi_{1}\begin{pmatrix}0&\eta\bar{\psi}^{1}\\ \psi_{1}\bar{\eta}&0\end{pmatrix}\,. \tag{6.9}\]
In fact, a straightforward application of prescription (6.3) with \(\beta_{\chi_{1}}\) as in (5.3) leads to a negative anomalous dimension for \(\hat{d}\) at the \(\chi_{1}=0\) fixed point, signaling that this is indeed a marginally relevant perturbation around the (\(\blacktriangle\)) point. An analogous calculation reveals that around the BPS fixed point \(W_{1/2}^{+}\) this is instead an irrelevant operator.
In order to check the g-theorem, we evaluate the scaling behavior of the defect entropy inserting in (6.5) the expression \(T_{D}=\beta_{\chi_{1}}\hat{d}\), with \(\hat{d}\) given in (6.9).9 Evaluating the resulting expression at one loop, we find
Footnote 9: Since coupling to fermions are defined in terms of \(U(N|N)\) superconnections, the operator insertions on the defect theory are \(U(N|N)\) supermatrices. Therefore, in this case \(\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\) in (6.5) includes also a trace on supermatrices.
\[\begin{split}\mu\frac{\partial s}{\partial\mu}&=-g^{2}\beta_{\chi_{1}}^{2}\int d\tau_{1>2}\Big{[}\big{\langle}\!\big{\langle}\big{(}\eta\bar{\psi}^{1}\big{)}(\tau_{1})\big{(}\psi_{1}\bar{\eta}\big{)}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=1}+\big{\langle}\!\big{\langle}\big{(}\psi_{1}\bar{\eta}\big{)}(\tau_{1})\big{(}\eta\bar{\psi}^{1}\big{)}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=1}\Big{]}(1-\cos\tau_{12})\\ &=-\frac{\pi}{2}g^{2}N^{2}\beta_{\chi_{1}}^{2}\,.\end{split} \tag{6.10}\]
This expression is manifestly negative, therefore the defect entropy is a monotonically decreasing function from the UV to the IR and the g-theorem holds.
\(\ell=-1\) case. From figure 8 it is clear that in this case we have to consider perturbing around the \(1/2\) BPS operator (\(\blacktriangle\)) given by a scalar matrix \(M=\text{diag}(1,-1,-1,-1)\) and fermionic couplings
\[-i\mathcal{L}_{F}=-g\begin{pmatrix}0&\eta\bar{\psi}^{1}\\ \psi_{1}\bar{\eta}&0\end{pmatrix}\,. \tag{6.11}\]
This corresponds to the \(W_{1/2}^{-}\) fixed point. We now add the supermatrix deformation
\[\hat{d}=g\tilde{\chi}_{1}\begin{pmatrix}0&\eta\bar{\psi}^{1}\\ \psi_{1}\bar{\eta}&0\end{pmatrix}\,, \tag{6.12}\]
such that the \(1/2\) BPS point corresponds to \(\tilde{\chi}_{1}=0\) and the IR stable bosonic WL is obtained for \(\tilde{\chi}_{1}=1\). A simple application of prescription (6.3) reveals that at \(\tilde{\chi}_{1}=0\) this deformation is indeed marginally relevant.
Evaluating the corresponding \(\beta\)-function, in this case we find
\[\beta_{\tilde{\chi}_{1}}=-\frac{g^{2}N}{2\pi}\tilde{\chi}_{1}(\tilde{\chi}_{1}-2)(\tilde{\chi}_{1}-1) \tag{6.13}\]
while the defect stress tensor is given by \(T_{D}=\beta_{\tilde{\chi}_{1}}\hat{d}\), with \(\hat{d}\) in (6.12).
Inserting \(T_{D}\) in (6.5) and evaluating the expectation values at one loop, we find
\[\begin{split}\mu\frac{\partial s}{\partial\mu}&=-g^{2}\beta_{\tilde{\chi}_{1}}^{2}\int d\tau_{1>2}\Big{[}\big{\langle}\!\big{\langle}\big{(}\eta\bar{\psi}^{1}\big{)}(\tau_{1})\big{(}\psi_{1}\bar{\eta}\big{)}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=-1}+\big{\langle}\!\big{\langle}\big{(}\psi_{1}\bar{\eta}\big{)}(\tau_{1})\big{(}\eta\bar{\psi}^{1}\big{)}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=-1}\Big{]}(1-\cos\tau_{12})\\ &=\frac{\pi}{2}g^{2}N^{2}\beta_{\tilde{\chi}_{1}}^{2}\,.\end{split} \tag{6.14}\]
In contrast with the \(\ell=1\) case, this result is positive. Therefore, in the \(\ell=-1\) class of deformations the g-theorem is not respected. According to the general formulation of the theorem [29], this signals the lack of reflection positivity of this class of defects.
Summarizing, for the class of one-fermion deformations we have explicitly found that
\[\mu\frac{\partial s}{\partial\mu}=\begin{cases}\ell=+1&-g^{2}\beta_{\chi_{1}}^{2}N^{2}\frac{\pi}{2}&\quad\Rightarrow\quad\text{g}_{\text{UV}}(\blacktriangle)>\text{g}_{\text{IR}}(\blacktriangle)\,,\\ \ell=-1&g^{2}\beta_{\tilde{\chi}_{1}}^{2}N^{2}\frac{\pi}{2}&\quad\Rightarrow\quad\text{g}_{\text{UV}}(\blacktriangle)<\text{g}_{\text{IR}}(\blacktriangle)\,.\end{cases} \tag{6.15}\]
Since this result can appear quite surprising, we provide the technical explanation of why the \(\ell=1\) and \(\ell=-1\) flows behave so differently.
First, we note that possible sign differences cannot come from the \(\beta\)-functions since in the variation of the entropy they always appear squared. The difference comes directly from the integrated two-point functions, in particular from the different definition of the \(\eta,\bar{\eta}\) couplings entering \(T_{D}\).
In fact, looking at the first term in the integrals (6.10), (6.14) (for the second term the argument works similarly) we see that, as the result of contracting the two fermions with the propagator (A.3), one obtains
\[\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=\pm 1}\sim(\eta_{1}\gamma^{\mu}\,\bar{\eta}_{2})_{\ell=\pm 1}\,(x_{12})_{\mu} \tag{6.16}\]
where \(\eta_{i},\bar{\eta}_{i}\) stand for \(\eta(\tau_{i}),\bar{\eta}(\tau_{i})\) and \(x_{12}^{\mu}\equiv(x^{\mu}(\tau_{1})-x^{\mu}(\tau_{2}))\).
On the other hand, from (4.3) it is easy to see that the \(\eta,\bar{\eta}\) couplings satisfy the following identity
\[(\eta_{i}\gamma^{\mu}\bar{\eta}_{j})_{\ell}\,(x_{ij})_{\mu}=4i\ell\sin\frac{\tau_{ij}}{2}\,. \tag{6.17}\]
Since the rest of the factors in \(\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell}\) do not depend on \(\ell\), from identity (6.17) we immediately conclude that the two-point function is proportional to \(\ell\), and thus has opposite signs in the two cases
\[\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=-1}=-\big{\langle}\!\big{\langle}T_{D}(\tau_{1})T_{D}(\tau_{2})\big{\rangle}\!\big{\rangle}_{\ell=1}\,. \tag{6.18}\]
Finally, since the results of the circle integrations are the same in the two cases, we are easily led to the conclusions in (6.15).
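The identity (6.17), and hence the sign flip (6.18), can be checked numerically from the explicit couplings (4.3). The sketch below assumes the naive \(2\times 2\) matrix contraction \(\eta\,(\gamma^{\mu}(x_{12})_{\mu})\,\bar{\eta}\), with \(\gamma^{\mu}=(-\sigma^{3},\sigma^{1},\sigma^{2})\) as in (A.1) and the circular embedding (2.3):

```python
# Numerical check of (6.17): (eta_1 gamma^mu etabar_2)_ell (x_12)_mu = 4 i ell sin(tau_12/2),
# using the couplings (4.3) and the circle x(tau) = (0, cos tau, sin tau).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma^1
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # sigma^2

def eta(t, l):       # row spinor of (4.3)
    pref = 1j*np.exp(0.5j*l*t)*(np.cos((1 - l)*np.pi/4)
                                - np.exp(1j*t)*np.sin((1 - l)*np.pi/4))
    return pref*np.array([1, -1j*l*np.exp(-1j*t)])

def etabar(t, l):    # column spinor of (4.3)
    pref = 1j*np.exp(-0.5j*l*t)*(np.cos((1 - l)*np.pi/4)
                                 - np.exp(-1j*t)*np.sin((1 - l)*np.pi/4))
    return pref*np.array([-1j, l*np.exp(1j*t)])

rng = np.random.default_rng(0)
for l in (+1, -1):
    t1, t2 = rng.uniform(0, 2*np.pi, 2)
    slash = s1*(np.cos(t1) - np.cos(t2)) + s2*(np.sin(t1) - np.sin(t2))
    lhs = eta(t1, l) @ slash @ etabar(t2, l)
    rhs = 4j*l*np.sin((t1 - t2)/2)
    print(l, abs(lhs - rhs))      # -> ~1e-16 for both signs of ell
```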
## 7 Discussion
We have studied defect RG flows in ABJM theory induced by bosonic and/or fermionic marginally relevant deformations. Specifically, we have focused on the subset of Wilson loops and deformations featured by diagonal scalar matrix couplings. Exploiting the one-dimensional auxiliary field description of Wilson loops to perform renormalization on the defect, we managed to compute the \(\beta\)-functions exactly in the deforming parameters, and at first order in the bulk ABJM coupling. Relevant features of these flows arise, which are worth recapping.
First, starting from defects with only scalar bilinears, also referred to as bosonic defects, we have shown that generic diagonal, scalar bilinear deformations are always well-defined marginally relevant operators on the defect, as no further constraint on their structure arises along the flow.
Regarding deformations of \(SU(4)\) invariant defects, it is always possible to turn on a deformation made of a single fermion, thus necessarily breaking \(SU(4)\) R-symmetry to \(SU(3)\), as long as this residual \(SU(3)\) is also preserved by the scalar bilinears.
Progressing towards deformations containing more than one fermion, we have found in particular that turning on a two-fermion operator can be achieved only if this is accompanied by a bosonic deformation, which necessarily turns on along the flow due to the interaction of the two-fermion deformation with the bulk fields. Therefore, marginally relevant operators containing at least two fermions are necessarily given by \(U(N|N)\) supermatrices with both diagonal and off-diagonal entries. Moreover, if the two fermionic deformations correspond to \((\eta,\bar{\eta})\) couplings with the same \(\ell\) the bosonic deformation turns out to be non-diagonal. This is not a problem in general, as \(M\) is generically non-diagonal (see for
instance equation (10) in [23]). However, since here we have restricted our investigation to diagonal \(M\)'s, we have disregarded these cases. The generalization to non-diagonal bosonic deformations would be worth investigating in the future.
Our results share some common features with similar results in \(\mathcal{N}=4\) SYM [25; 26], where the flow connects the ordinary loop sitting at the UV fixed point to the 1/2 BPS fixed point in the IR. In ABJM theory the analogous pattern is realised by the mixed bosonic plus one-fermion deformation connecting \(W^{-}\) to \(W^{+}_{1/2}\). However, as usual, ABJM offers a much broader spectrum of fixed points. Indeed this particular realisation is only one of the many possibilities depicted in figure 1.
One of the most striking results we have obtained is that the ordinary \(W^{\pm}\) operators, which differ only by an apparently harmless overall sign in the scalar coupling matrix \(M\), exhibit a very different nature at quantum level, one being an IR stable fixed point and the other a UV unstable one. Similarly, the \(W^{\pm}_{1/2}\) BPS Wilson loops which differ simply by the choice of \(\ell=1\) or \(-1\) in the bosonic and fermionic couplings, related to the two possible ways of solving the supersymmetry preserving constraints [56], turn out to behave in an opposite way under deformations. In addition, the \(W^{-}_{1/2}\) WL seems to describe a non-unitary dCFT. This is consistent with what was found in [35; 36] for mesonic line operators in Chern-Simons-matter theories. While we have provided a perturbative explanation of these features, it would be nice to have a more physical interpretation.
Focusing on 1/2 BPS operators, we recall that in field theory \(W^{+}_{1/2}\) and \(W^{-}_{1/2}\) can be obtained by a Higgsing construction that makes use of heavy \(W\)-particles and \(W\)-antiparticles, respectively [43] (\(W\) and \(\tilde{W}\) there). This provides already some indication that they could give rise to physically different dCFTs. This seems to be confirmed also holographically. In fact, as shown in [43], the two 1/2 BPS fixed points have different classical M-theory duals:10 while \(W^{+}_{1/2}\) is dual to a M2-brane configuration in AdS\({}_{4}\times\)S\({}^{7}/\mathbb{Z}_{k}\), \(W^{-}_{1/2}\) is described by an anti-M2-brane. Although this seems to be consistent with our findings, it is not clear how to exactly relate the opposite behavior under RG flow found in this paper to the two different holographic descriptions. This is a very interesting question that deserves further investigation.
Footnote 10: A first quantum perturbative calculation around this classical configuration can be found in [60].
A less clear picture exists for ordinary WLs. As is well-known, the holographic descriptions of different Wilson loops with different amount of supersymmetry differ by the set of boundary conditions of the corresponding string configuration in the internal space being Dirichlet or Neumann. A naive comparison with the \(\mathcal{N}=4\) SYM setting suggests that our "ordinary" \(SU(4)\) Wilson loop operators should be described by strings satisfying pure Neumann boundary conditions, that is strings smeared over \(\mathbb{CP}^{3}\). It is not clear, however, how a distinction between \(W^{+}\) and \(W^{-}\) should arise from the dual point of view. Of course, a holographic input would allow for a better understanding of our results.
Our current discussion includes the enriched flows studied in [23], when we focus on the particular case of BPS RG trajectories connecting \(W_{1/6}\) and \(W^{\pm}_{1/2}\). These are depicted as green lines in figure 15. The green curve connecting \(W_{1/6}\) and \(W^{+}_{1/2}\) corresponds to the parabolic RG trajectory in figure 9(c) given by \(\zeta=\chi_{1}^{2}\). It consists of a smooth
interpolation made by a continuum of 1/6 BPS fermionic loops, hence the name "enriched flow". It corresponds exactly to the green trajectory in figure 13 of [23]. The enriched trajectory connecting \(W_{1/2}^{-}\) and \(W_{1/6}\), instead, is not directly included in the analysis of section 5, as the \(W_{1/6}\) fixed point should correspond to the one considered there, but with \(M\to-M\). Of course, it is straightforward to duplicate the analysis of section 5 in this case and pinpoint the enriched trajectory. The \(W_{1/2}^{-}\) operator, which turns out to be a repulsive fixed point, coincides with \(\mathcal{W}_{1/2}^{II}\) of [23] upon an R-symmetry rotation. We thus complete the analysis of [23] with new information about enriched RG flows connecting the repulsive \(\mathcal{W}_{1/2}^{\rm II}\) to \(W_{1/6}\). For the interested reader, we discuss details in appendix D.
Our results are perturbative in the ABJM coupling, so higher order corrections to the \(\beta\)-functions might potentially modify the spectrum of fixed points. Checking what happens to them at higher loops is an interesting open question that would be worth addressing.
## Acknowledgements
We are grateful to Diego Correa and Guillermo Silva for discussions. LC, SP and MT are partially supported by the INFN grant _Gauge Theories, Strings and Supergravity (GSS)_. DT is supported in part by the INFN grant _Gauge and String Theory (GAST)_. DT would like to thank FAPESP's partial support through the grants 2016/01343-7 and 2019/21281-4.
Figure 15: Representation of enriched flows connecting \(W_{1/6}\) and \(W_{1/2}^{\pm}\).
## A Conventions and Feynman rules
For ABJM theory we follow the conventions in [61]. We work in three-dimensional Euclidean space with coordinates \(x^{\mu}=(x^{0},x^{1},x^{2})\). The three-dimensional gamma matrices are defined as
\[(\gamma^{\mu})_{\alpha}^{\ \beta}=(-\sigma^{3},\sigma^{1},\sigma^{2})_{\alpha}^{ \ \beta}\,,\] (A.1)
with \((\sigma^{i})_{\alpha}^{\ \beta}\) (\(\alpha,\beta=1,2\)) being the Pauli matrices, such that \(\gamma^{\mu}\gamma^{\nu}=\delta^{\mu\nu}+i\epsilon^{\mu\nu\rho}\gamma_{\rho}\), where \(\epsilon^{123}=\epsilon_{123}=1\) is totally antisymmetric. Spinorial indices are lowered and raised as \((\gamma^{\mu})_{\ \beta}^{\alpha}=\epsilon^{\alpha\gamma}(\gamma^{\mu})_{ \gamma}^{\ \delta}\epsilon_{\beta\delta}\), with \(\epsilon_{12}=-\epsilon^{12}=1\). The Euclidean action of \(U(N)_{k}\times U(N)_{-k}\) ABJM theory is
\[S_{\text{ABJM}}= \frac{k}{4\pi}\int d^{3}x\,\epsilon^{\mu\nu\rho}\Big{\{}-i \text{Tr}\left(A_{\mu}\partial_{\nu}A_{\rho}+\frac{2i}{3}A_{\mu}A_{\nu}A_{ \rho}\right)+i\text{Tr}\left(\hat{A}_{\mu}\partial_{\nu}\hat{A}_{\rho}+\frac{2 i}{3}\hat{A}_{\mu}\hat{A}_{\nu}\hat{A}_{\rho}\right)\] (A.2) \[+\text{Tr}\left[\frac{1}{\xi}(\partial_{\mu}A^{\mu})^{2}-\frac{1 }{\xi}(\partial_{\mu}\hat{A}^{\mu})^{2}+\partial_{\mu}\bar{c}D^{\mu}c-\partial _{\mu}\bar{\hat{c}}D^{\mu}\hat{c}\right]\Big{\}}\] \[+\int d^{3}x\text{Tr}\left[D_{\mu}C_{I}D^{\mu}\bar{C}^{I}+i\bar{ \psi}^{I}\gamma^{\mu}D_{\mu}\psi_{I}\right]\] \[-\frac{2\pi i}{k}\int d^{3}x\text{Tr}\Big{[}\bar{C}^{I}C_{I}\psi_{ J}\bar{\psi}^{J}-C_{I}\bar{C}^{I}\bar{\psi}^{J}\psi_{J}+2C_{I}\bar{C}^{J} \bar{\psi}^{I}\psi_{J}\] \[\qquad\qquad\qquad\qquad-2\bar{C}^{I}C_{J}\psi_{I}\bar{\psi}^{J} -\epsilon_{IJKL}\bar{C}^{I}\bar{\psi}^{J}\bar{C}^{K}\bar{\psi}^{L}+\epsilon^{ IJKL}C_{I}\psi_{J}C_{K}\psi_{L}\Big{]}+S_{\text{int}}^{\text{bos}}\,,\]
with covariant derivatives defined as
\[D_{\mu}C_{I} =\partial_{\mu}C_{I}+iA_{\mu}C_{I}-iC_{I}\hat{A}_{\mu}\,,\qquad D _{\mu}\bar{C}^{I}=\partial_{\mu}\bar{C}^{I}-i\bar{C}^{I}A_{\mu}+i\hat{A}_{\mu }\bar{C}^{I}\,,\] \[D_{\mu}\bar{\psi}^{I} =\partial_{\mu}\bar{\psi}^{I}+iA_{\mu}\bar{\psi}^{I}-i\bar{\psi}^{ I}\hat{A}_{\mu}\,,\qquad D_{\mu}\psi_{I}\,=\partial_{\mu}\psi_{I}-i\psi_{I}A_{ \mu}+i\hat{A}_{\mu}\psi_{I}\,.\]
We work in Landau gauge for vector fields and in dimensional regularization with \(d=3-2\epsilon\). The tree-level propagators are (with \(g=\sqrt{2\pi/k}\))
\[\langle(A_{\mu})_{p}^{\ q}(x)(A_{\nu})_{r}^{\ s}(y)\rangle^{(0)} =\delta_{p}^{s}\delta_{r}^{q}\,ig^{2}\,\frac{\Gamma(\frac{3}{2}- \epsilon)}{2\pi^{\frac{3}{2}-\epsilon}}\frac{\epsilon_{\mu\nu\rho}(x-y)^{\rho }}{|x-y|^{3-2\epsilon}},\] (A.3) \[\langle(\hat{A}_{\mu})_{\hat{p}}^{\ \ \hat{q}}(x)(\hat{A}_{\nu})_{\hat{r}}^{ \ \hat{s}}(y)\rangle^{(0)} =-\delta_{\hat{p}}^{\hat{q}}\delta_{r}^{\hat{q}}\,ig^{2}\,\frac{ \Gamma(\frac{3}{2}-\epsilon)}{2\pi^{\frac{3}{2}-\epsilon}}\frac{\epsilon_{\mu \nu\rho}(x-y)^{\rho}}{|x-y|^{3-2\epsilon}},\] \[\langle(\psi^{\alpha})_{\hat{i}}^{j}(x)(\bar{\psi}^{J}_{\beta})_{ k}^{\ \hat{l}}(y)\rangle^{(0)} =-i\delta_{I}^{j}\delta_{i}^{j}\delta_{k}^{j}\frac{\Gamma(\frac{3}{2 }-\epsilon)}{2\pi^{\frac{3}{2}-\epsilon}}\frac{(\gamma_{\mu})_{\beta}^{\alpha} (x-y)^{\mu}}{|x-y|^{3-2\epsilon}}\] \[=i\delta_{I}^{J}\delta_{i}^{\hat{l}}\delta_{k}^{j}(\gamma_{\mu}) _{\ \beta}^{\alpha}\partial_{\mu}\left(\frac{\Gamma(\frac{1}{2}-\epsilon)}{4\pi^{ \frac{3}{2}-\epsilon}}\frac{1}{|x-y|^{1-2\epsilon}}\right),\]
\[\langle(C_{I})_{i}^{\ \hat{j}}(x)(\bar{C}^{J})_{\hat{k}}^{\ \hat{l}}(y)\rangle^{(0)} =\delta_{I}^{J}\delta_{i}^{\hat{l}}\delta_{\hat{k}}^{\hat{j}} \frac{\Gamma(\frac{1}{2}-\epsilon)}{4\pi^{\frac{3}{2}-\epsilon}}\frac{1}{|x-y| ^{1-2\epsilon}},\]
while the one-loop propagators read
\[\langle(A_{\mu})_{p}^{\ q}(x)(A_{\nu})_{r}^{s}(y)\rangle^{(1)} =\delta_{p}^{s}\delta_{r}^{q}\,g^{4}N\,\frac{\Gamma^{2}(\frac{1}{2}- \epsilon)}{4\pi^{3-2\epsilon}}\left[\frac{\delta_{\mu\nu}}{|x-y|^{2-4\epsilon}}- \partial_{\mu}\partial_{\nu}\frac{|x-y|^{2\epsilon}}{4\epsilon(1+2\epsilon)} \right],\] (A.4) \[\langle(\hat{A}_{\mu})_{\hat{p}}^{\ \ \hat{q}}(x)(\hat{A}_{\nu})_{\hat{r}}^{ \hat{s}}(y)\rangle^{(1)} =\delta_{\hat{p}}^{\hat{s}}\delta_{\hat{r}}^{\hat{q}}\,g^{4}N\, \frac{\Gamma^{2}(\frac{1}{2}-\epsilon)}{4\pi^{3-2\epsilon}}\left[\frac{\delta_{ \mu\nu}}{|x-y|^{2-4\epsilon}}-\partial_{\mu}\partial_{\nu}\frac{|x-y|^{2\epsilon}}{4 \epsilon(1+2\epsilon)}\right]\,.\]
The latin indices are color indices. For instance, \((A_{\mu})_{p}^{\ q}\equiv A_{\mu}^{a}(T^{a})_{p}^{\ q}\) where \(T^{a}\) are \(U(N)\) generators in the fundamental representation.
## Appendix B Wilson loops in ABJM theory
BPS Wilson loops.In ABJM theory there is a wide set of BPS operators one can construct, which may carry some parametric dependence that allows for a continuous interpolation between different observables. Such set was the main character of [23]. Here we focus instead on non-BPS flows and only a few BPS operators within the aforementioned set appear in our discussion. We list these below, together with the pictorial representation used in the figures throughout the paper.
We start from the simplest realization where the loops are charged under a single node and are bosonic. In this case we may have operators separately charged under the first node, associated with \(A\), or the second node, associated with \(\hat{A}\), of the ABJM quiver. In Euclidean signature they are explicitly given by
\[\bullet\quad\begin{cases}W_{1/6}=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\mathcal{A}\bigg{)}\,,\quad\mathcal{A}\equiv A_{\mu}\dot{x}^{\mu}-ig^{2}|\dot{x}|M_{I}{}^{J}C^{I}\bar{C}_{J}\,,\\ \hat{W}_{1/6}=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\hat{\mathcal{A}}\bigg{)}\,,\quad\hat{\mathcal{A}}\equiv\hat{A}_{\mu}\dot{x}^{\mu}-ig^{2}|\dot{x}|M_{I}{}^{J}\bar{C}_{J}C^{I},\end{cases} \tag{B.1}\]
where \(g^{2}=2\pi/k\). In both cases for \(M=\pm\operatorname{diag}(-1,-1,1,1)\) (plus any other permutation of the diagonal entries) these operators become \(1/6\) BPS. Whenever present in the figures, these \(SU(2)\times SU(2)\) symmetric objects are represented as blue dots/balls.
In addition, we also have a pair of \(1/2\) BPS operators that are charged under both nodes of the quiver and are therefore written in terms of a superconnection \(\mathcal{L}\). Explicitly, we have
\[\blacktriangle\quad W_{1/2}^{\pm}=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\mathcal{L}^{\pm}\bigg{)}\,,\quad\mathcal{L}^{\pm}\equiv\begin{pmatrix}\mathcal{A}&-ig\eta\bar{\psi}^{1}\\ -ig\psi_{1}\bar{\eta}&\hat{\mathcal{A}}\end{pmatrix}\,, \tag{B.2}\]
where \(\mathcal{A}\) and \(\hat{\mathcal{A}}\) are as in (B.1) but with scalar coupling matrix \(M=\ell\operatorname{diag}(-1,1,1,1)\). Correspondingly the commuting spinors \(\eta\) and \(\bar{\eta}\) are those in (4.3). \(W_{1/2}^{+}\) is then obtained by setting \(\ell=1\), while \(W_{1/2}^{-}\) corresponds to \(\ell=-1\). In our figures these \(SU(3)\) symmetric BPS objects are represented as red triangles/pyramids.
Non-BPS Wilson loops.We find fixed points corresponding to non-BPS operators that are \(SU(3)\) invariant. As in the BPS case, for bosonic operators these can be defined separately for each node of the ABJM quiver,
\[\blacktriangle\quad\begin{cases}W=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\mathcal{A}\bigg{)}\,,\quad\mathcal{A}=A_{\mu}\dot{x}^{\mu}-ig^{2}|\dot{x}|M_{I}{}^{J}C^{I}\bar{C}_{J}\,,\\ \hat{W}=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\hat{\mathcal{A}}\bigg{)}\,,\quad\hat{\mathcal{A}}=\hat{A}_{\mu}\dot{x}^{\mu}-ig^{2}|\dot{x}|M_{I}{}^{J}\bar{C}_{J}C^{I}.\end{cases} \tag{B.3}\]
with \(M=\pm\operatorname{diag}(-1,1,1,1)\) or permutations. Such options should be equivalent up to R-symmetry rotations.
In addition, we also find \(SU(3)\) fixed points that are fermionic, in which case they are defined as
\[\blacktriangle\hskip 8.535827ptW=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\mathcal{L}\bigg{)}\,,\quad\mathcal{L}\equiv\begin{pmatrix}\mathcal{A}&-ig\eta\bar{\psi}^{1}\\ -ig\psi_{1}\bar{\eta}&\hat{\mathcal{A}}\end{pmatrix}\,, \tag{B.4}\]
with \(\mathcal{A},\hat{\mathcal{A}}\) given as in (B.1) but now with \(M=\ell\,\text{diag}(-1,-3,-3,-3)\), and \(\eta\), \(\bar{\eta}\) given in (4.3). Such points are shown in figures 9(a) and 11(b).
Finally, there are also \(SU(2)\) fermionic fixed points, denoted as black circles. These are defined as
\[\blacktriangle\hskip 8.535827ptW=\operatorname{Tr}\mathcal{P}\exp\bigg{(}-i\oint d\tau\,\mathcal{L}\bigg{)}\,,\quad\mathcal{L}\equiv\begin{pmatrix}\mathcal{A}&-ig\eta\bar{\psi}^{1}\\ -ig\psi_{1}\bar{\eta}&\hat{\mathcal{A}}\end{pmatrix}\,, \tag{B.5}\]
with \(M=\text{diag}(-1,1,-3,-3)\) or \(M=\text{diag}(-1,-3,1,1)\) in \(\mathcal{A},\hat{\mathcal{A}}\). They appear in figures 9(b) and 9(c).
## Appendix C Renormalization computations
In this section we report the explicit calculation of the \(\beta\)-functions. We follow what was done for the 1/24 BPS interpolating Wilson loop in [23]. We study the renormalization of the quantum field theory defined by the following effective action
\[S_{\text{eff}}=S_{\text{ABJM}}+\int d\tau\,\bar{\Psi}\left(\partial_{\tau}+i\mathcal{L}\right)\Psi\,, \tag{C.1}\]
where \(\mathcal{L}\) is the deformed Wilson loop (super)connection and we have defined the one-dimensional Grassmann odd superfield
\[\Psi=\begin{pmatrix}z&\varphi\\ \tilde{\varphi}&\tilde{z}\end{pmatrix}\,,\qquad\bar{\Psi}=\begin{pmatrix}\bar{z}&\bar{\tilde{\varphi}}\\ \bar{\varphi}&\bar{\tilde{z}}\end{pmatrix}\,, \tag{C.2}\]
where \(z\) (\(\tilde{z}\)) and \(\varphi\) (\(\tilde{\varphi}\)) are a spinor and a scalar, respectively, in the fundamental representation of \(U(N)\).
The tree-level propagators of the one-dimensional fields are
\[\begin{split}\langle z^{i}(\tau_{1})\bar{z}_{j}(\tau_{2})\rangle&=\delta^{i}_{j}\,\theta(\tau_{1}-\tau_{2})\,,\\ \langle\tilde{z}^{\tilde{i}}(\tau_{1})\bar{\tilde{z}}_{j}(\tau_{2})\rangle&=\delta^{\tilde{i}}_{j}\,\theta(\tau_{1}-\tau_{2})\,,\\ \langle\varphi^{\hat{i}}(\tau_{1})\bar{\varphi}_{j}(\tau_{2})\rangle&=\delta^{\hat{i}}_{j}\,\theta(\tau_{1}-\tau_{2})\,,\\ \langle\tilde{\varphi}^{\hat{i}}(\tau_{1})\bar{\tilde{\varphi}}_{j}(\tau_{2})\rangle&=\delta^{\hat{i}}_{j}\,\theta(\tau_{1}-\tau_{2})\,.\end{split} \tag{C.3}\]
In order to renormalize the theory, for each one-dimensional field \(\phi=\{\varphi,\tilde{\varphi},z,\tilde{z}\}\) we introduce the corresponding renormalization function as \(\phi=Z_{\phi}^{-\frac{1}{2}}\phi_{0}\), where \(\phi_{0}\) stands for the bare quantity.
Focusing on the one-loop renormalization of the \(\zeta_{i}\) parameters in the bosonic deformation (3.2), we need to consider the scalar vertex \(\bar{z}C\bar{C}z\) (similarly for the other one-dimensional fields). We define the renormalization functions \(Z_{\zeta_{i}}\) such that
\[(\zeta_{i})_{0}=Z_{\zeta_{i}}\zeta_{i}=(1+\delta_{\zeta_{i}})\zeta_{i}\,.\] (C.4)
We write the action (C.1) as a function of the renormalized parameters adding the counterterm \(g^{2}\delta{M_{I}}^{J}\bar{z}C_{J}\bar{C}^{I}z\) with
\[\delta{M_{I}}^{J}=2\begin{pmatrix}\delta_{\zeta_{1}}\zeta_{1}&0&0&0\\ 0&\delta_{\zeta_{2}}\zeta_{2}&0&0\\ 0&0&\delta_{\zeta_{3}}\zeta_{3}&0\\ 0&0&0&\delta_{\zeta_{4}}\zeta_{4}\end{pmatrix}\,.\] (C.5)
For the other one-dimensional scalar field vertices the calculations are similar.
### Bosonic deformation
The divergent scalar one-loop diagrams related to the \(\bar{z}C\bar{C}z\) vertex are depicted in figures 16(a) and 16(b). We refer to [23] for the explicit computation. We find
\[\begin{split}\Gamma^{\text{16(a)}}&=-\frac{g^{4}N}{8\pi \epsilon}{M_{I}}^{K}{M_{K}}^{J}\int d\tau\,\bar{z}\,C_{J}\bar{C}^{I}z\,,\\ \Gamma^{\text{16(b)}}&=\frac{g^{4}N}{8\pi\epsilon}\int d \tau\,\bar{z}\,C_{I}\bar{C}^{I}z\,,\end{split}\] (C.6)
where \(M\) is the deformed scalar matrix.
These are the only divergent contributions to the vertex, as in the absence of fermions there is no field function renormalization at one loop. In minimal subtraction scheme the counterterm \(\delta{M_{I}}^{J}\) is then obtained by imposing
\[0=g^{2}\left[\delta{M_{I}}^{J}-\frac{g^{2}N}{8\pi\epsilon}{M_{I}}^{K}{M_{K}}^{ J}+\frac{g^{2}N}{8\pi\epsilon}\delta_{I}^{J}\right]\int d\tau\bar{z}C_{J}\bar{C}^{I}z\,.\] (C.7)
From each diagonal element of (C.7) we find the corresponding counterterm
\[\delta_{\zeta_{i}}\zeta_{i}=\frac{g^{2}N}{4\pi\epsilon}(\zeta_{i}-1)\zeta_{i}\,.\] (C.8)
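To spell out the diagonal step from (C.7) to (C.8): identifying the diagonal entries of the deformed scalar matrix with the \(\zeta_{i}\) as in (C.5), i.e. writing \(M_{I}{}^{J}=(2\zeta_{i}-1)\,\delta_{I}^{J}\) (our reading of the bosonic deformation (3.2), which is not reproduced here), the \(i\)-th diagonal component of (C.7) gives

\[2\,\delta_{\zeta_{i}}\zeta_{i}=\frac{g^{2}N}{8\pi\epsilon}\left[(2\zeta_{i}-1)^{2}-1\right]=\frac{g^{2}N}{2\pi\epsilon}\,\zeta_{i}(\zeta_{i}-1)\,,\]

which is precisely (C.8).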
Figure 16: One-loop corrections to the \(\bar{z}C\bar{C}z\) vertex. (a) and (b) are purely bosonic diagrams, while (c) contains fermions.
In dimensional regularization, \(d=3-2\epsilon\) with minimal subtraction, the \(\zeta_{i}\) parameters are dimensionless while \(g^{2}\) has dimension \(\Delta_{g^{2}}=2\epsilon\). We can then write the \(\beta\)-functions for \(\zeta_{i}\) as
\[\beta_{\zeta_{i}}=2g^{2}\frac{\partial K_{\zeta_{i}}}{\partial g^{2}}\,, \tag{C.9}\]
where \(K_{\zeta_{i}}\) are the coefficients of the divergent part of \((\zeta_{i})_{0}\) as a function of \(\zeta_{i}\) (see for instance equation (3.43) in [23]). Therefore, combining (C.8) with (C.9), we eventually find
\[\beta_{\zeta_{i}}=\frac{g^{2}N}{2\pi}(\zeta_{i}-1)\zeta_{i}\,. \tag{C.10}\]
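As a quick consistency check of this extraction rule: combining (C.4) with (C.8), the bare coupling reads \((\zeta_{i})_{0}=\zeta_{i}+\frac{1}{\epsilon}K_{\zeta_{i}}\) with \(K_{\zeta_{i}}=\frac{g^{2}N}{4\pi}(\zeta_{i}-1)\zeta_{i}\), so that

\[\beta_{\zeta_{i}}=2g^{2}\frac{\partial K_{\zeta_{i}}}{\partial g^{2}}=\frac{g^{2}N}{2\pi}(\zeta_{i}-1)\zeta_{i}\,,\]

i.e. the result quoted above.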
### Fermionic deformation
The presence of fermions in the Wilson loop superconnection gives rise to one extra divergent diagram, figure 16(c), which contributes to the \(\bar{z}C\bar{C}z\) vertex renormalization. Moreover, we also have to take into account the renormalization of the \(z\) field.
Diagram 16(c) gives two different contributions. The first one arises from the ABJM vertex \(C_{I}\bar{C}^{I}\bar{\psi}^{J}\psi_{J}\). This contribution is not affected by the two possible values of \(\ell\) in the definition of \(\eta,\bar{\eta}\). Turning on only the \(i\)-th deformation we find
\[\Gamma_{1}^{\rm 16(c)}=\frac{g^{2}N}{4\pi\epsilon}\chi_{i}^{2}\int d\tau\,\bar{z}C_{I}\bar{C}^{I}z\,,\qquad i=1,2,3,4\,. \tag{C.11}\]
This is a divergent contribution to the diagonal scalar coupling matrix \(M_{I}^{\phantom{I}J}\) to be included in (100). More generally, if we turn on more than one \(\chi\)-deformation the overall contribution will be the sum of \(\Gamma_{1}^{\rm 16(c)}\), one for each \(\chi_{i}\).
The second contribution comes from using the ABJM vertex \(C_{I}\bar{C}^{J}\bar{\psi}^{I}\psi_{J}\). Also in this case, if we turn on only one fermion the result is not affected by the choice of \(\ell\). For the \(i\)th deformation we find
\[\Gamma_{2}^{\rm 16(c)}=-\frac{g^{2}N}{2\pi\epsilon}\chi_{i}^{2}\int d\tau\,\bar{z}C_{I}\bar{C}^{I}z\,,\qquad i=1,2,3,4\,. \tag{C.12}\]
However, from the point of view of the \(\ell\) dependence this time things drastically change if we turn on more than one fermion. In fact, if we consider a two-fermion deformation corresponding to \(\chi_{a}\) with \(\ell_{a}\) and \(\chi_{b}\) with \(\ell_{b}\) (with \(a\neq b\)) we find
\[\Gamma_{2}^{\rm 16(c)}=-\frac{g^{2}N}{2\pi\epsilon}\bigg{(}\chi_{a}^{2}\,\delta_{I}^{a}\delta_{a}^{J}+\chi_{b}^{2}\,\delta_{I}^{b}\delta_{b}^{J}-\frac{|\ell_{a}+\ell_{b}|}{2}\chi_{a}\chi_{b}\left(\delta_{I}^{a}\delta_{b}^{J}+\delta_{I}^{b}\delta_{a}^{J}\right)\bigg{)}\int d\tau\,\bar{z}C_{J}\bar{C}^{I}z\,, \tag{C.13}\]
where there is no implicit sum over \(a\) and \(b\). Therefore, if we turn on two fermions with the same \(\ell\), off-diagonal contributions arise in the scalar coupling matrix. On the contrary, turning on two fermions with opposite \(\ell\) leaves the scalar coupling matrix diagonal.
Since in the main text we restricted our analysis to diagonal scalar matrices, we consider only the case of double fermionic deformations with opposite \(\ell\)'s, let's say \(\ell_{a}=-\ell_{b}=\ell\).
In the present case extra divergent contributions to the vertex come from the field function renormalization. In fact, we recall that in the presence of fermions the scalar coupling matrix renormalizes as [23]
\[(M_{I}^{\phantom{I}J})_{0}=Z_{z}{}^{-1}\left(M_{I}^{\phantom{I}J}+\delta M_{I}^{\phantom{I}J}\right)\simeq(1-\delta_{z})M_{I}^{\phantom{I}J}+\delta M_{I}^{\phantom{I}J}\,, \tag{C.14}\]
with \(\delta_{z}=-\frac{g^{2}N}{4\pi\epsilon}\ell(\chi_{a}^{2}-\chi_{b}^{2})\).
Proceeding as done in C.1, we impose the matrix counterterm \((\delta M-\delta_{z}M)\) to cancel divergent contributions (C.7, C.12, C.13). We write
\[\begin{split} 0=g^{2}\bigg{[}\delta M_{I}{}^{J}-\frac{g^{2}N}{8\pi \epsilon}\big{(}M_{I}{}^{K}M_{K}^{J}-\delta_{I}^{J}\big{)}&+\frac {g^{2}N}{4\pi\epsilon}\big{(}\chi_{a}^{2}+\chi_{b}^{2}\big{)}\delta_{I}^{J}\\ &-\frac{g^{2}N}{2\pi\epsilon}\big{(}\chi_{a}^{2}\delta_{I}^{a} \delta_{a}^{J}+\chi_{b}^{2}\delta_{I}^{b}\delta_{b}^{J}\big{)}\bigg{]}\int d \tau\,\bar{z}C_{J}\bar{C}^{I}z\,,\end{split}\] (C.15)
from which we find
\[(\zeta_{a})_{0} =\left[1+\frac{g^{2}N}{4\pi\epsilon}\left(\zeta_{a}-1+\ell( \chi_{a}^{2}-\chi_{b}^{2})+\frac{(1-\ell)}{2}\frac{(\chi_{a}^{2}-\chi_{b}^{2} )}{\zeta_{a}}\right)\right]\zeta_{a}\,,\] (C.16) \[(\zeta_{b})_{0} =\left[1+\frac{g^{2}N}{4\pi\epsilon}\left(\zeta_{b}-1+\ell( \chi_{a}^{2}-\chi_{b}^{2})-\frac{(1+\ell)}{2}\frac{(\chi_{a}^{2}-\chi_{b}^{2} )}{\zeta_{b}}\right)\right]\zeta_{b}\,,\] \[(\zeta_{c})_{0} =\left[1+\frac{g^{2}N}{4\pi\epsilon}\left(\zeta_{c}-1+\ell( \chi_{a}^{2}+\chi_{b}^{2})-\frac{(1+\ell)}{2}\frac{\chi_{a}^{2}}{\zeta_{c}}- \frac{(1-\ell)}{2}\frac{\chi_{b}^{2}}{\zeta_{c}}\right)\right]\zeta_{c}\,, \quad c\neq a,b\,.\]
Therefore, the \(\beta\)-functions read
\[\begin{split}\beta_{\zeta_{a}}&=\frac{g^{2}N}{2\pi} \left(\zeta_{a}-1+\ell(\chi_{a}^{2}-\chi_{b}^{2})+\frac{(1-\ell)}{2}\frac{( \chi_{a}^{2}-\chi_{b}^{2})}{\zeta_{a}}\right)\zeta_{a}\,,\\ \beta_{\zeta_{b}}&=\frac{g^{2}N}{2\pi}\left(\zeta_{b }-1+\ell(\chi_{a}^{2}-\chi_{b}^{2})-\frac{(1+\ell)}{2}\frac{(\chi_{a}^{2}-\chi _{b}^{2})}{\zeta_{b}}\right)\zeta_{b}\,,\\ \beta_{\zeta_{c}}&=\frac{g^{2}N}{2\pi}\left(\zeta_{c }-1+\ell(\chi_{a}^{2}-\chi_{b}^{2})-\frac{(1+\ell)}{2}\frac{\chi_{a}^{2}}{\zeta _{c}}-\frac{(1-\ell)}{2}\frac{\chi_{b}^{2}}{\zeta_{c}}\right)\zeta_{c}\,, \quad c\neq a,b\,.\end{split}\] (C.17)
## Appendix D Enriched flows realization
As highlighted in figure 15, the three fixed points \(W_{1/6}\) and \(W_{1/2}^{\pm}\) can be connected both through enriched flows previously explored in [23] or through mixed flows as we study here. Naturally, we should be able to recover the former from the latter by specifically tuning our mixed bosonic plus one fermion deformation. In figure 18 we propose a picture that captures the connecting features of each construction.
Concretely, \(W_{1/2}^{+}\) can be obtained from \(W_{1/6}\) through a mixed flow in the \((\zeta_{2},\chi_{1})\) plane and \(\ell=1\), as described in section 5.1. An enriched flow, _i.e._ a trajectory that is BPS along all of its points (and not only at the fixed points), is then obtained imposing the constraint \(\zeta_{2}=\chi_{1}^{2}\). For generic \(\zeta_{2}\) the corresponding operators are the \(1/6\) BPS fermionic loops in green in figure 13 of [23], whereas at \(\zeta_{2}=1\) it becomes \(W_{1/2}^{+}\) (see figure 18(a)).
On the other hand, \(W_{1/6}\) can be obtained from \(W_{1/2}^{-}\) through a mixed bosonic plus one-fermion flow with \(\ell=-1\). We have not considered such a construction explicitly in the main text and for completeness we include it here. The realization involves replicating (6.11)-(6.13) with the additional bosonic contribution from \(\zeta_{2}\). Starting with \(W_{1/2}^{-}\) in eq. (B.2) with \(\ell=-1\), we add the mixed deformation
\[\hat{d}=-2g^{2}\zeta_{2}\begin{pmatrix}C_{2}\bar{C}^{2}&0\\ 0&\bar{C}^{2}C_{2}\end{pmatrix}+g\tilde{\chi}_{1}\begin{pmatrix}0&\eta\bar{ \psi}^{1}\\ \psi_{1}\bar{\eta}&0\end{pmatrix}\,.\] (D.1)
The corresponding RG flows are then described by the fermionic \(\beta\)-function in (6.13) together with
\[\beta_{\zeta_{2}}=\frac{g^{2}N}{2\pi}\big{(}(\zeta_{2}-2)-\tilde{\chi}_{1}(\tilde{ \chi}_{1}-2)\big{)}\zeta_{2}\,.\] (D.2)
We depict the flow in figure 17. In particular, the enriched flow is obtained imposing the constraint \(\zeta_{2}=-\tilde{\chi}_{1}(\tilde{\chi}_{1}-2)\) and at the particular point where \(\zeta_{2}=1\) we recover \(W_{1/6}\) with \(M=\text{diag}(1,1,-1,-1)\).
Even though by imposing these constraints, either between \(\zeta_{2}\) and \(\chi_{1}\) or between \(\zeta_{2}\) and \(\tilde{\chi}_{1}\), we do recover enriched flows, these are not precisely the ones studied in [23], see figure 1 there. Indeed, \(W_{1/2}^{+}\) is nothing but \(\mathcal{W}_{1/2}^{\text{I}}\) introduced there, while \(W_{1/2}^{-}\) coincides with \(\mathcal{W}_{1/2}^{\text{II}}\) of [23] only upon an R-symmetry transformation that interchanges the \(1\) and \(4\) indices. In fact, if we look at the fermionic couplings, for instance, \(W_{1/2}^{-}\) couples to \(\psi_{1},\bar{\psi}^{1}\) whereas \(\mathcal{W}_{1/2}^{\text{II}}\) couples to \(\psi_{4},\bar{\psi}^{4}\). Nevertheless, consistently, under RG flows both \(W_{1/2}^{-}\) and \(\mathcal{W}_{1/2}^{\text{II}}\) share the same nature, in the sense that both are repulsive.
The R-symmetry difference between \(W_{1/2}^{-}\) and \(\mathcal{W}_{1/2}^{\text{II}}\) has its origin in a quite technical but nevertheless interesting aspect: from the point of view of enriched flows they are built upon different (but R-equivalent) \(1/6\) BPS operators.
To clarify this statement, we recall that the construction for enriched flows is based on the deformation of a \(1/6\) BPS bosonic operator. Such a deformation is written in terms of
Figure 17: Flows starting from \(W_{1/2}^{-}\). Fixed points correspond to bosonic operators, either \(SU(3)\) non-BPS or a \(1/6\) BPS \(W_{1/6}\). The trajectory in green corresponds to an enriched flow, made by a continuum of \(1/6\) BPS fermionic loops.
a matrix \(G\) that may include all four scalars of the theory via \(\alpha\) and \(\beta\) parameters,11
Footnote 11: This construction was originally proposed in the second chapter of [24] and then generalized in subsequent works.
For the endpoints of each type of flow one finds that, whereas \({\cal W}^{\rm I,II}_{1/2}\) necessarily share a subset of preserved supercharges, the pair \(W^{\pm}_{1/2}\) does not share any. Thus when studying the corresponding brane/anti-brane description in the dual setting along the lines of [43], we should keep in mind that mixed flows, and not the enriched ones, are the ones to be considered.
We have discussed one possible scenario of complementary choices of \(M\), suitable for turning on enriched flows realized by \(\alpha_{2},\bar{\alpha}^{2}\) (alias \(\zeta_{2},\chi_{1}\) mixed flows in this paper) or \(\beta^{3},\bar{\beta}_{3}\) (\(\zeta_{3},\chi_{4}\) here). Alternatively, we could consider complementary \(M\) pairs suitable for \(\alpha_{1},\bar{\alpha}^{1}\) or \(\beta^{4},\bar{\beta}_{4}\) (\(\zeta_{1}\) or \(\zeta_{4}\)) deformations. In this case we would unravel four extra 1/2 BPS points that, instead of coupling to fermions with R-symmetry index 1 or 4, would couple to those of index 2 or 3.
The eight possible realizations of 1/2 BPS fixed points are for instance collected in figure 2(a) of [43]. Referring to that diagrammatic representation, enriched flows interpolate between operators connected by solid red lines (operators that share 2/3 of preserved supercharges), whereas mixed flows connect neighbour operators sharing no common supercharges, denoted there as \(W_{i}\) and \(\tilde{W}_{i}\). |
2310.05697 | Combining recurrent and residual learning for deforestation monitoring using multitemporal SAR images | With its vast expanse, exceeding that of Western Europe by twice, the Amazon rainforest stands as the largest forest of the Earth, holding immense importance in global climate regulation. Yet, deforestation detection from remote sensing data in this region poses a critical challenge, often hindered by the persistent cloud cover that obscures optical satellite data for much of the year. Addressing this need, this paper proposes three deep-learning models tailored for deforestation monitoring, utilizing SAR (Synthetic Aperture Radar) multitemporal data moved by its independence on atmospheric conditions. Specifically, the study proposes three novel recurrent fully convolutional network architectures-namely, RRCNN-1, RRCNN-2, and RRCNN-3, crafted to enhance the accuracy of deforestation detection. Additionally, this research explores replacing a bitemporal with multitemporal SAR sequences, motivated by the hypothesis that deforestation signs quickly fade in SAR images over time. A comprehensive assessment of the proposed approaches was conducted using a Sentinel-1 multitemporal sequence from a sample site in the Brazilian rainforest. The experimental analysis confirmed that analyzing a sequence of SAR images over an observation period can reveal deforestation spots undetectable in a pair of images. Notably, experimental results underscored the superiority of the multitemporal approach, yielding approximately a five percent enhancement in F1-Score across all tested network architectures. Particularly the RRCNN-1 achieved the highest accuracy and also boasted half the processing time of its closest counterpart. | Carla Nascimento Neves, Raul Queiroz Feitosa, Mabel X. Ortega Adarme, Gilson Antonio Giraldi | 2023-10-09T13:16:20Z | http://arxiv.org/abs/2310.05697v1 | Combining recurrent and residual learning for deforestation monitoring using multitemporal SAR images
###### Abstract
With its vast expanse, more than twice that of Western Europe, the Amazon rainforest stands as the Earth's largest forest, holding immense importance in global climate regulation. Yet, deforestation detection from remote sensing data in this region poses a critical challenge, often hindered by the persistent cloud cover that obscures optical satellite data for much of the year. Addressing this need, this paper proposes three deep-learning models tailored for deforestation monitoring, utilizing SAR (Synthetic Aperture Radar) multitemporal data, motivated by its independence of atmospheric conditions. Specifically, the study proposes three novel recurrent fully convolutional network architectures, namely RRCNN-1, RRCNN-2, and RRCNN-3, crafted to enhance the accuracy of deforestation detection. Additionally, this research explores replacing bitemporal SAR pairs with multitemporal SAR sequences, motivated by the hypothesis that deforestation signs quickly fade in SAR images over time. A comprehensive assessment of the proposed approaches was conducted using a Sentinel-1 multitemporal sequence from a sample site in the Brazilian rainforest. The experimental analysis confirmed that analyzing a sequence of SAR images over an observation period can reveal deforestation spots undetectable in a pair of images. Notably, experimental results underscored the superiority of the multitemporal approach, yielding approximately a 5% enhancement in F1-Score across all tested network architectures. In particular, RRCNN-1 achieved the highest accuracy while requiring only half the processing time of its closest counterpart.
keywords: Remote sensing, Deforestation detection, SAR images +
Footnote †: journal: Computer Vision
## 1 Introduction
Remote sensing refers to acquiring information about an object from a remote location. This term is frequently employed to describe the imaging of the Earth's surface from an elevated perspective, such as via satellite (Parelius, 2023). Multitemporal remote sensing data can offer rich information for land-change monitoring (Shi et al., 2020; Ban & Yousif, 2016).
Change detection captures spatial differences in the state of an object by observing it at different times (Singh, 1989). In the Remote Sensing context, its purpose is to monitor environmental changes by jointly processing a set of images of the same geographical area acquired at different dates, which is essential for the management of natural resources, the conservation of ecosystems and biodiversity as well as decision support for sustainable development (Asokan & Anitha, 2019).
Change detection using remote sensing imagery assumes a crucial function in numerous fields of applications, including disaster monitoring (Zheng et al., 2021), biodiversity study (Newbold et al., 2015), desertification (Dawelbait & Morari, 2012), urbanization process (Han et al., 2017) and deforestation detection (De Bem et al., 2020), which is the focus of this research.
Among these applications, preserving the rainforests is critical for maintaining the health and stability of our planet's ecosystems. In particular, the Amazon rainforest, the Earth's largest forest, has suffered increasing deforestation rates in recent years, with Brazil concentrating the largest share of the losses (Giljum et al., 2022; Amin et al., 2019).
The Brazilian government tracks deforestation in the Amazon region through systematic satellite monitoring. For example, the Amazon Deforestation Monitoring Project (PRODES1) has provided annual reports about deforestation in the Brazilian Legal Amazon (BLA) since 1988 (Valeriano et al., 2004). One notable limitation of PRODES and several other deforestation monitoring systems is their dependency on optical data, which is frequently hindered by cloud cover throughout most of the year in tropical regions (Doblas et al., 2020).
Footnote 1: [http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/prodes](http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/prodes)
Various techniques have been applied to create change maps for deforestation detection, including simple differencing (Stauffer and McKinney, 1978), change vector analysis (Perbet et al., 2019), and traditional machine learning techniques like Support Vector Machines (Reis et al., 2020), Principal Component Analysis (Sule and Wood, 2020), Random Forest (Hethcoat et al., 2020), Maximum Likelihood (Diniz et al., 2022), and distance-based classifiers (Nicolau et al., 2021).
Apart from these methods and their variations, numerous publications have incorporated deep learning in remote sensing change detection and demonstrated its superiority over conventional change detection methods (Bai et al., 2022).
Recent deep-learning solutions for change detection include Autoencoders, U-Net and its variants (Li et al., 2022; Zheng et al., 2021), Recurrent Networks (Fang et al., 2023; Shi et al., 2022), Generative Adversarial Networks (Zhao et al., 2020), and Transformer-based networks (Chen et al., 2021; Wang et al., 2022).
A current research line (Panuju et al., 2020) focuses on combining different techniques to improve change detection accuracy in remote sensing. Moreover, Parelius (2023) points out that the growing abundance of satellite imagery offers an opportunity to shift focus from the traditional bitemporal change detection methods used in previous studies to models that exploit longer time series of images. This approach allows a richer set of data to be incorporated into change detection, a perspective that has so far remained relatively underexplored.
The present study follows this trend and seeks to develop solutions for change detection with remote sensing data, specifically for deforestation monitoring, by employing multitemporal SAR data and combinations of different deep learning techniques. This paper proposes three recurrent fully convolutional deep networks for pixel-wise deforestation detection from SAR images. Furthermore, this study assesses the application of image sequences instead of bitemporal SAR image pairs for deforestation detection. This approach is inspired by the hypothesis that deforestation signs in tropical forests tend to rapidly diminish in SAR imagery due to the natural process of forest regeneration. Consequently, deforestation detection becomes increasingly challenging when bi-temporal image acquisition intervals are too widely apart. By harnessing a sequence of multiple images captured between such intervals, we substantially enhance the prospects of accurate and timely deforestation detection.
The main contributions of this work are:
* Development of novel deep learning based solutions for automatic deforestation mapping and comparison with state-of-art methods.
* Addressing underexplored aspects in change detection literature, including the use of longer multitemporal data sequences.
* Experimental analysis of the proposed methods using SAR data provided by Sentinel-1 from a sample site of the Amazon forest.
This paper is organized as follows. Section 2 discusses the recent research and also the gaps in change detection with deep learning. Section 3 provides the theoretical background on the deep learning techniques
that inspired the architectures developed in the present work and Section 4 introduces the proposed models. Section 5 presents the employed data, the experimental setup, and the architectures used for comparison. Section 6 shows and discusses the experimental results. Finally, Section 7 presents the main conclusions drawn from this study.
## 2 Related Works
With the continuous advancement of Deep Learning methods within computer vision, their application has been extended to the problem of change detection in Remote Sensing due to their ability to capture complex and hierarchical features present in the data (Parelius, 2023). Hence, this section will discuss recent works with deep learning solutions for change detection problems.
The U-Net encoder-decoder architecture and its variants are commonly employed for change detection tasks among the reviewed models. For instance, Zheng et al. (2021) introduced the Cross-Layer Convolutional Neural Network (CLNet), which is a modified U-Net with Cross-Layer Blocks (CLB) incorporated to integrate multi-scale features and multi-level contextual information by temporarily splitting the input into two parallel asymmetrical branches using different convolution strides and concatenating the feature maps from the two branches. Experiments were conducted on two building change detection datasets: the Learning Vision and Remote Sensing Laboratory building change detection (LEVIR-CD) and WHU-CD, reaching superior performance compared to several state-of-the-art methods.
Wang et al. (2022) proposed the U-Net-like Visual Transformer for Change Detection (UVACD) for bitemporal image change detection. This network uses a CNN backbone for extracting semantic information followed by a visual transformer for feature enhancement that constructs change intensity tokens to complete the temporal information interaction and suppress irrelevant information weights to help obtain more distinguishable change features. The experiments were conducted on the WHU-CD and LEVIR-CD datasets. The authors reported a Precision of 94.58%, a Recall of 91.17%, an F1 score of 92.84%, and an IoU of 86.64% on the WHU-CD. UVACD outperformed some previous state of the art change detection methods in the experimental results.
Some U-Net variants proposed in the literature take the form of a double-stream architecture. One example is the Densely Attentive Refinement Network (DARNet) introduced in Li et al. (2022) to improve change detection on bitemporal very-high-resolution (VHR) remote sensing images. DARNet has a dense skip connections module between the encoder-decoder architecture, which combines features from various levels. A hybrid attention module is inserted at the skip connections level to combine temporal, spatial, and channel attention. Also, a recurrent refinement module is used to refine the predicted change map in the decoding process. The experimental results on the season-varying change detection (SVCD) dataset, the Sun Yat-sen University change detection (SYSU-CD) dataset, and the LEVIR-CD dataset outperformed state-of-the-art models.
Some deep learning-based models proposed for change detection include Recurrent Neural Networks (RNN) due to their ability to handle related data sequences. The sequence usually consists of images from two different time points (Parelius, 2023).
Fang et al. (2023) proposed a fine-grained Multi-Functional Radar (MFR) model followed by a multi-head attention-based bi-directional Long Short Term Memory (LSTM) network to capture relationships between successive pulses. This process uses the temporal features to predict the probability of each pulse being a change point. The simulation results achieved better performance than the compared convolutional and recurrent networks.
Shi et al. (2022) proposed a Multi-path Convolutional LSTM (MP-ConvLSTM) by combining LSTM and a CNN for change detection with bi-temporal hyperspectral images. A Siamese CNN was adopted to reduce the dimensionality of the images and extract preliminary features. A Convolutional LSTM was used to learn multi-level temporal dependencies among them. The MP-ConvLSTM was evaluated using four publicly available hyperspectral datasets acquired by the Earth Observing-1 (EO-1) Hyperion and obtained superior results than several state-of-the-art change detection algorithms, also exhibiting better trade-off between complexity and accuracy in general.
Papadomanolaki et al. (2019) presented a framework for urban change detection that combines a fully convolutional network (FCN) similar to U-Net for feature representation and a recurrent network for temporal modeling. The U-Net-based encoder-decoder architecture has a convolutional LSTM block added at all levels of the encoder. The authors evaluated the performance of this network using an ensemble cross-validation strategy on bi-temporal data from the Onera Satellite Change Detection (OSCD) Sentinel-2 dataset. The U-Net+LSTM model outperformed the regular U-Net.
Strategies using residual learning (He et al., 2016) to facilitate gradient convergence are also applied with FCN approaches for change detection since such a combination helps obtain a more comprehensive range of information (Khelifi and Mignotte, 2020; Shafique et al., 2022).
Basavaraju et al. (2022) introduced UCDNet, the Urban Change Detection Network. This model is based on an encoder-decoder architecture, using a version of spatial pyramid pooling (SPP) blocks for extracting multiscale features and residual connections for introducing additional maps of feature differences between the streams at each level of the encoder to improve change localization, aiming to obtain better predictions while preserving the shape of changed areas. UCDNet uses a proposed loss function, a combination of weighted class categorical cross-entropy (WCCE) and modified Kappa loss. The authors evaluated the network on bi-temporal multispectral Sentinel-2 satellite images from Onera Satellite Change Detection (OSCD), obtaining better results than the models used for comparison.
The Multiscale Residual Siamese Network fusing Integrated Residual Attention (IRA-MRSNet), proposed by Ling et al. (2022), introduced multi-resolution blocks that combine convolutions with kernels of different sizes to extract deep semantic information and features at multiple scales. In addition, this network utilizes an attention unit connecting the encoder and the decoder. In experiments conducted on Seasonal Change Detection Dataset (CDD) the network outperformed the counterpart methods.
Peng et al. (2019) presented an improved U-Net++ design where change maps could be learned from scratch using available annotated datasets. The authors adopted the U-Net++ model with dense skip connections as the backbone for learning multiscale feature maps from several semantic layers. Residual blocks are employed in the convolution unit, aiming for better convergence. Deep supervision is implemented by using multiple side-output fusion (MSOF) to combine change maps from different semantic levels, generating a final change map. They used the weighted binary cross-entropy loss and the dice coefficient loss to mitigate the impact of class imbalance. The performance of the proposed CD method was verified on a VHR satellite image dataset and achieved superior performance compared to the related methods.
Recent studies proposed change detection techniques by combining residual and recurrent learning. For instance, Khankeshizadeh et al. (2022) presented FCD-R2U-Net, a forest change detection method that includes a module for producing an enhanced forest fused difference image (EFFDI), to achieve a more efficient distinction of changes, followed by a Recurrent Residual U-Net (R2U-Net), applied to segment the EFFDI into the changed and unchanged areas. Experiments were conducted on four bi-temporal images acquired by the Sentinel 2 and Landsat 8 satellite sensors. The qualitative and quantitative results demonstrated the effectiveness of the proposed EFFDI in distinguishing true forest changes from the background. Regarding the qualitative results, forest changes and their geometrical details were better preserved by FCD-R2U-Net, compared with U-Net, ResU-Net, and U-Net++. The proposed network also obtained superior results in the quantitative analysis.
Moustafa et al. (2021) proposed a change detection architecture named Attention Residual Recurrent U-Net (Att R2U-Net), inspired by R2U-Net and attention U-Net. This study supports the notion that deep neural networks can learn complex features and improve change detection performance when combined with hyperspectral data. Three hyperspectral change detection datasets with class imbalance and small regions of interest were employed to evaluate the performance of the proposed method for binary and multiclass change cases. The results were compared with U-Net, ResU-Net, R2U-Net, and Attention U-Net. Att R2U-Net outperformed the counterpart methods in almost all metrics and cases.
As observed in the previously mentioned related works, in addition to developing new algorithms, combinations of available techniques are being considered to improve the accuracy of change detection in remote sensing. This is a research focus pointed out by Panuju et al. (2020).
## 3 Deep Learning Approaches Background
This section presents the deep learning approaches explored for constructing the change detection frameworks proposed in this study. The architectures are rooted in recurrent residual learning.
### Residual Networks
To enhance the training process of deep convolutional neural networks (CNNs), Residual Networks (ResNets) were conceived based on the observation that as neural networks grow deeper, they typically encounter elevated training errors, particularly when the network's depth becomes substantially large.
He et al. (2016) introduced the so-called residual blocks (see Figure 1), equipped with skip connections or shortcuts. In a residual block, the input to a layer is combined with the output of that layer, allowing the network to pass information through directly without significant alteration. As gradients backpropagate during training, they flow nearly unaltered through these skip connections, improving convergence at the earlier layers.
He et al. (2016) define the building block as
\[\mathbf{y}=\mathcal{F}(\mathbf{x},\{\mathcal{W}_{i}\})+\mathbf{x}\, \tag{1}\]
where \(\mathbf{x}\) and \(\mathbf{y}\) are the input and output of the layers considered and the function \(\mathcal{F}(\mathbf{x},\{\mathcal{W}_{i}\})\) denotes the residual mapping with learnable weights assembled in \(\mathcal{W}_{i}\). The example in Figure 1 has two layers, so \(\mathcal{F}=W_{2}\ \sigma(W_{1}\mathbf{x})\), with \(\sigma\) representing the rectified linear unit (ReLU) activation function (Nair and Hinton, 2010). Biases are omitted to simplify the notation. It is also seen in Figure 1 that the operation \(\mathcal{F}+\mathbf{x}\) is carried out by a shortcut connection and element-wise addition. These shortcut connections introduce no additional parameters and negligible computational complexity.
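As a concrete illustration, the two-layer building block of Eq. (1) can be written in a few lines. The snippet below is a minimal PyTorch sketch; the framework choice, the 1×1 projection used when channel counts differ, and the activation placed after the addition are implementation assumptions, not details taken from the text.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two-layer residual block: y = F(x, {W_i}) + x, with F = W2 * relu(W1 * x)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        # identity shortcut, or a 1x1 projection when the channel counts differ
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        f = self.conv2(torch.relu(self.conv1(x)))   # residual mapping F(x, {W_i})
        return torch.relu(f + self.skip(x))         # shortcut connection and addition

# quick shape check
y = ResidualBlock(2, 32)(torch.randn(1, 2, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```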
Inspired by the U-Net architecture and deep residual learning, Zhang et al. (2018) proposed the Deep Residual U-Net (ResU-Net). ResU-Net has three parts: The encoder consists of Residual Blocks (RB) and down-sampling operations that compress the input into compact representations. The bottleneck connects the encoder with the last part, the decoder, which comprises bilinear upsampling operations and recovers the representations to a pixel-wise categorization. In addition, skip connections link the first and last part so that information from the encoding layers is preserved and transmitted to the decoding layers.
According to Zhang et al. (2018), the deep residual units make the deep network easy to train, and the skip connections within a residual unit and between the corresponding levels of the network will facilitate information propagation without degradation, making it possible to design a deep neural network with fewer parameters.
As the present work addresses change detection, the decoder output feeds a softmax operator that delivers the posterior class probabilities for change or no change at each pixel location. The following figures show the U-Net (Figure 2(a)) and ResU-Net (Figure 2(c)) variants used for deforestation detection in (Ortega et al.,
Figure 1: Residual block (RB).
2021). The residual blocks (Figure 2(b)) facilitate the flow of information through the network layers, allowing the capture of relevant features of the satellite image (He et al., 2016), which is crucial for change detection.
### Recurrent Networks
Recurrent Neural Networks (RNNs) are designed to process sequential data, updating their internal state at each time step \(t\) while storing relevant information from prior steps (Rumelhart et al., 1986). This enables RNNs to share weights across time steps, capturing temporal patterns (Goodfellow et al., 2016; Graves, 2013). However, the ability to retain information over long sequences is limited in traditional RNNs due to vanishing or exploding gradients during back-propagation (Calin, 2020). The Long Short-term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) networks were developed to address such limitations of conventional RNNs. They incorporate gating mechanisms that selectively allow information to flow nearly unaltered through them, allowing the capture of long-range dependencies in sequential data.
The present work employs the Convolutional LSTM Network (ConvLSTM). A ConvLSTM unit (See Figure 3) consists of inputs \(X_{t}\), a cell state \(C_{t}\), a hidden state \(H_{t}\) along with an input gate \(i_{t}\), an output gate \(o_{t}\) and a forget gate \(f_{t}\) to control information flow. Due to the introduction of the convolutional structure, all the states, inputs, and intermediary outputs are three-dimensional tensors where the first two dimensions are spatial (rows and columns), and the last dimension learns feature representations (Shi et al., 2022). The input \(X_{t}\) and past states \(C_{t-1}\), \(H_{t-1}\) are employed to determine the future states \(C_{t}\) and \(H_{t}\).
The central equations are given below, with '\(*\)' denoting the convolution operator, '\(\circ\)' the Hadamard product, \(W\) the coefficient matrices, \(\sigma\) the sigmoid function and \(b\) the bias vectors.
Figure 2: U-Net and ResU-Net Architectures being used for a change detection scheme. Legend: C (Convolution), MP (Max-pooling), RB (Residual Block), US (Up-sampling).
\[i_{t} =\sigma(W_{xi}*X_{t}+W_{hi}*H_{t-1}+W_{ci}\circ C_{t-1}+b_{i}) \tag{2}\] \[f_{t} =\sigma(W_{xf}*X_{t}+W_{hf}*H_{t-1}+W_{cf}\circ C_{t-1}+b_{f})\] (3) \[C_{t} =f_{t}\circ C_{t-1}+i_{t}\circ\tanh(W_{xc}*X_{t}+W_{hc}*H_{t-1}+b_ {c})\] (4) \[o_{t} =\sigma(W_{xo}*X_{t}+W_{ho}*H_{t-1}+W_{co}\circ C_{t}+b_{o})\] (5) \[H_{t} =o_{t}\circ\tanh(C_{t}) \tag{6}\]
According to Shi et al. (2022), the focus when using ConvLSTM for a change detection task is on capturing short-term temporal dependencies that accentuate bands capable of detecting changes while attenuating bands with less informative content. ConvLSTM recognizes and analyzes the multitemporal changes in image sequences by capturing temporal dependencies and incorporating temporal features into the change detection process. Consequently, the hidden states \(H_{t}\) of the ConvLSTM output can be extracted as representative features of changes.
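To make the gate equations (2)-(6) concrete, the cell below is a minimal PyTorch sketch. Treating the Hadamard (peephole) weights \(W_{ci},W_{cf},W_{co}\) as per-channel parameters, and computing all gate pre-activations with a single convolution over the concatenated \([X_{t},H_{t-1}]\), are implementation assumptions made here for brevity.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # one convolution yields the pre-activations of i, f, o and the cell candidate
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        # per-channel Hadamard (peephole) weights W_ci, W_cf, W_co
        self.w_ci = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_cf = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_co = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))

    def forward(self, x, state):
        h, c = state
        zi, zf, zo, zg = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        i = torch.sigmoid(zi + self.w_ci * c)   # Eq. (2)
        f = torch.sigmoid(zf + self.w_cf * c)   # Eq. (3)
        c = f * c + i * torch.tanh(zg)          # Eq. (4)
        o = torch.sigmoid(zo + self.w_co * c)   # Eq. (5)
        h = o * torch.tanh(c)                   # Eq. (6)
        return h, c

# run over a multitemporal stack of T = 7 dual-pol frames
cell = ConvLSTMCell(in_ch=2, hid_ch=16)
x = torch.randn(4, 7, 2, 64, 64)                      # (batch, time, channels, H, W)
h = torch.zeros(4, 16, 64, 64); c = torch.zeros_like(h)
for t in range(x.shape[1]):
    h, c = cell(x[:, t], (h, c))                      # h gathers the change-relevant features
```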
Considering this ability, convLSTM has been used in applications involving sequential images, such as detection of changes in hyperspectral images (Shi et al., 2022), detection of urban changes (Papadomanolaki et al., 2019) and deforestation detection (Masolele et al., 2021).
### Recurrent Residual Networks
According to Yue et al. (2018), residual learning and shortcut connections can effectively mitigate the exploding and vanishing gradient issues in long-term backpropagation. Bringing together the residual learning described in Section 3.1 and the recurrent learning presented in Section 3.2, Recurrent Residual Neural Networks have been proposed in the literature. Some examples include the Hybrid Residual LSTM (HRL) used for sequence classification (Wang and Tian, 2016), R2U++ (Mubashar et al., 2022) and the Deep Recurrent U-Net (DRU-Net) (Kou et al., 2019) proposed for medical image segmentation.
A particular recurrent residual network called R2U-Net (Alom et al., 2019) takes the U-Net architecture and the residual blocks of ResU-Net (Figure 2) as a starting point and adds recurrent learning. This model combines recurrent convolutional operations with the residual blocks in Recurrent Residual Convolutional Units (RRCU - Figure 4(a)) to replace the regular convolutional layers of the U-Net. Each
Figure 3: Inner structure of ConvLSTM
RRCU has two Recurrent Convolutional Layers (RCL), and the input of the residual block is added to the output of the second RCL unit.
The unfolded RCL (Figure 4(b)) for \(t\) time steps is a feed-forward sub-network of depth \(t+1\). The figure exemplifies an RCL with \(t=2\), referring to the recurrent convolutional operation that includes a single convolution layer followed by two subsequent recurrent convolutional layers.
The following mathematical explanation of an RCL unit is adapted from Liang and Hu (2015). For a unit located at \((i,j)\) on the \(k\)-th feature map in an RCL, the net input, \(z_{ijk}(t)\) at a step \(t\), is formulated as:
\[z_{ijk}(t)=(\mathbf{w}_{k}^{f})^{T}\mathbf{x}^{f(i,j)}(t)+(\mathbf{w}_{k}^{r} )^{T}\mathbf{x}^{r(i,j)}(t-1)+b_{k}, \tag{7}\]
where \(\mathbf{x}^{f(i,j)}(t)\) and \(\mathbf{x}^{r(i,j)}(t-1)\) represent the feedforward and recurrent inputs, respectively. They correspond to the vectorized patches centered at \((i,j)\) of the feature maps in the current and previous layer. The terms \(\mathbf{w}_{k}^{f}\) and \(\mathbf{w}_{k}^{r}\) represent the vectorized feed-forward weights and recurrent weights, respectively, and \(b_{k}\) is the bias. The first term in Eq. 7 appears in a standard CNN, while the recurrent connections induce the second term. The activity or state of this unit is a function of \(z_{ijk}(t)\), where \(\sigma\) is the ReLU:
\[\sigma(z_{ijk}(t))=\max(z_{ijk}(t),0) \tag{8}\]
The RRCU proposed by Alom et al. (2019) uses the RCL in a Residual Block, as shown in Figure 4(a). Considering \(x_{l}\) as the input of the \(l^{th}\) layer of an RRCU, the output of this unit, \(x_{l+1}\), can be calculated as:
\[x_{l+1}=x_{l}+F(x_{l},w_{l}), \tag{9}\]
where \(F(x_{l},w_{l})\) is the output of the last RCL, expressed as
\[F(x_{l},w_{l})=\sigma(z_{ijk}^{l}(t))=\max(z_{ijk}^{l}(t),0). \tag{10}\]
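The following PyTorch sketch spells out Eqs. (7)-(10): an RCL keeps the feed-forward input fixed and refines its state for \(t\) recurrent steps with shared weights, and an RRCU stacks two RCLs inside a residual skip. The 1×1 channel-alignment convolution is our own assumption for handling blocks that change the number of filters.

```python
import torch
import torch.nn as nn

class RCL(nn.Module):
    """Recurrent Convolutional Layer: z(t) = w_f * x + w_r * s(t-1) + b, Eqs. (7)-(8)."""
    def __init__(self, ch, t=2, k=3):
        super().__init__()
        self.ff = nn.Conv2d(ch, ch, k, padding=k // 2)    # feed-forward weights w^f
        self.rec = nn.Conv2d(ch, ch, k, padding=k // 2)   # recurrent weights w^r (shared over steps)
        self.t = t

    def forward(self, x):
        s = torch.relu(self.ff(x))                        # step 0: feed-forward only
        for _ in range(self.t):
            s = torch.relu(self.ff(x) + self.rec(s))      # recurrent refinement
        return s

class RRCU(nn.Module):
    """Recurrent Residual Convolutional Unit (Figure 4(a)): x_{l+1} = x_l + F(x_l, w_l), Eq. (9)."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.align = nn.Conv2d(in_ch, out_ch, 1)          # channel alignment (assumption)
        self.body = nn.Sequential(RCL(out_ch, t), RCL(out_ch, t))

    def forward(self, x):
        x = self.align(x)
        return x + self.body(x)

print(RRCU(2, 32)(torch.randn(1, 2, 64, 64)).shape)       # torch.Size([1, 32, 64, 64])
```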
Figure 5 shows an R2U-Net architecture with three encoding and decoding levels and a bottleneck between these stages, employing RRCUs.
Recurrent residual architectures derived from R2U-Net, including FCD-R2U-Net (Khankeshizadeh et al., 2022) and Att R2U-Net (Moustafa et al., 2021) have been used for change detection tasks. Alom et al. (2019) and Kou et al. (2019) state that the inclusion of RCLs in residual units further enhances the ability to handle deeper architectures. Also, the process of collecting and combining information from different time-steps in a recurrent neural network allows the model to capture dependencies and patterns over longer sequences of data. This feature accumulation process leads to more comprehensive feature representations and helps the model extract very low-level features from the data, which are crucial for segmentation tasks across various modalities.
Figure 4: Recurrent Residual Convolutional Unit (RRCU)
## 4 Proposed Architectures
This section presents the deep learning solutions for deforestation monitoring proposed in this work, drawing on the principles of recurrent residual learning discussed in Section 3.3. In this way, the models combine the multi-level feature learning and vanishing-gradient mitigation provided by the residual blocks with the temporal dependency modeling provided by the recurrent layers. The first proposal is an adaptation of the R2U-Net of Figure 5: the RRCUs are kept in the encoder and in the bottleneck, while in the decoder they are replaced by transposed convolutions (TC), as shown in Figure 6. The result is a network with fewer parameters.
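A schematic sketch of the resulting layer layout is given below (see also the configuration summarized later in Table 1). The RCL/RRCU modules repeat the sketch from Section 3.3 so that the snippet is self-contained; the encoder-decoder skip connections visible in Figure 6, as well as the exact strides and paddings, are not spelled out in the text, so they are treated as assumptions and the skips are omitted here.

```python
import torch
import torch.nn as nn

class RCL(nn.Module):
    def __init__(self, ch, t=2):
        super().__init__()
        self.ff = nn.Conv2d(ch, ch, 3, padding=1)
        self.rec = nn.Conv2d(ch, ch, 3, padding=1)
        self.t = t
    def forward(self, x):
        s = torch.relu(self.ff(x))
        for _ in range(self.t):
            s = torch.relu(self.ff(x) + self.rec(s))
        return s

class RRCU(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.align = nn.Conv2d(in_ch, out_ch, 1)
        self.body = nn.Sequential(RCL(out_ch), RCL(out_ch))
    def forward(self, x):
        x = self.align(x)
        return x + self.body(x)

class RRCNN1(nn.Module):
    """RRCU encoder and bottleneck, transposed-convolution decoder, softmax head."""
    def __init__(self, in_ch=14, n_classes=3):
        super().__init__()
        self.enc = nn.ModuleList([RRCU(in_ch, 32), RRCU(32, 64), RRCU(64, 128)])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = RRCU(128, 128)
        self.dec = nn.ModuleList([
            nn.ConvTranspose2d(128, 128, 3, stride=2, padding=1, output_padding=1),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
        ])
        self.head = nn.Conv2d(32, n_classes, 1)
    def forward(self, x):
        for block in self.enc:
            x = self.pool(block(x))
        x = self.bottleneck(x)
        for up in self.dec:
            x = torch.relu(up(x))
        return torch.softmax(self.head(x), dim=1)

print(RRCNN1()(torch.randn(1, 14, 128, 128)).shape)   # torch.Size([1, 3, 128, 128])
```

Here `in_ch=14` corresponds to the multitemporal input with \(D=7\) dual-polarization acquisitions (Section 5), and the three output classes follow the reference map described there.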
The second proposal relies on a new recurrent residual block, the Residual Convolutional LSTM block (RCLSTM), which uses convolutional LSTM blocks with a ReLU activation in place of the RCLs used in R2U-Net's RRCU. Figure 7 highlights the differences between the RCLSTM block proposed here, a typical residual block (Figure 7(a)), and an RRCU block (Figure 7(b)), already described in Section 3.3.
Figure 5: R2U-Net Architecture for change detection. Legend: MP (Max-pooling), TC (Transpose Convolution), RRCU (Recurrent Residual Convolutional Unit)
Figure 6: Modified R2U-Net Architecture: RRCNN-1
The second proposal derives from the RRCNN-1, replacing the RRCUs by the RCLSTM block in the encoder and bottleneck. Figure 8 depicts the proposed recurrent residual network hereafter called RRCNN-2.
The third architecture proposed in this work, named RRCNN-3, is a variant of the previous one, obtained by replacing the RCLSTM block in the bottleneck with a single convolutional LSTM block, as depicted in Figure 9.
## 5 Experimental Analysis
This section presents the datasets and the experimental setup employed in this study. The deep learning methods presented in Section 3 were used for comparison purposes with the architectures proposed in Section 4.
### Dataset
This study employed Sentinel-1 data from a site in the Brazilian Legal Amazon in the Pará state that extends over \(115\times 186\) km\({}^{2}\). The site is characterized by mixed land cover, mainly dense evergreen forests and pastures (see Figure 10).
To build the input data for the deep learning models described in Section 3 and Section 4, seven images from this same geographical site with resolution of \(9327\times 5767\times 2\) (width \(\times\) height \(\times\) polarizations - VV and VH) were captured with a periodicity of approximately two months, starting in August 2019 (Figure 11(a)) and ending in August 2020 (Figure 11(b)).
Figure 8: RRCNN-2 Architecture.
Figure 9: RRCNN-3 Architecture.
The reference map of the deforestation that occurred in this period is available on the INPE website2 (Figure 11(c)). It is worth mentioning that this dataset is highly unbalanced, with only \(1.06\%\) of the pixels belonging to the deforestation class, \(34.04\%\) corresponding to the past-deforestation class, and \(64.9\%\) to the no-deforestation class.
Footnote 2: [http://terrabrasilis.dpi.inpe.br/](http://terrabrasilis.dpi.inpe.br/)
The input consists of a tensor \(\mathbf{I}\in\mathbf{R}^{H\times W\times 2D}\) resulting from stacking \(D\) multitemporal SAR images with 2 polarizations along the third dimension. In our experiments, we explored two scenarios: \(D=2\) to represent the conventional bitemporal set and \(D=7\) to represent an extended multitemporal sequence. Figure 12 shows this process for the bitemporal case.
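A minimal sketch of this stacking step is given below, assuming each acquisition is already co-registered and stored as an \(H\times W\times 2\) array (VV and VH); the variable names are illustrative.

```python
import numpy as np

# `images`: list of D co-registered acquisitions, each an H x W x 2 array (VV, VH),
# in chronological order (here D = 7, from August 2019 to August 2020)
def build_input_tensor(images):
    """Stack the acquisitions along the channel axis, yielding an H x W x 2D tensor."""
    return np.concatenate(images, axis=-1)

bitemporal = build_input_tensor([images[0], images[-1]])   # D = 2: first and last dates
multitemporal = build_input_tensor(images)                 # D = 7: full sequence
```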
Each input tensor was split into \(60\) tiles of \(932\times 961\) pixels. A cross-validation strategy with six folds was adopted during training, so that each tile was part of the test set exactly once. The final prediction was then a mosaic of all test tiles covering the whole image.
During training and validation, the network receives as input patches of \(128\times 128\) pixels cropped from the training tiles, with a maximum overlap of \(70\%\) allowed.
Figure 11: The SAR images: (a) Initial date; (b) Final date; (c) Ground truth of the deforestation that occurred in the period. Legend - gray: past deforestation (1988-2018); red: deforestation (2019-2020); blue: no-deforestation.
Figure 10: Geographical location of the study site in the Pará state, Brazil.
### Networks configuration and hyperparameters
The experimental analysis reported in the next section compares the results obtained with the approaches discussed in Section 3, which serve as baselines, with the models proposed in Section 4. Table 1 shows the architectures evaluated in our experiments.
The employed parameter setup was as follows: batch size equal to 32, Adam optimizer with a learning rate of \(10^{-3}\) and \(\beta\) equal to 0.9, and, to avoid over-fitting, an early stopping strategy with patience equal to 10.
Considering that the dataset is highly unbalanced, the employed loss function was the weighted categorical cross entropy, given by Ho & Wookey (2019):
\[\text{Loss}_{wcee}=-\frac{1}{M}\sum_{k=1}^{K}\sum_{m=1}^{M}w_{k}\cdot y_{m}^{k} \cdot\log(\hat{y}_{m}^{k}), \tag{11}\]
| Architectures | Encoder | Bottleneck | Decoder | Output |
| --- | --- | --- | --- | --- |
| U-Net | MP(C(3×3, 32)); MP(C(3×3, 64)); MP(C(3×3, 128)) | C(3×3, 128) | US(C(3×3, 128)); US(C(3×3, 64)); US(C(3×3, 32)) | Softmax(C(1×1, #Classes)) |
| ResU-Net | MP(RB(3×3, 32)); MP(RB(3×3, 64)); MP(RB(3×3, 128)) | RB(3×3, 128) | US(C(3×3, 128)); US(C(3×3, 64)); US(C(3×3, 32)) | Softmax(C(1×1, #Classes)) |
| R2U-Net | MP(RRCU(3×3, 32)); MP(RRCU(3×3, 64)); MP(RRCU(3×3, 128)) | RRCU(3×3, 128) | US(RRCU(3×3, 128)); US(RRCU(3×3, 64)); US(RRCU(3×3, 32)) | Softmax(C(1×1, #Classes)) |
| RRCNN-1 | MP(RRCU(3×3, 32)); MP(RRCU(3×3, 64)); MP(RRCU(3×3, 128)) | RRCU(3×3, 128) | TC(3×3, 128); TC(3×3, 64); TC(3×3, 32) | Softmax(C(1×1, #Classes)) |
| RRCNN-2 | MP(RCLSTM(3×3, 32)); MP(RCLSTM(3×3, 64)); MP(RCLSTM(3×3, 128)) | RCLSTM(3×3, 128) | TC(3×3, 128); TC(3×3, 64); TC(3×3, 32) | Softmax(C(1×1, #Classes)) |
| RRCNN-3 | MP(RCLSTM(3×3, 32)); MP(RCLSTM(3×3, 64)); MP(RCLSTM(3×3, 128)) | Conv. LSTM(3×3, 128) | TC(3×3, 128); TC(3×3, 64); TC(3×3, 32) | Softmax(C(1×1, #Classes)) |

The parametrization is (Kernel Height × Kernel Width, Number of filters). Symbols: C (Convolution), MP (Max-pooling), RB (Residual Block), US (Up-sampling), TC (Transpose Convolution), RRCU (Recurrent Residual Convolutional Unit), RCLSTM (Residual Convolutional LSTM block).

Table 1: Networks Architectures
Figure 12: Model Input construction employed for the current experiments. Bitemporal example for data acquired from the same sensor.
where \(M\) is the number of training pixels, \(K\) is the number of classes, \(w_{k}\) is the weight for class \(k\), \(y_{m}^{k}\) is the target label for training example \(m\) for class \(k\), \(x_{m}\) is the input for training example \(m\) and \(\hat{y}_{m}^{k}\) refers to the predicted probability for training example \(m\) for class \(k\).
In the present case, we adopted the weights 0.2 for the no-deforestation class and 0.8 for the deforestation class. Following the PRODES methodology, the past-deforestation class was ignored during training, validation, and testing. Only patches having at least 2% of pixels of the deforestation class were used for training. In addition, a data augmentation procedure was applied for training and validation; these operations included rotation (multiples of \(90^{\circ}\)) and flipping (horizontal, vertical) transformations. The threshold to separate the deforestation and no-deforestation classes was 50%.
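A possible TensorFlow implementation of the weighted loss of Eq. 11, with the two-class weights given above (the past-deforestation pixels being excluded beforehand, as described), is sketched below; the function name is ours.

```python
import tensorflow as tf

def weighted_categorical_crossentropy(class_weights):
    """Weighted categorical cross-entropy (Eq. 11); `class_weights` has one entry per class."""
    w = tf.constant(class_weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)                     # avoid log(0)
        per_pixel = -tf.reduce_sum(w * y_true * tf.math.log(y_pred), axis=-1)
        return tf.reduce_mean(per_pixel)

    return loss

# weights adopted in this work: 0.2 (no-deforestation) and 0.8 (deforestation)
loss_fn = weighted_categorical_crossentropy([0.2, 0.8])
```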
### Evaluation metrics
The generated deforestation maps classify each pixel into the categories deforestation and no-deforestation. Designating deforestation as "positive" and no-deforestation as "negative", there are four possible outcomes: a true positive (TP) is a correctly identified deforestation pixel, a true negative (TN) is a correctly identified no-deforestation pixel, a false positive (FP) is an unchanged pixel labeled as deforestation, and a false negative (FN) is a changed pixel labeled as no-deforestation.
Among the several performance metrics that have been used to evaluate the results of a deforestation detection process, three of the most common are Precision, Recall, and F1-Score, as can be seen in the studies mentioned in Section 2 and also in review articles about change detection with remote sensing data (Parelius, 2023; Shafique et al., 2022).
The Precision metric denotes the ratio between the number of correctly classified deforestation pixels and the total number of pixels identified as deforestation.
\[\text{Precision}=\frac{TP}{TP+FP}. \tag{12}\]
The Recall metric, conversely, is equivalent to the true positive rate, representing the ratio of accurately classified deforestation pixels to the total number of original deforestation pixels.
\[\text{Recall}=\frac{TP}{TP+FN}. \tag{13}\]
With Recall and Precision, the F1-Score is calculated as follows:
\[\text{F1-Score}=\frac{2\cdot\text{Precision}\cdot\text{Recall}}{\text{Precision }+\text{Recall}}. \tag{14}\]
### Computational Resources
The preliminary experiments were conducted on the following system configuration:
* CPU: 3.60 GHz, 128 MB L3 cache (280 W)
* Memory: 8 x 64GB PC4-25600 3200MHz DDR4 ECC RDIMM (512GB total)
* GPU: 48 GB GDDR6, PCIe 4.0 x16
* Operating System: Ubuntu 22.04.2 LTS
The use of all GPUs available on the machine for training was enabled by the Mirrored Strategy (**tf.distribute.MirroredStrategy3**) function of the TensorFlow deep learning framework. This data parallelism approach is intended to accelerate the training process by allowing a deep learning model to be replicated across multiple GPUs, where each GPU retains a full copy of the model. During training, each replica processes a portion of the training data, and then gradient updates are synchronized between GPUs to update the global model. According to Pang et al. (2020), using this function further accelerates training and allows for larger models by leveraging memory from multiple GPUs.
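A minimal sketch of this multi-GPU setup is shown below. The model constructor and the loss function are the hypothetical helpers sketched in Sections 4 and 5.2, and the training call simply restates the hyperparameters of Section 5.2; it is not the authors' actual code.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()              # one replica per visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                                   # variables created here are mirrored
    # multitemporal case: 7 dates x 2 polarizations = 14 input channels, 2 classes
    model = rrcnn1(input_shape=(128, 128, 14), n_classes=2)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss=weighted_categorical_crossentropy([0.2, 0.8]),
                  metrics=["accuracy"])

# train_patches/val_patches: 128x128 patches and one-hot labels built as in Section 5.1
model.fit(train_patches, train_labels,
          validation_data=(val_patches, val_labels),
          batch_size=32, epochs=100,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=10)])
```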
## 6 Experimental Results
In this section, we delve into the findings derived from our experimental analysis to evaluate the performance of the deforestation detection architectures outlined in Section 4. Figures 13 through 15 showcase the performance metrics of the methods introduced in this study, alongside those referenced in Section 3, which serve as our baseline for comparison. These figures provide a comprehensive assessment of accuracy, encompassing Precision, Recall, and F1-score, derived from experiments on both bitemporal and multitemporal input data. The reported scores have been derived using K-fold cross-validation with a value of \(K=6\), following the methodology detailed in Section 5.2.
The first finding that emerges from the analysis of the figures is that all performance metrics derived from multi-temporal data consistently outperformed those obtained from bi-temporal data. Notably, the only exception was the Precision for RRCNN-2, which declined by a small amount for the multitemporal data. This observation corroborates the hypothesis that signs of change in SAR images get less apparent
Figure 14: Comparison of methods in terms of Recall for bitemporal and multitemporal data
Figure 13: Comparison of methods in terms of Precision for bitemporal and multitemporal data
with time. Consequently, using a sequence rather than a mere pair of SAR images improves the chance of capturing changes that occur during the target observation interval.
The second conclusion from our experiments is that recurrent variants, namely the R2U-Net and the three RRCNN variants, consistently outperformed their strictly convolutional counterparts, namely U-Net and ResU-Net. This trend was apparent in all three metrics we examined.
As for the three proposed variants, the RRCNN-1 consistently outperformed the other two, with RRCNN-2 holding a slight edge over RRCNN-3. Remarkably, RRCNN-1 showcased the best results among all the architectures we examined, surpassing the top-performing baseline, the R2U-Net, by 2% in F1-Score.
Another aspect deserving of examination is the computational efficiency. Figure 16 provides insight into the count of trainable parameters associated with each scrutinized configuration.
By looking at each bar group in Figure 16, one observes that adopting a sequence of images instead of a mere image pair had a marginal impact on the parameter count for each model. It is also observed that the RRCNN-2 and RRCNN-3 were the variants with the largest parameter count, followed closely by R2U-Net.
Interestingly, RRCNN-1 stands out by carrying approximately half the number of parameters when
Figure 16: Number of Parameters of each model for the bitemporal and multitemporal data
Figure 15: Comparison of methods in terms of F1-Score for bitemporal and multitemporal data
contrasted with R2U-Net, bringing it close to the parameter count of ResU-Net. This implies that RRCNN-1 successfully incorporates recurrence into its model framework with minimal alterations to the overall parameter load compared to ResU-Net, a fully convolutional network.
Figure 17 and Figure 18 show the training and inference times. By and large, as expected, these figures show a profile similar to that of the parameter counts.
Figure 19: Predicted deforestation maps in two snips from the test set. Legend - past deforestation; deforestation (true positives); no deforestation (true negatives); false positives; false negatives.
Figure 19-(p) and (v), that represent maps generated with a multitemporal input.
Regarding false-negative spots, a decrease is noticeable in the maps generated with multitemporal inputs relative to bitemporal inputs, confirming the quantitative results. The change map generated by RRCNN-1 with the multitemporal input was the closest to the ground truth, as can be seen in Figure 19(p). The other developed architectures, RRCNN-2 and RRCNN-3, also delivered well-defined maps.
## 7 Conclusions and Future Work
The present research seeks to develop solutions for deforestation monitoring using deep learning approaches. Up to the present stage of this investigation, three change detection architectures relying on recurrent residual learning have been formulated: RRCNN-1, RRCNN-2 and RRCNN-3. These methods were compared with three techniques from the literature, U-Net, ResU-Net and R2U-Net, the latter being a residual recurrent network.
Preliminary experiments were conducted using Sentinel-1 SAR images corresponding to a region of the Brazilian Amazon rainforest. The ground-truth used in this work was collected from the PRODES Project, which was developed by the National Institute for Space Research (INPE). The performance of the techniques was compared using bitemporal data, which are usually emphasized in the literature reports, and also multitemporal data.
RRCNN-1 presented the best performance in most metrics, achieving an F1-Score of 66.5% with the bitemporal input and 71.6% with the multitemporal input. RRCNN-2 had the best Precision (97.1%), the second best F1-Score (65.2%) and the third best Recall (49.0%) with the bitemporal input, and the second best Recall and F1-Score with the multitemporal input, 55.9% and 70.8%, respectively. RRCNN-3 achieved the second best Recall (49.2%) and Precision (95.9%) in the bitemporal case and the third best Precision (97.2%) with the multitemporal input.
Based on the assessed metrics and the change maps generated through the tested networks, it became evident that the incorporation of an extended sequence of images significantly enhanced the deforestation detection performance, highlighting the potential benefits of incorporating a longer temporal context in the analysis. RRCNN-1, in particular, delivered significantly improved results with a multitemporal input compared to the bitemporal case, while incurring a training time increase of nearly 2 minutes.
The RRCNN-2 and RRCNN-3 networks are designed with ConvLSTM layers in their architectures, which led to longer training and inference times. This is attributed to the inherently high computational complexity associated with LSTM operations, resulting in more demanding computational resources during the learning process.
The next steps for this research include tests with other datasets commonly employed in deforestation detection surveys.
## Acknowledgments
The authors would like to thank the financial support provided by CNPq, CAPES and FAPERJ.
|
2308.01863 | Tag Prediction of Competitive Programming Problems using Deep Learning
Techniques | In the past decade, the amount of research being done in the fields of
machine learning and deep learning, predominantly in the area of natural
language processing (NLP), has risen dramatically. A well-liked method for
developing programming abilities like logic building and problem solving is
competitive programming. It can be tough for novices and even veteran
programmers to traverse the wide collection of questions due to the massive
number of accessible questions and the variety of themes, levels of difficulty,
and questions offered. In order to help programmers find questions that are
appropriate for their knowledge and interests, there is a need for an automated
method. This can be done using automated tagging of the questions using Text
Classification. Text classification is one of the important tasks widely
researched in the field of Natural Language Processing. In this paper, we
present a way to use text classification techniques to determine the domain of
a competitive programming problem. A variety of models, including are
implemented LSTM, GRU, and MLP. The dataset has been scraped from Codeforces, a
major competitive programming website. A total of 2400 problems were scraped
and preprocessed, which we used as a dataset for our training and testing of
models. The maximum accuracy reached using our model is 78.0% by MLP(Multi
Layer Perceptron). | Taha Lokat, Divyam Prajapati, Shubhada Labde | 2023-08-03T16:39:02Z | http://arxiv.org/abs/2308.01863v1 | # Tag Prediction of Competitive Programming Problems using Deep Learning Techniques
###### Abstract
In the past decade, the amount of research being done in the fields of machine learning and deep learning, predominantly in the area of natural language processing (NLP), has risen dramatically. A well-liked method for developing programming abilities like logic building and problem solving is competitive programming. It can be tough for novices and even veteran programmers to traverse the wide collection of questions due to the massive number of accessible questions and the variety of themes, levels of difficulty, and questions offered. In order to help programmers find questions that are appropriate for their knowledge and interests, there is a need for an automated method. This can be done using automated tagging of the questions via text classification. Text classification is one of the important tasks widely researched in the field of Natural Language Processing. In this paper, we present a way to use text classification techniques to determine the domain of a competitive programming problem. A variety of models are implemented, including LSTM, GRU, and MLP. The dataset has been scraped from Codeforces, a major competitive programming website. A total of 2400 problems were scraped and preprocessed, which we used as a dataset for training and testing our models. The maximum accuracy reached using our models is 78.0%, achieved by the MLP (Multi-Layer Perceptron).
Multi class text Classification, Natural Language Processing, LSTM, GRU, Multi Layer Perceptron.
## I Introduction
A tremendous amount of research has been done in the past decade in the area of Natural Language Processing (NLP), which is concerned with how computers and human languages interact. There are many different applications of NLP, such as Sentiment Analysis, Text Classification, Machine Translation, text summarization, and many more [1, 2, 3]. Text classification is an application of NLP that involves categorizing or labeling a given text based on its content. With many applications in areas including sentiment analysis, topic modeling, spam detection, and more, text categorization is a fundamental NLP issue. In general, text categorization may be divided into two categories: binary and multi-class. In binary classification, the text is classified into one of two predetermined groups, whereas in multi-class classification, it is classified into one of many specified groups. On the other hand, multi-label classification is a subset of multi-class classification in which a single text may simultaneously belong to several categories.
Multi-class classification is a difficult issue in NLP that has attracted a lot of interest lately. Each text instance in this job can be connected to many labels. Multi-class classification is prevalent in various real-world applications, such as document categorization, image tagging, and music genre classification.
As there have been advancements in the area of Deep Learning (DL), it has also been used for solving these NLP problems. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two of the most commonly used deep learning approaches that have been successfully applied to multi-class classification [4]. This paper also provides a DL-based approach that can be used for multi-label text classification purposes. This paper is organized in the following manner: Section I contains the introduction to multi-class text classification; Section II contains the literature survey; and Section III contains our methodology, i.e., the proposed models along with details about the dataset used and the pre-processing techniques applied to it, as well as the training and testing techniques. Section IV is Results and Discussion, which compares our approach with previous approaches based on the evaluation metrics. Lastly, Section V concludes the paper and also provides some future directions.
## II Related Work
There has been a substantial amount of research on the topic of multi-label text classification, especially using DL techniques, in the past decade, and we will go through some of the literature in this section.
A comparison between 12 machine learning pipelines using the Enron spam corpus is presented in [5]. The preprocessing steps involve the removal of stop words, lemmatisation, and the removal of HTML tags, single letters and numbers. After preprocessing using the above steps, various machine learning algorithms were fit on the processed data. The machine learning algorithms used are Naive Bayes, Support Vector Machine (SVM), k-Nearest Neighbours (kNN), Multi-Layer Perceptron Neural Network (MLP), Logistic Regression, Random Forest, and Extreme Gradient Boost (XGBoost). The authors use 5-fold cross-validation throughout the test cases to avoid overfitting the model. Random Forest was the best performing model, with precision, recall and F1-score of 0.94, 0.94, 0.4 respectively.
Several large datasets, like the AG's news corpus, Sogou news corpus, DBPedia ontology dataset, Yelp reviews, Yahoo! Answers dataset and the Amazon reviews dataset, were used to present a character-level ConvNet [6]. Two ConvNets were designed, one large and one small, each of them nine layers deep with six convolutional layers and three fully connected layers. The most important conclusion from the experiments was that character-level ConvNets could work for text classification without the need for words. This meant there was a strong indication that language can also be thought of as a signal no different from any other kind.
An improved class-specific word vector that enhances the distinctive property of a word in a class is proposed in [7] to tackle the light polysemy problem in question classification. The models used are Convolutional Neural Networks (CNN), Bidirectional LSTM (Bi-LSTM) and Attention-Based Bi-GRU CNN (ABBC). The accuracy on the TREC dataset was 0.936, the accuracy on the Kaggle questions dataset was 0.918 and the accuracy on the Yahoo questions dataset was 0.892.
Methods to improve text classification on a large number of unbalanced classes and noisy data are given in [8]. The dataset used contains 57,647 English song texts with their artist and title, downloaded from Kaggle. The models used are the Perceptron, Doc2vec and the Multilayer Perceptron (MLP). Two versions of the Perceptron are used: the minimal version (Perceptron) and the maximal version (Perceptron+). The tokenization for the Doc2Vec model is performed using the UIMA tokenizer. There are also two versions of the MLP, MLP and MLP+, the latter having one bias feature for every group. MLP+ performs the best, with an F-score of 0.079 on the training set and 0.182 on the test set. The worst performing model is Perceptron+, with an F-score of 0.003 on the training set and 0.021 on the test set.
Several types of EDA (Easy Data Augmentation) techniques are given in [9], which include Synonym Replacement (SR), Random Insertion (RI), Random Swap (RS) and Random Deletion (RD). RNNs (Recurrent Neural Networks) and CNNs (Convolutional Neural Networks) are then run on five NLP tasks with and without the aforementioned EDA steps, and an improvement was observed on the full dataset.
Universal Language Model Fine-tuning (ULMFiT) is proposed in [10], which, given a large corpus of a particular domain, fine-tunes an already existing Language Model (LM). This method is tested on six widely studied datasets used in the three most common text classification tasks: Sentiment Analysis, Question Classification and Topic Classification.
A text classifier called fastText is proposed, which is a simple and efficient baseline model for text classification. It takes as input the normalized bag of features of the \(N\)th document (\(N\) being the number of documents) and then passes them through a hidden layer with hierarchical softmax [11]. For tag prediction, the YFCC100M dataset is used. It was observed that adding bigrams to the hidden layers improved the accuracy.
BertGCN is proposed in [12], which uses BERT representations and constructs a heterogeneous graph over the dataset. The input representations for document nodes in the BertGCN model are the document embeddings obtained using the BERT-style model. The output is then passed to a softmax layer for classification. The model is optimized using a memory bank \(M\) that keeps track of the input features of all the document nodes.
BAE is proposed in [13], which leverages the BERT-MLM to generate alternatives for the masked tokens in the document. It also replaces tokens in the document with other tokens, some of which contribute towards the final prediction.
Convolutional Neural Networks (CNNs) capture the bias caused by keywords appearing anywhere in the text, not only towards the end. A recurrent structure, namely a bidirectional recurrent neural network, is used in the proposed model to capture the contexts [14]. The word embeddings in the model are pretrained using the Skip-gram model. The accuracy achieved by the model when using a Convolutional Neural Network (CNN) is 94.79, while the accuracy achieved by using Recurrent Convolutional Neural Networks (RCNN) is 96.49.
A Graph Neural Network is a multi-layer neural network that operates directly on graphs, where properties of the neighbourhoods of the nodes are used to induce node embedding vectors [15]. The proposed model, TextGCN, builds a heterogeneous graph which explicitly models global word co-occurrence by taking into account word nodes and document nodes [16]. The accuracy of the TextGCN model is better than that of most of the other models used, like CNN, LSTM, etc. The maximum accuracy achieved was 0.9797, with a range of 0.0010, on the R8 dataset.
## III Methodology
### _Dataset Collection_
We have gathered the data for the dataset creation from Codeforces1. Codeforces is a well-known website used by many programmers to improve their logic building, debugging, problem solving, and other programming skills. It hosts problems in the form of contests, and one contest has many problems depending on the type of contest. Also, the level of the problems keeps increasing as we move forward in the contest. For collecting the data of these contests from this site, we made use of a Python library called BeautifulSoup2. BeautifulSoup is a web scraper that is used by many for extracting data from HTML or XML files.
Footnote 1: [https://codeforces.com/](https://codeforces.com/)
Footnote 2: [https://beautiful-soup-4.readthedocs.io/en/latest/](https://beautiful-soup-4.readthedocs.io/en/latest/)
The structure of any problem in a contest is shown in Figure 1. As we can see, there is a main problem statement (highlighted using a red box) that states what the problem is about and what the programmer needs to do in order to solve it. Next, there are the constraints, which describe the maximum and minimum limits for the variables mentioned in the problem statement. The input and output sections describe in what format the program will receive its input and produce its output, respectively. Next, there are some sample test cases that are shown as examples so that the
contestants get a clear idea of how their program will work. Lastly, there are some tags (highlighted using a green box) assigned to that problem, which give hints to the solution of that problem. So, for multi-label text classification, we need the problem statements and the tags assigned to each question. First, we scraped the contests page and gathered all the problem IDs for each contest; then, for each problem ID, we scraped its problem statement and the tags assigned to it; and lastly, we stored everything in a Pandas dataframe and exported it in the form of a CSV. Now that our two datasets were ready, we started to preprocess them, which is covered in the next subsection.
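A simplified version of this scraping step is sketched below. The URL pattern and the HTML class names (`problem-statement`, `tag-box`) reflect the Codeforces page layout at the time of writing and should be treated as assumptions; the actual pipeline may differ.

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

def scrape_problem(contest_id, index):
    """Fetch one Codeforces problem page and return its statement text and tags."""
    url = f"https://codeforces.com/problemset/problem/{contest_id}/{index}"
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    statement = soup.find("div", class_="problem-statement").get_text(" ", strip=True)
    tags = [t.get_text(strip=True) for t in soup.find_all("span", class_="tag-box")]
    return {"problem": f"{contest_id}{index}", "statement": statement, "tags": tags}

# example: scrape a few problems and export them to CSV
rows = [scrape_problem(cid, idx) for cid, idx in [(1700, "A"), (1700, "B")]]
pd.DataFrame(rows).to_csv("codeforces_problems.csv", index=False)
```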
### _Pre-processing Data_
As soon as the CSV was ready, we started to pre-process it. The first and most important step was to decide which tags we wanted to consider in our dataset. For pre-processing the tags, all the special characters were removed from the tag names; the tags, scraped in the form of a string, were converted into a list, and only the 3 selected tags were kept while the rest were deleted. All the rows that were left empty (nan) were then removed from the dataset, so that only the problem statements having those 3 tags remained. After all the unwanted tags were removed, we started to clean up the problem statements, as they had some LaTeX tags and some other unwanted items in them.
First, all these unwanted characters were removed; secondly, all the sentences were converted into lowercase and then tokenized using "word_tokenize" from the NLTK3 library, after which all the stop words were removed. Stop words are those words that are filtered out in the process of NLP (like prepositions, conjunctions, pronouns, articles, etc.). Once the stop words were removed, the text was lemmatized using "WordNetLemmatizer" from NLTK. Once all the cleaning and preprocessing was done, both datasets were combined, and finally we got our dataset, which had 992 problems and 3 tags in it. The 3 tags selected were greedy, graphs and implementation. If a problem had multiple tags, only the one belonging to the aforementioned 3 tags was selected and the others were discarded.
Footnote 3: [https://www.nltk.org/](https://www.nltk.org/)
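The cleaning pipeline described above could be reproduced roughly as follows; the regular expression used to strip LaTeX markers and other unwanted characters is an assumption.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

nltk.download("punkt"); nltk.download("stopwords"); nltk.download("wordnet")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean_statement(text):
    """Lowercase, strip non-alphabetic characters, tokenize, drop stop words, lemmatize."""
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    tokens = word_tokenize(text)
    tokens = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]
    return " ".join(tokens)
```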
After the generation of the dataset, the problem statements in the resulting dataframe were broken down into individual words using the word tokenizer present in the NLTK library. After that, we found the maximum length of the sequences, which was 951. We then used the TensorFlow tokenizer to split the given problem statements into tokens, and the result was used to pad the problem statements to the maximum length.
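A sketch of this tokenization and padding step with the Keras utilities is given below; `cleaned_statements` denotes the list of preprocessed problem statements and the variable names are illustrative.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 951                                    # longest statement in the dataset

tokenizer = Tokenizer()
tokenizer.fit_on_texts(cleaned_statements)
sequences = tokenizer.texts_to_sequences(cleaned_statements)
padded = pad_sequences(sequences, maxlen=MAX_LEN, padding="post")
vocab_size = len(tokenizer.word_index) + 1       # +1 for the padding index
```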
### _Proposed Approach_
#### Iii-C1 Long Short-Term Memory (LSTM) Networks
LSTM is a popular approach used in deep learning. Long Short-Term Memory (LSTM) is a special kind of Recurrent Neural Network capable of learning long-term dependencies [17]. LSTMs are specifically designed to solve the vanishing gradient problem that comes with a traditional simple Recurrent Neural Network by remembering information for long periods of time. We initially have the embedding layer with the input dimension equal to the vocabulary size. Instead of a simple single layer, the LSTM cell has three gating layers interacting with each other. We use one LSTM layer with 32 neurons followed by a dense layer of 32 neurons. The activation function used is ReLU for all layers and Sigmoid for the final output layer. We used the rmsprop optimizer and categorical cross entropy as the loss function.
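A minimal Keras sketch consistent with this description is shown below; `vocab_size` and `MAX_LEN` come from the tokenization step above, while the embedding dimension of 64 and the 3-unit output layer (one per tag) are assumptions made for illustration.

```python
from tensorflow.keras import layers, models

lstm_model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=64, input_length=MAX_LEN),
    layers.LSTM(32),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="sigmoid"),       # one output per tag
])
lstm_model.compile(optimizer="rmsprop",
                   loss="categorical_crossentropy",
                   metrics=["accuracy"])
```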
#### Iii-C2 Gated Recurrent Units
GRU (Gated Recurrent Units): Gated Recurrent Units (GRU) were introduced in 2014 with the aim of solving the vanishing gradient problem that comes with a standard Recurrent Neural Network. A GRU has an update gate and a reset gate [18]. These two gates decide what information should be passed to the output. This helps to keep information from long ago without washing it out through time or removing information that is irrelevant to the prediction, thereby solving the vanishing gradient problem. We have a GRU layer with 32 neurons followed by a dense layer of
Fig. 1: Problem Page of Codeforces website.
Fig. 2: Codeforces problem statements belonging to different categories by considering one category per problem statement
32 neurons. The final output layer has a Sigmoid activation function, while the ReLU activation function is used for all other layers. Categorical cross entropy is used as the loss function in this case, and rmsprop is the optimizer used.
#### Iv-C3 Multi Layer Perceptron
A Multi-Layer Perceptron is a feed-forward neural network. In the proposed Multi-Layer Perceptron system, we have one input layer, which is a Dense layer with 64 neurons. After that, we add a Dropout layer with a fraction of 0.5. This is followed by another Dense layer of 32 neurons and a Dropout layer with a fraction of 0.5. The final layer is an output layer with 8 neurons. The ReLU (Rectified Linear Unit) activation function is used for all layers, while the final layer has the Sigmoid activation function. We used the rmsprop optimizer and categorical cross entropy as the loss function.
### _Training & Testing of proposed model_
After preprocessing the data using the above methods, we split the dataset into training and testing sets. Because the dataset is small, we split the 992 problems as follows: 950 problems were used for training the models and the remaining 42 problems were used for testing. The input to each model was the token sequence of the problem statement generated by the tokenizer, and the output layer used a softmax activation that outputs the probabilities of the 3 tags.
## IV Results and Discussions
Table I shows the accuracy of the models tried in this paper. It was found that the Multi-Layer Perceptron (MLP) gave the maximum accuracy. The other models used, namely Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), did not give a good enough accuracy. The maximum accuracy achieved on the training set by the Multi-Layer Perceptron (MLP) was 73%, while the training accuracy of Long Short-Term Memory (LSTM) was 59% and the training accuracy of the Gated Recurrent Unit (GRU) was 59%. It can be observed that LSTM and GRU give almost the same accuracy for the defined problem. A reason for the poor performance of LSTM and GRU is the lack of training examples and the bias in the dataset, because of which the models are not able to capture the feature vectors effectively. A way to improve this would be to use other competitive programming websites like topcoder ([https://topcoder.com](https://topcoder.com)) and hackerrank ([https://hackerrank.com](https://hackerrank.com)) to increase the size of our training dataset, which would subsequently improve the performance of the models.
## V Conclusion And Future Work
Hence, in this paper we present ways to classify competitive programming problems into their respective categories. Several widely used deep learning techniques were applied. The MLP model was the best performing model, giving an accuracy of 72%. In the future, the methods proposed can be used to classify all types of competitive programming problems on the various platforms available. The models can be further refined and fine-tuned to classify a greater number of tags. Even though we classify problem statements having only 3 tags, the models show considerable performance and can be improved further. The scope can be widened by including multi-label classification, which will classify problems not into one category but into multiple categories.
|
2307.00320 | Automatic Solver Generator for Systems of Laurent Polynomial Equations | In computer vision applications, the following problem often arises: Given a
family of (Laurent) polynomial systems with the same monomial structure but
varying coefficients, find a solver that computes solutions for any family
member as fast as possible. Under appropriate genericity assumptions, the
dimension and degree of the respective polynomial ideal remain unchanged for
each particular system in the same family. The state-of-the-art approach to
solving such problems is based on elimination templates, which are the
coefficient (Macaulay) matrices that encode the transformation from the initial
polynomials to the polynomials needed to construct the action matrix. Knowing
an action matrix, the solutions of the system are computed from its
eigenvectors. The important property of an elimination template is that it
applies to all polynomial systems in the family. In this paper, we propose a
new practical algorithm that checks whether a given set of Laurent polynomials
is sufficient to construct an elimination template. Based on this algorithm, we
propose an automatic solver generator for systems of Laurent polynomial
equations. The new generator is simple and fast; it applies to ideals with
positive-dimensional components; it allows one to uncover partial $p$-fold
symmetries automatically. We test our generator on various minimal problems,
mostly in geometric computer vision. The speed of the generated solvers exceeds
the state-of-the-art in most cases. In particular, we propose the solvers for
the following problems: optimal 3-view triangulation, semi-generalized hybrid
pose estimation and minimal time-of-arrival self-calibration. The experiments
on synthetic scenes show that our solvers are numerically accurate and either
comparable to or significantly faster than the state-of-the-art solvers. | Evgeniy Martyushev, Snehal Bhayani, Tomas Pajdla | 2023-07-01T12:12:52Z | http://arxiv.org/abs/2307.00320v1 | # Automatic Solver Generator for Systems of Laurent Polynomial Equations
###### Abstract
In computer vision applications, the following problem often arises: Given a family of (Laurent) polynomial systems with the same monomial structure but varying coefficients, find a solver that computes solutions for any family member as fast as possible. Under appropriate genericity assumptions, the dimension and degree of the respective polynomial ideal remain unchanged for each particular system in the same family. The state-of-the-art approach to solving such problems is based on elimination templates, which are the coefficient (Macaulay) matrices that encode the transformation from the initial polynomials to the polynomials needed to construct the action matrix. Knowing an action matrix, the solutions of the system are computed from its eigenvectors. The important property of an elimination template is that it applies to all polynomial systems in the family. In this paper, we propose a new practical algorithm that checks whether a given set of Laurent polynomials is sufficient to construct an elimination template. Based on this algorithm, we propose an automatic solver generator for systems of Laurent polynomial equations. The new generator is simple and fast; it applies to ideals with positive-dimensional components; it allows one to uncover partial \(p\)-fold symmetries automatically. We test our generator on various minimal problems, mostly in geometric computer vision. The speed of the generated solvers exceeds the state-of-the-art in most cases. In particular, we propose the solvers for the following problems: optimal 3-view triangulation, semi-generalized hybrid pose estimation and minimal time-of-arrival self-calibration. The experiments on synthetic scenes show that our solvers are numerically accurate and either comparable to or significantly faster than the state-of-the-art solvers.
Laurent polynomial, elimination template, generalized eigenvalue problem, minimal problem.
## 1 Introduction
Many problems of applied science can be reduced to finding common roots of a system of multivariate (Laurent) polynomial equations. Such problems arise in chemistry, mathematical biology, theory of ODE's, geodesy, robotics, kinematics, acoustics, geometric computer vision, and many other areas. For some problems, it is only required to find all (or some) roots of a particular polynomial system, and the root-finding time does not matter much.
In contrast, other problems require finding roots for a family of polynomial systems with the same monomial structure, but different coefficient values. For a given set of coefficients, the roots must be found quickly and with acceptable accuracy. Under appropriate genericity assumptions on the coefficients, the dimension, and degree of the corresponding polynomial ideal remain unchanged. The state-of-the-art approach to solving such problems is to use symbolic-numeric solvers based on elimination templates [33, 34, 28, 3]. These solvers have two main parts. In the first offline part, an elimination template is constructed. The template consists of a map (formulas) from input data to a (Macaulay) coefficient matrix. The structure of the template is the same for each set of coefficients. In the second online phase, the coefficient matrix is filled with the data of a particular problem, reduced by the Gauss-Jordan (G-J) elimination, and used to construct an eigenvalue/eigenvector computation problem of an action matrix that delivers the solutions of the system.
While the offline phase is not time critical, the online phase has to be computed very fast (usually in sub-milliseconds) to be useful for robust optimization based on the RANSAC schemes [15]. The speed of the online phase is mainly determined by two operations, namely the G-J elimination of the template matrix and the eigenvalue/eigenvector computation of the action matrix. Therefore, one approach to generating fast solvers, is to find elimination templates that are as small as possible. The size of the elimination templates affects not only the speed of the resulting solvers, but also their numerical stability. The latter is more subtle, but experiments show that the larger templates have worse stability without special stability enhancing techniques, see e.g. [54, 10].
### _Contribution_
We propose a new automatic generator of elimination templates for efficiently solving systems of Laurent polynomial equations. The advantages of our generator are as follows.
* **Flexibility:** It finds elimination templates for a possibly redundant number of roots. In some cases, it can significantly reduce the template size and thus speed up the root computation.
* **Versatility:** (i) It is applicable to polynomial ideals with positive-dimensional components; (ii) It is also applicable to uncovering the partial \(p\)-fold symmetries to generate smaller templates.
* **Simplicity:** By and large, it uses only manipulations with sets of monomials and G-J elimination on matrices over a finite field.
We demonstrate our method on a variety of minimal problems mostly in geometric computer vision. For many of them, we have constructed solvers that are faster than the state-of-the-art.
We propose a solver for the famous problem of optimal 3-view triangulation [11, 34, 51], which is naturally formulated as a system of Laurent polynomial equations. Our solver for this problem is numerically accurate and slightly faster than the state-of-the-art solvers from [11, 34].
We also propose a fast solver for the semi-generalized hybrid pose estimation problem [5]. Defined as the problem of estimating the relative pose of a pinhole camera with unknown focal length w.r.t. a calibrated generalized camera, from a hybrid set of one 2D-2D and three 2D-3D point correspondences, its original formulation in [5] used a homography-based formulation along with the elimination ideal method [29]. However, this led to large expressions for the polynomial coefficients, resulting in slow solvers. In comparison, our solver relies on a depth-based formulation that results in a Laurent polynomial system. The coefficients of this system are much simpler expressions. Therefore, the solver generated using our proposed AG is \(20-30\) times faster than the solvers based on the homography formulation.
Finally, we propose solvers for the \(4s/6r\) and \(5s/5r\) Time-of-Arrival minimal problems [24, 31, 34]. Our solvers have comparable numerical accuracy and are \(1.3-1.8\) times faster than the state-of-the-art solvers from [31].
### _Related work_
Elimination templates are matrices that encode the transformation from polynomials of the initial system to polynomials needed to construct the action matrix. Knowing an action matrix, the solutions of the system are computed from its eigenvectors. _Automatic generator_ (AG) is an algorithm that takes a polynomial system as input and outputs an elimination template for the action matrix computation.
**Automatic generators:** The first automatic generator was built in [28], where the template was constructed iteratively by expanding the initial polynomials with their multiples of increasing degree. This AG has been widely used by the computer vision community to construct polynomial solvers for a variety of minimal problems, e.g., [6, 7, 30, 43, 49, 56], see also [33, Tab. 1]. Paper [33] introduced a non-iterative AG based on tracing the Gröbner basis construction and subsequent syzygy-based reduction. This AG allowed fast template construction even for hard problems. An alternative AG based on the use of sparse resultants was proposed in [3]. This method and the one from [36] are currently the state-of-the-art automatic template generators.
**Improving stability:** The standard way to construct the action matrix from a template requires performing its LU decomposition. For large templates, this operation often leads to significant round-off and truncation errors, and hence to numerical instability. The series of papers [10, 11, 12] addressed this problem and proposed several methods of improving stability, e.g. by performing a QR decomposition with column pivoting on the step of constructing the action matrix from a template.
**Optimizing formulations:** Choosing an appropriate formulation of a minimal problem can drastically simplify finding its solutions. The paper [29] proposed the variable elimination strategy, which reduces the number of unknowns in the initial polynomial system. For some problems, this strategy led to significantly smaller templates [20, 35].
**Optimizing templates:** Much effort has been spent on speeding up the action matrix method by optimizing the template construction step. The paper [44] introduced a method to optimize templates by removing some unnecessary rows and columns. The method in [27] exploited the sparsity of elimination templates by converting a large sparse template into the so-called singly-bordered block-diagonal form. This allowed splitting the initial problem into several smaller subproblems that are easier to solve. In paper [36], the authors proposed two methods that significantly reduced the size of elimination templates. The first method used the so-called Gröbner fan of a polynomial ideal to construct templates w.r.t. all possible standard bases of the quotient space. The second method went beyond Gröbner bases and introduced a random sampling strategy to construct non-standard bases. In [40], the authors proposed a heuristic greedy optimization strategy to reduce the templates obtained by the non-iterative AG from [33].
**Optimizing root solving:** Complex roots are spurious for most problems arising in applications. The paper [8] introduced two methods to avoid the computation of complex roots, resulting in a significant speedup of polynomial solvers.
**Discovering symmetries:** Polynomial systems for certain minimal problems may have hidden symmetries. Uncovering these symmetries is another way to optimize templates. This approach was demonstrated for the simplest partial \(p\)-fold symmetries in [26, 32]. A more general case was studied in [14].
**Laurent polynomial ideals:** Some application problems can be naturally formulated as a system of Laurent polynomial equations, and only the toric roots of the system are of interest. Clearly, any Laurent polynomial equation can be transformed either into an ordinary polynomial equation by taking its numerator, or into a system of ordinary polynomial equations by introducing new variables. It follows that any AG for ordinary polynomials can be also applied to Laurent polynomials. However, such an approach can have unwanted consequences: increasing the number of variables, increasing the total degree of polynomials, introducing false (non-toric) roots. All this can complicate the root-finding process. Working directly in the Laurent polynomial ring is preferable as it provides more "degrees of freedom" in choosing the action polynomial and constructing shifts of the initial polynomials. The Gröbner and the border bases for Laurent polynomial ideals were introduced in [47] and [42] respectively. An eigenvalue method for solving square systems of Laurent polynomial equations has been proposed in [53]. For Laurent systems with more polynomials than the number of variables, i.e., non-square systems, a sparse resultant-based method has been proposed in [3] which uses Newton polytopes [13] to generate the elimination template as a resultant matrix.
**The most related work:** Our work is essentially based on the results of papers [12, 28, 36, 40].
## 2 Solving sets of Laurent monomials
We use \(\mathbb{K}\) for a field, \(X=\{x_{1},\ldots,x_{k}\}\) for a set of \(k\) variables, \(R=\mathbb{K}[X,X^{-1}]\) for the \(\mathbb{K}\)-algebra of Laurent polynomials over \(\mathbb{K}\).
Let \(F=\{f_{1},\ldots,f_{s}\}\subset R\setminus\mathbb{K}\) and \(J=\langle F\rangle\) be the ideal generated by \(F\). Let
\[\mathcal{V}=\{p\in(\mathbb{K}\setminus\{0\})^{k}\,:\,f_{1}(p)=\ldots=f_{s}(p )=0\}\]
be the set of common roots of \(F\). We assume that \(\mathcal{V}\) is 0-dimensional, i.e., it is a finite set of points. More generally, \(\mathcal{V}\) is reducible and one of its components is 0-dimensional, i.e., \(\mathcal{V}=\widetilde{\mathcal{V}}\cup\mathcal{V}_{0}\) with \(\dim\mathcal{V}_{0}=0\). The positive-dimensional variety \(\widetilde{\mathcal{V}}\) consists of superfluous unfeasible roots. This case was addressed in [34] for polynomial systems. In the sequel, we assume that \(\dim\mathcal{V}=0\).
It is clear that there exists \((\alpha_{1}^{j},\ldots,\alpha_{k}^{j})\in\mathbb{Z}_{\geq 0}^{k}\) such that
\[\widetilde{f}_{j}=x_{1}^{\alpha_{1}^{j}}\ldots x_{k}^{\alpha_{k}^{j}}f_{j}\in \mathbb{K}[X]\]
for each \(j=1,\ldots,s\). Thus, \(\mathcal{V}\) can be also obtained as a set of common roots of the polynomial system \(\widetilde{F}=0\), where \(\widetilde{F}=\{\widetilde{f}_{1},\ldots,\widetilde{f}_{s}\}\). However, the use of \(\widetilde{F}\) instead of \(F\) may result in the appearance of superfluous roots that do not belong to the torus \((\mathbb{K}\setminus\{0\})^{k}\). Saturating these roots is an additional non-trivial problem in general. Furthermore, the total degrees of the polynomials in \(\widetilde{F}\) can increase significantly, which can lead to larger elimination templates. In contrast, our examples show that working directly with the Laurent polynomials leads to smaller elimination templates and thus to faster solvers, cf. Problems #35 and #36 in Tab. 1 below.
We start by generalizing the definition of solving bases (in this paper we will use the term "solving sets") from [12] for Laurent polynomials. For simplicity, we restrict ourselves to the solving sets consisting of monomials. Let
\[U=\{x_{1}^{\alpha_{1}}\ldots x_{k}^{\alpha_{k}}\,:\,(\alpha_{1},\ldots,\alpha _{k})\in\mathbb{Z}^{k}\}\]
be the set of Laurent monomials in \(X\).
We denote by \(v(\mathcal{A})\) the vector consisting of the elements of a finite set of Laurent monomials \(\mathcal{A}\subset U\) which are ordered according to a certain total ordering on \(U\), e.g., the graded reverse lex ordering (grevlex) with \(x_{1}>\ldots>x_{k}\), which compares monomials first by their total degree, i.e., \(\alpha_{1}+\ldots+\alpha_{k}\), and breaks ties by the smallest degree in \(x_{k}\), \(x_{k-1}\), etc. Note that grevlex is not a well-ordering on \(U\), but this is of no importance for our purposes.
**Definition 1**.: Let \(\mathcal{B}\subset U\) and \(a\in R\setminus\mathbb{K}\). Let us define the vector
\[C:=a\,T_{1}v(\mathcal{B})-T_{0}v(\mathcal{B})\in R^{d}, \tag{1}\]
where \(d=\#\mathcal{B}\), \(T_{0},T_{1}\in\mathbb{K}^{d\times d}\), and \(\det T_{1}\neq 0\). The set of monomials \(\mathcal{B}\) is called the _solving set_ for the ideal \(J\) if the following condition holds:
1. \(C\subset J\), i.e., each element of \(C\) is a Laurent polynomial from \(J\).
In this case the polynomial \(a\) is called the _action polynomial_.
If \(\mathcal{B}\) is a solving set for \(J\), then \(C(p)=0\) for any \(p\in\mathcal{V}\) and hence we come to the generalized eigenproblem [16]
\[T_{0}v(\mathcal{B}(p))=a(p)\,T_{1}v(\mathcal{B}(p)). \tag{2}\]
It follows that
\[a(p)\in\sigma(T_{0},T_{1})=\{\lambda\in\mathbb{K}\,:\,\det(T_{0}-\lambda T_{1 })=0\}.\]
In this paper we restrict ourselves to the case \(\det T_{1}\neq 0\), which guarantees that the set \(\sigma(T_{0},T_{1})\) is finite [16]. Since the matrix \(T_{1}\) is invertible, the problem (2) can be solved as the regular eigenproblem for the action matrix \(T_{1}^{-1}T_{0}\). The drawback of such an approach is that an ill-conditioned matrix \(T_{1}\) can cause significant inaccuracies in the computed eigenvalues. On the other hand, there is a numerically backward stable QZ algorithm [22] for solving the problem (2).
For each \(p\in\mathcal{V}\) there exists \(\lambda\in\sigma(T_{0},T_{1})\) such that \(a(p)=\lambda\). If the related eigenspace \(\ker(T_{0}-\lambda T_{1})\) is 1-dimensional and \(u\) is its basis vector, then \(u=v(\mathcal{B}(p))\) up to scale.
Note that the vector \(C\) may vanish at a point \(p\notin\mathcal{V}\). Therefore the set \(\{a(p)\,:\,p\in\mathcal{V}\}\) may be a proper subset of \(\sigma(T_{0},T_{1})\), i.e., it may happen that \(d>\#\mathcal{V}\). In this case, the solving set is said to be _redundant_[12]. It may also happen that \(d=\#\mathcal{V}\) or \(d<\#\mathcal{V}\). The latter case applies e.g. to systems with the partial \(p\)-fold symmetries [26, 32].
Next, given a solving set \(\mathcal{B}\) let us introduce the following additional condition:
1. for each variable \(x_{i}\in X\) there is an element \(b_{i}\in\mathcal{B}\) such that \(x_{i}\cdot b_{i}\in\mathcal{B}\).
Condition (C2) guarantees that the root \(p\) can be directly computed from the eigenvector \(u\). If \(x_{i}\cdot b_{i}=b^{\prime}\) and the elements \(b_{i}\) and \(b^{\prime}\) are at the \(r\)th and \(q\)th positions of vector \(v(\mathcal{B})\) respectively, then \(x_{i}(p)=u^{q}/u^{r}\), where \(u^{q}\) and \(u^{r}\) are the \(q\)th and \(r\)th entries of vector \(u\) respectively. On the other hand, if \(\mathcal{B}\) does not satisfy condition (C2), then additional computations may be required to derive roots.
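As an illustration of Eq. 2 and condition (C2), the sketch below solves the generalized eigenproblem numerically and reads off one variable from the eigenvectors. It assumes the matrices \(T_{0}\), \(T_{1}\) and the positions \(q\), \(r\) of the monomials \(x_{i}\cdot b_{i}\) and \(b_{i}\) in \(v(\mathcal{B})\) are already known, and relies on SciPy's QZ-based solver; the function name is ours.

```python
import numpy as np
from scipy.linalg import eig

def roots_from_pencil(T0, T1, q, r):
    """Solve T0 v = lambda T1 v; the eigenvalues are the values a(p) of the action
    polynomial, and x_i(p) = u[q] / u[r] for each eigenvector u (condition (C2))."""
    eigvals, eigvecs = eig(T0, T1)              # generalized eigenproblem (QZ algorithm)
    xi_values = eigvecs[q, :] / eigvecs[r, :]   # entrywise ratio per eigenvector
    return eigvals, xi_values
```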
To summarize, knowing the solving set \(\mathcal{B}\), which additionally satisfies condition (C2), together with the Laurent polynomials from \(J=\langle F\rangle\), which have the form (1), allows one to compute the roots of the system \(F=0\). The main question is, _how to find the solving sets?_ For this purpose we propose to use elimination templates and the incremental approach similar to that from [28].
## 3 Macaulay matrices and elimination templates
Given a Laurent polynomial \(f\), we denote by \(U_{f}\) the support of \(f\), i.e.,
\[U_{f}=\{m\in U\,:\,c(f,m)\neq 0\},\]
where \(c(f,m)\) is the coefficient of \(f\) at monomial \(m\). Given a set of Laurent polynomials \(F=\{f_{1},\ldots,f_{s}\}\), we denote by \(U_{F}\) the support of \(F\), i.e.,
\[U_{F}=\bigcup_{i=1}^{s}U_{f_{i}}.\]
Let \(n=\#U_{F}\) be the cardinality of the finite set \(U_{F}\). The _Macaulay matrix_\(M(F)\in\mathbb{K}^{s\times n}\) is defined as follows: its \((i,j)\)th element is the coefficient \(c(f_{i},m_{j})\) of the
polynomial \(f_{i}\in v(F)\) at the monomial \(m_{j}\in U_{F}\), i.e., \(M(F)_{ij}=c(f_{i},m_{j})\). Thus,
\[M(F)\,v(U_{F})=0\]
is the vector form of the Laurent polynomial system \(F=0\).
A _shift_ of a polynomial \(f\) is a multiple of \(f\) by a monomial \(m\in U\). Let \(A=(A_{1},\ldots,A_{s})\) be an ordered \(s\)-tuple of finite sets of monomials \(A_{j}\subset U\) for all \(j\). We define the _set of shifts_ of \(F\) as
\[A\cdot F=\{m\cdot f_{j}\,:\,m\in A_{j},f_{j}\in F\}.\]
Let \(a\) be a Laurent polynomial and \(\mathcal{B}\) be a finite subset of Laurent monomials from \(U_{A\cdot F}\) such that \(U_{a\,m}\subset U_{A\cdot F}\) for each \(m\in\mathcal{B}\). We define the two subsets
\[\mathcal{R} =\bigcup_{b\in U_{a}}\{b\,m\,:\,m\in\mathcal{B}\}\setminus \mathcal{B},\] \[\mathcal{E} =U_{A\cdot F}\setminus(\mathcal{R}\cup\mathcal{B}).\]
Clearly, the subsets \(\mathcal{B}\), \(\mathcal{R}\), \(\mathcal{E}\) are pairwise disjoint and \(U_{A\cdot F}=\mathcal{E}\cup\mathcal{R}\cup\mathcal{B}\).
**Definition 2**.: A Macaulay matrix \(M(A\cdot F)\) with columns arranged in ordered blocks \(M(A\cdot F)=\begin{bmatrix}M_{\mathcal{E}}&M_{\mathcal{R}}&M_{\mathcal{B}} \end{bmatrix}\) is called the _elimination template_ for \(F\) w.r.t. \(a\) if the reduced row echelon form of \(M(A\cdot F)\) is
\[\widetilde{M}(A\cdot F)=\left[\begin{array}{ccc}*&0&*\\ 0&I&\widetilde{M}_{\mathcal{B}}\\ 0&0&0\end{array}\right],\]
where \(*\) means a submatrix with arbitrary entries, \(0\) is the zero matrix of a suitable size, \(I\) is the identity matrix of order \(\#\mathcal{R}\) and \(\widetilde{M}_{\mathcal{B}}\) is a matrix of size \(\#\mathcal{R}\times\#\mathcal{B}\).
It follows from the definition that if a Macaulay matrix \(M(A\cdot F)\) is an elimination template, then the set \(\mathcal{B}\) is the solving set for \(J=\langle F\rangle\). On the other hand, the action polynomial \(a\), the \(s\)-tuple of sets \(A\) and the solving set \(\mathcal{B}\) uniquely determine the elimination template \(M(A\cdot F)\) (up to reordering its rows and columns in \(M_{\mathcal{E}}\), \(M_{\mathcal{R}}\), \(M_{\mathcal{B}}\)).
## 4 Automatic solver generator
Our automatic solver generator consists of two main steps: (i) finding an elimination template for a set of Laurent polynomials (TemplateFinder); (ii) reducing the template by removing all its unnecessary rows and columns (TemplateReduction). Both steps are essentially based on the procedure that checks whether a given set of polynomials is sufficient to construct an elimination template for a given action polynomial (TemplateTest). To speed up the computation, both steps are performed over a finite field of sufficiently large order. We assume that there exists a generic instance of the problem with coefficients in this field.
### _Elimination template test_
For the sake of brevity, we denote the support \(U_{F}\) of a finite set of Laurent polynomials \(F\) by \(\mathcal{U}\).
Given a Laurent polynomial \(a\), we define the set of _permissible monomials_[12] as
\[\mathcal{P}=\bigcap_{b\in U_{a}}\{m\in\mathcal{U}\,:\,b\,m\in\mathcal{U}\},\]
the set of _reducible monomials_ as
\[\mathcal{R}=\bigcup_{b\in U_{a}}\{b\,m\,:\,m\in\mathcal{P}\}\setminus\mathcal{P},\]
and the set of _excessive monomials_\(\mathcal{E}\) consisting of monomials from \(\mathcal{U}\) which are neither in \(\mathcal{R}\) nor in \(\mathcal{P}\), i.e.,
\[\mathcal{E}=\mathcal{U}\setminus(\mathcal{R}\cup\mathcal{P}).\]
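With monomials represented as integer exponent vectors, so that monomial multiplication is exponent addition, this partition can be sketched as below; the toy support and action monomial correspond to Example 1 in the next section.

```python
def mul(b, m):
    """Multiply two Laurent monomials given as exponent tuples."""
    return tuple(bi + mi for bi, mi in zip(b, m))

def partition(U, U_a):
    """Split the support U into permissible (P), reducible (R) and excessive (E)
    monomials with respect to an action polynomial with support U_a."""
    U = set(U)
    P = {m for m in U if all(mul(b, m) in U for b in U_a)}
    R = {mul(b, m) for b in U_a for m in P} - P
    E = U - R - P
    return P, R, E

# Toy data: support of the system in Example 1 and action monomial a = x/y.
U = {(2, -1), (1, 0), (0, 1), (-1, 2), (0, 0)}   # x^2/y, x, y, y^2/x, 1
U_a = {(1, -1)}                                   # support of a = x/y
P, R, E = partition(U, U_a)
print(P, R, E)   # expected: P = {x, y, y^2/x}, R = {x^2/y}, E = {1}
```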
First we set \(\mathcal{U}_{0}=\mathcal{U}\) and \(\widetilde{\mathcal{E}}_{0}=\varnothing\). We open the loop over the index \(i\) starting with \(i=1\). At the \(i\)th iteration we set
\[\mathcal{U}_{i} =\mathcal{U}_{i-1}\setminus\widetilde{\mathcal{E}}_{i-1},\] \[\mathcal{B}_{i} =\bigcap_{b\in U_{a}}\{m\in\mathcal{U}_{i}\,:\,b\,m\in\mathcal{U} _{i}\}.\]
If \(\mathcal{B}_{i}=\varnothing\), then the algorithm terminates with the empty set. Otherwise, we proceed
\[\mathcal{R}_{i} =\bigcup_{b\in U_{a}}\{b\,m\,:\,m\in\mathcal{B}_{i}\}\setminus \mathcal{B}_{i},\] \[\mathcal{E}_{i} =\widetilde{\mathcal{E}}_{i-1}\cup\mathcal{U}_{i}\setminus( \mathcal{R}_{i}\cup\mathcal{B}_{i}).\]
Let \(M\) be a Macaulay matrix corresponding to \(F\) and \(V\) be the related monomial vector. We reorder the columns of matrix \(M\) and the entries of vector \(V\) according to the partition \(\mathcal{E}_{i}\cup\mathcal{R}_{i}\cup\mathcal{B}_{i}\). The resulting Macaulay matrix and the resulting monomial vector, denoted by \(M_{i}\) and \(V_{i}\) respectively, obey the relation \(M_{i}V_{i}=MV\).
Next, let \(\widetilde{M}_{i}\) be the reduced row echelon form of \(M_{i}\) and \(\widetilde{F}_{i}=\{\widetilde{M}_{i}V_{i}\}\) be the corresponding set of Laurent polynomials. We define the following subset of \(\mathcal{R}_{i}\):
\[\widetilde{\mathcal{R}}_{i}=\{m\in\mathcal{R}_{i}\,:\,m-\sum_{j}\gamma_{j}b_{j} \in\widetilde{F}_{i},\gamma_{j}\in\mathbb{K},b_{j}\in\mathcal{B}_{i}\}.\]
If \(\widetilde{\mathcal{R}}_{i}=\mathcal{R}_{i}\), then we set \(l=i\) and terminate the loop over \(i\). Otherwise, we set \(\widetilde{\mathcal{E}}_{i}=\mathcal{E}_{i}\cup(\mathcal{R}_{i}\setminus \widetilde{\mathcal{R}}_{i})\) and proceed with \(i+1\).
The algorithm generates the following sequence of proper subsets
\[\mathcal{P}=\mathcal{B}_{1}\supset\mathcal{B}_{2}\supset\ldots\supset\mathcal{ B}_{l-1}\supset\mathcal{B}_{l}=\mathcal{B}.\]
It follows that the algorithm always terminates in a finite number of steps. By the construction, the resulting subset \(\mathcal{B}\) is either the empty set or the set satisfying condition (C1). We additionally check if \(\mathcal{B}\) satisfies condition (C2). If so, the algorithm returns the solving set \(\mathcal{B}\). The respective Macaulay matrix \(M_{l}\) is the elimination template. Otherwise, the algorithm returns the empty set. The template test function is summarized in Alg. 1.
**Example 1**.: This example demonstrates the work of the template test function from Alg. 1 on the following set of two Laurent polynomials from \(\mathbb{Q}[x^{\pm 1},y^{\pm 1}]\):
\[F=\{f_{1},f_{2}\}=\Big{\{}\frac{2y^{2}}{x}-7x-4y+9,\frac{2x^{2}}{y}-7y-4x+9 \Big{\}}.\]
The system \(F=0\) has the following three roots in \((\mathbb{Q}\setminus\{0\})^{2}\): \((1,1)\), \((-1,2)\), \((2,-1)\).
First, let us show that the \(2\times 5\) Macaulay matrix for the initial system is an elimination template for \(F\) w.r.t. the action monomial \(a=\nicefrac{{x}}{{y}}\). At the first iteration (\(i=1\)) we have
\[\mathcal{U}_{1}=\{\nicefrac{{x^{2}}}{{y}},x,y,\nicefrac{{y^{2}}}{{x}},1\},\]
\[\mathcal{E}_{1}=\{1\},\quad\mathcal{R}_{1}=\{\nicefrac{{x^{2}}}{{y}}\},\quad \mathcal{B}_{1}=\{x,y,\nicefrac{{y^{2}}}{{x}}\}.\]
The Macaulay matrix of the initial system whose columns are arranged w.r.t. \(\mathcal{E}_{1}\cup\mathcal{R}_{1}\cup\mathcal{B}_{1}\) is given by
\[M_{1}=\left[\begin{array}{c|c|ccc}9&0&-7&-4&2\\ 9&2&-4&-7&0\end{array}\right],\]
with rows corresponding to \(f_{1},f_{2}\) and columns to the monomials \(1\mid\nicefrac{x^{2}}{y}\mid x,y,\nicefrac{y^{2}}{x}\).
The reduced row echelon form of \(M_{1}\) has the form
\[\widetilde{M}_{1}=\left[\begin{array}{c|c|ccc}1&0&-7/9&-4/9&2/9\\ 0&1&3/2&-3/2&-1\end{array}\right].\]
The second row implies \(\frac{x^{2}}{y}+\frac{3}{2}\,x-\frac{3}{2}\,y-\frac{y^{2}}{x}=0\), i.e., \(\widetilde{\mathcal{R}}_{1}=\mathcal{R}_{1}\). It follows that the matrix \(M_{1}\) is the elimination template for \(F\) w.r.t. \(a\). The set \(\mathcal{B}_{1}\) does satisfy condition (C1) but does not satisfy condition (C2): there is no element \(b\in\mathcal{B}_{1}\) such that \(x\cdot b\in\mathcal{B}_{1}\) or \(y\cdot b\in\mathcal{B}_{1}\). Therefore, none of the two coordinates of a solution can be read off from the eigenvectors of the related action matrix. The algorithm returns the empty set.
Now let us consider the set of shifts \(A\cdot F=\{f_{2}/x,f_{2},f_{1}\}\) and the same action monomial \(a=\nicefrac{{x}}{{y}}\). At the first iteration \((i=1)\) we have
\[\mathcal{U}_{1}=\{\nicefrac{{x^{2}}}{{y}},x,y,\nicefrac{{y^{2}}}{{x}}, \nicefrac{{x}}{{y}},1,\nicefrac{{y}}{{x}},\nicefrac{{1}}{{x}}\},\]
\[\mathcal{E}_{1}=\{\nicefrac{{1}}{{x}}\},\quad\mathcal{R}_{1}=\{\nicefrac{{x^{2 }}}{{y}},\nicefrac{{x}}{{y}}\},\quad\mathcal{B}_{1}=\{x,y,\nicefrac{{y^{2}}}{{ x}},1,\nicefrac{{y}}{{x}}\}.\]
The Macaulay matrix of the expanded system whose columns are arranged w.r.t. \(\mathcal{E}_{1}\cup\mathcal{R}_{1}\cup\mathcal{B}_{1}\) is given by
\[M_{1}=\left[\begin{array}{c|cc|ccccc}9&0&2&0&0&0&-4&-7\\ 0&2&0&-4&-7&0&9&0\\ 0&0&0&-7&-4&2&9&0\end{array}\right],\]
with rows corresponding to \(\nicefrac{f_{2}}{x},f_{2},f_{1}\) and columns to the monomials \(\nicefrac{1}{x}\mid\nicefrac{x^{2}}{y},\nicefrac{x}{y}\mid x,y,\nicefrac{y^{2}}{x},1,\nicefrac{y}{x}\).
The reduced row echelon form of \(M_{1}\) has the form
\[\widetilde{M}_{1}=\left[\begin{array}{c|cc|ccccc}1&0&2/9&0&0&0&-4/9&-7/9\\ 0&1&0&0&-33/14&-4/7&27/14&0\\ 0&0&0&1&4/7&-2/7&-9/7&0\end{array}\right].\]
The last two rows imply that \(\widetilde{\mathcal{R}}_{1}=\{\nicefrac{{x^{2}}}{{y}}\}\neq\mathcal{R}_{1}\) and hence we proceed by setting
\[\widetilde{\mathcal{E}}_{1}=\mathcal{E}_{1}\cup(\mathcal{R}_{1}\setminus \widetilde{\mathcal{R}}_{1})=\{\nicefrac{{x}}{{y}},\nicefrac{{1}}{{x}}\}.\]
At the second iteration (\(i=2\)) we have
\[\mathcal{U}_{2}=\mathcal{U}_{1}\setminus\widetilde{\mathcal{E}}_{1}=\{ \nicefrac{{x^{2}}}{{y}},x,y,\nicefrac{{y^{2}}}{{x}},1,\nicefrac{{y}}{{x}}\},\]
\[\mathcal{E}_{2}=\{\nicefrac{x}{y},\nicefrac{1}{x}\},\quad\mathcal{R}_{2}=\{\nicefrac{x^{2}}{y},1\},\quad\mathcal{B}_{2}=\{x,y,\nicefrac{y^{2}}{x},\nicefrac{y}{x}\}.\]
The rearranged Macaulay matrix is given by
\[M_{2}=\left[\begin{array}{cc|cc|cccc}2&9&0&-4&0&0&0&-7\\ 0&0&2&9&-4&-7&0&0\\ 0&0&0&9&-7&-4&2&0\end{array}\right],\]
with rows corresponding to \(\nicefrac{f_{2}}{x},f_{2},f_{1}\) and columns to the monomials \(\nicefrac{x}{y},\nicefrac{1}{x}\mid\nicefrac{x^{2}}{y},1\mid x,y,\nicefrac{y^{2}}{x},\nicefrac{y}{x}\).
The reduced row echelon form of \(M_{2}\) has the form
\[\widetilde{M}_{2}=\left[\begin{array}{cc|cc|cccc}1&9/2&0&0&-14/9&-8/9&4/9&-7/2\\ 0&0&1&0&3/2&-3/2&-1&0\\ 0&0&0&1&-7/9&-4/9&2/9&0\end{array}\right].\]
The last two rows imply \(\widetilde{\mathcal{R}}_{2}=\mathcal{R}_{2}\) and hence \(M_{2}\) is the elimination template for \(F\) w.r.t. \(a=\nicefrac{{x}}{{y}}\). Now the solving set \(\mathcal{B}_{2}\) does satisfy condition (C2) as
\[x\cdot\nicefrac{{y}}{{x}}\in\mathcal{B}_{2},\quad y\cdot\nicefrac{{y}}{{x}}\in \mathcal{B}_{2}.\]
Finally we note that the first two columns of matrix \(M_{2}\), corresponding to the excessive monomials, are linearly dependent. Removing one of these columns results in the reduced elimination template of size \(3\times 7\). The related action matrix is of order \(4\), i.e., the solver has one redundant root.
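The row reductions above are straightforward to reproduce; for instance, the SymPy snippet below recomputes the reduced row echelon form of \(M_{2}\) in the stated column order and confirms the pivot columns.

```python
from sympy import Matrix

# Rearranged Macaulay matrix M_2 from Example 1 (rows f_2/x, f_2, f_1; columns
# ordered as x/y, 1/x | x^2/y, 1 | x, y, y^2/x, y/x).
M2 = Matrix([
    [2, 9, 0, -4,  0,  0, 0, -7],
    [0, 0, 2,  9, -4, -7, 0,  0],
    [0, 0, 0,  9, -7, -4, 2,  0],
])

R2, pivots = M2.rref()
print(pivots)   # (0, 2, 3): pivots in the columns of x/y, x^2/y and 1
print(R2)
# Expected reduced form, matching the text:
# [1, 9/2, 0, 0, -14/9, -8/9, 4/9, -7/2]
# [0,   0, 1, 0,   3/2, -3/2,  -1,    0]
# [0,   0, 0, 1,  -7/9, -4/9,  2/9,   0]
```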
### _Finding template_
Based on the template test function described in the previous subsection, we propose the algorithm for finding an elimination template for a given set \(F\) of \(s\) Laurent polynomials.
First we define the trivial \(s\)-tuple \(A^{0}=(\{1\},\ldots,\{1\})\) such that \(A^{0}\cdot F=F\).
We open the loop over the index \(i\) starting with \(i=1\). At the \(i\)th iteration we expand the \(s\)-tuple \(A^{i-1}=(A_{1}^{i-1},\ldots,A_{s}^{i-1})\) as follows
\[A_{j}^{i}=A_{j}^{i-1}\cup\{x^{\pm 1}\cdot m\,:\,x\in X,m\in A_{j}^{i-1}\} \quad\forall j.\]
Then we construct the set of shifts \(A^{i}\cdot F\). For each monomial \(a\in X^{-1}\cup X\), where \(X^{-1}=\{x_{1}^{-1},\ldots,x_{k}^{-1}\}\), we evaluate \(\mathcal{B}_{i}=\textsc{TemplateTest}(A^{i}\cdot F,a)\). If \(\mathcal{B}_{i}\neq\varnothing\), then \(\mathcal{B}_{i}\) is the solving set and the algorithm terminates with the data \(a,A_{i},\mathcal{B}_{i}\) required to construct the elimination template. Otherwise, we proceed with the \((i+1)\)th iteration.
To ensure that the algorithm terminates in a finite number of steps, we limited iterations to a natural number \(N\). In our experiments we found that for all (tractable) systems it is sufficient to set \(N=10\). The template finding function is summarized in Alg. 2.
```
1:  function TemplateFinder(\(F\))
2:    \(X \leftarrow\) set of variables for \(F\)
3:    \(A \leftarrow\) \(s\)-tuple of \(\{1\}\)
4:    for \(i = 1\) to \(N\) do
5:      for \(a\) in \(X^{-1} \cup X\) do
6:        \(\mathcal{B} \leftarrow \textsc{TemplateTest}(A \cdot F, a)\)
7:        if \(\mathcal{B} \neq \varnothing\) then
8:          return \(a, A, \mathcal{B}\)
9:        end if
10:     end for
11:     for \(j = 1\) to \(s\) do
12:       \(A_{j} \leftarrow A_{j} \cup \{x^{\pm 1} \cdot m \,:\, x \in X, m \in A_{j}\}\)
13:     end for
14:     \(A \leftarrow (A_{1}, \ldots, A_{s})\)
15:   end for
16:   return \(\varnothing\)    ▷ no template found
17: end function
```
**Algorithm 2** Given a set of Laurent polynomials \(F\) and a natural number \(N\), returns either the action polynomial \(a\), the \(s\)-tuple of sets \(A\), and the solving set \(\mathcal{B}\), or the empty set.
### _Reducing template_
In general, the template returned by Alg. 2 may be very large. In this subsection we propose a quite straightforward algorithm for its reduction.
Given the \(s\)-tuple of sets \(A=(A_{1},\ldots,A_{s})\) and the solving set \(\mathcal{B}\), we set \(A^{\prime}=A\) and \(\mathcal{B}^{\prime}=\mathcal{B}\). For each \(j=1,\ldots,s\) and \(m_{r}\in A_{j}\) we define the intermediate \(s\)-tuple
\[A^{\prime\prime}=(A_{1}^{\prime},\ldots A_{j-1}^{\prime},A_{j}^{\prime}\setminus m _{r},A_{j+1}^{\prime},\ldots,A_{s}^{\prime}).\]
Then we evaluate \(\mathcal{B}^{\prime\prime}=\textsc{TemplateTest}(A^{\prime\prime}\cdot F,a)\). It may happen that \(\mathcal{B}^{\prime\prime}\neq\mathcal{B}\). The cardinality of the solving set is allowed to decrease but not to increase during the reduction. Therefore, we set \(A^{\prime}=A^{\prime\prime}\), \(\mathcal{B}^{\prime}=\mathcal{B}^{\prime\prime}\) if and only if \(\mathcal{B}^{\prime\prime}\neq\varnothing\) and \(\#\mathcal{B}^{\prime\prime}\leq\#\mathcal{B}\). Then we proceed with the next monomial \(m_{r+1}\). If \(r+1>\#A_{j}\), then we proceed with \(j+1\). The template reduction function is summarized in Alg. 3.
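A minimal sketch of this reduction loop is shown below; `template_test` is a hypothetical stand-in for the TemplateTest procedure of Alg. 1, assumed to return the solving set (or an empty set) for a given tuple of shift sets.

```python
def reduce_template(A, B, F, a, template_test):
    """Greedily drop shift monomials from A while the template test still succeeds
    and the solving set does not grow (a sketch of Alg. 3)."""
    A = [set(Aj) for Aj in A]
    for j in range(len(A)):
        for m in list(A[j]):              # try removing each shift monomial in turn
            A[j].discard(m)
            B_new = template_test(A, F, a)
            if B_new and len(B_new) <= len(B):
                B = B_new                 # removal accepted; the solving set may shrink
            else:
                A[j].add(m)               # removal rejected; restore the shift
    return A, B
```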
The templates are also reduced by removing all linearly dependent columns corresponding to the excessive monomials, as described in [40]. As a result, our templates always satisfy the "necessary condition of optimality":
\[\#\text{ of columns }-\#\text{ of rows }=\#\text{ of roots}.\]
Finally, for problems that contain sparse polynomials with all constant coefficients, we applied the Schur complement reduction [40].
## 5 Experiments
In this section, we test our solver generator on 36 minimal problems from geometric computer vision and acoustics. We compare our AG with one of the state-of-the-art AGs from [40] (Greedy). The results are presented in Tab. I, and we make the following remarks about them.
**1.** The experiments were performed on a system with Intel(R) Core(TM) i5-1155G7 @ 2.5 GHz and 8 GB of RAM.
**2.** In general, the size of a template alone is not an appropriate measure of the efficiency of the corresponding solver. For example, the 5-point absolute pose estimation problem for a known refractive plane (Problem #9) has the templates of sizes \(38\times 58\) and \(57\times 73\). The first template is smaller but is followed by the eigendecomposition of a \(20\times 20\) matrix. On the other hand, the second template is larger but requires the eigendecomposition of a smaller matrix of size \(16\times 16\). At first glance, it is unclear which of these two templates would provide a faster solver. Therefore, to compare the efficiency of the solvers, we reported the template size, the number of roots and the average runtime of the corresponding Matlab [18] implementation. The reported times include the derivation of the action matrix and its eigendecomposition and do not include the construction of the coefficient matrix.
**3**. The numerical error is defined as follows. Let the Laurent polynomial system \(F=0\) be written in the form \(M(F)Z=0\), where \(M(F)\) and \(Z=v(U_{F})\) are the Macaulay matrix and monomial vector respectively. The matrix \(M(F)\) is normalized so that each of its rows has unit length. Let \(d_{0}\) be the number of roots of \(F=0\) and \(d\geq d_{0}\) be the number of roots returned by our solver, i.e., there are \(d-d_{0}\) false roots. Let \(Z_{i}\) be the monomial vector \(Z\) evaluated at the \(i\)th (possibly false) root. We compute \(d\) values \(\epsilon_{i}=\left\|M(F)\frac{Z_{i}}{\|Z_{i}\|_{2}}\right\|_{2}\), where \(\|\cdot\|_{2}\) is the Euclidean norm. Then the numerical error for our solvers is measured by the value \(\frac{1}{2}\log_{10}\sum_{i}\epsilon_{i}^{2}\), where the sum is taken over the \(d_{0}\) smallest values of \(\epsilon_{i}\); a small sketch of this computation is given after these remarks.
**4**. The hard minimal problem of relative pose estimation from \(9\) lines in \(3\) uncalibrated images (Problem #23) was first addressed in [46] where, using the homotopy continuation method, it was shown that the problem has \(36\) solutions. In [33], the authors proposed an efficient formulation of the problem consisting of \(21\) polynomials in \(14\) variables and first attempted to propose an eigenvalue solver for this problem by constructing a giant elimination template of size \(16,278\times 13,735\). We started with exactly the same formulation as in [33]. By applying the G-J elimination on the initial coefficient matrix, we excluded \(4\) variables resulting in the formulation consisting of \(17\) polynomials in \(10\) variables. Our generator found the template of size \(2,163\times 2,616\) with \(116\) roots in approximately \(20\) minutes. Then it was reduced to the reported size in approximately \(13\) hours.
**5**. Problems #13, #14, #16, #17, #18, #30 contain sparse polynomials with all (or all but one) constant coefficients. We additionally reduced the templates for these problems by the Schur complement reduction, see [40] for details.
**6**. The 2-fold symmetries in the formulations of Problems #25 and #26 were uncovered manually by changing variables. On the other hand, our generator automatically uncovered the partial symmetries for Problems #27 and #28 by constructing the solving set of cardinality less than the degree of the related ideal.
**7**. The AG from [40] applies only to zero-dimensional ideals. Therefore, to apply it to Problems #29-#36, we saturated the positive-dimensional components in their formulations either by the Rabinowitsch trick [48], or by a cascade of G-J eliminations as in [38]. The remaining problems were compared using the same formulations.
TABLE I: Template size, number of roots, average solver runtime (ms), and mean/median numerical error on the 36 minimal problems, for the templates generated by our method (Our) and by Greedy [40].
**8**. The Maple [37] implementation of the new AG, as well as the Matlab [18] solvers for all the minimal problems from Tab. 1, are made publicly available at [https://github.com/martyushev/EliminationTemplates](https://github.com/martyushev/EliminationTemplates).
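Referring back to the error measure of remark 3, a minimal sketch of its computation is given below; `M` is assumed to be the row-normalized Macaulay matrix, `Z_at_roots` the monomial vectors evaluated at the returned (possibly false) roots, and `d0` the number of true roots.

```python
import numpy as np

def solver_error(M, Z_at_roots, d0):
    """Return 0.5 * log10 of the sum of the d0 smallest squared residuals eps_i."""
    eps = []
    for Z in Z_at_roots:
        Z = np.asarray(Z, dtype=complex)   # roots may be complex
        eps.append(np.linalg.norm(M @ (Z / np.linalg.norm(Z))))
    eps = np.sort(np.asarray(eps))[:d0]    # drop the residuals of false roots
    return 0.5 * np.log10(np.sum(eps ** 2))
```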
### _Optimal 3-view triangulation_
The optimal 3-view triangulation problem, first addressed in [51], is formulated as follows. Given three projective camera matrices \(P_{1}\), \(P_{2}\), \(P_{3}\) and image point correspondences \(x_{1}\leftrightarrow x_{2}\leftrightarrow x_{3}\), find the space point \(X^{*}\) so that the reprojection error is minimized. That is
\[X^{*}=\arg\min_{X}\sum_{i=1}^{3}\left(\frac{P_{i}^{1}X}{P_{i}^{3}X}-x_{i}^{1} \right)^{2}+\left(\frac{P_{i}^{2}X}{P_{i}^{3}X}-x_{i}^{2}\right)^{2},\]
where \(X=\begin{bmatrix}x&y&z&1\end{bmatrix}^{\top}\), \(P_{i}^{j}\) is the \(j\)th row of \(P_{i}\) and \(x_{i}^{j}\) is the \(j\)th entry of \(x_{i}\). By choosing an appropriate projective coordinate frame, we can assume that
\[P_{1}^{3} =\begin{bmatrix}1&0&0&0\end{bmatrix},\] \[P_{2}^{3} =\begin{bmatrix}0&1&0&0\end{bmatrix},\] \[P_{3}^{3} =\begin{bmatrix}0&0&0&1\end{bmatrix},\]
i.e., the image plane of the third camera is the plane at infinity. Such parametrization, proposed in [34], leads to smaller templates compared to \(P_{3}^{3}=\begin{bmatrix}0&0&1&0\end{bmatrix}\) proposed in [51].
The optimal solution is one of the \(47\) stationary points which are found as roots of a system of three Laurent polynomial equations in three variables \(x\), \(y\), \(z\). Unlike previous work, our generator is able to work directly with the Laurent polynomial formulation.
The problem has been extensively studied [9, 11, 34, 51]. The solvers from [11, 34] are currently the state-of-the-art.
In Fig. 1, we show the support \(U_{F}\) of the initial system as well as the solving set \(\mathcal{B}\) with \(\#\mathcal{B}=58\). The related elimination template is of size \(69\times 127\), cf. Tab. 1, Problem #35.
We tested the new solver on synthetic scenes. We modeled a 3D point \(X\) lying in a cube with edge of length \(1\) centered at the coordinate origin. The point is viewed by three cameras. The centers \(c_{i}\) (here and below \(i=1,2,3\)) of the cameras randomly lie on a sphere of radius \(1\) also centered at the origin. The three rotation matrices \(R_{i}\) are chosen randomly and the calibration matrices \(K_{i}\) all have the focal length and the principal point approximately \(1,000\)px and \((500\)px, \(500\)px) respectively. The initial data for our solver are the three camera matrices \(P_{i}=K_{i}\begin{bmatrix}R_{i}&-R_{i}c_{i}\end{bmatrix}\) and the projections \(x_{i}=P_{i}X\) normalized so that \(x_{i}^{3}=1\).
We tested the numerical accuracy of our solver by constructing the distribution of the errors in 3D placement on noise-free image data. We kept the real roots, including false ones, and then picked out the unique root by calculating the reprojection errors. The 3D placement error distributions for 10K trials are compared in Fig. 2.
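The reprojection error used to pick out the unique root can be written as a plain function of a candidate point, as sketched below; `Ps` and `xs` are assumed to hold the three \(3\times 4\) camera matrices and the observed image points. Each real stationary point returned by the solver can be evaluated this way and the one with the smallest value retained.

```python
import numpy as np

def reprojection_error(X_xyz, Ps, xs):
    """Sum of squared reprojection errors of the candidate point over the three views."""
    X = np.append(np.asarray(X_xyz, dtype=float), 1.0)        # homogeneous space point
    err = 0.0
    for P, x in zip(Ps, xs):
        proj = P @ X
        err += (proj[0] / proj[2] - x[0]) ** 2 + (proj[1] / proj[2] - x[1]) ** 2
    return err
```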
The speed and the failure rate of the solvers are compared in Tab. 2.
### _Semi-generalized hybrid relative pose: \(\mathbf{H}13f\)_
Consider the problem of registering a partially calibrated pinhole camera \(\mathcal{P}\) (with unknown focal length \(f\)) w.r.t. a generalized camera \(\mathcal{G}\) from a hybrid set of point correspondences, i.e., one 2D-2D correspondence \(p_{1}\leftrightarrow(q_{11},t_{g_{1}})\) and three 2D-3D correspondences \(p_{j}\leftrightarrow X_{j}\), \(j=2,\ldots,4\). The generalized camera \(\mathcal{G}\) is considered as a set of multiple pinhole cameras, \(\{\mathcal{G}_{i}\}\), which have been registered w.r.t. a global coordinate frame.
The goal is to estimate the relative pose, i.e., the rotation \(R\) and the translation \(T\), required to align the coordinate frame of \(\mathcal{P}\) w.r.t. to the global coordinate frame, as well as its focal length \(f\). This problem was studied in [5] using a homography matrix-based formulation, leading to a system of two degree-\(3\), one degree-\(4\) and three degree-\(8\) polynomials in three variables. Using [36] led to a minimal solver with a template of size \(70\times 82\) with \(12\) roots. However, the polynomial coefficients are quite complicated, resulting in an inefficient execution time of \(55\) ms.
Instead, we generated a more efficient solver using a depth-based problem formulation. The pose and the focal length are constrained via the following equations:
\[\alpha_{1}RK^{-1}p_{1}+T =\beta_{11}q_{11}+t_{g_{1}}, \tag{3}\] \[\alpha_{j}RK^{-1}p_{j}+T =X_{j},\quad j=2,\ldots,4,\]
where \(K=\text{diag}([f,f,1])\) is the calibration matrix for \(\mathcal{P}\), \(\alpha_{j}\) and \(\beta_{ij}\) denote the depths of the \(j\)th 3D point in the coordinate frames of \(\mathcal{P}\) and \(\mathcal{G}_{i}\) respectively. Without loss of generality, we transform the coordinate frame of \(\mathcal{G}\) such that its origin coincides with the camera center of \(\mathcal{G}_{1}\), i.e., \(t_{g_{1}}=\begin{bmatrix}0&0&0\end{bmatrix}^{\top}\). For the sake of brevity, assume \(X_{1}=\beta_{11}q_{11}\) in Eq. (3). Eliminating \(T\) from Eq. (3) gives the following equations:
\[RK^{-1}(\alpha_{i_{1}}p_{i_{1}}-\alpha_{i_{2}}p_{i_{2}})=X_{i_{1}}-X_{i_{2}}, \tag{4}\]
where \(1\!\leq\!i_{1}\!\leq\!4,i_{1}\!<\!i_{2}\!\leq\!4\). For the sake of brevity, assume \(Y_{i_{1},i_{2}}=\alpha_{i_{1}}p_{i_{1}}-\alpha_{i_{2}}p_{i_{2}}\) and \(X_{i_{1},i_{2}}=X_{i_{1}}-X_{i_{2}}\). Eliminating \(R\) from Eq. (4) yields
\[\|K^{-1}Y_{i_{1},i_{2}}\|_{2}^{2} =\|X_{i_{1},i_{2}}\|_{2}^{2}, \tag{5}\] \[Y_{i_{1},i_{2}}^{\top}K^{-2}Y_{i_{3},i_{4}} =X_{i_{1},i_{2}}^{\top}X_{i_{3},i_{4}},\]
where \(1\!\leq\!i_{1},\!i_{3}\!\leq\!4,i_{1}\!<\!i_{2}\!\leq\!4,i_{3}\!<\!i_{4}\! \leq\!4,(i_{1},i_{2})\!\neq\!(i_{3},i_{4})\). Equation (5) denotes the depth formulation for the minimal problem and consists of \(20\) Laurent polynomials in \(6\) variables viz., \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\beta_{11},f\). The depth formulation tends to induce polynomials in more variables, but with much simpler coefficients, than those resulting from the homography formulation. The effect is primarily observed in the execution times of the minimal solvers based on the proposed formulation versus the homography-based formulation.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Solver & Our & [11] (STD) & [11] (QR) & [34] \\ \hline Time/call & \(1.34\)ms & \(1.54\)ms & \(2.07\)ms & \(1.56\)ms \\ Relative time & \(1\) & \(1.15\) & \(1.54\) & \(1.16\) \\ Fail (error \(>1\)) & \(1.11\%\) & \(3.09\%\) & \(0.29\%\) & \(8.91\%\) \\ Fail (error \(>0.1\)) & \(1.74\%\) & \(5.01\%\) & \(0.54\%\) & \(13.97\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE II:
Table III (**Row 1**) shows the average time taken/call, measured for both the proposed and the SOTA homography-based solvers.
We also evaluated the numerical performance of the proposed depth-based solver for synthetic scenes. For this purpose, we generated 5K 3D scenes with known ground truth parameters. In each scene, the 3D points were randomly distributed within a cube of dimensions \(10\times 10\times 10\) units. Note that for the \(\mathbf{H}13f\) case, there is only one 2D-2D point correspondence. Therefore, each 3D point was projected into two pinhole cameras with realistic focal lengths. One camera acts as a query camera, \(\mathcal{P}\), which has to be registered, while the other camera represents the generalized camera \(\mathcal{G}\) (consisting of only one pinhole camera). The orientations and positions of the cameras were randomly chosen so that they looked at the origin from a random distance of \(15\) to \(25\) units from the scene. The simulated images had a resolution of \(1,000\times 1,000\) pixels. The failure rate for focal length estimation is reported in Tab. III (**Row 3** and **Row 4**). Note that the proposed solver has a failure rate lower than or comparable to that of the SOTA homography-based minimal solvers [5] generated using the Gröbner basis [33] and the resultant [4]. At the same time, the proposed solver is \(20\) to \(30\) times faster than the two SOTA solvers.
We also evaluated the solver performance in the presence of noisy scene points by introducing Gaussian noise into the coordinates of the 3D points sampled in the synthetic scene. The standard deviation of the noise was varied as a percentage of their depths, to simulate the different quality of the keypoints used to triangulate these 3D points. We also introduced \(0.5\mathrm{px}\) image noise to simulate noisy feature detection. For such a scene setup, we evaluated the stability of the proposed depth-based minimal solver against the SOTA homography-based minimal solvers in [5] using the methods based on Grobner bases and resultants. Figure 4 shows the error in focal length estimated by the solvers. Here, the box plots show the \(25\%\) to \(75\%\) quantiles as boxes with a horizontal line for the median. We note that our proposed depth-based solver has fewer errors, even with increasing noise in the 3D points, compared to the homography-based solvers from [5].
### _Time-of-Arrival self-calibration_
The Time-of-Arrival (ToA) \((m,n)\) problem is formulated as follows. Given \(m\times n\) distance measurements \(d_{ij}\), \(i=1,\ldots,m\), \(j=1,\ldots,n\), find \(m\) points \(s_{i}\) (senders) and \(n\) points \(r_{j}\) (receivers) in 3-space such that \(d(s_{i},r_{j})=d_{ij}\) for all \(i,j\). Here \(d(x,y)=\|x-y\|_{2}\) is the distance function.
Fig. 1: The support \(U_{F}\) of the Laurent polynomial system for the problem of optimal 3-view triangulation and the related solving set \(\mathcal{B}\)
Fig. 3: The distribution of the error in focal length for the \(\mathbf{H}13f\) minimal problem
Fig. 2: The distribution of the error in 3D placement for the problem of optimal 3-view triangulation
\begin{table}
\begin{tabular}{l r r r} \hline \hline Solver & Our & [5] (GB [33]) & [5] (Res [4]) \\ \hline Time/call & \(1.10\)ms & \(27.41\)ms & \(33.00\)ms \\ Relative time & \(1\) & \(24.74\) & \(30\) \\ Fail (error \(>1\)) & \(0.02\%\) & \(0.68\%\) & \(0.06\%\) \\ Fail (error \(>0.1\)) & \(0.96\%\) & \(5.78\%\) & \(0.2\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE III:
All the points (senders and receivers) are assumed to be in general position in space. Clearly, any solution to the ToA problem can be only found up to an arbitrary Euclidean isometry.
In the real world, the ToA problem arises from measuring the absolute travel times from unknown senders (e.g., speakers) to unknown receivers (e.g., microphones). If the signal speed is known, then the distances between the senders and receivers are also known, and we arrive at the ToA problem.
The ToA \((4,6)\) and \((5,5)\) problems are minimal and have up to \(38\) and \(42\) solutions respectively. These problems have been studied in papers [24, 31, 34]. The solvers from [31] are currently the state-of-the-art.
We used the ToA problem parametrization proposed in [24]. The \((4,6)\) problem is formulated as a system of four polynomials of degree \(3\) and one of degree \(4\) in \(5\) unknowns. The related affine variety is the union of two subvarieties of dimensions \(1\) and \(0\). The \(1\)-dimensional component consists of superfluous roots that have no feasible interpretation, while the \(0\)-dimensional component consists of \(38\) feasible (complex) solutions to the problem.
Similarly, the \((5,5)\) problem is formulated as a system of five polynomials of degree \(3\) and one of degree \(4\) in \(6\) unknowns. The related variety is the union of a \(2\)-dimensional "superfluous" subvariety and a \(0\)-dimensional component consisting of \(42\) complex roots.
Our generator automatically found the redundant solving sets of cardinality \(48\) for the \((4,6)\) problem and of cardinality \(60\) for the \((5,5)\) problem. The respective elimination templates are of size \(427\times 475\) and \(772\times 832\), see Tab. I, Problems #31 and #32.
We tested the new solvers on synthetic scenes. We modeled \(m\) senders and \(n\) receivers uniformly distributed in a cube with edge of length \(1\). The ground truth positions of the senders and receivers are the \(3\)-vectors \(s_{i}\) and \(r_{j}\), respectively. The initial data for our solvers are the \(m\times n\) distances \(d(s_{i},r_{j})\) for all \(i=1,\ldots,m\), \(j=1,\ldots,n\).
We tested the numerical stability of the solvers on noise-free data by measuring the following error:
\[\epsilon=\min\biggl(\sum_{k>i}\bigl(d(s_{i},s_{k})-d(\hat{s}_{i},\hat{s}_{k})\bigr)^{2}+\sum_{l>j}\bigl(d(r_{j},r_{l})-d(\hat{r}_{j},\hat{r}_{l})\bigr)^{2}\biggr)^{1/2},\]
where \(\hat{s}_{i}\) and \(\hat{r}_{j}\) are the estimated positions of the senders and receivers and the minimum is taken over all real roots. The results are presented in Fig. 5.
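For a single candidate reconstruction, the bracketed part of this error can be computed as in the sketch below, where the arrays are assumed to store the points as rows; taking the minimum over all real roots then yields \(\epsilon\).

```python
import numpy as np

def toa_config_error(S, R, S_hat, R_hat):
    """Distance discrepancy between the ground-truth sender/receiver configuration
    (S, R) and one estimated configuration (S_hat, R_hat)."""
    err = 0.0
    for P, P_hat in ((S, S_hat), (R, R_hat)):
        n = len(P)
        for i in range(n):
            for k in range(i + 1, n):
                d_gt = np.linalg.norm(P[i] - P[k])
                d_est = np.linalg.norm(P_hat[i] - P_hat[k])
                err += (d_gt - d_est) ** 2
    return np.sqrt(err)
```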
The speed and the failure rate of the solvers are compared in Tab. IV.
## 6 Conclusion
In this paper, we have proposed a new algorithm for automatically generating small and stable elimination templates for solving Laurent polynomial systems. The proposed automatic generator is flexible, versatile, and easy-to-use. It is applicable to polynomial ideals with positive-dimensional components. It is also useful for automatically uncovering the partial \(p\)-fold symmetries, thereby leading to smaller templates. Using the proposed automatic generator, we have been able to generate state-of-the-art elimination templates for many minimal problems, leading to substantial improvement in the solver performance.
## Acknowledgments
Snehal Bhayani has been supported by a grant from the Finnish Foundation for Technology Promotion. T. Pajdla was supported by EU H2020 SPRING No. 871245 project.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Solver & Our \((4,6)\) & [31]\((4,6)\) & Our \((5,5)\) & [31]\((5,5)\) \\ \hline Time/call & \(6.75\)ms & \(8.97\)ms & \(18.68\)ms & \(33.55\)ms \\ Relative time & \(1\) & \(1.33\) & \(1\) & \(1.80\) \\ Fail (no sol.) & \(5.1\%\) & \(9.9\%\) & \(3.3\%\) & \(2.7\%\) \\ Fail (\(\epsilon>1\)) & \(10.8\%\) & \(17.1\%\) & \(9.1\%\) & \(5.8\%\) \\ Fail (\(\epsilon>0.1\)) & \(28.7\%\) & \(40.1\%\) & \(23.4\%\) & \(19.3\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV:
Fig. 4: A boxplot depicting the error in focal length estimates for the problem \(\mathbf{H}13f\), in the presence of varying noise in the 3D points and \(0.5\)px image noise
Fig. 5: The error distribution for the Time-of-Arrival \((4,6)\) and \((5,5)\) problems |
2310.15484 | NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA | Multi-hop Knowledge Graph Question Answering (KGQA) is a task that involves
retrieving nodes from a knowledge graph (KG) to answer natural language
questions. Recent GNN-based approaches formulate this task as a KG path
searching problem, where messages are sequentially propagated from the seed
node towards the answer nodes. However, these messages are past-oriented, and
they do not consider the full KG context. To make matters worse, KG nodes often
represent proper noun entities and are sometimes encrypted, being uninformative
in selecting between paths. To address these problems, we propose Neural Tree
Search (NuTrea), a tree search-based GNN model that incorporates the broader KG
context. Our model adopts a message-passing scheme that probes the unreached
subtree regions to boost the past-oriented embeddings. In addition, we
introduce the Relation Frequency-Inverse Entity Frequency (RF-IEF) node
embedding that considers the global KG context to better characterize ambiguous
KG nodes. The general effectiveness of our approach is demonstrated through
experiments on three major multi-hop KGQA benchmark datasets, and our extensive
analyses further validate its expressiveness and robustness. Overall, NuTrea
provides a powerful means to query the KG with complex natural language
questions. Code is available at https://github.com/mlvlab/NuTrea. | Hyeong Kyu Choi, Seunghun Lee, Jaewon Chu, Hyunwoo J. Kim | 2023-10-24T03:24:15Z | http://arxiv.org/abs/2310.15484v1 | # NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA
###### Abstract
Multi-hop Knowledge Graph Question Answering (KGQA) is a task that involves retrieving nodes from a knowledge graph (KG) to answer natural language questions. Recent GNN-based approaches formulate this task as a KG path searching problem, where messages are sequentially propagated from the seed node towards the answer nodes. However, these messages are past-oriented, and they do not consider the full KG context. To make matters worse, KG nodes often represent proper noun entities and are sometimes encrypted, being uninformative in selecting between paths. To address these problems, we propose Neural Tree Search (NuTrea), a tree search-based GNN model that incorporates the broader KG context. Our model adopts a message-passing scheme that _probes_ the unreached subtree regions to boost the past-oriented embeddings. In addition, we introduce the Relation Frequency-Inverse Entity Frequency (RF-IEF) node embedding that considers the global KG context to better characterize ambiguous KG nodes. The general effectiveness of our approach is demonstrated through experiments on three major multi-hop KGQA benchmark datasets, and our extensive analyses further validate its expressiveness and robustness. Overall, NuTrea provides a powerful means to query the KG with complex natural language questions. Code is available at [https://github.com/mlvlab/NuTrea](https://github.com/mlvlab/NuTrea).
## 1 Introduction
The knowledge graph (KG) is a multi-relational data structure that defines entities in terms of their relationships. Given its enormous size and complexity, it has long been a challenge to properly query the KG via human languages [1; 2; 3; 4; 5; 6]. A corresponding machine learning task is knowledge graph question answering (KGQA), which entails complex reasoning on the KG to retrieve the nodes that correctly answer the given natural language question. To address the task, several approaches focused on parsing the natural language into a KG-executable form [7; 8; 9; 10], whereas others tried to process the KG so that answer nodes can be ranked and retrieved [11; 12; 13; 14; 2]. Building on these works, there has been a recent stream of research focusing on answering more complex questions with intricate constraints, which demand multi-hop reasoning on the KG.
Answering complex questions on the KG requires processing both the KG nodes (entities) and edges (relations). Recent studies have addressed multi-hop KGQA by aligning the question text with KG edges (relations), to identify the correct path from seed nodes (_i.e._, nodes that represent the question subjects) towards answer nodes. Many of these methods, however, gradually expand the search area outwards via message passing, whose trailing path information is aggregated onto the nodes, resulting in node embeddings that are past-oriented. Also, as many complex multi-hop KGQA questions require selecting nodes that satisfy specific conditions, subgraph-level (subtree-level) comparisons are necessary in distinguishing the correct path to the answer node. Furthermore, KG node entities often consist of uninformative proper nouns, and sometimes, for privacy concerns, they may be encrypted [15]. To address these problems, we propose **Neural Tree Search** (NuTrea), a graph neural network (GNN) model that adopts a tree search scheme to consider the broader KG contexts in searching for the path towards the answer nodes.
NuTrea leverages expressive message passing layers that propagate subtree-level messages to explicitly consider the complex question constraints in identifying the answer node. Each message passing layer consists of three steps, Expansion \(\rightarrow\) Backup \(\rightarrow\) Node Ranking, whose Backup step _probes_ the unreached subtree regions to boost the past-oriented embeddings with future information. Moreover, we introduce the Relation Frequency-Inverse Entity Frequency (RF-IEF) node embedding, which takes advantage of the global KG statistics to better characterize the KG node entities. Overall, NuTrea provides a novel approach in addressing the challenges of querying the KG, by allowing it to have a broader view of the KG context in aligning it with human language questions. The general effectiveness of NuTrea is evaluated on three major multi-hop KGQA benchmark datasets: WebQuestionsSP [16], ComplexWebQuestions [17], and MetaQA [18].
Then, our contributions are threefold:
* We propose _Neural Tree Search_ (NuTrea), an effective GNN model for multi-hop KGQA, which adopts a tree search scheme with an expressive message passing algorithm that refers to the future-oriented subtree contexts in searching paths towards the answer nodes.
* We introduce _Relation Frequency-Inverse Entity Frequency_ (RF-IEF), a simple node embedding technique that effectively characterizes uninformative nodes using the global KG context.
* We achieve the state-of-the-art on multi-hop KGQA datasets, WebQuestionsSP and ComplexWebQuestions, among weakly supervised models that do not use ground-truth logical queries.
## 2 Related Works
A knowledge graph (KG) is a type of heterogeneous graph [19; 20] \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) whose edges in \(\mathcal{E}\) are each assigned a relation type by the mapping function \(f:\mathcal{E}\rightarrow\mathcal{R}\). KGs contain structured information ranging from commonsense knowledge [15; 21] to domain-specific knowledge [22], and the Knowledge Graph Question Answering (KGQA) task aims to answer natural language questions grounded on these KGs by selecting the set of answer nodes. Recent methods tackle more complex questions that require multi-hop graph traversals to arrive at the correct answer node. The task is therefore also referred to as multi-hop KGQA or complex KGQA, and it is generally discussed in terms of two mainstream approaches [23]: Semantic Parsing and Information Retrieval.
Semantic Parsing. The main idea of semantic parsing-based methods is to first parse natural language questions into a logical query. The logical query is then grounded on the given KG in an executable form. For example, [8] applies a semantic query graph generated from natural language questions. In place of hand-crafted query templates, [7] introduced a framework that automatically learns the templates from question-answer pairs. Also, [24] proposed a novel graph generation method for query structure prediction. Other methods take a case-based reasoning (CBR) approach, where previously seen questions are referenced to answer a complex question. Approaches like [9; 25; 26] use case-based reasoning by referring to similar questions or KG structures. Recently, [10] proposed a framework that jointly infers logical forms and direct answers to reduce semantic errors in the logical query. The vast majority of methods that take the semantic parsing approach utilize ground-truth logical forms or query executions during training. Thus, these supervised methods are generally susceptible to incomplete KG settings, but have high explainability.
Information Retrieval. Information retrieval-based methods focus on processing the KG to retrieve the answer nodes. The answer nodes are selected by ranking the subgraph nodes conditioned on the given natural language question. One of the earlier works [27] proposes an enhanced Key-Value Memory neural network to answer more complex natural language questions. [28] and [29] extract answers from question-specific subgraphs generated with text corpora. To deal with the incompleteness and sparsity of KGs, [11] presents a KG embedding method to answer questions. Also, [12] handles this problem by executing queries in the latent embedding space. In an effort to improve the explainability of the information retrieval approach, [13] infers an adjacency matrix by learning the activation probability of each relation type. By adapting the teacher-student framework with the Neural State Machine (NSM), [14] made learning more stable and efficient. Furthermore, [30] utilized multi-task learning that jointly trains on the KG completion task and multi-hop KGQA, while [2] provides a novel link prediction framework. Recently, there has been a trend of borrowing the concept of logical queries from Semantic Parsing approaches, attempting to _learn_ the logical instructions that guide the path search on the KG [2, 1]. Our NuTrea also builds on these approaches.
## 3 Method
In recent studies, models that sequentially process the knowledge graph (KG) from the seed nodes have shown promising results on the KGQA task. Building upon this approach, we propose Neural Tree Search (NuTrea). NuTrea adopts a novel message passing scheme that propagates the broader subtree-level information between adjacent nodes, which provides more context in selecting nodes that satisfy the complex question constraints. Additionally, we introduce a node embedding technique called _Relation Frequency-Inverse Entity Frequency_ (RF-IEF), which considers the global KG information when initializing node features. These methods allow for a richer representation of each node by leveraging the broader KG context in answering complex questions on the KG.
### Problem Definition
Here, we first define the problem settings for Neural Tree Search (NuTrea). The multi-hop KGQA task is primarily a natural language processing problem that receives a human language question \(x_{q}\) as input, and requires retrieving the set of nodes from \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) that answer the question. Following the standard protocol in KGQA [11], the subject entities in \(x_{q}\) are given and assumed always to be mapped to a node in \(\mathcal{V}\) via entity-linking algorithms [31]. These nodes are called seed nodes, denoted \(v_{s}\in\mathcal{V}_{s}\), which are used to extract a subgraph \(\mathcal{G}_{q}=(\mathcal{V}_{q},\mathcal{E}_{q})\) from \(\mathcal{G}\) so that \(\mathcal{V}_{q}\) is likely to contain answer nodes \(v_{a}\in\mathcal{V}_{a}\). Then, the task reduces to a binary node classification problem of whether each node \(v\in\mathcal{V}_{q}\) satisfies \(v\in\mathcal{V}_{a}\).
Following prior works [14, 1], we first compute different question representation vectors with a language model based module, called Instruction Generators (IG). We have two IG modules, each for the Expansion (section 3.2.1) and Backup (section 3.2.2) step, to compute
\[\{\mathbf{q}_{\text{exp}}^{(i)}\}_{i=1}^{N}=\text{IG}_{\text{exp}}(x_{q}), \quad\{\mathbf{q}_{\text{bak}}^{(j)}\}_{j=1}^{M}=\text{IG}_{\text{bak}}(x_{q}), \tag{1}\]
where we name \(\mathbf{q}_{\text{exp}}^{(i)},\mathbf{q}_{\text{bak}}^{(j)}\in\mathbb{R}^{D}\) as the expansion instruction and backup instruction, respectively. A detailed description of the IG module is given in the supplement. Then, the learnable edge (relation) type embeddings \(\mathbf{R}\in\mathbb{R}^{|\mathcal{R}|\times D}\) are randomly initialized or computed with a pretrained language model. On the other hand, node embeddings \(\mathbf{H}\in\mathbb{R}^{|\mathcal{V}_{q}|\times D}\) of \(\mathcal{V}_{q}\) are initialized using the edges in \(\mathcal{E}_{q}\) and their relation types by function \(\mathcal{H}\) as \(\mathbf{H}=\mathcal{H}(\mathcal{E}_{q},\mathbf{R})\) (e.g., \(\mathcal{H}\) = arithmetic mean of incident edge relations). This is because the node entities are often uninformative proper nouns or encrypted codes. Then, our NuTrea model function \(\mathcal{F}\) is defined as
\[\mathbf{\hat{y}}=\mathcal{F}(\{\mathbf{q}_{\text{exp}}^{(i)}\}_{i=1}^{N},\{ \mathbf{q}_{\text{bak}}^{(j)}\}_{j=1}^{M},\mathcal{V}_{s}\;;\mathcal{G}_{q}, \mathbf{H},\mathbf{R}), \tag{2}\]
where \(\mathbf{\hat{y}}\in\mathbb{R}^{|\mathcal{V}_{q}|}\) is the predicted node score vector normalized across nodes in \(\mathcal{V}_{q}\), whose ground-truth labels are \(\mathbf{y}=[\mathbb{I}\left(v\in\mathcal{V}_{a}\right)]_{v\in\mathcal{V}_{q}} \in\mathbb{R}^{|\mathcal{V}_{q}|}\). Then, the model is optimized with the KL divergence loss between \(\mathbf{\hat{y}}\) and \(\mathbf{y}\). In this work, we claim our contributions in \(\mathcal{F}\) (section 3.2) and \(\mathcal{H}\) (section 3.3).
### Neural Tree Search (\(\mathcal{F}\))
Our Neural Tree Search (NuTrea) model consists of multiple layers that each performs message passing in three consecutive steps: (1) Expansion, (2) Backup, and (3) Node ranking. The Expansion step propagates information outwards to expand the search tree, which is followed by the Backup step, where the depth-\(K\) subtree content is aggregated to each of their root nodes, to enhance the past-oriented messages from the Expansion step. Then, the nodes are scored based on how likely they answer the given question.
#### 3.2.1 Expansion
Starting from the seed node \(v_{s}\in\mathcal{V}_{s}\), a NuTrea layer first expands the search tree by sequentially propagating messages outwards to the adjacent nodes. The propagated messages \(\mathbf{f}_{uv}^{(i)}\) are conditioned on the expansion instructions \(\{\mathbf{q}_{\text{exp}}^{(i)}\}_{i=1}^{N}\), which are computed as
\[\mathbf{f}_{uv}^{(i)}=\text{ReLU}(\mathbf{W_{f}}\mathbf{r}_{uv}\odot\mathbf{q }_{\text{exp}}^{(i)}), \tag{3}\]
where \(i\in[1,N]\), \(\mathbf{W_{f}}\in\mathbb{R}^{D\times D}\) is a learnable linear projection, \(\mathbf{r}_{uv}=\text{row}(\mathbf{R})\in\mathbb{R}^{D}\) is the relation type embedding of edge \(u\to v\), and \(\odot\) is an element-wise product operator. Optionally, we use the relative position embedding \(\mathbf{e}_{uv}\in\mathbb{R}^{D}\) as
\[\mathbf{f}_{uv}^{(i)}=\text{ReLU}((\mathbf{W_{f}}\mathbf{r}_{uv}+\mathbf{e}_ {uv})\odot\mathbf{q}_{\text{exp}}^{(i)}), \tag{4}\]
where \(\mathbf{e}_{uv}\) is defined for each relation type. Then, for an edge \(u\to v\), \(N\) types of messages are propagated, which are element-wise products between the edge relation and the \(N\) different question representation vectors. This operation highlights the edges that are relevant to the given question. Then, the messages are aggregated to a node \(v\) via an MLP aggregator, computed as
\[\mathbf{\tilde{f}}_{v}=\big{\|}_{i=1}^{N}\sum_{u\in N(v)}s_{u}\mathbf{f}_{uv}^ {(i)} \tag{5}\]
\[\mathbf{f}_{v}=\text{MLP}(\mathbf{h}_{v}\parallel\mathbf{\tilde{f}}_{v}),\]
where \(\mathbf{h}_{v}=\text{row}(\mathbf{H})\in\mathbb{R}^{D}\) is the node embedding and \(s_{u}\in\mathbb{R}\) is the score value of a head node \(u\). In the first layer, the seed nodes are the only head nodes, whose scores are initially set to 1. In subsequent layers, we use the updated node scores as \(s_{u}\), whose computation will be introduced shortly in section 3.2.3. Notably, the nodes with score \(s_{u}=0\), which are typically nodes that are yet to be reached, do not pass any message to their neighbors.
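A minimal sketch of this Expansion step, written with PyTorch tensors, is given below; the tensor layout, the explicit loop over the \(N\) instructions, and the MLP shapes are illustrative assumptions rather than the authors' exact implementation. Nodes with score zero contribute nothing to the sums, mirroring the fact that unreached nodes pass no messages.

```python
import torch
import torch.nn as nn

def expansion_step(h, r_edge, edge_index, s, q_exp, W_f, mlp):
    """h: [V, D] node embeddings; r_edge: [E, D] relation embeddings per edge;
    edge_index: [2, E] LongTensor of (head u -> tail v) pairs; s: [V] node scores
    from the previous layer; q_exp: [N, D] expansion instructions; W_f: nn.Linear(D, D);
    mlp: module mapping D*(N+1) -> D. Returns the propagated embeddings f_v of Eq. (5)."""
    u, v = edge_index
    V, D = h.shape
    N = q_exp.shape[0]
    msgs = []
    for i in range(N):
        # Eq. (3): per-edge message conditioned on the i-th expansion instruction.
        f_uv = torch.relu(W_f(r_edge) * q_exp[i])                        # [E, D]
        # Eq. (5): weight by the head-node score and sum into the tail node.
        agg = torch.zeros(V, D).index_add_(0, v, s[u].unsqueeze(1) * f_uv)
        msgs.append(agg)
    f_tilde = torch.cat(msgs, dim=1)                                     # [V, N*D]
    return mlp(torch.cat([h, f_tilde], dim=1))                           # [V, D]
```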
#### 3.2.2 Backup
After the Expansion step grows the search tree, the leaf nodes of the tree naturally contain the trailing path information from the seed nodes, which is past-oriented. To provide future context, we employ
Figure 1: **Neural Tree Search. Given a natural language question, a corresponding KG subgraph \(\mathcal{G}_{q}\) is extracted, and expansion instructions \(\mathbf{q}_{\text{exp}}\) and backup instructions \(\mathbf{q}_{\text{bak}}\) are computed with Instruction Generators (IG). The node embeddings of \(\mathcal{G}_{q}\) are first defined by our RF-IEF (section 3.3), which characterizes nodes based on the global KG information by suppressing prevalent relation types that hold less meaning. Then, in each NuTrea layer (section 3.2), messages are propagated outwards from the seed node with respect to \(\mathbf{q}_{\text{exp}}\) (Expansion), which contain future-oriented subtree information conditioned by \(\mathbf{q}_{\text{bak}}\) (Backup). Then, the nodes are scored by adding and normalizing the logits (Node Ranking), which are utilized in the subsequent layer. The figure describes the first layer of NuTrea whose Backup subtree depth \(K\) is 2, and \(\phi\) is the softmax function. Overall, NuTrea considers the broader KG contexts in distinguishing the correct paths, contrary to previous methods that do not incorporate the Backup step.**
the Backup step to aggregate contextual information from subtrees rooted at the nodes reached by previous NuTrea layers. We denote a subtree of depth \(K\) rooted at node \(v\) as \(\mathcal{T}_{v}^{K}=(\mathcal{V}_{v},\mathcal{E}_{v})\subset\mathcal{G}_{q}=( \mathcal{V}_{q},\mathcal{E}_{q})\). Here, \(\mathcal{V}_{v}=\{u\mid\mathrm{SP}(u,v)\leq K\text{ and }u\in\mathcal{V}_{q}\}\) and \(\mathcal{E}_{v}=\{(u_{1},u_{2})\mid(u_{1},u_{2})\in\mathcal{E}_{q}\text{ and }u_{1},u_{2}\in\mathcal{V}_{v}\}\), where \(\mathrm{SP}(u,v)\) is a function that returns the length of the shortest path between nodes \(u\) and \(v\). For the Backup step, we consider only the edge set \(\mathcal{E}_{v}\) of \(\mathcal{T}_{v}^{K}\). The reason behind this is that the edges (relation types) better represent the question context in guiding the search on the KG. Also, using both the node and edge sets may introduce computational redundancy, as the initial node features originate from the edge embeddings [14, 1] (See section 3.1).
To pool the constraint information from \(\mathcal{E}_{v}\), we apply max-pooling conditioned on the question content. Specifically, we take a similar measure as the Expansion step by computing \(M\) types of messages conditioned on the backup instructions \(\{\mathbf{q}_{\mathrm{bak}}^{(j)}\}_{j=1}^{M}\).
\[\mathbf{c}_{u_{1}u_{2}}^{(j)}=\mathrm{ReLU}(\mathbf{W_{e}}\mathbf{r}_{u_{1}u_{2}} \odot\mathbf{q}_{\mathrm{bak}}^{(j)}), \tag{6}\]
where \(j\in[1,M]\) and \(\mathbf{W_{e}}\in\mathbb{R}^{D\times D}\). Then, we max-pool the messages as
\[\mathbf{c}_{v}^{(j)}=\text{MAX-POOL}(\{\mathbf{c}_{u_{1}u_{2}}^{(j)}\mid(u_{1 },u_{2})\in\mathcal{E}_{v}\}), \tag{7}\]
which represents the extent to which the local subtree context of \(v\) is relevant to the conditions and constraints in the question. Next, the information is aggregated with an MLP layer, and the node embedding \(\mathbf{h}_{v}\) is updated as
\[\mathbf{c}_{v} =\big{\|}_{j=1}^{M}\ \mathbf{c}_{v}^{(j)} \tag{8}\] \[\mathbf{h}_{v} :=\text{MLP}(\mathbf{f}_{v}\parallel\mathbf{c}_{v}),\]
where \(\mathbf{f}_{v}\) refers to the propagated embeddings originating from Eq. (5). This serves as a correction of the original past-oriented message with respect to question constraints, providing the next NuTrea layer with rich local context.
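The Backup pooling can be sketched in the same style; here `subtree_edges[v]` is assumed to be a precomputed list of the edge indices of the depth-\(K\) subtree rooted at \(v\), and the shapes are again illustrative.

```python
import torch
import torch.nn as nn

def backup_step(f, r_edge, subtree_edges, q_bak, W_e, mlp):
    """f: [V, D] node embeddings after Expansion; r_edge: [E, D] relation embeddings;
    subtree_edges: per-node list of edge indices of T_v^K; q_bak: [M, D] backup
    instructions; W_e: nn.Linear(D, D); mlp maps D*(M+1) -> D. Returns h_v of Eq. (8)."""
    V, D = f.shape
    M = q_bak.shape[0]
    # Eq. (6): per-edge messages for every backup instruction, shape [M, E, D].
    c_edge = torch.relu(W_e(r_edge).unsqueeze(0) * q_bak.unsqueeze(1))
    c_nodes = []
    for v in range(V):
        idx = torch.as_tensor(subtree_edges[v], dtype=torch.long)
        if idx.numel() == 0:                           # empty subtree: zero context
            c_v = torch.zeros(M, D)
        else:
            c_v = c_edge[:, idx, :].max(dim=1).values  # Eq. (7): max-pool over E_v
        c_nodes.append(c_v.reshape(-1))                # concatenate over j (Eq. 8)
    c = torch.stack(c_nodes, dim=0)                    # [V, M*D]
    return mlp(torch.cat([f, c], dim=1))               # [V, D]
```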
#### 3.2.3 Node Ranking
Finally, each node is scored and ranked based on the embeddings before and after the Backup step. For node \(v\), \(\mathbf{f}_{v}\) from Eq. (5) and \(\mathbf{h}_{v}\) from Eq. (8) are projected to the expansion-score \(s_{v}^{(e)}\) and backup-score \(s_{v}^{(b)}\), respectively, as
\[s_{v}^{(e)}=\mathbf{f}_{v}\cdot W_{e}\qquad\quad s_{v}^{(b)}=\mathbf{h}_{v} \cdot W_{b}, \tag{9}\]
where \(W_{e},W_{b}\in\mathbb{R}^{D}\). The final node score is retrieved by adding the two scores. We use a context coefficient \(\lambda\) to control the effect of the Backup step, and apply softmax to normalize the scores as
\[s_{v}=\text{Softmax}([s_{v}^{(e)}+\lambda\cdot s_{v}^{(b)}])_{v\in\mathcal{V} _{q}}, \tag{10}\]
which is passed on to the next layer to be used for Eq. (5). Figure 1 provides a holistic view of our method, and pseudocode is in the supplement.
Overall, the message passing scheme of NuTrea resembles the algorithm of Monte Carlo Tree Search (MCTS): Selection \(\rightarrow\) Expansion \(\rightarrow\) Simulation \(\rightarrow\) Backup. The difference is that our method replaces the node 'Selection' and 'Simulation' steps with a soft GNN-based approach that rolls out subtrees and updates the nodes at once, rather than applying Monte Carlo sampling methods.
### RF-IEF Node Embedding (\(\mathcal{H}\))
Another notable challenge with KGs is embedding nodes. Many KG entities (nodes) are proper nouns that are not informative, and several KGQA datasets [16, 17] consist of encrypted entity names. Thus, given no proper node features, the burden is on the model layers to learn meaningful node embeddings. To alleviate this, we propose a novel node embedding method, Relation Frequency-Inverse Entity Frequency (RF-IEF), which grounds on the local and global topological information of the KG nodes.
Term frequency-inverse document frequency (TF-IDF) is one effective feature that characterizes a textual document [32, 33, 34]. The bag-of-words model computes the frequency of terms in a document and penalizes frequent but noninformative words, _e.g.,_ 'a', 'the', 'is', 'are', by the frequency of the
term across documents. Motivated by the idea, we represent a node on a KG as a bag of relations. An entity node is characterized by the frequencies of rare (or informative) relations. Similar to TF-IDF, we define two functions: Relation Frequency (RF) and Inverse Entity Frequency (IEF). The RF function is defined for node \(v\in\mathcal{V}_{q}\) and relation type \(r\in\mathcal{R}\) as
\[\text{RF}(v,r)=\sum_{e\in\mathcal{I}(v)}\mathbb{1}\{f(e)=r\}, \tag{11}\]
where \(\mathcal{I}(v)\) is the set of incident edges of node \(v\), \(\mathbb{1}\) is an indicator function, and \(f:\mathcal{E}\rightarrow\mathcal{R}\) is a function that retrieves the relation type of an edge. Then, the output of the RF function is a matrix \(RF\in\mathbb{R}^{|\mathcal{V}_{q}|\times|\mathcal{R}|}\) that counts the occurrence of each relation type incident to each node in the KG subgraph. We used raw counts for relation frequency (RF) to reflect the local degree information of a node. On the other hand, the IEF function is defined as
\[\text{IEF}(r)=\log\frac{|\mathcal{V}_{q}|}{1+\text{EF}(r)}, \tag{12}\]
where
\[\text{EF}(r)=\sum_{v\in\mathcal{V}_{q}}\mathbb{1}\{\exists\ e\in\mathcal{I}(v )\text{ s.t. }f(e)=r\}. \tag{13}\]
EF counts the global frequency of nodes across KG subgraphs that have relation \(r\) within their incident edge sets. With \(IEF\in\mathbb{R}^{|\mathcal{R}|}\), the RF-IEF matrix \(\mathbf{F}\in\mathbb{R}^{|\mathcal{V}_{q}|\times|\mathcal{R}|}\) is computed as
\[\mathbf{F}=RF\operatorname{diag}(IEF), \tag{14}\]
where \(\operatorname{diag}(IEF)\in\mathbb{R}^{|\mathcal{R}|\times|\mathcal{R}|}\) denotes a diagonal matrix constructed with the elements of \(IEF\).
The RF-IEF matrix \(\mathbf{F}\) captures both the local and global KG structure, and it can be further enhanced with the rich semantic information on the edges of KGs. Unlike entity nodes with uninformative text, _e.g.,_ proper nouns and encrypted entity names, edges are generally accompanied by linguistic descriptions (relation types). Hence, the relations are commonly embedded by a pre-trained language model in KGQA. Combining with the relation embeddings \(\mathbf{R}\in\mathbb{R}^{|\mathcal{R}|\times D}\), our final RF-IEF node embedding matrix \(\mathbf{H}\) is computed as
\[\mathbf{H}=\mathbf{F}\ \mathbf{R}\ \mathbf{W}_{h}, \tag{15}\]
where \(\mathbf{H}\in\mathbb{R}^{|\mathcal{V}_{q}|\times D}\), and \(\mathbf{W}_{h}\in\mathbb{R}^{D\times D}\). The RF-IEF node embeddings can be viewed as the aggregated semantics of relations, which are represented by a language model, based on graph topology as well as the informativeness (or rareness) of relations. A row in \(\mathbf{H}\) is a node embedding vector \(\mathbf{h}_{v}\), which is used in Eq. (5) at the first NuTrea layer.
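For concreteness, the RF-IEF construction in Eqs. (11)-(15) can be sketched as below. This is a simplified illustration: unlike the actual pipeline, it computes EF within a single subgraph rather than pre-computing it over the training-set subgraphs, and all function and variable names are hypothetical.

```python
import numpy as np

def rf_ief_embeddings(triplets, num_nodes, num_rels, rel_emb, W_h):
    """Sketch of RF-IEF node embeddings (Eqs. 11-15).
    triplets: list of (head, relation, tail) ids of one KG subgraph
    rel_emb:  (num_rels, D) relation embeddings from a language model
    W_h:      (D, D) learnable projection
    """
    RF = np.zeros((num_nodes, num_rels))
    for h, r, t in triplets:              # an edge is incident to both of its endpoints
        RF[h, r] += 1
        RF[t, r] += 1
    EF = (RF > 0).sum(axis=0)             # nodes having relation r among their incident edges
    IEF = np.log(num_nodes / (1.0 + EF))  # Eq. (12); here computed on a single subgraph
    F = RF * IEF                          # row-wise scaling, equivalent to RF @ diag(IEF)
    return F @ rel_emb @ W_h              # (num_nodes, D) node embeddings, Eq. (15)

# toy usage: 4 nodes, 3 relation types, D = 8
H = rf_ief_embeddings([(0, 1, 2), (2, 0, 3)], 4, 3,
                      np.random.randn(3, 8), np.random.randn(8, 8))
```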
### Discussion
Expressiveness of the NuTrea layer.While many previous multi-hop KGQA models _simultaneously_ update all node embeddings, recent works [14; 2; 1] have shown the superiority of approaches that search paths on the KG. These methods gradually expand the searching area by _sequentially_
Figure 2: **Expressiveness of NuTrea layers. In the input KG subgraph (left-most figure), let node 1 and 2 be the seed and answer node, respectively, where information \(d\) is critical in choosing node 2 as the answer. The shaded nodes indicate the regions that are sequentially reached by consecutive GNN layers. Compared to (a) previous methods [14; 1] that require 3 message passing steps for node 2 to access \(d\), our (b) NuTrea layer requires only 1 step for node 2 to acquire information \(d\) to determine it as the answer.**
updating nodes closer to the seed node towards the answer nodes. Our model builds on the latter _sequential search_ scheme as well, enhancing expressiveness with our proposed NuTrea layers.
With a simple toy example in Figure 2, we compare the message flow of previous sequential search models and our NuTrea. The example demonstrates that our NuTrea layer (b) can probe subtrees to quickly gather fringe node (_i.e_., node 4 or 5) information without exhaustively visiting them. This is accomplished by our Backup step, which boosts the original past-oriented node embeddings with future-oriented subtree information.
## 4 Experiments
### Experimental Settings
Datasets.We experiment on three large-scale multi-hop KGQA datasets: MetaQA [35], WebQuestionsSP (WQP) [16] and ComplexWebQuestions (CWQ) [17]. MetaQA consists of three different splits, 1-hop, 2-hop, and 3-hop, each indicating the number of hops required to reach the answer node from the seed node. Its questions are relatively easy, with fewer constraints. WQP and CWQ, on the other hand, contain more complex questions with diverse constraints. WQP is the easier of the two, since CWQ is derived from WQP by extending its questions with additional constraints. MetaQA is answerable with the WikiMovies knowledge base [27], while WQP and CWQ require the Freebase KG [15] to answer questions. Further dataset information and statistics are provided in the supplement.
Baselines.We mainly compare with previous multi-hop KGQA methods that take the Information Retrieval approach (section 2). These models, unlike Semantic Parsing approaches, do not access the ground truth logical queries and focus on processing the KG subgraph to rank the nodes to identify answer nodes. To introduce the three most recent baseline models: (1) SQALER [3] proposes a scalable KGQA method whose complexity scales linearly with the number of relation types, rather than nodes or edges. (2) TERP [2] introduces the rotate-and-scale entity link prediction framework to integrate textual and KG structural information. (3) ReaRev [1] adaptively selects the next reasoning step with a variant of breadth-first search (BFS). Other baselines are introduced in the supplement.
Implementation Details.For WQP, 2 NuTrea layers with subtree depth \(K=1\) are used, while CWQ with more complex questions uses 3 layers with depth \(K=2\). In the case of RF-IEF node embedding, we pre-compute the Entity Frequency (EF) values in Eq. (13) for subgraphs in the training set before training. We use the same EF values throughout training, validation, and testing. This stabilizes computation by mitigating the large variance induced by relatively small batch sizes. For MetaQA, the number of NuTrea layers is selected from {2, 3}, and \(K\) for ego-graph pooling from {1, 2}. See the supplement for further hyperparameter settings and details.
### Main Experiments
Here, we present the experimental results of NuTrea. Following the common evaluation practice of previous works, we test the model that achieved the best performance on the validation set. In the WQP dataset experiments in Table 1, we achieved the best performance of 77.4 H@1 among strong KGQA baselines that take an information retrieval approach, as discussed in Section 2. Compared to the previous best, this is a large improvement of 0.6 points. In terms of the F1 score, which evaluates the answer _set_ prediction, our method achieved a score of 72.7, exceeding the previously recorded value by a large margin of 1.8 points. In addition, we also improved the previous state-of-the-art performance on the CWQ dataset by achieving a 53.6 H@1, an improvement of 0.7 points.
We also evaluated NuTrea on MetaQA to see whether it performs reasonably well on easy questions as well. On the three data splits, NuTrea achieved performance comparable to previous state-of-the-art methods for simple question answering. Evaluating with the average H@1 score of the three splits, NuTrea performs second best among all baseline models.
### Incomplete KG Experiments
The KG is often human-made and its contents are prone to being incomplete. Hence, it is common practice to test a model's robustness in incomplete KG settings where a certain portion of KG triplets, _i.e_., tuples of \(\langle\)head, relation, tail\(\rangle\), are dropped. This experiment evaluates the robustness of our model to missing relations in a KG. We follow the experiment settings in [1], and use the identical incomplete KG dataset, which consists of WQP samples with [50%, 30%, 10%] of the original KG triplets remaining. In Table 2, NuTrea performs the best in most cases, among the GNN models
designed to handle incomplete KGs. We believe that our model adaptively learns multiple alternative reasoning processes and can plan for future moves beforehand via our Backup step, so that it provides robust performance with noisy KGs.
## 5 Analysis
In this section, we provide comprehensive analyses on the contributions of NuTrea to ensure its effectiveness in KGQA. We try to answer the following research questions: **Q1.** How does each component contribute to the performance of NuTrea? **Q2.** What is the advantage of NuTrea's tree search algorithm over previous methods? **Q3.** What are the effects of the RF-IEF node embeddings?
### Ablation Study
Here, we evaluate the effectiveness of each component in NuTrea to answer **Q1**. An ablation study is performed on our two major contributions: the Backup step in our NuTrea layers and the RF-IEF node embeddings. When removing the RF-IEF node embeddings, we instead apply the common node initialization method used in [14; 1], which simply averages the relation embeddings incident to each node. Another option is to use zero embeddings, but we found it worse than the simple averaging method. For the NuTrea layer ablation, we remove the Backup step, which plays a key role in aggregating the future-oriented subtree information onto the KG nodes. Then, only the expansion-score (\(s_{v}^{(e)}\)) is computed, and the backup-score (\(s_{v}^{(b)}\)) is always 0. Also, the embedding \(\mathbf{f}_{v}\) (Eq. (5)) is directly output from the NuTrea layer and no further updates are made via the Backup step.
In Table 3, we can see that the largest performance drop is observed when the Backup step is removed. Without it, the model has limited access to the broader context of the KG and cannot reflect the complex question constraints in node searching. Further discussion on this property of NuTrea's message passing is provided in the next Section 5.2. Also, we observed a non-trivial 0.6 point drop
\begin{table}
\begin{tabular}{l|c c|c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**WQP**} & \multicolumn{2}{c}{**CWQ**} & \multicolumn{3}{c}{**MetaQA**} \\ & **H@1** & **F1** & **H@1** & **F1** & 1-hop & 2-hop & 3-hop & **Avg. H@1** \\ \hline KV-Mem [27] & 46.7 & 38.6 & 21.1 & - & 95.8 & 25.1 & 10.1 & 43.7 \\ GraftNet [28] & 66.7 & 62.4 & 32.8 & - & - & - & - & 96.8 \\ PullNet [29] & 68.1 & - & 45.9 & - & 97.0 & 99.9 & 91.4 & 96.1 \\ EmbedKGQA [11] & 66.6 & - & - & - & **97.5** & 98.8 & 94.8 & 97.0 \\ ReifieldKB [36] & 52.7 & - & - & - & 96.2 & 81.1 & 72.3 & 83.2 \\ EMQL [37] & 75.5 & - & - & - & 97.2 & 98.6 & 99.1 & 98.3 \\ TransferNet [13] & 71.4 & 48.6 & 48.6 & - & **97.5** & **100.0** & **100.0** & **99.2** \\ NSM(+p) [14] & 73.9 & 66.2 & 48.3 & 44.0 & 97.3 & 99.9 & 98.9 & 98.7 \\ NSM(+h) [14] & 74.3 & 67.4 & 48.8 & 44.0 & 97.2 & 99.9 & 98.9 & 98.6 \\ Rigel [38] & 73.3 & - & 48.7 & - & - & - & - & - \\ SQALER+GNN [3] & 76.1 & - & - & - & - & 99.9 & 99.9 & - \\ TERP [2] & 76.8 & - & 49.2 & - & **97.5** & 99.4 & 98.9 & 98.6 \\ ReaRev [1] & 76.4 & 70.9 & 52.9 & - & - & - & - & - \\ \hline
**NuTrea (Ours)** & **77.4** & **72.7** & **53.6** & **49.5** & 97.4 & **100.0** & 98.9 & 98.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results on multi-hop KGQA datasets.** The Hit@1 and F1 scores are reported. The baselines are taken from the original papers. The best performances are in bold, and the second best are underlined.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline \multirow{2}{*}{**Portion of KG triplets (\%)**} & **50\%** & **30\%** & **10\%** \\ & H@1 F1 & H@1 F1 & H@1 F1 \\ \hline Graftnet [28] & 47.7 & 34.3 & 34.9 & 20.4 & 15.5 & 6.5 \\ SGReader [39] & 49.2 & 33.5 & 35.9 & 20.2 & 17.1 & 7.0 \\ HGCN [40] & 49.3 & 34.3 & 35.2 & 21.0 & 18.3 & 7.9 \\ ReaRev [1] & 53.4 & 39.9 & 37.9 & 23.6 & **19.4** & 8.6 \\ \hline
**NuTrea (Ours)** & **53.7** & **40.1** & **38.3** & **24.1** & 18.9 & **8.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Incomplete KG experiments.** NuTrea also performs well in incomplete KG settings. The baseline figures were taken from [1].
\begin{table}
\begin{tabular}{c c|c c} \hline \hline \multirow{2}{*}{**RF-IEF Node Emb.**} & **Backup step** & **WQP H@1** & **WQP F1** \\ \hline ✓ & ✓ & **77.4** (–0.0) & **72.7** (–0.0) \\ ✓ & & 74.8 (–2.6) & 70.4 (–2.3) \\ & ✓ & 76.8 (–0.6) & 71.5 (–1.2) \\ & & 73.4 (–4.0) & 70.9 (–1.8) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Component ablation experiments.** The ablation experiments were done on the WQP dataset. Two main contributions are studied.
in H@1 by removing the RF-IEF initialization method. By ablating both components, there was a significant degradation of 4.0 H@1 points.
### Advantages of NuTrea
To answer **Q2**, we highlight the key advantages of NuTrea over recent approaches. We analyze the efficiency of NuTrea, and provide qualitative results.
#### 5.2.1 Efficiency of NuTrea
In addition, to further verify the utility of our model, we analyze the latency of NuTrea on WQP in Table 4. Compared to the most recent ReaRev [1] model, the training and inference latency per epoch/sample is slightly higher, due to our additional Backup module. However, thanks to our expressive message passing scheme, NuTrea converges considerably faster, reducing the training GPU hours from 4.3 hours to 2.9 hours.
To provide more insight into the number of NuTrea layers, we also report its effect on model performance in Figure 5. The figure reports the F1 scores both with and without the Backup module, evaluated on the WebQuestionsSP dataset. "NuTrea without Backup" has a model configuration equivalent to the one used in the Backup ablation experiment of Table 3. Overall, the performance of "NuTrea without Backup" generally improves with additional layers, but "NuTrea with Backup" reached the highest score of 72.7 with only 2 NuTrea layers. This is enabled by our Backup module, which alleviates the burden of exhaustively searching deeper nodes and is computationally more efficient than stacking multiple layers to achieve higher performance. Specifically, comparing the 2-layer "NuTrea with Backup" and the 5-layer "NuTrea without Backup", the former required an average of 73.8 ms of inference time per question, whereas the latter required 108.6 ms. With only 68% of the compute, our NuTrea achieved performance comparable to the deeper "NuTrea without Backup". Note that these latency values were evaluated in a different environment from the values reported in Table 4.
#### 5.2.2 Qualitative Results
In Figure 3, we demonstrate several qualitative examples from our error analysis. In each question, the blue node entity is the correct answer choice, while the red one is the wrong choice made by the
\begin{table}
\begin{tabular}{l|c c} \hline \hline Models & NuTrea (Ours) & ReaRev \\ \hline Training Latency (per epoch) & 100.2 s & 78.0 s \\ Inference Latency (per sample) & 67.7 ms & 51.3 ms \\ Training GPU Hours & 2.9 H & 4.3 H \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Latency of ReaRev and NuTrea.**
Figure 5: **Effect of number of layers.**
model without the Backup step. The values in parentheses show the difference in node scores between the models _without_ and _with_ Backup. Without it, the sequential search model cannot refer to the local context of a node and frequently predicts an extremely low score for the correct answer node (_e.g._, 0.0062 for "Queen Elizabeth the Queen Mother" in the first question of Figure 3). Such a problem is mitigated by NuTrea's Backup step, which tends to boost the scores of correct answers and tone down wrong choices that were assigned a high score. More qualitative examples are provided in the supplement, along with an analysis of the varying importance of the Backup step across datasets, obtained by controlling the context coefficient \(\lambda\).
### Effect of RF-IEF
The RF-IEF node embedding is a simple method inspired by an effective text representation technique in natural language processing. Here, we examine the specific effect of RF-IEF on the relation embedding aggregation weights, thereby answering **Q3**. In Figure 4 (left), the log-scaled \(EF\) (Eq. 13) value of each relation type is sorted. The globally most frequent relation types, including "self_loop"s, are too general to provide much context in characterizing a KG node. Our RF-IEF suppresses such uninformative relation types for node embedding initialization, resulting in a weight distribution like Figure 4 (right). The pie charts display two examples of the difference between aggregation weights for a node entity before and after RF-IEF is applied. To illustrate, the weight after RF-IEF corresponds to a row of \(\mathbf{F}\) in Eq. (14), while the weight before RF-IEF would be uniform across incident edges. As "Alexander Bustamante" is a politician, the relation types "organizations founded" and "founders" become more salient via RF-IEF, while relations like "events_competed_in" and "competitors" are emphasized for an athlete like "Kemar Bailey-Cole". Likewise, RF-IEF tends to scale up characteristic relation types in initializing the node features, thereby enhancing differentiability between entities. To further demonstrate RF-IEF's general applicability, we also provide a plug-in experiment on another baseline model in the supplement.
## 6 Conclusion
Neural Tree Search (NuTrea) is an effective GNN model for multi-hop KGQA, which aims to better capture the complex question constraints by referring to the broader KG context. The high expressiveness of NuTrea is attained via our message passing scheme that resembles the MCTS algorithm, which leverages the future-oriented subtree information conditioning on the question constraints. Moreover, we introduce the RF-IEF node embedding technique to also consider the global KG context. Combining these methods, our NuTrea achieves the state-of-the-art in two major multi-hop KGQA benchmarks, WebQuestionsSP and ComplexWebQuestions. Further analyses on KG incompleteness and the qualitative results support the effectiveness of NuTrea. Overall, NuTrea reveals the importance of considering the broader KG context in harnessing the knowledge graph via human languages.
## Acknowledgments and Disclosure of Funding
This work was partly supported by ICT Creative Consilience program (IITP-2023-2020-0-01819) supervised by the IITP, the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2023R1A2C2005373), and KakaoBrain corporation.
## References
* [1] Costas Mavromatis and George Karypis. Rearev: Adaptive reasoning for question answering over knowledge graphs. In _EMNLP-Findings_, 2022.
* [2] Zile Qiao, Wei Ye, Tong Zhang, Tong Mo, Weiping Li, and Shikun Zhang. Exploiting hybrid semantics of relation paths for multi-hop question answering over knowledge graphs. In _COLING_, 2022.
* [3] Mattia Atzeni, Jasmina Bogojeska, and Andreas Loukas. Sqaler: Scaling question answering by decoupling multi-hop and logical reasoning. _NeurIPS_, 2021.
* [4] Jinyoung Park, Hyeong Kyu Choi, Juyeon Ko, Hyeonjin Park, Ji-Hoon Kim, Jisu Jeong, Kyungmin Kim, and Hyunwoo Kim. Relation-aware language-graph transformer for question answering. In _AAAI_, 2023.
* [5] Kuan Wang, Yuyu Zhang, Diyi Yang, Le Song, and Tao Qin. Gnn is a counter? revisiting gnn for question answering. In _ICLR_, 2021.
* [6] X Zhang, A Bosselut, M Yasunaga, H Ren, P Liang, C Manning, and J Leskovec. Greaselm: Graph reasoning enhanced language models for question answering. In _ICLR_, 2022.
* [7] Abdalghani Abujabal, Mohamed Yahya, Mirek Riedewald, and Gerhard Weikum. Automated template generation for question answering over knowledge graphs. In _WWW_, 2017.
* [8] Sen Hu, Lei Zou, and Xinbo Zhang. A state-transition framework to answer complex questions over knowledge base. In _EMNLP_, 2018.
* [9] Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. Case-based reasoning for natural language queries over knowledge bases. In _EMNLP_, 2021.
* [10] Donghan Yu, Sheng Zhang, Patrick Ng, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Yiqun Hu, William Wang, Zhiguo Wang, and Bing Xiang. Decaf: Joint decoding of answers and logical forms for question answering over knowledge bases. _arXiv preprint arXiv:2210.00063_, 2022.
* [11] Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In _ACL_, 2020.
* [12] Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Michihiro Yasunaga, Haitian Sun, Dale Schuurmans, Jure Leskovec, and Denny Zhou. Lego: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs. In _ICML_, 2021.
* [13] Jiaxin Shi, Shulin Cao, Lei Hou, Juanzi Li, and Hanwang Zhang. Transfernet: An effective and transparent framework for multi-hop question answering over relation graph. In _EMNLP_, 2021.
* [14] Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In _WSDM_, 2021.
* [15] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In _ACM SIGMOD_, 2008.
* [16] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In _ACL_, 2016.
* [17] Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In _NAACL_, 2018.
* [18] Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. Variational reasoning for question answering with knowledge graph. In _AAAI_, 2018.
* [19] Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla. Heterogeneous graph neural network. In _ACM SIGKDD_, 2019.
* [20] Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, and S Yu Philip. A survey on heterogeneous graph embedding: methods, techniques, applications and sources. _IEEE Transactions on Big Data_, 2022.
* [21] Robyn Speer, Joshua Chin, and Catherine Havasi. Conceptnet 5.5: An open multilingual graph of general knowledge. In _AAAI_, 2017.
* [22] Di Jin, Eileen Pan, Nassim Oufatolle, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. _Applied Sciences_, 11(14):6421, 2021.
* [23] Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. A survey on complex knowledge base question answering: Methods, challenges and solutions. In _IJCAI_, 2021.
* [24] Yongrui Chen, Huiying Li, Yuncheng Hua, and Guilin Qi. Formal query building with query structure prediction for complex question answering over knowledge base. In _IJCAI_, 2020.
* [25] Dung Thai, Srinivas Ravishankar, Ibrahim Abdelaziz, Mudit Chaudhary, Nandana Mihindukulasooriya, Tahira Naseem, Rajarshi Das, Pavan Kapanipathi, Achille Fokoue, and Andrew McCallum. Cbr-ikb: A case-based reasoning approach for question answering over incomplete knowledge bases. _arXiv preprint arXiv:2204.08554_, 2022.
* [26] Rajarshi Das, Ameya Godbole, Ankita Naik, Elliot Tower, Manzil Zaheer, Hannaneh Hajishirzi, Robin Jia, and Andrew McCallum. Knowledge base question answering by case-based reasoning over subgraphs. In _ICML_. PMLR, 2022.
* [27] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In _EMNLP_, 2016.
* [28] Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. Open domain question answering using early fusion of knowledge bases and text. In _EMNLP_, 2018.
* [29] Haitian Sun, Tania Bedrax-Weiss, and William Cohen. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. In _EMNLP-IJCNLP_, 2019.
* [30] Lihui Liu, Boxin Du, Jiejun Xu, Yinglong Xia, and Hanghang Tong. Joint knowledge graph completion and question answering. In _ACM SIGKDD_, 2022.
* [31] Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In _Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP_, 2015.
* [32] Hans Peter Luhn. The automatic creation of literature abstracts. _IBM Journal of research and development_, 2(2):159-165, 1958.
* [33] Stephen E Robertson and K Sparck Jones. Relevance weighting of search terms. _Journal of the American Society for Information science_, 27(3):129-146, 1976.
* [34] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. _Information processing & management_, 24(5):513-523, 1988.
* [35] Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. Variational reasoning for question answering with knowledge graph. In _AAAI_, 2018.
* [36] William W Cohen, Haitian Sun, R Alex Hofer, and Matthew Siegler. Scalable neural methods for reasoning with a symbolic knowledge base. In _ICLR_, 2020.
* [37] Haitian Sun, Andrew Arnold, Tania Bedrax Weiss, Fernando Pereira, and William W Cohen. Faithful embeddings for knowledge base queries. _NeurIPS_, 2020.
* [38] Priyanka Sen, Armin Oliya, and Amir Saffari. Expanding end-to-end question answering on differentiable knowledge graphs with intersection. In _EMNLP_, 2021.
* [39] Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. Improving question answering over incomplete kbs with knowledge-aware reader. _ACL_, 2019.
* [40] Jiale Han, Bo Cheng, and Xu Wang. Open domain question answering based on text enhanced knowledge graph with hyperedge infusion. In _EMNLP-Findings_, 2020.
# NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA (Supplement)
Overview.We here provide additional materials that were excluded from the main paper due to limited space. We provide the pseudocode of our method in section A; The Instruction Generator is described in section B; Dataset details and statistics are summarized in section C; All baseline models are briefly introduced in section D; Implementation details and specific hyperparameter settings for each dataset are enlisted in section E; A statistical significance test is conducted on NuTrea in section F; The effect of the context coefficient \(\lambda\) is analyzed to verify the effect of the Backup step in section G; Additional qualitative examples are provided in section H; NuTrea was compared with deeper baseline models to demonstrate its effectiveness in section I; RF-IEF is applied to another model to check its generality in section J; A short discussion on RF-IEF and Adamic-Adar is provided in section K; Limitations and possible future directions are finally discussed in section L.
## Appendix A Pseudocode
```
0: KG subgraph \(\mathcal{G}_{q}=(\mathcal{V}_{q},\mathcal{E}_{q})\), Relation embeddings \(\mathbf{R}\), Node embeddings \(\mathbf{h}_{v\in\mathcal{V}_{q}}=\text{row}(\mathbf{H})\), Node scores \(s_{u\in\mathcal{V}_{q}}\), Expansion instructions \(\{\mathbf{q}^{(i)}_{\text{exp}}\}_{i=1}^{N}\), Backup instructions \(\{\mathbf{q}^{(j)}_{\text{bak}}\}_{j=1}^{M}\)
1:for\(i=1\)to\(N\)do
2:\(\mathbf{f}^{(i)}_{v}\leftarrow\phi(\{\mathbf{f}_{uv}|u\in\mathcal{N}(v), \mathbf{q}^{(i)}_{\text{exp}},\mathbf{R},s_{u\in\mathcal{V}_{q}}\})\)\(\triangleright\) Expansion
3:endfor
4:\(\tilde{\mathbf{f}}_{v}\leftarrow\big{\|}_{i=1}^{N}\mathbf{f}^{(i)}_{v}\)
5:\(\mathbf{f}_{v}\leftarrow\text{MLP}(\mathbf{h}_{v}\parallel\tilde{\mathbf{f}} _{v})\)
6:for\(j=1\)to\(M\)do
7:\(\mathbf{c}^{(j)}_{v}\leftarrow\text{MAX-POOL}(\{\mathbf{c}_{u_{1}u_{2}}|(u_{1},u_{2})\in\mathcal{E}_{v},\mathbf{q}^{(j)}_{\text{bak}},\mathbf{R}\})\)\(\triangleright\) Backup
8:endfor
9:\(\mathbf{c}_{v}\leftarrow\big{\|}_{j=1}^{M}\mathbf{c}^{(j)}_{v}\)
10:\(\mathbf{h}_{v}\leftarrow\text{MLP}(\mathbf{f}_{v}\parallel\mathbf{c}_{v})\)
11:\(s^{(e)}_{v}\leftarrow\mathbf{f}_{v}\cdot W_{e}\)
12:\(s^{(b)}_{v}\leftarrow\mathbf{h}_{v}\cdot W_{b}\)
13:\(s_{v}\leftarrow\text{Softmax}(\{s^{(e)}_{v}+\lambda\cdot s^{(b)}_{v}\})_{v \in\mathcal{V}_{q}}\)\(\triangleright\) Node Ranking
14:return\(\mathbf{h}_{v},\ s_{v}\)
```
**Algorithm 1** NuTrea Layer
For better understanding, we here provide the pseudocode of our NuTrea layer in Algorithm 1. Each layer consists of (1) Expansion, (2) Backup, and (3) Node Ranking.
We also provide pseudocode for the entire inference process in Algorithm 2, which includes some details that were not discussed in the main paper. Similar to [1], the inference process repeats the GNN forward pass multiple times to compute the final node score. However, we find iterating only twice is enough in our case, thanks to our effective RF-IEF node embedding. Also note that after the first forward pass, the expansion and backup instructions are modified before the next iteration.
## Appendix B Instruction Generator Descriptions
In Neural Tree Search, we extract \(N+M\) different question representations: \(N\) expansion instructions and \(M\) backup instructions. The instruction vectors are computed with a module commonly used in KGQA (_e.g._, NSM [2], ReaRev [1]) to retrieve \(N\) different question representations. Specifically, we followed [1]. An IG takes the natural language question \(x_{q}\) as input and outputs the question representations \(\{\mathbf{q}_{\exp}^{(i)}\}_{i=1}^{N}\) as
\[\{\mathbf{q}_{\exp}^{(i)}\}_{i=1}^{N}=\text{IG}_{\exp}(x_{q}). \tag{1}\]
In the IG function, a tokenizer converts input \(x_{q}\) to tokens \(\{\mathbf{x}_{t}\}_{t=1}^{T}\) that are used to retrieve the sentence embedding \(\mathbf{q}_{\text{LM}}\) with a language model \(\text{LM}(\cdot)\) (e.g., SentenceBERT) as
\[\mathbf{q}_{\text{LM}}=\text{LM}(\{\mathbf{x}_{t}\}_{t=1}^{T}). \tag{2}\]
To deterministically sample a sequence of sentence representations, a quasi-Monte Carlo sampling (or non-probability sampling) approach is adopted. \(\mathbf{q}_{\text{LM}}\) is first used to compute attention weights \(a_{t}^{(i)}\) as
\[\mathbf{q}^{(i)}=\boldsymbol{W}^{(i)}(\mathbf{q}^{(i-1)}||\mathbf{q}_{\text{ LM}}||\mathbf{q}_{\text{LM}}-\mathbf{q}^{(i-1)}||\mathbf{q}_{\text{LM}} \odot\mathbf{q}^{(i-1)}) \tag{3}\]
\[a_{t}^{(i)}=\text{Softmax}(\boldsymbol{W}_{a}(\mathbf{q}^{(i)}\odot\mathbf{x}_ {t})), \tag{4}\]
where \(i\in[1,N]\), and \(\mathbf{q}^{(0)}\) is a zero vector. Also, \(||\) indicates the concatenation operator, and \(\boldsymbol{W}^{(i)}\) and \(\boldsymbol{W}_{a}\in\mathbb{R}^{D\times D}\) are learnable matrices. Finally, each question representation \(\mathbf{q}_{\text{exp}}^{(i)}\) is computed as
\[\mathbf{q}_{\text{exp}}^{(i)}=\sum_{t}a_{t}^{(i)}\mathbf{x}_{t}. \tag{5}\]
The same process is repeated to compute the Backup instructions \(\{\mathbf{q}_{\text{bak}}^{(i)}\}_{i=1}^{M}\).
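A minimal sketch of this attention-based instruction extraction (Eqs. (2)-(5)) is given below. It assumes pre-computed token embeddings and a sentence embedding from the language model; the scalar attention projection and the choice of feeding \(\mathbf{q}_{\text{exp}}^{(i)}\) back into the recurrence are our assumptions, since the exact implementation details are not fully reproduced here.

```python
import torch
import torch.nn as nn

class InstructionGenerator(nn.Module):
    """Sketch of the attention-based instruction extraction (Eqs. 2-5)."""
    def __init__(self, dim: int, num_instructions: int):
        super().__init__()
        self.w_i = nn.ModuleList([nn.Linear(4 * dim, dim) for _ in range(num_instructions)])
        self.w_a = nn.Linear(dim, 1, bias=False)   # assumed scalar attention projection

    def forward(self, token_emb: torch.Tensor, q_lm: torch.Tensor):
        # token_emb: (T, D) token embeddings x_t; q_lm: (D,) sentence embedding from the LM
        q_prev = torch.zeros_like(q_lm)
        instructions = []
        for w in self.w_i:
            feats = torch.cat([q_prev, q_lm, q_lm - q_prev, q_lm * q_prev], dim=-1)
            q_i = w(feats)                                                       # Eq. (3)
            attn = torch.softmax(self.w_a(q_i * token_emb).squeeze(-1), dim=0)   # Eq. (4)
            q_prev = (attn.unsqueeze(-1) * token_emb).sum(dim=0)                 # Eq. (5)
            instructions.append(q_prev)
        return instructions

# usage: a question of 12 tokens with D = 16, extracting N = 2 expansion instructions
outs = InstructionGenerator(16, 2)(torch.randn(12, 16), torch.randn(16))
```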
## Appendix C Dataset Details
Here, we provide further details regarding the benchmark datasets.
WebQuestionsSP contains 4,727 natural language questions that can be answered with the Freebase KG [3]. For each question, at least one topic entity is assigned and a subgraph within 2 hops from the topic entities is extracted. Approximately 30% of the questions require more than two KG triplets to attain the correct answers, and 7% of them need complex reasoning with the constraints in the question [1]. Also, note that there are two evaluation protocols. One uses a validation set of size 100, which was adopted in [4]. The other has a validation set of size 250, which originates from [5]. We follow the latter setting for our experiments.
ComplexWebQuestions contains 34,689 natural language questions also answerable with the Freebase KG. This dataset is derived from WQP by extending the questions with additional constraints. The questions require more complex reasoning with composition, conjunction, comparison, and superlatives. Accordingly, up to 4 hops are needed to reach an answer node.
MetaQA contains more than 400K questions and consists of three splits: 1-hop, 2-hop, and 3-hop. Each split's questions are answered by traversing the corresponding number of hops on the KG extracted from the WikiMovies [6] knowledge base. The KG covers the movie domain, and contains 43K entities (nodes), 9 relation types (predicates), and 135K triplets (facts).
Table 1 contains additional dataset statistics for each train, validation, test split.
## Appendix D Other Baselines
We here provide a full introduction of the 10 baselines compared with NuTrea in the main table:
(1) KV-Mem [6] enhances the key-value memory network, (2) GraftNet [5] is a GNN-based model, (3) PullNet [7] is also a GNN-based model that attempts to generate a more question-relevant subgraph, (4) EmbedKGQA [4] embeds the knowledge graph entities and selects the answer node based on the similarity between the entity embeddings and the question embedding. (5) EMQL [8] and Rigel [9] attempt to enhance ReifiedKB [10]. (6) TransferNet [11] aims to enhance the explainability of KGQA reasoning by explicitly learning the transition matrix for each reasoning step. (7) NSM [2] is a GNN-based model that adopts the Neural State Machine, (8) SQALER [12] proposes a scalable KGQA method whose complexity scales linearly with the number of relation types, rather than nodes or edges. (9) TERP [13] introduces the rotate-and-scale entity link prediction framework to integrate textual and KG structural information. (10) ReaRev [1] adaptively selects the next reasoning step with a variant of breadth-first search (BFS).
## Appendix E Hyperparameters and Experimental Details
To train our models, we use a single RTX 3090 GPU with the RAdam optimizer [14]. The initial learning rate is set to 0.0005 with an exponential learning rate scheduler with a decay rate of 0.99. Also, we use the KL divergence as the loss function. To evaluate the F1 score, we retrieve nodes in decreasing order of confidence until the total probability sums up to 0.95. Other hyperparameter settings used for the experiments are tabulated in Table 2.
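The answer-set selection rule used for the F1 evaluation (adding nodes in decreasing confidence until the cumulative probability reaches 0.95) can be sketched as follows; the function name and the toy scores are ours.

```python
import numpy as np

def select_answers(scores, threshold: float = 0.95):
    """Pick nodes in decreasing confidence until the cumulative probability reaches threshold."""
    order = np.argsort(scores)[::-1]       # highest-confidence nodes first
    cum = np.cumsum(scores[order])
    k = int(np.searchsorted(cum, threshold)) + 1
    return order[:k].tolist()

# usage on toy softmax scores over 5 candidate nodes
print(select_answers(np.array([0.60, 0.25, 0.12, 0.02, 0.01])))  # -> [0, 1, 2]
```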
## Appendix F Statistical Significance Test
During training, we have observed that the models' performance variance is quite high. Thus, it would be more reasonable to compare the performances by evaluating the average of multiple runs to validate the statistical significance of the gains. As WebQuestionsSP showed the highest variance, we
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Datasets & Train & Val & Test \\ \hline WebQuestionsSP & 2,848 & 250 & 1,639 \\ ComplexWebQuestions & 27,639 & 3,519 & 3,531 \\ MetaQA 1-hop & 96,106 & 9,992 & 9,947 \\ MetaQA 2-hop & 118,948 & 14,872 & 14,872 \\ MetaQA 3-hop & 114,196 & 14,274 & 14,274 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Dataset statistics.**
conducted a t-test on multiple runs with five different seeds (0, 1, 2, 3, 4) to assess the statistical significance of our NuTrea over comparable models. Unfortunately, however, only the source code for ReaRev is currently available among the three comparable models in our table (i.e., SQALER, TERP, ReaRev). So we compare NuTrea only with ReaRev.
In Table 3, we report the "average (std dev)" of the WQP experiments along with the t-test p-value. NuTrea's average H@1 across five seeds exceeds that of ReaRev by 2.1, and its average F1 score by 2.0. To evaluate the statistical significance of the differences, we applied the t-test and obtained p-values of 0.020 and 0.009, respectively. Thus, we can conclude that the performance gain is statistically significant. The trained parameters will be released along with the code.
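For reference, such a significance test amounts to a two-sample t-test over the per-seed scores. A minimal SciPy sketch is shown below with illustrative (not actual) per-seed values; since the paired/unpaired choice is not specified here, an unpaired test is shown.

```python
from scipy import stats

# hypothetical per-seed H@1 scores for two models (seeds 0-4); not the actual values
rearev_h1 = [72.5, 74.0, 75.8, 73.6, 75.1]
nutrea_h1 = [75.0, 76.5, 78.0, 75.4, 76.6]

t_stat, p_value = stats.ttest_ind(rearev_h1, nutrea_h1)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p < 0.05 -> difference is significant
```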
## Appendix G Sensitivity Analysis on the Context Coefficient
Here, we analyze the context coefficient \(\lambda\) that balances the effect of the Backup step. Figure 1 shows the H@1 and F1 scores of our models trained on WQP and CWQ with different context coefficients \(\lambda\in\{0.3,0.6,1.0\}\). As mentioned in the main paper, the CWQ dataset contains more complex questions that require intricate consideration of diverse constraints compared to the WQP dataset. As we expected, on CWQ the models with a relatively larger context coefficient \(\lambda\) perform better, which means that the constraint information needs to be further considered with the Backup step. For the WQP dataset with simpler questions, the model trained with a small \(\lambda\) achieves the overall best performance. These experimental results show that our model can be optimized for each dataset by properly choosing the context coefficient \(\lambda\) to balance the impact of the Backup module.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Hyperparameter** & **WQP** & **CWQ** & **MetaQA 1-hop** & **MetaQA 2-hop** & **MetaQA 3-hop** \\ \hline batch size & 8 & 8 & 32 & 32 & 32 \\ train epochs & 100 & 50 & 10 & 10 & 10 \\ model dimension & 100 & 100 & 50 & 50 & 50 \\ max-pool depth \(K\) & 1 & 2 & 1 & 1 & 2 \\ context coefficient \(\lambda\) & 0.3 & 1.0 & 1.0 & 1.0 & 1.0 \\ relative positional embedding & O & X & X & O & O \\ Expansion instruction num. \(N\) & 2 & 3 & 2 & 2 & 2 \\ Backup instruction num. \(M\) & 3 & 3 & 3 & 3 & 3 \\ layer num. \(L\) & 2 & 3 & 2 & 3 & 3 \\ dropout probability & 0.3 & 0.3 & 0.2 & 0.2 & 0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Hyperparameter settings.**
Figure 1: **Effect of the context coefficient \(\lambda\)**. The NuTrea model trained on the CWQ with more complex questions prefers larger \(\lambda\) values, whereas the WQP models prefer relatively smaller coefficients.
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline Models & ReaRev & NuTrea (Ours) & t-test p-value \\ \hline Average H@1 & 74.2 (1.4) & 76.3 (1.4) & 0.020\({}^{*}\) \\ Average F1 & 69.8 (1.2) & 71.8 (0.7) & 0.009\({}^{**}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **t-test results on ReaRev and NuTrea.**
## Appendix H Additional Qualitative Examples
Quantitatively, the Backup step had a significant impact on the H@1 and F1 score by improving them from 74.8 to 77.4, and 70.4 to 72.7, respectively (See Table 3 of main paper). Here, in Figure 2, we extend the list of qualitative examples provided in the main paper, to support the effectiveness of our Backup module. Similar to the main paper, the blue corresponds to the correct answer node entity, while red is the wrong choice made by the model that does not leverage the Backup step in message passing. The values in the parentheses refer to the scores of ("Without Backup" \(\Rightarrow\) "With Backup").
## Appendix I Model Scale Analysis
In order to verify whether our NuTrea has an advantage over deeper ReaRev [1] model variants, we provide the relevant experimental results in Table 4. We tested deeper models with 2 to 5 layers, and also tried increasing the model dimension, denoted ReaRev-5-wide. Notably, ReaRev-5-wide has more parameters than our NuTrea model. In the table, NuTrea outperforms all ReaRev variants by significant margins. Also note that all models were trained under a new, identical environment.
We conjecture the key advantage of NuTrea over deeper ReaRev is that it has a smaller smoothing effect. Assume, for instance, that the model is at a node that is 2 hops away from the seed node (_i.e._,
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Model & \# Layers & \# Params & H@1 & F1 \\ \hline ReaRev-2 (base) & 2 & 23.47 M & 75.4 & 70.4 \\ ReaRev-3 & 3 & 23.49 M & 74.0 & 69.9 \\ ReaRev-4 & 4 & 23.50 M & 73.8 & 69.5 \\ ReaRev-5 & 5 & 23.51 M & 74.4 & 70.5 \\ ReaRev-5-wide & 5 & 27.86 M & 75.0 & 70.3 \\ NuTrea (ours) & 2 & 27.43 M & **77.3** & **72.2** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Comparison with ReaRev of different scales.**
Figure 2: **More Qualitative Examples. The blue choices are the correct answers, whereas red choices are the wrongly selected answers by the model without Backup.**
on the 2nd layer) and needs to be confirmed using information at depth 4. NuTrea requires only 2 expansion steps (message-passing layers) and 2 backup steps, without any node/edge updates. The baseline model, on the other hand, will need up to 6 layers (i.e., hops) to achieve the same effect.
## Appendix J Generality of RF-IEF
To verify how RF-IEF node embedding improves baseline models, we also applied our method to ReaRev [1]. The benchmark setting adopts the node initialization method used in [2], as
\[\mathbf{e}^{(0)}=\sigma\left(\sum_{\langle\mathbf{e}^{\prime},\mathbf{r}^{\prime},\mathbf{e}\rangle\in\mathcal{N}_{\mathbf{e}}}\mathbf{r}^{\prime}\cdot\mathbf{W}_{T}\right), \tag{6}\]
where \(\mathbf{e}^{(0)}\) is the initialized embedding of a node, \(\mathbf{r}^{\prime}\) is the relation embedding of an incident edge, and \(\mathbf{W}_{T}\) is a learnable matrix. This equation does not reweight relation types and applies a uniform weight over relation embeddings. In contrast, our RF-IEF highlights distinctive relation types to better characterize the node entity. In Table 5, the average H@1 performance over multiple runs (seeds \(0\sim 4\)) of ReaRev and NuTrea improves by 0.4 and 0.8, respectively, when our RF-IEF embeddings are applied.
## Appendix K Analogy of RF-IEF to Adamic-Adar
Adamic-Adar [15] is a popular measure to quantify the linkage between two nodes in a graph. The Adamic-Adar function \(A(x,y)\) is defined as
\[A(x,y)=\sum_{u\in\mathcal{N}(x)\cap\mathcal{N}(y)}\frac{1}{\log|\mathcal{N}(u )|}, \tag{7}\]
which measures the sum of the inverse logarithmic degree values of the shared neighbors of two arbitrary nodes \(x\) and \(y\). That is, the function counts the number of neighboring nodes, scaled disproportionally to each neighbor's popularity (_i.e._, how central the neighbor is, measured by its degree value). This aspect of Adamic-Adar shares an intuition with our Relation Frequency-Inverse Entity Frequency (RF-IEF) node embedding technique. Similar to Adamic-Adar, which scales down neighbor _nodes_ that are connected everywhere, RF-IEF suppresses omnipresent _relation_ types in defining the node via the IEF function. Considering that Adamic-Adar-style methods have long been successful in many tasks that deal with defining and predicting linkages [16, 17, 18], the idea of initializing a node in a similar fashion is promising and intuitive.
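For completeness, the Adamic-Adar index of Eq. (7) can be computed directly from a graph's adjacency structure. The short sketch below uses NetworkX as a graph container and the natural logarithm; the example graph is purely illustrative.

```python
import math
import networkx as nx

def adamic_adar(G: nx.Graph, x, y) -> float:
    """Adamic-Adar index (Eq. 7): sum of 1/log(deg(u)) over common neighbors u of x and y."""
    common = set(G.neighbors(x)) & set(G.neighbors(y))
    return sum(1.0 / math.log(G.degree(u)) for u in common)

# toy usage: nodes 0 and 3 share neighbors 1 (degree 3) and 2 (degree 2)
G = nx.Graph([(0, 1), (0, 2), (3, 1), (3, 2), (1, 4)])
print(adamic_adar(G, 0, 3))  # equals 1/log(3) + 1/log(2) ~ 2.35
```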
## Appendix L Limitations and Future Work
We observed that a considerable portion of the questions in WebQuestionsSP and ComplexWebQuestions do not contain an answer node in the extracted subgraph. Our model does not specifically deal with such cases, and sometimes outputs predictions with high probability even when no answer node exists. To our knowledge, no research has tackled this problem in terms of confidence calibration or out-of-distribution KG settings. Calibrating the confidence score and enabling the model to reasonably predict a null set will be an important future research direction.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Models & ReaRev & NuTrea (Ours) \\ \hline without RF-IEF & 74.2 (1.4) & 75.5 (1.1) \\ with RF-IEF & **74.6** (0.8) & **76.3** (1.4) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **RF-IEF applied to ReaRev and NuTrea.** |
2302.04940 | Conceptual design of 20 T hybrid accelerator dipole magnets | Hybrid magnets are currently under consideration as an economically viable
option towards 20 T dipole magnets for next generation of particle
accelerators. In these magnets, High Temperature Superconducting (HTS)
materials are used in the high field part of the coil with so-called insert
coils, and Low Temperature Superconductors (LTS) like Nb3Sn and Nb-Ti
superconductors are used in the lower field region with so-called outsert
coils. The attractiveness of the hybrid option lays on the fact that, on the
one hand, the 20 T field level is beyond the Nb3Sn practical limits of 15-16 T
for accelerator magnets and can be achieved only via HTS materials; on the
other hand, the high cost of HTS superconductors compared to LTS
superconductors makes it advantageous exploring a hybrid approach, where the
HTS portion of the coil is minimized. We present in this paper an overview of
different design options aimed at generating 20 T field in a 50 mm clear
aperture. The coil layouts investigated include the Cos-theta design (CT), with
its variations to reduce the conductor peak stress, namely the Canted Cos-theta
design (CCT) and the Stress Management Cos-theta design (SMCT), and, in
addition, the Block-type design (BL) including a form of stress management and
the Common-Coil design (CC). Results from a magnetic and mechanical analysis
are discussed, with particular focus on the comparison between the different
options regarding quantity of superconducting material, field quality,
conductor peak stress, and quench protection. | P. Ferracin, G. Ambrosio, M. Anerella, D. Arbelaez, L. Brouwer, E. Barzi, L. Cooley, J. Cozzolino, L. Garcia Fajardo, R. Gupta, M. Juchno, V. V. Kashikhin, F. Kurian, V. Marinozzi, I. Novitski, E. Rochepault, J. Stern, G. Vallone, B. Yahia, A. V. Zlobin | 2023-02-09T21:15:41Z | http://arxiv.org/abs/2302.04940v1 | # Conceptual design of 20 T hybrid accelerator dipole magnets
###### Abstract
Hybrid magnets are currently under consideration as an economically viable option towards 20 T dipole magnets for next generation of particle accelerators. In these magnets, High Temperature Superconducting (HTS) materials are used in the high field part of the coil with so-called "insert coils", and Low Temperature Superconductors (LTS) like NbSn and Nb-Ti superconductors are used in the lower field region with so-called "outsert coils". The attractiveness of the hybrid option lays on the fact that, on the one hand, the 20 T field level is beyond the NbSn practical limits of 15-16 T for accelerator magnets and can be achieved only via HTS materials; on the other hand, the high cost of HTS superconductors compared to LTS superconductors makes it advantageous exploring a hybrid approach, where the HTS portion of the coil is minimized. We present in this paper an overview of different design options aimed at generating 20 T field in a 50 mm clear aperture. The coil layouts investigated include the Cos-theta design (CT), with its variations to reduce the conductor peak stress, namely the Canted Cos-theta design (CCT) and the Stress Management Cos-theta design (SMCT), and, in addition, the Block-type design (BL) including a form of stress management and the Common-Coil design (CC). Results from a magnetic and mechanical analysis are discussed, with particular focus on the comparison between the different options regarding quantity of superconducting material, field quality, conductor peak stress, and quench protection.
Superconducting magnets, dipole magnets, NbSn magnets, HTS, hybrid magnets.
## I Introduction
The superconducting magnet community, which is working on the next generation of magnets for future particle colliders, has been considering the option of a "20 T" dipole magnet for approximately 20 years. The first proposal was formulated by P. McIntyre _et al._[1], who, considering the nominal field of 8.3 T of the LHC dipoles, explored in 2005 the possibility of a 24 T dipole magnet for an "LHC tripler". In 2011, the design studies carried out by E. Todesco _et al._[2]-[3] and by R. Gupta _et al._[4] were focused on dipole magnets generating an operational field of 20 T, with the goal of "opening the way for a 16.5 TeV beam energy accelerator in the LHC tunnel", 7 TeV being the nominal beam energy of the LHC. A similar field level was then considered for the future Super proton-proton Collider (SppC) in China by G. Sabbi _et al._[5] and by Q. Xu _et al._[6], and for the European Future Circular Collider (FCC) by J. van Nugteren _et al._[7].
A different viewpoint to explain the rationale behind the idea of a 20 T accelerator magnet lies in the continuous push towards high field magnets to achieve higher collision energy [8], and in particular in a sort of "4 T step" that has characterized the R&D on superconducting accelerator magnets in the last two decades. In fact, a 4 T jump has characterized the increase in field from the Nb-Ti dipole magnets installed in the LHC [9] to the Nb\({}_{3}\)Sn magnets (in this case quadrupoles) planned for the HL-LHC project and expected to operate with a conductor peak field approaching 12 T [10]. The FCC design study has then worked on arc dipoles with a bore field of 16 T, a level considered as the practical limit for the Nb\({}_{3}\)Sn technology [11]-[12]. In this landscape, the next natural milestone is represented by a 20 T magnet, where so-called High Temperature Superconductors (HTS), in particular Bi2212 [13] and REBCO [14], need to be adopted to push the field beyond the Low Temperature Superconductor (namely Nb\({}_{3}\)Sn) limits.
As a last consideration, one has to take into account the still significantly higher cost of HTS conductor compared to Nb\({}_{3}\)Sn. The significant difference in superconductor price justifies investigating the hybrid option, where Nb\({}_{3}\)Sn is included in the coil design to minimize the quantity of HTS material. This option was recently tested with the FRESCA2 large aperture dipole magnet as outsert and with the HTS EUCARD2 coil as insert [15]-[16], and explored in a recent conceptual design study [17].
We describe in this paper three conceptual designs of a 20 T hybrid magnet. The work is a continuation of a preliminary and broader investigation carried out in [18] as part of the US Magnet Development Program (MDP) [19]. After summarizing in Section II the design criteria, in Section III we perform a parametric analysis using sector coils. In Section IV we then describe cos-theta, block and common-coil designs, focusing on magnetic parameters and coil stresses. Some considerations regarding fabrication options and challenges will also be provided.
## II Design Criteria and Conductor Parameters
The design criteria set as a goal of the conceptual design are given in Table I. The dipole has to generate a 20 T field of accelerator field quality with appropriate margin in a 50 mm clear bore. With respect to the criteria considered in [18], the target for the geometrical harmonics is reduced to \(<\)3 units. In addition, the maximum load-line fraction \(I_{op}\)/\(I_{ss}\), i.e. the ratio between the operational current and the magnet current limit based on conductor properties (short sample current), is set to 87%, the same value adopted for the LHC dipoles [9] and similar to the 86% considered in the FCC design study [11]. Again, similarly to the FCC criteria, the maximum Von Mises stress allowed in the Nb\({}_{3}\)Sn coils is 180 MPa at 1.9 K; for the HTS conductor, a more conservative limit of 120 MPa has been assumed.
The two dashed lines in Fig. 1 depict the engineering current densities (\(j_{e}=I_{\rm strand}/A_{\rm strand}\)) used in the magnetic computations. For the Nb\({}_{3}\)Sn conductor, the curves correspond to a superconductor current density (virgin strand) of 3000 A/mm\({}^{2}\) at 12 T and 4.2 K (a level achieved within the US Conductor Development Program [20]), which, assuming a 1.1 Cu/Non-Cu ratio, results in a \(j_{e}\) of 870 A/mm\({}^{2}\) at 16 T, 1.9 K, including 5% of cabling degradation. For the HTS conductor, we assumed a \(j_{e}\) of 740 A/mm\({}^{2}\) at 1.9 K and 20 T. This current level was achieved in short samples of Bi2212 strands used in racetrack sub-scale coils [21].
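As a rough consistency check of these numbers, the conversion from the superconductor (non-Cu) current density to the engineering current density \(j_{e}\) can be sketched as below. The non-Cu current density at 16 T, 1.9 K used in the example is an illustrative back-calculated value, not a figure quoted in this paper, since the full \(J_{c}(B,T)\) parameterization of the strand is not reproduced here.

```python
def engineering_current_density(j_sc, cu_to_noncu=1.1, cabling_degradation=0.05):
    """Rough estimate of j_e = I_strand / A_strand from the superconductor current density.

    j_sc: non-Cu (superconductor) current density in A/mm^2 at the field/temperature of interest
    cu_to_noncu: copper to non-copper volume ratio of the strand
    cabling_degradation: fractional critical-current loss from cabling
    """
    return j_sc / (1.0 + cu_to_noncu) * (1.0 - cabling_degradation)

# example: an assumed non-Cu current density of ~1920 A/mm^2 at 16 T, 1.9 K
# reproduces the j_e of ~870 A/mm^2 quoted for the Nb3Sn conductor
print(engineering_current_density(1920.0))   # ~ 869 A/mm^2
```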
## III Sensitivity Analysis with Sector Coils
By simulating the superconducting coil as a 60\({}^{\circ}\) sector with a uniform overall current density (\(j_{o}=I_{\rm cable}/A_{\rm ins}\), i.e. the cable current over the insulated cable area), it is possible to carry out a sensitivity analysis where the key magnet parameters are investigated, as shown in [22]-[23]. The magnetic numerical model (implemented in ANSYS 2D) assumes a 0.67 ratio between \(j_{o}\) and \(j_{e}\) (obtained by considering the Nb\({}_{3}\)Sn insulated cable for the MQXF project [24]) and a 250 mm thick iron yoke placed at 25 mm from the outer radius of the coil. In order to investigate the stress induced on the coil mid-plane by the azimuthal and radial electro-magnetic (e.m.) forces, the numerical mechanical model (implemented in ANSYS 2D) imposes an infinitely rigid structure all around the coil. The coil itself is also simulated with infinite rigidity (to avoid bending effects) and with minimum shear modulus, in such a way that only the accumulation of e.m. forces on the mid-plane and on the outer radius is estimated. As output of the computations we focus on coil size, stresses and stored energies.
As a result of the slow and almost linear decrease in critical current as a function of the applied field observed in the HTS (see Fig. 1), the bore field increases almost linearly with the coil width, without exhibiting the "saturation" towards 10 T and 16 T observed in the Nb-Ti and Nb\({}_{3}\)Sn dipole magnets [23]. At a load-line fraction of 87%, a 20 T sector coil has a width of about 70 mm, compared to about 45 mm at 16 T (see Fig. 2).
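As a sanity check on these numerical results, the bore field of an ideal 60\({}^{\circ}\) sector coil without iron can be estimated analytically with the standard expression \(B_{1}=\frac{2\mu_{0}}{\pi}\,j_{o}\,w\,\sin 60^{\circ}\). The short sketch below implements this estimate; the numerical values are chosen only for illustration and are not taken from the present analysis.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def sector_coil_bore_field(j_overall_a_mm2: float, width_mm: float, alpha_deg: float = 60.0) -> float:
    """Central dipole field of an ideal sector coil (no iron): B1 = (2*mu0/pi) * j * w * sin(alpha)."""
    j_si = j_overall_a_mm2 * 1e6      # A/mm^2 -> A/m^2
    w_si = width_mm * 1e-3            # mm -> m
    return (2 * MU0 / math.pi) * j_si * w_si * math.sin(math.radians(alpha_deg))

# illustrative values only: a ~70 mm wide 60-degree sector at an overall current density
# of ~400 A/mm^2 already yields close to 20 T before the iron-yoke contribution
print(sector_coil_bore_field(400.0, 70.0))   # ~ 19.4 T
```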
The peak azimuthal and radial compressive stresses on the mid-plane due to the accumulation of the azimuthal and radial e.m. forces (see Fig. 3) reach -150 MPa with a bore field of 16 T and increase to more than -200 MPa at 20 T. This level of stress implies that stress management components have to be inserted in the coil design to reduce not only the azimuthal stress, as traditionally assumed, but also the radial stress, which appears to be the largest at 20 T and more dependent on the bore field.
With a value of 2.2 MJ/m, the 20 T sector coil more than doubles the stored energy estimated for the 16 T one (see Fig. 4). However, if the stored energy density over the insulated cable total area is considered, a value of 0.13 J/mm\({}^{3}\) is obtained, still higher but more similar to the values computed for the FCC dipole magnets [25].
Fig. 1: Engineering current density (\(j_{e}=I_{\rm strand}/A_{\rm strand}\)) assumed in the computations for Nb\({}_{3}\)Sn and Bi2212 strands (dashed lines). Solid lines represent the load-lines defined by the operational and short sample currents (markers) for the cos-theta (CT), block (BL) and common-coil (CC) designs in the HTS and LTS coils.
Fig. 2: Bore field vs coil width computed with a sector coil numerical model for an 87% and 100% load-line fraction \(I_{op}\)/\(I_{ss}\).
## IV Conceptual Designs
In [18], 10 different designs were preliminarily investigated to provide a first feedback on the general coils' size, load-line margin, and field quality. Starting from that analysis, we introduce in this paper the stress criteria provided in Table I. The results are described in the next sub-sections, where three designs are considered: a cos-theta (CT), a block (BL) and a common-coil (CC). The cable and magnet parameters of the three designs are summarized in Table II.
In terms of magnetic analysis, the strand diameters for both the Nb\({}_{3}\)Sn and the HTS range from 0.85 to 1.15 mm, and the cable width from 13.3 mm to 24.4 mm. A cable compaction similar to the one of the MQXF cable [24] is assumed, again for both Nb\({}_{3}\)Sn and HTS cables. As for the sector coil analysis, a 250 mm thick iron yoke is considered in the computations. The load-lines are shown in Fig. 1, where the markers indicate the operational and short sample conditions.
As expected, meeting the coil stress criteria turned out to be the biggest challenge during the optimization of the coil design, since the high e.m. forces impose the use of stress management elements within the coil turns. The optimization was carried out to maintain the Von Mises stress below 120 MPa in the HTS [26, 27], and below 180 MPa in the Nb\({}_{3}\)Sn, consistently with previous design studies [2, 11] and experimental studies [28, 29]. In addition, the following assumptions were set: 1) an elastic modulus of 25 GPa is associated to the coil turns and blocks; 2) the coil turns and blocks are surrounded by solid (i.e. "deformable") components made of stainless steel, bronze or Ti alloy (indicated in the following figure captions); 3) the coil turns and blocks are allowed to separate and slide with a 0.2 friction factor with respect to the stress management elements; 4) the surrounding iron yoke, not shown in the following cross-section figures, is assumed to be infinitely rigid; 5) no pre-stress nor cool-down is applied. The mechanical analysis, whose results are described in the following sub-sections, is aimed exclusively at providing a first investigation of the level of stress interception and of the type of intercepting elements required to reduce the coil stresses produced only by the accumulation of the e.m. forces. It does not address the design of the support structure, the pre-stress process, and the cool-down conditions, which will be covered in the next phase of the conceptual design.
### _Cos-theta (CT) Design_
The cross-section of the cos-theta design, analyzed in detail in [30], is shown in Fig. 5, where the central red circle represents the 50 mm clear aperture and the dashed lines indicate the separation between the HTS insert and the LTS outsert. The layout is characterized by three double-layer coils wound with a continuous cable unit length. This option prevents the use of internal splices, as in most of the cos-theta Nb\({}_{3}\)Sn coils fabricated so far, with the exception of the CERN-ELIN and UT-CERN dipole magnets [12]. In the innermost two layers, HTS cable turns are wound into individual slots in the coil support structure, as in a canted cos-theta (CCT) design [31]-[33]. In the two central layers, groups of turns (turn blocks) are wound into grooves in the coil structure, as it is done in the Stress Management cos-theta (SMCT) design [34]-[36]. Finally, the two outermost layers can be defined as a traditional cos-theta coil with turn blocks separated by spacers [37, 38].
The cable width ranges from 17.7 mm in layers 5-6 to 24.4 mm in layers 3-4. The use of a wider cable in layers 3-4 compared to layers 1-2 is aimed at minimizing the size of the HTS coils by increasing the size of the LTS ones, a design choice inspired by the "anti-grading" sector coils shown in [18].
In operational conditions with a bore field of 20 T, the calculated geometrical harmonics are within 3 units, the conductor peak field is 20.5 T in the HTS and 16.0 T in the LTS, and the corresponding load-line ratio is 80% in all coils.
The use of three different cos-theta coil designs is exclusively related to the outputs of the mechanical analysis. In fact, the combined effect of deformation induced by the large e.m. forces and of the low stress limit of 120 MPa assumed for the HTS coils could be overcome only by implementing a high level of stress interception (see Fig. 6).
Fig. 8: Cross-section of the block (BL) design. The circle at the center of the coil aperture indicates the 50 mm clear aperture. The dashed line separates the HTS insert from the LTS outsert.
Fig. 7: Von Mises stress (Pa) in the conductor under the action of e.m. forces: HTS inserts (left) and LTS outsert (right).
Fig. 9: Mechanical design of the block (BL) design. All the structure elements are assumed to be in stainless steel (purple) and Ti alloy (orange).
This is the case in the CCT-like layers 1-2, where each turn is separated by ribs. The ribs have a minimum thickness of 0.4 mm and are connected to a 5 mm spar (or mandrel). In layers 3-4, a lower level of stress interception, magnetically more efficient, was adopted to maintain the Nb\({}_{3}\)Sn coil stress level below 180 MPa: the coil blocks (not the individual turns) are separated by ribs, following the SMCT design. Finally, no stress management elements were used in layers 5-6. As can be seen in Fig. 7, both the HTS and LTS coils have Von Mises stress under the limits established by the design criteria, except for small corner effects (gray areas in Fig. 7, left) in the HTS turns of layer 2.
### _Block (BL) Design_
The block design, shown in Fig. 8 and analyzed in [39], also features three double-layer coils, all composed of narrow HTS inner blocks and wide LTS outer blocks. As for the CT option, no internal splices are assumed. The overall design follows the main characteristics of the HD2 [40] and FRESCA2 [41] designs and of other conceptual designs [42, 43], with blocks aligned on the outer edge. The cable width is 14.7 mm for both the HTS and LTS coils, but, similarly to the CT design, with a higher thickness in the LTS. The design meets the field quality requirements, and with a bore field of 20 T it operates at a load-line ratio of 75% in the HTS and 84% in the LTS. Also, both the HTS and the LTS coil areas are similar to those of the CT design.
The mechanical design (see Fig. 9) is characterized by a 10 mm thick internal support (winding pole) which brings the coil aperture to 70 mm. A similar support was implemented in both HD2 and FRESCA2. In addition, the coils are vertically separated by horizontal plates, which provide vertical stress management, and by vertical ribs, which separate the HTS and LTS blocks and provide horizontal stress management. In particular, the ribs transfer the horizontal e.m. force to the horizontal plates, in a way that maintains the coil stress within the limits in both the LTS and HTS. Horizontal plates aimed at intercepting the vertical forces were included in the design of the Test Facility Dipole [44]. The most challenging aspect of the optimization was minimizing the bending of the ribs, which could generate extremely high stress in the corners of the coil blocks. A solution was found by including gaps (or clearances) of 0.200 to 0.300 mm between the ribs and the plates. Under these conditions, only an initial small fraction of the e.m. force is transferred from the HTS blocks to the LTS blocks; once the ribs come into contact with the plates, the force is transmitted to the latter, and the rib bending is minimized. The results of the mechanical analysis are shown in Fig. 10, with all the stresses within the design criteria.
As a last general consideration regarding the block design, it is important to point out that at the moment no block coil has been fabricated with different cable sizes (grading) or different superconducting materials (hybrid). Therefore, inserting an HTS block coil inside an LTS block coil appears to be the biggest design and fabrication challenge for this option. Possible fabrication and assembly solutions for this issue are provided in [39].
### _Common-Coil (CC) Design_
The common-coil design (CC) is characterized by large race-track coils that cover both apertures [45]-[47]. In Fig. 11, the coil cross-section of one aperture is shown. Unlike the CT and BL designs, the coil aperture and the clear aperture are identical, so no internal support is considered, similarly to [46]. The HTS part is composed of two blocks (per quadrant) close to the aperture, each with two turns, and of a single-layer large racetrack coil. All the HTS blocks are wound with an 18.4 mm wide cable.
Fig. 11: Cross-section (one aperture) of the common-coil (CC) design. The circle at the center of the coil aperture indicates the 50 mm clear aperture. The dashed line separates the HTS insert from the LTS outsert.
Fig. 12: Mechanical design of the common-coil (CC) design. All the structure elements are assumed to be in stainless steel.
Fig. 10: Von Mises stress (Pa) in the conductor under the action of e.m. forces: HTS inserts (left) and LTS outsert (right).
The two-turn blocks close to the aperture, often referred to as "pole coils", have the main function of correcting the field quality, and they were also included in previous design studies [6, 45, 46, 47]. Since pole coils require some sort of hard-way bend of the cable to clear the path of the bore tube, they represent a departure from the typical common-coil advantage of using simple racetrack coils; moreover, never having been implemented in previous CC magnets, they constitute a design and fabrication challenge. However, the bending radius remains significantly larger compared to the CT design.
The Nb\({}_{3}\)Sn part of the coil is composed of three layers, all using the same 13.3 mm wide cable. Unlike the CT and BL designs, a single-layer coil can be easily connected to another single-layer coil, thanks to the wide central winding pole, which provides enough real estate for the splicing operation. Therefore, double-layer coils were not imposed on the CC design, as was done for the previous two designs. Another important characteristic of the CC layout is that the vertical dimensions of the layers can be easily fine-tuned by simply stacking or removing turns. This possibility is not available, for example, in the BL design, where the vertical dimensions are defined by layers with a given cable width. These two advantages of the CC design (single-layer coils and vertical tunability of the block size) provide additional flexibility in the optimization of the coil shape compared to the CT and BL designs.
The CC design has all geometric harmonics below 3 units, and the load-line ratio is within 1% of the limits set in the criteria, i.e. 88% in the HTS and 86% in the LTS.
Stress management in the CC design is again obtained by vertical plates and horizontal ribs (see Fig. 12). The vertical plates are allowed to slide with respect to the external collars. Similarly, the ribs are allowed to slide with respect to the plates. As a result, no vertical stress management is provided, and only the horizontal forces are intercepted, in this case by the vertical plates supported by the horizontal ribs. With this mechanical design, the stress in the HTS blocks is maintained within 120 MPa. However, stresses higher than 180 MPa can be seen in the top part of the LTS coils (see Fig. 13).
The total area of the HTS block is similar to that of the CT and BL designs, but a significantly lower area for the LTS is observed in the CC. However, it is important to point out that the CC has a smaller coil aperture, a lower load-line margin, and still a higher conductor peak stress in the LTS compared to the CT and BL designs.
## V Conclusions
We presented in this paper the conceptual design of a dipole magnet with an operational field of 20 T, generated by a hybrid coil made with both HTS and LTS (Nb\({}_{3}\)Sn) superconducting materials. The analysis included both a magnetic study, focused on bore field, load-line ratio and field quality, and a mechanical study, aimed at keeping the Von Mises stress below 180 (120) MPa in the LTS (HTS) conductor. An initial analytical/numerical study using sector coils indicated that in a 20 T dipole magnet, 1) the coil has to be about 70 mm wide, 2) both the radial and azimuthal stresses in the coil induced by the accumulation of the e.m. forces are above 200 MPa, and 3) the stored energy density in the insulated cables is about 0.13 J/mm\({}^{3}\). Three design options were analyzed, all with stress management elements: 1) a cos-theta design, including CCT-like, SMCT, and traditional cos-theta two-layer coils, 2) a block-type design, and 3) a common-coil design. All layouts meet the bore field, margin, and field quality requirements. In terms of conductor quantity, the designs have a similar HTS conductor area, while a lower LTS area is obtained in the common-coil. The mechanical analysis showed that the cos-theta option requires individual turn support in the HTS layers and coil block support in the inner LTS layers to reduce the coil peak stress. Also, in both the block and common-coil designs, a series of plates and ribs is necessary to intercept the e.m. forces and to keep the accumulated stress within the limits.
|
2306.07766 | Consistency of eight-dimensional supergravities: Anomalies, Lattices and
Counterterms | We reexamine the question of quantum consistency of supergravities in eight
dimensions. Theories with 16 supercharges suffer from the anomalies under the
action of their discrete modular groups. In minimally supersymmetric theory
coupled to Yang-Mills multiplets of rank $l$ with the moduli space given by
$\text{SO}(2,l)/ (\text{U}(1) \times \text{SO}(l))$, the existence of a
counterterm together with the requirement that its poles and zeros correspond
to the gauge symmetry enhancement imposes nontrivial constraints on the
lattice. The counterterms needed for anomaly cancellation for all cases, that
are believed to lead to consistent theories of quantum gravity ($l = 2,10,18$),
are discussed. | Bing-Xin Lao, Ruben Minasian | 2023-06-13T13:31:29Z | http://arxiv.org/abs/2306.07766v2 | # Consistency of eight-dimensional supergravities: Anomalies, Lattices and Counterterms
###### Abstract
We reexamine the question of quantum consistency of supergravities in eight dimensions. Theories with both 32 and 16 supercharges suffer from the anomalies under the action of their respective discrete modular groups. In maximal supergravity the anomaly cancellation requires a surprising modification of the Chern-Simons couplings. In minimally supersymmetric theory coupled to Yang-Mills multiples of rank \(l\) with the moduli space given by \(\mathrm{SO}(2,l)/(\mathrm{U}(1)\times\mathrm{SO}(l))\), the existence of a counterterm together with the requirement that its poles and zeros correspond to the gauge symmetry enhancement imposes nontrivial constraints on the lattice. The counterterms needed for anomaly cancellation for all cases, that are believed to lead to consistent theories of quantum gravity (\(l=2,10,18\)), are discussed.
## 1 Introduction and summary
Existence of an anomaly cancellation mechanism in (super)gravity theories serves as a good guideline for selecting candidates for theories that can be consistent at the quantum level.
In minimally supersymmetric theories in ten dimensions, existence of Green-Schwarz mechanism reduces the number of possible choices for the gauge groups in the YM sector
to four [1]. From the other side, the existence of an anomaly inflow mechanism to two-dimensional chiral strings coupled to the theory restricts this number to two by ruling out the theories with abelian gauge factors [2; 3]. In six dimensions, there is an infinite number of anomaly-free minimal supergravities [1]. Many, notably infinite families of \((1,0)\) theories, are ruled out by a closer examination of the inflow mechanism and the anomaly cancellation for two-dimensional \((0,4)\) strings coupled to the theory [3; 4; 5].
The focus of this paper is on eight-dimensional (mostly) minimal supergravities. Classically, the 8D \(\mathcal{N}=1\) supergravity multiplet, made of a graviton, \(B\)-field, dilaton, two vector fields as well as spin-\(\frac{3}{2}\) and spin-\(\frac{1}{2}\) Majorana fermions (gravitino and a dilatino), can be coupled to any number of vector multiplets each comprising a vector field (photon), a gaugino (spin-\(\frac{1}{2}\) Majorana fermion) and 2 real scalars [6]. Supposing the number of vector multiplets is \(l\), the \(2l\) real scalars contained in the matter sector parametrize the moduli space given by a Kahler manifold
\[\mathcal{M}=\frac{\mathrm{SO}(2,l)}{\mathrm{U}(1)\times\mathrm{SO}(l)}\,. \tag{1}\]
These \(l\) vectors together with two vectors in the gravity multiplet form an \((l+2)\)-dimensional representation of \(\mathrm{SO}(2,l)\).
The first restriction on admissible values of \(l\) once more comes from anomalies - theories with odd numbers of Majorana fermions in 8D and 9D suffer form global anomalies [7], and hence \(l\) has to be even [8]. There are further restrictions:
* In theories with 16 supercharges in \(D\) dimensions the number of vector multiples consistently coupled to gravity is bound by \(26-D\) in order to assure the unitarity of strings couples to the theory [9]. Hence \(l\leq 18\).
* Considering 8D theories on particular backgrounds and using 6D anomaly cancellation it has been argued that in fact the only admissible values of \(l\) are \(l=2\), \(l=10\) and \(l=18\)[8].
* The symmetry enhancement (as well as the rank of of the YM algebra coupled to string probes) as predicted by the consistency of the supergravity [10] is an agreement with the landscape of 8D string constructions [11; 12].
* In the formulation of the theory with a four-form potential in the gravity multiplet, constraints on the global structure of the gauge groups can be deduced from the the absence of anomalies between large gauge transformations of \(B_{4}\) and 1-form symmetries [13; 14]1. Footnote 1: Somewhat orthogonal to our discussion, global anomalies and topological analogues of Green-Schwarz mechanism have been discussed in 8D with 16 supercharges [15] and in 10D type IIB theory [16]. In this paper we are mostly concerned with the existence of local counterterms.
We would like to reexamine these results from the point of view of 8D anomaly cancellation. Neither the \(\mathcal{N}=1\) theory nor its \(\mathcal{N}=2\) counterpart, where the scalars parametrize the \(\mathrm{SL}(2)\times\mathrm{SL}(3)/\left(\mathrm{U}(1)\times\mathrm{SO}(3)\right)\) coset, suffers from chiral anomalies. However, both
theories, with 16 and 32 supercharges, have local anomalies under the composite U(1) in the denominator of the coset.
The moduli space of supergravity theories with extended supersymmetry typically has scalars parametrizing a coset \(G/H\). The numerator of the coset, \(G\), denotes the U-duality group of the theory, and some discrete version of it gives rise to an exact symmetry after quantization. Theory can be formulated in a way that \(G\) acts only on bosonic fields. The denominator \(H\), which is the maximal compact subgroup of \(G\), is regarded as a gauge symmetry of the theory. Indeed, the compact part of the Cartan-Maurer form of the coset element transforms as a gauge field under the \(H\) transformations. The supersymmetry variations of all fermionic fields, which are inert under \(G\), involve this composite connection corresponding to \(H\). When \(H\) contains a U(1) factor, it may couple to fermions in a chiral fashion, a priori giving rise to a composite chiral anomaly [17]. This is exactly what happens in eight dimensions.
The physical content of the theory is usually identified by fixing the gauge, thereby eliminating the redundant bosonic degrees of freedom associated to \(H\). When the local symmetry is gauge fixed, the U-duality becomes non-linearly realized. Moreover, the fermionic fields now transform under \(G\). Part of this transformation may still be realized as a nontrivial phase shift. Therefore, the gauge fixing translates the U(1) anomaly into an anomaly under the surviving discrete part of \(G\), making the theory ill-defined.
The existence of this anomaly implies that the symmetry group (\(\mathrm{SL}(2;\mathbb{R})\) or \(\mathrm{SO}(2,l;\mathbb{R})\) in \(\mathcal{N}=2\) and \(\mathcal{N}=1\) theories respectively) may not be continuously maintained in the quantum theory. For the theory to be consistent, a cancellation mechanism should be figured out, in the process deciding to what extend the symmetry survives. The question is if it can be done by the addition of a local counterterm with appropriate modular properties under the transformation of the discrete version of \(G\). Originally such counterterm was discussed in the context of ten-dimensional IIB string theory [18], but the formalism is adapted to 8D theories as well [18; 19]. An (in)ability of finding such a counterterm is the reason why the value of \(l\) and the lattice structure of the gauge group in 8D get restricted.2
Footnote 2: The construction of the counterterms naturally introduces modular forms. A review of modular forms and their important applications in physics can be found in [20].
Let us outline the anomaly cancellation mechanism, up to a point trying to keep the discussion general and applicable to both \(\mathcal{N}=1\) and \(\mathcal{N}=2\) theories. Denoting the anomalous composite U(1) connection by \(Q\) and its curvature by \(F^{Q}\), the anomaly is given by the descent formula from the ten-form anomaly polynomial
\[I_{10}=\frac{F^{Q}}{2\pi}\wedge X_{8}\,, \tag{2}\]
where \(X_{8}=X_{8}(R)\) is an eight-form polynomial in curvature two-form \(R\) for the \(\mathcal{N}=2\) theory (whose precise form will be very important in our discussion) and \(X_{8}=X_{8}(R,\mathcal{F})\) a polynomial in \(R\) and the non-abelian gauge field strength \(\mathcal{F}\) for \(\mathcal{N}=1\) case (the exact form of the polynomial in this case on the contrary is not going to play any role in our discussion). The resulting anomalous phase variation in the partition function \(\Delta=-\int\Sigma\,X_{8}\) can locally
be cancelled by adding a term to the action
\[\mathcal{S}_{\phi}=\int\phi\,X_{8}\,, \tag{3}\]
where \(\phi\) is a scalar degree of freedom transforming under \(\mathrm{U}(1)\): \(\phi\to\phi+\Sigma\). This \(\phi\) can be set to zero by gauge fixing (think of the third scalar in \(\mathrm{SL}(2;\mathbb{R})\)), but since \(\delta_{M}\phi\neq 0\) under the \(G\)-valued transformation \(M\), the local counterterm is not \(G\)-invariant. As shown explicitly for \(\mathrm{SL}(2,\mathbb{R})\) in [18] and will be extended to \(\mathrm{SO}(2,l;\mathbb{R})\) here, one can design a counterterm \(S\) such that under the \(G\)-valued transformation \(M\)
\[\delta_{M}\mathcal{S}=-\delta_{M}\mathcal{S}_{\phi}+\arg\chi(M)\int X_{8}\,, \tag{4}\]
where \(\chi(M)\) is a phase factor and \(\delta_{M}\mathcal{S}_{\phi}=\int\Sigma\,X_{8}\). If this phase factor \(\chi(M)\equiv 1\) for any \(M\in G\) there will be a complete anomaly cancellation but that does not always happen. Note that in general it should suffice that the partition function is well-defined, and hence \(\delta_{M}\) of the entire action integrates to an integer (times \(2\pi\)).
At this point, the situation becomes drastically different for \(\mathcal{N}=1\) and \(\mathcal{N}=2\) cases.
* For \(\mathcal{N}=2\), i.e. \(G=\mathrm{SL}(2,\mathbb{R})\) it is shown in [18] that \(\chi(M)\) cannot be equal to \(1\). However (4) is not the only part of the action that is not invariant under \(\delta_{M}\). It might appear somewhat counter-intuitive but the reduction of the higher dimensional Chern-Simons terms also yields a non-invariant term. The result is that there are no particular intergrality condition imposed on \(\int X_{8}\) in generic backgrounds. Instead, turning non-trivial four-form fluxes is required.
* For \(\mathcal{N}=1\), there are no extra non-invariant terms. Thus, the value of \(\chi(M)\) depends on \(l\) and on the details of the lattice of signature \((2,l)\), which will naturally appear during the construction of counterterms. So at the first glance this presents a dilemma: either one should be imposing case-by-case integrality conditions on \(\int X_{8}(R,\mathcal{F})\) or, as we shall argue, opt for a universal consistency condition and require that \(\chi(M)=1\) for every \(\mathcal{N}=1\) theory.
Regardless of philosophy, let us turn to the details of how (4) works. The first important point is the precise form of \(\delta_{M}\phi\). For instance, in 10D Type IIB supergravity or in \(\mathcal{N}=2\) theory in 8D, the coset element of \(\mathrm{SL}(2)/\mathrm{U}(1)\) is parametrized by the modular parameter \(\tau\) and the compensating \(\mathrm{U}(1)\) transformation under the \(\mathrm{SL}(2;\mathbb{R})\) takes the form
\[e^{-i\Sigma(M,\tau)}=\left(\frac{c\tau+d}{c\bar{\tau}+d}\right)^{\frac{1}{2}} \,,\quad M\in\begin{pmatrix}a&b\\ c&d\end{pmatrix},\quad M\in\mathrm{SL}(2,\mathbb{R})\,,\quad\tau\in\mathbb{H }\,. \tag{5}\]
The second crucial point is that there exists a function of \(\tau\), the Dedekind eta function \(\eta(\tau)\), that under \(\mathrm{SL}(2,\mathbb{Z})\) transformation picks a factor \(\sim(c\tau+d)^{1/2}\). As a consequence, a ratio of \(\eta(\tau)\) and its complex conjugate can be used in constructing the counterterm [18]. As mentioned there can be a phase factor \(\chi(M)\), and the consequences of for \(\mathcal{N}=2\) theory, where it is necessarily nontrivial, will be discussed in section 2.
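As a quick numerical illustration of this transformation property, one can evaluate \(\eta(\tau)\) through its \(q\)-product and check the weight-\(\frac{1}{2}\) behaviour under \(S:\tau\to-1/\tau\), for which \(\eta(-1/\tau)=\sqrt{-i\tau}\,\eta(\tau)\). The snippet below is an illustrative sketch; the chosen value of \(\tau\) and the truncation order are arbitrary.

```python
import cmath

def eta(tau, nmax=300):
    """Dedekind eta via the q-product: eta(tau) = q^(1/24) * prod_{n>=1} (1 - q^n)."""
    q = cmath.exp(2j * cmath.pi * tau)
    val = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, nmax + 1):
        val *= 1 - q ** n
    return val

tau = 0.31 + 1.07j                       # any point in the upper half plane
lhs = eta(-1 / tau)                      # S-transformed value
rhs = cmath.sqrt(-1j * tau) * eta(tau)   # expected weight-1/2 factor for S
print(abs(lhs - rhs))                    # ~1e-15: the two sides agree
```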
In this paper we are mainly interested in the eight-dimensional supergravity with 16 supercharges, and the moduli space of the theory is given by \(\mathcal{M}\) (1). The moduli space \(\mathcal{M}\) is a realization of the hermitian symmetric space. Moreover, the tube domain, called the generalized upper-half plane \(\mathbb{H}_{l}\), can be realized in this space [21; 22]. We find that the generalized upper-half plane \(\mathbb{H}_{l}\) provides the correct framework to describe the gauge transformations. By introducing Calabi-Vesentini coordinates [23], we explicitly compute the \(\mathrm{U}(1)\) gauge potential and its field strength, and show how the \(\mathrm{U}(1)\) compensating transformation generalizes equation (5). It is formed by the so called automorphy factor \(j(M,Z)\) (where \(M\) is an \(\mathrm{SO}(2,l;\mathbb{R})\) transformation, and \(Z\) denotes the coordinates on the generalized upper half plane):
\[e^{-i\Sigma(M,Z)}=\frac{j(M,Z)}{|j(M,Z)|}\,. \tag{6}\]
Equivalently we have \(-\Sigma=\arg j(M,Z)=\mathrm{Arg}\,j(M,Z)+2k\pi\) for \(k\in\mathbb{Z}\) and \(\mathrm{Arg}\,\) denotes the principal branch of the argument taking the value from \([-\pi,\pi)\). Finding a function \(\Psi(Z)\) such that
\[\Psi(M\langle Z\rangle)=\chi(M)j(M,Z)^{r}\Psi(Z) \tag{7}\]
would allow to construct the counterterm \(\mathcal{S}\) as
\[\mathcal{S}=\frac{1}{r}\int\arg\Psi(Z)X_{8}\,. \tag{8}\]
Indeed such functions, or more precisely meromorphic modular forms on the orthogonal group \(\mathrm{O}(2,l)\) of weight \(r\) and multiplier system \(\chi\), can be found by using the Borcherds products [24]. The original discovery that the automorphic forms on \(\mathrm{O}^{+}(2,s+2)\) (\(l=s+2\)) can be written as infinite products was made in the context of unimodular lattices. Following the use of theta correspondence, which gave an alternative approach to these results [25], the generalisation of the constructions of modular forms to non-unimodular lattices was provided [26].
The case \(l=2\) case requires special treatment. Strictly speaking, the Borcherds product does not apply and an alternative derivation of the counterterms is needed.
As we shall see, the requirement that the modular form \(\Psi(Z)\) has a trivial \(\chi(M)\equiv 1\), ensuring the complete anomaly cancellation, is not particularly restrictive. However there is an additional consideration: the local counterterms constructed from the meromorphic \(\Psi(Z)\) are not well defined at its zeros or poles. On any Borcherds product these points lie on the so-called rational quadratic divisors (RQD). In fact some of these divisors have physical interpretation and correspond to the symmetry enhancement points in the moduli space [25].3 For these, the theory will continue being consistent even if the counterterm is not well-defined. Moreover the gauge symmetry enhancement should be in agreement with the symmetries of the lattice. We will show that these physical constrains lead to the
requirement that the lattice is reflective (defined in section 5). The number of reflective lattices is finite and their rank is bounded by \(l=26\). These bounds are less stringent than those imposed by swampland.
The organisation of this paper is as follows. In section 2, we discuss the anomaly cancellation in eight-dimensional theory with 32 supercharges and the constraints imposed on consistent backgrounds. Other than brief comments in the last section of the paper, this is the only part concerned with the maximally supersymmetric theory. The bulk of the paper is about the theories with 16 supercharges. In section 3 we spell out the anomaly that needs to be cancelled, and introduce the necessary mathematical preliminaries needed for the construction of the counterterms (with further details collected in appendix B). Section 4 is devoted to the derivation of the compensating \(\mathrm{U}(1)\) transformation (equation 6). The construction of counterterms is presented in section 5. In this section we also consider the implications of zeros and poles of the modular forms and the ensuing constraints on admissible lattices. This discussion is suitable only if \(l\geq 3\). The \(l=2\) case requires a separate discussion that is presented in section 6. A brief summary and discussion of some open questions are presented in section 7.
## 2 Anomaly cancellation in maximal supergravity
We first discuss the mechanism in \(\mathcal{N}=2\) and the implications of \(\chi(M)\neq 1\). The moduli space of the theory is
\[\frac{\mathrm{SL}(2,\mathbb{R})}{\mathrm{U}(1)}\times\frac{\mathrm{SL}(3, \mathbb{R})}{\mathrm{SO}(3)}\,, \tag{1}\]
and the first factor (the only one relevant for the anomaly) is parametrized by \(\tau\). The theory was originally obtained by reducing the 11D supergravity [27]. The conversion into an \(\mathrm{SL}(2)\) covariant formalism requires taking \(\tau=-2C_{8910}+iV_{T^{3}}\). Alternatively, the reduction on \(T^{2}\) of Type IIB theory can be considered, and there \(\tau\) is identified as the complex structure of the torus.
The counterterm derivation would follow closely the discussion of [18] for 10D type IIB theory, and details of the 8D can be found in [19], so we will be brief.
The field content of this 8D theory is given by a single supermultiplet which contains one graviton, two gravitini (doublet under \(\mathrm{Spin}(3)=\mathrm{SU}(2)\)), six vectors, six dilatini (doublet + quadruplet under \(\mathrm{Spin}(3)=\mathrm{SU}(2)\)), seven real scalars, three two-forms and one three-form. The \(\mathrm{U}(1)\) charges of the gravitini, of the doublet of dilatini and of the quadruplet of dilatini are respectively (they are all positive chiral): \(\frac{1}{2}\), \(\frac{3}{2}\) and \(-\frac{1}{2}\). Finally, the 4-form field strength can be split in self-dual and anti-self-dual part, carrying charges \(1\) and \(-1\) respectively under \(\mathrm{U}(1)\). Hence, the 10-form anomaly polynomial is given by
\[I_{10}=\frac{F^{Q}}{2\pi}\wedge\left[2\times\frac{1}{2}I_{3/2}^{d=8}-4\times \frac{1}{2}I_{1/2}+2\times\frac{3}{2}I_{1/2}+2\times I_{\mathrm{SD}}\right]_{ \mathrm{8-form}}\,, \tag{2}\]
where \(F^{Q}\) is the composite field strength built out of \(\tau\): \(F^{Q}=dQ=\frac{d\tau\wedge d\bar{\tau}}{4i\tau_{2}^{2}}\). The anomalous
phase variation of the path integral is given by [19; 28]
\[\Delta=-12\int\Sigma\:X_{8}(R)=-12\int\frac{\Sigma}{192(2\pi)^{4}}\left(\operatorname {tr}R^{4}-\frac{1}{4}(\operatorname{tr}R^{2})^{2}\right)\,. \tag{3}\]
As discussed, gauge fixing translates the U(1) anomaly into an \(\operatorname{SL}(2,\mathbb{Z})\) anomaly.
Maximal eight-dimensional supergravity has a 1-loop UV divergence which breaks local non-linear supersymmetry. This is in agreement with the local U(1) composite anomaly since the commutator of two local non-linear supersymmetries contains this local U(1) [29].4
Footnote 4: We thank Renata Kallosh for bringing this to our attention.
The form of the the compensating U(1) transformation is given in (5), and \(-\delta_{M}\phi=\arg(c\tau+d)\) and in the notation of (7), \(j(M,\tau)=c\tau+d\). Hence ideally one would require a well-defined modular form \(f(\tau)\) satisfying
\[f(M\tau)=(c\tau+d)^{\tau}f(\tau)\,,\quad M\tau=\frac{a\tau+b}{c\tau+d}\,, \tag{4}\]
for arbitrary transformation \(M\in\operatorname{SL}(2,\mathbb{Z})\) in order to cancel the anomaly. In addition, we also require the counterterm built from \(f(\tau)\) to have correct decompactification limit. This condition, as we shall see momentarily, is satisfied if and only if the function \(f\) is a cusp form. As already discussed, the counterterm can be constructed from the well-known Dedekind eta function \(\eta(\tau)\), which is the weight \(\frac{1}{2}\) cusp form. However, the Dedekind eta function has non-trivial multiplier system (see appendix A), which can be given in terms of the standard \(T\) and \(S\) generators, \(\chi_{\eta}(T)=e^{\frac{\pi i}{12}}\) and \(\chi_{\eta}(S)=e^{-\frac{\pi i}{4}}\).
To recap, a counterterm built solely from \(\eta(\tau)\) and curvatures,
\[12\int\arg(\eta^{2}(\tau))X_{8}\,, \tag{5}\]
will not cancel the \(\operatorname{SL}(2,\mathbb{Z})\) anomaly unless \(\int X_{8}\in\mathbb{Z}\). It is not hard to come up with examples of consistent supersymmetry-preserving string backgrounds where such a condition does not hold. We propose that the correct, much milder, requirement is
\[\int\left[X_{8}+\frac{1}{2}G\wedge G\right]\in\mathbb{Z}\,. \tag{6}\]
As a first step towards justifying this claim, let us recall that in the large volume limit, i.e. when \(\operatorname{Im}\tau\to\infty\), the cusp form \(\eta(\tau)\) has the limit
\[\lim_{\operatorname{Im}\tau\to\infty}\arg\left(\eta^{2}(\tau)\right)=\frac{ \pi\tau_{1}}{6} \tag{7}\]
Recalling that there is a term \(\sim\tau\) among the couplings of the classical 8D supergravity, obtained from the reduction of the 11D Chern-Simons term, the large volume limit of the
anomalous couplings (5) becomes:
\[\lim_{\text{Im}\,\tau\rightarrow\infty}{\cal S}=2\pi\int\tau_{1}\left[X_{8}+\frac {1}{2}G\wedge G\right]\,. \tag{8}\]
Further decompactification to nine dimensions yields
\[2\pi\int A_{1}\wedge\left[X_{8}+\frac{1}{2}G\wedge G\right]. \tag{9}\]
For type IIA \(A_{1}=B_{\mu 9}dx^{\mu}\), and further lift recovers the full set of 10D Chern-Simons couplings [30; 31]. Note that the two term combination - with an importantly fixed relative coefficient - is needed for the simultaneous cancellation of failure of the diffeomorphism invariance and \(C_{3}\to C_{3}+d\Lambda_{2}\) transformation in the presence of fivebranes. For IIB, \(A_{1}=(\alpha^{\prime}/R_{9}^{2})\,g_{\mu 9}dx^{\mu}\), where \(R\) is the radius of the circle, and the coupling is suppressed in the IIB ten-dimensional limit. The origin of the two terms is respectively the winding mode one-loop contribution and the self-duality of the IIB five-form field-strength [32; 33].
One may also note that (6) is the condition that appears in the tadpole cancellation in M-theory compactifications to 3D and lower. It would be natural that when considering M-theory on backgrounds that involve a product of 3D and 8D spaces, the integrality condition that is imposed on 8D part involves the same players regardless of 8D space being compact or not (up to possibly boundary modifications).
Should one try to think of (8) as a large volume limit not only for the first term, but also the second? Indeed, having \(G\wedge G\) multiplied by a nontrivial modular function of \(\tau\) of weight zero (that does not pick up a \(j(M,\tau)^{r}\) factor) with the same phase factor as \(\eta^{2}(\tau)\) would result in a complete cancellation of the \(\text{SL}(2,\mathbb{Z})\) anomaly without imposing any ad hoc conditions on the 8D space-time. Given that the coupling \(\tau_{1}G\wedge G\) is not \(\text{SL}(2)\) invariant (the existing 8D supergravity action constructed from reduction of 11D theory has only explicit \(\text{SL}(3)\) symmetry), such a modification would appear not unwelcome. From other side, if the eight-dimensional spacetime has an isometry it should be possible to recast the maximally supersymmetric theory in an explicitly \(\text{SL}(5)\) invariant form. Note that upon reduction the four-form \(G\) gives rise to a pair of fields with 3-form field strength which together with three 8D 3-forms will form a quintet of \(\text{SL}(5)\), and the Chern-Simons coupling becomes part of kinetic term for this quintet. A nontrivial function in \(\tau\) will obstruct promoting \(\text{SL}(2)\times\text{SL}(3)\) to \(\text{SL}(5)\). At the same time, existence on an isometry would render the 8-form on 8D manifold \(X_{8}(R)\) trivial, and its integral will vanish. Putting all these considerations together, we arrive a modification
\[\int\tau_{1}\frac{1}{2}G\wedge G\quad\mapsto\quad\int\left[\tau_{1}+x(6\arg(g (\tau))-\tau_{1})\right]\frac{1}{2}G\wedge G\,, \tag{10}\]
where \(x\) can take only two values, \(x=0\) for trivial \(X_{8}\) and \(x=1\) otherwise.5 To obtain
(6) while agreeing with (8), we should require that
\[g(M\tau)=\chi_{\eta}^{2}(M)g(\tau)\,. \tag{11}\]
and has the same large volume limit as \(\arg\left(\eta^{2}(\tau)\right)\) (7).
A natural way for finding \(g(\tau)\), is to use the quotient \(\eta^{2}(\tau)\) by the well-defined modular form of weight \(1\) that does not have an extra phase (and goes to \(1\) in \(\operatorname{Im}\tau\to\infty\) limit). The theta function \(\theta(\tau)\) transforms under the congruence subgroup \(\Gamma_{0}(4)\) as shown in equation (12) and has the limit \(\theta^{2}(\tau)\to 1\) under \(\operatorname{Im}\tau\to\infty\). \(\Gamma_{0}(4)\) has three generators
\[-\mathbb{1}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\,,\quad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\,,\quad W=\begin{pmatrix}1&0\\ 4&1\end{pmatrix}\,. \tag{12}\]
Only under \(-\mathbb{1}\) is \(\left(\frac{-1}{d}\right)=-1\). In other words, if we consider the physical symmetry \(\operatorname{P}\Gamma_{0}(4)=\Gamma_{0}(4)/\{\pm\mathbb{1}\}\), \(\theta^{2}(\tau)\) acts as a weight \(1\) modular form with trivial character. With these considerations, we see that, restricting the symmetry group to \(\Gamma_{0}(4)\), the function
\[g(\tau)=\frac{\eta^{2}(\tau)}{\theta^{2}(\tau)} \tag{13}\]
satisfies all necessary requirements. Hence for
\[\mathcal{S}(x=1)=12\int\left[\arg(\eta^{2}(\tau))X_{8}+\arg\left(\frac{\eta^{2 }(\tau)}{\theta^{2}(\tau)}\right)\frac{1}{2}G\wedge G\right]\,, \tag{14}\]
under the transformation \(M\in\operatorname{P}\Gamma_{0}(4)\),
\[\delta_{M}\mathcal{S}+\delta_{M}\mathcal{S}_{\phi}=12\arg\left(\chi_{\eta}^{2 }(M)\right)\int\left(X_{8}+\frac{1}{2}G\wedge G\right)\,. \tag{15}\]
The multiplier system (or character) \(\chi_{\eta}^{2}(M)\) is of order \(12\) with respect to the group \(\operatorname{SL}(2,\mathbb{Z})\) (or \(\Gamma_{0}(4)\)), and imposing the integrality condition (6) on \(\left[X_{8}+\frac{1}{2}G\wedge G\right]\) (rather than on \(X_{8}(R)\)) leaves the partition function invariant and is sufficient for the anomaly cancellation.
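The transformation property of \(g(\tau)\) can also be checked numerically. The following sketch assumes the conventions above, with \(\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{2\pi in^{2}\tau}\), and verifies that under the \(\Gamma_{0}(4)\) generator \(W\) the function \(g=\eta^{2}/\theta^{2}\) has weight zero and picks up only a twelfth root of unity, as stated above.

```python
import cmath

def eta(tau, nmax=400):
    q = cmath.exp(2j * cmath.pi * tau)
    val = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, nmax + 1):
        val *= 1 - q ** n
    return val

def theta(tau, nmax=60):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, nmax + 1))

def g(tau):
    return eta(tau) ** 2 / theta(tau) ** 2

tau = -0.24 + 0.55j
W_tau = tau / (4 * tau + 1)       # W = [[1, 0], [4, 1]] acting on tau

ratio = g(W_tau) / g(tau)         # should equal the character chi_eta^2(W)
print(abs(ratio))                 # ~1 : g has weight zero under Gamma_0(4)
print(abs(ratio ** 12 - 1))       # ~0 : the character is a twelfth root of unity
```

Running the same check for \(T:\tau\to\tau+1\) gives the phase \(e^{i\pi/6}=\chi_{\eta}^{2}(T)\), since \(\theta(\tau+1)=\theta(\tau)\).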
While the counterterm (14) works only for \(M\in\operatorname{P}\Gamma_{0}(4)\), this does not signify the breaking of actual symmetry of the theory but rather a problem of supergravity action, which we recall once more displays \(\operatorname{SL}(2)\) symmetry only at the level of equations of motion.
It would be interesting to verify the coupling (14) directly by string theory calculation, which for \(x=1\) has to be done in a nontrivial gravitational background.
## 3 Minimal supergravity and lattices of \((2,l)\) signature
We can now turn to the minimal supergravity in 8D and its possible anomaly counterterms. Generically such theory comprises a single gravity multiplet and \(l\) vector multiplets. The field content of \(\mathcal{N}=1,D=8\) supergravity is given by
\[\left(e_{\mu}^{\phantom{\mu}m},\psi_{\mu},\chi,B_{\mu\nu},A_{\mu}^{\phantom{ \mu}i},\sigma\right)\,,\quad i=1,2, \tag{16}\]
where \(e_{\mu}^{\ m}\) is the graviton, \(\psi_{\mu}\) is the gravitino, \(\chi\) is dilatino, \(B_{\mu\nu}\) is the antisymmetric tensor (background field), \(\sigma\) is the dilaton. Both \(\psi_{\mu}\) and \(\chi\) are pseudo-Majorana spinors. \(A_{\mu}^{\ i}\) and the scalar \(\sigma\) are real. Coupling \(l\) vector multiplets of the form \((\lambda,A_{\mu},\phi^{i})\) and combining the field content together we obtain
\[\left(e_{\mu}^{\ m},\psi_{\mu},\chi,B_{\mu\nu},A_{\mu}^{\ I},\phi^{\alpha}, \sigma\right), \tag{3.2}\]
where \(I=0,\ldots,l+1\), \(\alpha=1,\ldots,2l\). Here we adopt the metric convention
\[\eta_{AB}=\eta_{IJ}=(+1,+1,-1,\ldots,-1). \tag{3.3}\]
The \(2l\) real scalars \(\phi^{\alpha}\) parameterize the moduli space
\[\mathcal{M}=\frac{\mathrm{SO}(2,l)}{\mathrm{SO}(2)\times\mathrm{SO}(l)}\cong \frac{\mathrm{SO}(2,l)}{\mathrm{U}(1)\times\mathrm{SO}(l)}\,. \tag{3.4}\]
The fermions of the theory have chiral couplings to one of the composite \(\mathrm{U}(1)\) in the denominators of the coset [6]. The \(\mathrm{U}(1)\) charges of the gravitino (positive chirality), the dilatino (negative chirality) and the gaugini (positive chirality) are all \(\frac{1}{2}\).6 Hence, the anomaly polynomial is
Footnote 6: Notice that the same charge assignment is valid in dual formulation of the theory where the two-form \(B\) is replaced by a four-from [34], and the discussion of the counterterms applies to both.
\[I_{8D}=I_{3/2}-I_{1/2}^{\mathrm{dilatino}}+I_{1/2}^{\mathrm{gaugini}}\,. \tag{3.5}\]
If the gauge group is given by \(G\) (\(\mathrm{rank}(G)=l\)), the gaugini couple both to \(G\) (the fields strength of the gauge field will be denoted by \(\mathcal{F}\)) and the composite \(\mathrm{U}(1)\) (whose field strength is again denoted by \(F^{Q}=dQ\)). The resulting polynomial is of the form (1.2) with (see [19] for details)
\[\begin{split}& X_{8}(R,\mathcal{F})=\\ &\frac{1}{32(2\pi)^{3}}\left[(248+\dim G)\left[\frac{\mathrm{tr} \,R^{4}}{360}+\frac{(\mathrm{tr}\,R^{2})^{2}}{288}\right]-(\mathrm{tr}\,R^{2} )^{2}+\frac{1}{6}\,\mathrm{tr}\,R^{2}\,\mathrm{Tr}\,\mathcal{F}^{2}+\frac{2}{ 3}\,\mathrm{Tr}\,\mathcal{F}^{4}\right]\,,\end{split} \tag{3.6}\]
and given the variation \(\delta Q=d\Sigma\) the anomalous phase is
\[\Delta_{G}=-\int\Sigma\,X_{8}(R,\mathcal{F})\,, \tag{3.7}\]
The precise form of \(X_{8}(R,\mathcal{F})\) in (3.7) is not important for our discussion. The idea is to constrain the admissible theories and their lattices rather than try to cancel (3.7) by imposing case by case conditions on the integrality properties of \(X_{8}(R,\mathcal{F})\). In addition, the counterterm can be changed by adding massive states and integrating them out. While the role of the massive completions presents interesting questions, here we are concerned by the possibility of writing a counterterm that will lead to an anomaly cancellation for any (discrete) \(\mathrm{SO}(2,l)\) transformation.
The construction of the counterterm will be following the discussion of the \(\mathcal{N}=2\) case
in Sec. 2, but now we will be interested in modular forms for orthogonal group \(\mathrm{SO}(2,l)\). We will have to construct the compensating \(\mathrm{U}(1)\) with respect to this group (see Sec. 4) and find the modular forms with right properties to serve as counterterms (1.8).
In the rest of this section we will discuss some of the necessary background and set up the notation. In order to make the presentation self-contained, we will include some of the basic definitions. Further details can be found in Appendix B, where the presentation follows closely [21, 22].
### Lattices of \((2,l)\) signature and generalized upper half plane
A typical lattice \(L\) in \(\mathbb{R}^{b}\) has the form \(L:=\left\{\sum_{i=1}^{b}a_{i}v_{i}|a_{i}\in\mathbb{Z}\right\}\), where \(\{v_{1},\ldots v_{b}\}\) is the basis set. Usually the lattice is equipped with a quadratic form \(q:L\to\mathbb{R}\), which defines the norm of the vector \(x\) in the lattice as \(q(x)\) and naturally induces a symmetric bilinear form \((\cdot,\cdot):\Lambda\times\Lambda\to\mathbb{R}\)
\[(x,y):=q(x+y)-q(x)-q(y)\,,\quad\text{for}\quad x,y\in L\,. \tag{3.8}\]
It is easy to see that \(q(x)=\frac{1}{2}(x,x)\) since \(q\) is a quadratic bilinear form. The lattice is called even if \(q(x)\in\mathbb{Z}\) for arbitrary \(x\in L\). The dual lattice \(L^{\prime}\) is defined as
\[L^{\prime}:=\left\{y\in L\otimes\mathbb{Q}|\left(y,x\right)\in\mathbb{Z}\text { for }\forall x\in L\right\}\,. \tag{3.9}\]
A lattice \(L\) is called self-dual or unimodular if it is equal to its dual \(L=L^{\prime}\). The quadratic form \(q\) has signature, denoted by \((b^{+},b^{-})\) and \(b^{+}+b^{-}=b\), where \(b^{+}\) (\(b^{-}\)) denotes the number of the \(+\) (\(-\)) signs. An important theorem states that there are no indefinite even unimodular lattices unless \(b^{+}-b^{-}\equiv 0\mod 8\).
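As a concrete illustration of these notions, the evenness of a lattice and the order of its discriminant group \(L^{\prime}/L\) can be read off from the Gram matrix of the bilinear form. The sketch below uses the hyperbolic plane \(\Pi_{1,1}\) and its rescaling \(\Pi_{1,1}(2)\) purely as examples.

```python
import numpy as np
from fractions import Fraction

# Gram matrices of the bilinear form (x, y); the quadratic form is q(x) = (x, x)/2.
Pi11   = np.array([[0, 1], [1, 0]])     # hyperbolic plane, unimodular
Pi11_2 = np.array([[0, 2], [2, 0]])     # rescaled copy Pi_{1,1}(2)

def q(G, x):
    return Fraction(int(x @ G @ x), 2)

def is_even(G):
    # an integral lattice is even iff all diagonal Gram entries are even
    return all(d % 2 == 0 for d in np.diag(G))

for name, G in [("Pi_{1,1}", Pi11), ("Pi_{1,1}(2)", Pi11_2)]:
    disc = abs(round(float(np.linalg.det(G))))   # order of the discriminant group L'/L
    print(name, "| even:", is_even(G), "| |L'/L| =", disc,
          "| q((1,1)) =", q(G, np.array([1, 1])))
```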
Let us denote the lattice and its quadratic form by a pair \((L,q)\). Suppose \((L,q)\) is a lattice that has a signature \((2,l)\). Consider the Grassmannian of 2-dimensional subspaces of \(V=L\otimes\mathbb{R}\) on which the quadratic form is positive definite
\[\mathrm{Gr}_{2}(V):=\left\{v\subset V|\dim v=2\,\text{and}\,q|_{v}>0\right\}\,, \tag{3.10}\]
where \(q|_{v}>0\) means that for every element \(x\in v\), \(q(x)>0\)7. We define the orthogonal group and the special orthogonal group as
Footnote 7: We have defined \(q\) on the lattice \(L\), i.e. \(q(v_{i})\) has a clear definition for \(1\leq i\leq l+2\). With the help of the induced bilinear form \((x,y)=q(x+y)-q(x)-q(y)\), we can safely extend the quadratic form to the space \(V=L\otimes\mathbb{R}\).
\[\mathrm{O}\left(V;\mathbb{R}\right):=\left\{\sigma\in\mathrm{Aut}(V)|\,\sigma \,\text{is an isometry of }V\right\},\ \mathrm{SO}\left(V;\mathbb{R}\right):=\left\{\sigma\in\mathrm{O}(V;\mathbb{R}) |\det\sigma=1\right\}\,.\]
Since \(V\) is a usual linear space on \(\mathbb{R}^{l+2}\), one can think of these two as matrix groups. If two spaces \(V_{1},V_{2}\) have the same signature, it can be proved that the orthogonal groups are isomorphic, i.e. \(\mathrm{O}(V_{1};\mathbb{R})\cong\mathrm{O}(V_{2};\mathbb{R})\). Thus we can denote the (special) orthogonal group by using the signature like \(\mathrm{O}(2,l;\mathbb{R})\) (\(\mathrm{SO}(2,l;\mathbb{R})\)). One can prove that \(\mathrm{O}(2,l;\mathbb{R})\) acts transitively on \(\mathrm{Gr}_{2}(V)\). If \(v_{0}\in\mathrm{Gr}_{2}(V)\) is fixed, the stabilizer \(K\) of \(v_{0}\) is a maximal
compact subgroup of \({\rm O}(2,l;\mathbb{R})\) and \(K\cong{\rm O}(2)\times{\rm O}(l)\). This constructs an isomorphism \({\rm Gr}_{2}(V)\cong{\rm O}(2,l;\mathbb{R})/K\), which is a realization of the hermitian symmetric space.
To see the complex structure, we consider the complexification \(V(\mathbb{C})=V\otimes\mathbb{C}\) of \(V\). Since \(V\) has directions of negative norm, there exist non-trivial isotropic vectors \(x\), satisfying \(q(x)=0\) and \(x\neq 0\). The isotropic subspace (also called the zero quadric) is
\[{\cal I}:=\{Z_{L}\in V(\mathbb{C})\backslash\{0\}|\,(Z_{L},Z_{L})=0\}. \tag{3.11}\]
We consider the projective space \(P{\cal I}:={\cal I}/\sim\), where the equivalence relation is \(Z_{L}\sim tZ_{L}\) for arbitrary \(t\in\mathbb{C},\,t\neq 0\). The equivalence class can be denoted as \([Z_{L}]\). Consider the subset
\[{\cal K}:=\left\{[Z_{L}]\in P{\cal I}|\,(Z_{L},Z_{L})=0,\,(Z_{L},\overline{Z_{ L}})>0\right\}\,, \tag{3.12}\]
\({\cal K}\) is a complex manifold of dimension \(l\) consisting of two connected components. The subgroup \({\rm O}^{+}(2,l;\mathbb{R})\) of elements whose spinor norm equals the determinant preserves the components of \({\cal K}\), whereas \({\rm O}(2,l;\mathbb{R})\backslash{\rm O}^{+}(2,l;\mathbb{R})\) interchanges them. We can denote the components \({\cal K}^{+}\) and \({\cal K}^{-}\) respectively.
For arbitrary \(Z_{L}\in V(\mathbb{C})\), we can write \(Z_{L}=X_{L}+iY_{L}\), \(X_{L},Y_{L}\in V\), and construct a map between \({\cal K}^{+}\) and \({\rm Gr}_{2}(V)\)
\[[Z_{L}]\longmapsto v(Z_{L})=\{aX_{L}+bY_{L}|\,a,b\in\mathbb{R}\}. \tag{3.13}\]
This map is an analytic isomorphism. Thus, the set \({\cal K}^{+}\) and the map indeed provide a complex structure on \({\rm Gr}_{2}(V)\). We refer the reader to [22] for detailed proof.
As mentioned, for the \({\cal N}=1\) theory with a duality group \({\rm SO}(2,l;\mathbb{R})\) we shall try to replicate the discussion of \({\cal N}=2\) case with the duality group \({\rm SL}(2;\mathbb{R})\). The moduli space is much more complicated however, and some generalisations are not straightforward. An important concept, useful for us, is that of the generalized upper-half plane corresponding to the \({\rm SO}(2,l;\mathbb{R})\) transformation. We will first describe the formal way of constructing the generalized upper-half plane \(\mathbb{H}_{l}\)[22]. A specific method for achieving the generalization [35, 36], which is more appropriate for our discussion, will be discussed later.
Suppose \(z\in L\) is a primitive norm zero vector, i.e. \(q(z)=0\) and \(\mathbb{Q}z\cap L=\mathbb{Z}z\). Let \(z^{\prime}\in L^{\prime}\) be another vector which satisfies \((z,z^{\prime})=1\). We define the sub-lattice \(K\)
\[K:=L\cap z^{\perp}\cap z^{\prime\perp}\,, \tag{3.14}\]
where \(z^{\perp}\) denotes the orthogonal subspace of \(z\), such that all vectors \(x\) in this subspace satisfy \((x,z)=0\). Then \(K\) is Lorentzian, i.e. of signature \((1,l-1)\). The space \(V\) can be decomposed into
\[V=\mathbb{R}z\oplus(K\otimes\mathbb{R})\oplus\mathbb{R}z^{\prime}\,,\quad V( \mathbb{C})=\mathbb{C}z\oplus(K\otimes\mathbb{C})\oplus\mathbb{C}z^{\prime}\,. \tag{3.15}\]
For arbitrary vector \(Z_{L}\in V(\mathbb{C})\), there exists a unique combination \((a,Z,b)\) such that \(Z_{L}=az+Z+bz^{\prime}\), \(Z\in K\otimes\mathbb{C}\), \(a,b\in\mathbb{C}\), which means that we can use the combination
\((a,Z,b)\) to represent a vector. We define a set \(\widetilde{\mathbb{H}}_{l}\)
\[\widetilde{\mathbb{H}}_{l}=\{Z=X+iY\in K\otimes\mathbb{C}\,|\,X,Y\in K\otimes \mathbb{R},\,q(Y)>0\}. \tag{3.16}\]
Since the lattice \(K\) has signature \((1,l-1)\), the set of positive-norm vectors in \(K\otimes\mathbb{R}\) splits into two connected components, \(\mathcal{K}^{\pm}\). We define the map \(f\)
\[\begin{split} f:\widetilde{\mathbb{H}}_{n}& \longrightarrow\mathcal{K}\\ Z&\longmapsto f(Z)=\big{[}\big{(}-q(Z)-q(z^{ \prime}),Z,1\big{)}\big{]}\,\end{split} \tag{3.17}\]
where \((-q(Z)-q(z^{\prime}),Z,1)\) is an \(l+2\) dimensional vector in the space \(V(\mathbb{C})\). As the references [21; 22] show, \(f\) is a biholomorphic map. Under the map \(f\), the two connected components of \(\widetilde{\mathbb{H}}_{l}\) map into the two connected components \(\mathcal{K}^{\pm}\) of \(\mathcal{K}\) separately. We choose \(\mathbb{H}_{l}\) to be the component of \(\widetilde{\mathbb{H}}_{l}\) that maps to \(\mathcal{K}^{+}\). This realization of \(\mathcal{K}^{+}\) as a tube domain can be viewed as the generalized upper-half plane \(\mathbb{R}^{l}+i\Omega^{l}\), where \(\Omega^{l}\) is the positive-norm cone.
### Action of \(\mathbf{O}(2,l)\) on generalized upper-half plane
To construct the generalized upper-half we will split the lattice \((L,q)\) into \((L,q)=(L_{0},q_{0})\oplus\Pi_{1,1}\) where \((L_{0},q_{0})\) is a lattice of signature \((1,l-1)\), equipped with the quadratic form \(q_{0}\) and \(\Pi_{1,1}\) is the unimodular lattice with signature \((1,1)\) equipped with the quadratic form \(q((a,b))=ab\). The generalized upper-half plane can be defined in the following way
\[\mathbb{H}_{l}=\{Z=X+iY\in L_{0}\otimes\mathbb{C}|X,Y\in L_{0}\otimes\mathbb{ R},Y\in P\}\, \tag{3.18}\]
where \(P\) denotes the future light cone of the Minkowski space \(L_{0}\otimes\mathbb{R}\).
We set \(l=s+2\) and take \(s\geq 0\) and even throughout the discussion. Suppose \(\hat{S}\) is an \(s\times s\) symmetric positive definite real matrix (when \(s=0\), \(\hat{S}\) collapses, but the discussion bellow still applies) and define
\[\begin{split} S_{0}&=\begin{pmatrix}&1\\ &-\hat{S}\\ 1&\end{pmatrix}\in\operatorname{Sym}\left(s+2;\mathbb{R}\right)\,,\\ S&=\begin{pmatrix}&1\\ &S_{0}\\ 1&\end{pmatrix}\in\operatorname{Sym}\left(s+4;\mathbb{R}\right)\,,\end{split} \tag{3.19}\]
where \(\operatorname{Sym}(s+2;\mathbb{R})\) denotes the set of \((s+2)\times(s+2)\) symmetric real matrices. We then define for \(Z_{L}\in L\) (or \(L\otimes\mathbb{C}\)), \(q(Z_{L})=\frac{1}{2}Z_{L}^{T}SZ_{L}\) and for \(Z\in L_{0}\) (or \(L_{0}\otimes\mathbb{C}\)), \(q_{0}(Z)=\frac{1}{2}Z^{T}S_{0}Z\). Here the superscript \(T\) denotes the transpose operation.
In the following we will frequently use the notation \(L(N)\) for a positive integer \(N\) (Some references use the notation \(\sqrt{N}L\)). \(L(N)\) indicates the lattice with the same basis as \(L\) but equipped with the scaled quadratic form \(NS\) (or vectors scaled by factor \(\sqrt{N}\) equivalently). We consider explicitly only the lattices \(L=\Pi_{1,1}(1)\oplus\Pi_{1,1}(1)\oplus\hat{L}\) with lattice \(\hat{L}\) equipped
with the quadratic form \(\hat{S}\). The discussion can be extended to \(L=\Pi_{1,1}(N_{1})\oplus\Pi_{1,1}(N_{2})\oplus\hat{L}\) for arbitrary positive integer \(N_{1,2}\) and the results still apply.
With the help of these quadratic forms, by setting \(z=(1,0,\ldots,0)^{T}\in L\) and \(z^{\prime}=(0,\ldots,0,1)^{T}\in L^{\prime}\), the map \(f\) (3.17) can be written as
\[\begin{split} f:\mathbb{H}_{l}&\longrightarrow \mathcal{K}^{+}\\ Z&\longmapsto[Z_{L}]=\left[(-q_{0}(Z),Z,1)^{T} \right]\,.\end{split} \tag{3.20}\]
The (special) orthogonal transformation \(\mathrm{O}(2,l;\mathbb{R})\)\((\mathrm{SO}(2,l;\mathbb{R}))\)8 now is
Footnote 8: The definition here, while using a different metric, is isomorphic to the definition using \(\eta=(+1,+1,-1,\ldots,-1)\).
\[\begin{split}\mathrm{O}(2,l;\mathbb{R})&=\left\{M \in\mathrm{Mat}(l+4;\mathbb{R})|M^{T}SM=S\right\}\,,\\ \mathrm{SO}(2,l;\mathbb{R})&=\left\{M\in\mathrm{SL}( l+4;\mathbb{R})|M^{T}SM=S\right\}\,.\end{split} \tag{3.21}\]
Since \(\mathcal{K}^{+}\) and \(\mathbb{H}_{l}\) are isomorphic, the action of \(\mathrm{O}^{+}(2,l;\mathbb{R})\) on the set \(\mathcal{K}^{+}\) naturally induces the action on the generalized upper-half plane \(\mathbb{H}_{l}\). If \(M\in\mathrm{O}^{+}(2,l;\mathbb{R})\), the action on \(\mathcal{K}^{+}\) is defined as
\[\begin{split} M:\mathcal{K}^{+}&\longrightarrow \mathcal{K}^{+}\\ [Z_{L}]&\longmapsto[MZ_{L}]\,,\end{split} \tag{3.22}\]
where \(MZ_{L}\) is the usual linear transformation of a vector in \(\mathbb{C}^{l+2}\). Because the real orthogonal transformation will not change the the norm \((Z_{L},Z_{L})\) and \((Z_{L},\overline{Z_{L}})\), the element \([MZ_{L}]\) still stays in the set \(\mathcal{K}^{+}\). Then we can define the action of \(M\) on the generalized upper-half plane \(\mathbb{H}_{l}\), \(Z\mapsto M\langle Z\rangle\), such that the following diagram commutes
\[\begin{array}{ccc}\mathbb{H}_{l}&\xrightarrow{\;M\langle\,\cdot\,\rangle\;}&\mathbb{H}_{l}\\ \downarrow f&&\downarrow f\\ \mathcal{K}^{+}&\xrightarrow{\;\;M\;\;}&\mathcal{K}^{+}\end{array} \tag{3.23}\]
In equation form, we have
\[[Mf(Z)]=[f\left(M\langle Z\rangle\right)]\,. \tag{3.24}\]
For convenience, we decompose the matrix \(M\) in the following way
\[M=\begin{pmatrix}\alpha&a^{T}&\beta\\ b&P&c\\ \gamma&d^{T}&\delta\end{pmatrix}\in\mathrm{O}^{+}(2,l;\mathbb{R})\,,\quad \begin{cases}\alpha,\beta,\gamma,\delta\in\mathbb{R},\\ a,b,c,d\in\mathbb{R}^{l},\\ P\in\mathrm{Mat}\left(l;\mathbb{R}\right)\,.\end{cases} \tag{3.25}\]
Expanding the equation (3.24), we have
\[\begin{pmatrix}\alpha&a^{T}&\beta\\ b&P&c\\ \gamma&d^{T}&\delta\end{pmatrix}\begin{pmatrix}-q_{0}(Z)\\ Z\\ 1\end{pmatrix}=\begin{pmatrix}-\alpha q_{0}(Z)+a^{T}Z+\beta\\ -bq_{0}(Z)+PZ+c\\ -\gamma q_{0}(Z)+d^{T}Z+\delta\end{pmatrix}=j(M,Z)\begin{pmatrix}-q_{0}(W)\\ W\\ 1\end{pmatrix}\,, \tag{3.26}\]
where \(W=M\langle Z\rangle\) and \(j(M,Z)\in\mathbb{C}\). From this equation we can directly extract the definition of the action of the (special) orthogonal group on the generalized upper-half plane \(\mathbb{H}_{l}\)
\[\begin{split}& W=M\langle Z\rangle:=\left(-bq_{0}(Z)+PZ+c \right)\left(-\gamma q_{0}(Z)+d^{T}Z+\delta\right)^{-1}\,,\\ & j(M,Z):=-\gamma q_{0}(Z)+d^{T}Z+\delta\,.\end{split} \tag{3.27}\]
With this definition, the equality (3.26) clearly holds for the last two components. The first components of the vectors on the two sides of (3.26) are equal due to the norm-zero condition.
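The derivation above is straightforward to test numerically. The following sketch is illustrative only: it assumes \(l=4\) with \(\hat{S}=\mathbb{1}_{2}\), builds an element of the identity component of the orthogonal group by exponentiating a Lie-algebra element, and verifies that the transformation preserves the zero quadric and reproduces the automorphy factor \(j(M,Z)\) of (3.26)-(3.27).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Quadratic forms of (3.19) for s = 2 and S_hat = identity, i.e. l = s + 2 = 4.
S0 = np.zeros((4, 4)); S0[0, 3] = S0[3, 0] = 1; S0[1, 1] = S0[2, 2] = -1
S  = np.zeros((6, 6)); S[0, 5] = S[5, 0] = 1; S[1:5, 1:5] = S0

q  = lambda v: 0.5 * v @ S  @ v          # q  on L tensor C (bilinear, no conjugation)
q0 = lambda v: 0.5 * v @ S0 @ v          # q0 on L0 tensor C

# An element M = exp(S^{-1} A), with A antisymmetric, satisfies M^T S M = S.
A = rng.normal(size=(6, 6)); A = 0.1 * (A - A.T)
M = expm(np.linalg.inv(S) @ A)
assert np.allclose(M.T @ S @ M, S)

# A point Z = X + iY of the generalized upper-half plane (q0(Im Z) > 0).
Z  = np.array([0.2, 0.1, -0.3, 0.4]) + 1j * np.array([1.0, 0.1, -0.2, 1.0])
ZL = np.concatenate(([-q0(Z)], Z, [1.0]))   # the lift (3.20); note q(ZL) = 0

MZ = M @ ZL
j  = MZ[-1]                     # automorphy factor j(M, Z) of (3.27)
W  = MZ[1:-1] / j               # transformed point W = M<Z>

print(abs(q(MZ)))               # ~0 : M preserves the zero quadric
print(abs(MZ[0] + j * q0(W)))   # ~0 : first component equals -q0(W) * j, as in (3.26)
```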
### Modular forms on generalized upper-half plane
As proposed in section 1, a specific function transforming in a particular way under the modular group (1.7) will play a central role in the construction of the counterterm. We turn now to the modular forms on the generalized upper-half plane, commonly referred to as orthogonal modular forms. We will restrict for now to \(l\geq 3\), where the results of Borcherds [24; 26] apply. The case \(l=2\) will be considered in section 6.
The pivotal observation is that the group \(\mathrm{SL}(2;\mathbb{R})\) and \(\mathrm{O}(2,l;\mathbb{R})\) form a dual reductive pair. To construct the modular forms of orthogonal group \(\mathrm{O}(2,l)\), the modular forms on \(\mathrm{SL}(2,\mathbb{Z})\) can be lifted by integrating against the Siegel theta function \(\Theta(\tau,Z)\)[25; 26]. A brief review of the relevant background, following the presentation of [21] is given in appendix B.
Suppose \(\mathrm{O}(L)\) is the orthogonal group of an even lattice \(L\) with signature \((2,l)\), defined by
\[\mathrm{O}(L):=\{M\in\mathrm{O}(2,l;\mathbb{R})|\,ML=L\}. \tag{3.28}\]
The orthogonal group of the discriminant group \(D(L):=L^{\prime}/L\) can be defined similarly and will be denoted as \(\mathrm{O}(L^{\prime}/L)\). We then denote by \(\mathrm{O}_{d}(L)\) the discriminant kernel of \(\mathrm{O}(L)\), which is the subgroup of finite index of \(\mathrm{O}(L)\) consisting of all elements which act trivially on the discriminant group \(L^{\prime}/L\), i.e.
\[\mathrm{O}_{d}(L):=\mathrm{Ker}\left(\mathrm{O}(L)\to\mathrm{O}(L^{\prime}/L) \right)\,. \tag{3.29}\]
We define the intersection with \(\mathrm{O}^{+}(V)\), \(V=L\otimes\mathbb{R}\) as the modular group
\[\Gamma(L):=\mathrm{O}^{+}(V)\cap\mathrm{O}_{d}(L)\,. \tag{3.30}\]
Recalling the definition of \(j(M,Z)\), we can rewrite it as \(j(M,Z)=(MZ_{L},z)\) with \(l+2\)-dimensional vectors \(z=(1,0,\ldots,0)^{T}\) and \(Z_{L}=(-q_{0}(Z),Z,1)^{T}\). Following Theorem 13.3 in [26] (Theorem B.1), we can lift a nearly holomorphic modular form
\(\mathbb{H}\to\mathbb{C}[L^{\prime}/L]\) (see Definition B.3) of weight \(1-l/2\) with Fourier expansion
\[f(\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{n\in\mathbb{Z}+q(\gamma)}c(\gamma,n) \mathfrak{e}_{\gamma}(n\tau)\,, \tag{3.31}\]
to the meromorphic function \(\Psi(Z):\mathbb{H}_{l}\to\mathbb{C}\) with the following transformation property
\[\Psi(M\langle Z\rangle)=\chi(M)j(M,Z)^{c(0,0)/2}\Psi(Z)\,,\quad M\in\Gamma(L)\,. \tag{3.32}\]
\(\chi(M)\) is called the multiplier system (see Definition B.4) and \(\Psi(Z)\) is a modular form on generalized upper-half plane (also called Borcherds product) of weight \(c(0,0)/2\) with the multiplier system (or character if the weight is integer) \(\chi\) and modular group \(\Gamma(L)\). This modular group contains some elements that do not preserve orientation. Since our symmetry group is \(\mathrm{SO}(2,l;\mathbb{R})\), the modular group we use is actually \(\mathrm{S}\Gamma(L):=\Gamma(L)\cap\mathrm{SO}(L)\). We will be interested in the logarithm (the argument) of such modular forms. Hence we also need the information of its poles and zeros, where the argument at these points is not well defined. Remarkably, the positions of zeros and poles are totally determined by the principal part, consisting of all the terms with \(n\) (in equation (3.31)) negative
\[\sum_{\beta\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+q(\beta) \\ n<0\end{subarray}}c(\beta,n)\mathfrak{e}_{\beta}(n\tau)\,. \tag{3.33}\]
By Theorem 13.3 in [26] (Theorem B.2), the zeros and poles of \(\Psi(Z)\) lie in the divisor \((\Psi)\), which is a linear combination of rational quadratic divisors \(H(\beta,m)\) (Heegner divisors). The rational quadratic divisors \(H(\beta,m)\) are unions of orthogonal subspaces \(H_{\lambda}\) with respect to vectors \(\lambda\in\beta+L\), for \(\beta\in L^{\prime}/L\) and rational negative norm \(m\),
\[H_{\lambda}=\left\{\left[Z_{L}\right]\in\mathcal{K}^{+}|\left(Z_{L},\lambda \right)=0\right\}\,. \tag{3.34}\]
A rational quadratic divisor \(H(\beta,m)\) is defined as
\[H(\beta,m)=\sum_{\begin{subarray}{c}\lambda\in\beta+L\\ q(\lambda)=m\end{subarray}}H_{\lambda}\,. \tag{3.35}\]
The zeros and poles of \(\Psi(Z)\) are contained in the divisor \((\Psi)\) which is given by
\[(\Psi)=\frac{1}{2}\sum_{\beta\in L^{\prime}/L}\sum_{\begin{subarray}{c}m\in \mathbb{Z}+q(\beta)\\ m<0\end{subarray}}c(\beta,m)H(\beta,m)\,. \tag{3.36}\]
These rational quadratic divisors are closely related to the gauge symmetry enhancement [25]. We will return to this in section 5.
## 4 Composite U\((1)\) in minimal supergravity
We can now turn to computing the compensating U\((1)\) transformation. As the first step a suitable parametrization of the coset space
\[\mathcal{M}=\frac{\mathrm{SO}(2,l)}{\mathrm{SO}(2)\times\mathrm{SO}(l)}\cong \frac{\mathrm{SO}(2,l)}{\mathrm{U}(1)\times\mathrm{SO}(l)} \tag{4.1}\]
is needed. The coset construction in [6] provides a good starting point. At the level of Lie algebra the representative of the coset \(\mathfrak{so}(2,l)/\left(\mathfrak{so}(2)\oplus\mathfrak{so}(l)\right)\) can be written as a matrix
\[\begin{pmatrix}0_{2\times 2}&H_{2\times l}\\ (H^{T})_{l\times 2}&0_{l\times l}\end{pmatrix}\,,\quad H\in\mathrm{Mat}(2\times l,\mathbb{R})\,. \tag{4.2}\]
Here the \(2l\) real scalars \(\phi^{\alpha}\) in \(\mathcal{N}=1\) vector multiplets (see (3.2)) are packaged in \(H\), and \(\mathrm{Mat}(2\times l,\mathbb{R})\) is the set of the real \(2\times l\) matrices. An element \(\Lambda\in\mathcal{M}\), \(\Lambda^{T}\eta\Lambda=\eta\), can be represented as
\[\Lambda=\exp\begin{pmatrix}0&H\\ H^{T}&0\end{pmatrix}=\begin{pmatrix}\sqrt{1+qq^{T}}&q\\ q^{T}&\sqrt{1+q^{T}q}\end{pmatrix}\,, \tag{4.3}\]
where \(q\in\mathrm{Mat}(2\times l,\mathbb{R})\) is given by
\[q=H\left(\frac{\sinh H^{T}H}{H^{T}H}\right)^{\frac{1}{2}}\,. \tag{4.4}\]
By direct matrix multiplication we can see that
\[\sqrt{1+qq^{T}}=\cosh\!\left(HH^{T}\right)^{1/2},\quad\sqrt{1+q^{T}q}=\cosh\! \left(H^{T}H\right)^{1/2}. \tag{4.5}\]
The matrices \(\sqrt{1+qq^{T}}\) and \(\sqrt{1+q^{T}q}\) satisfy the relation
\[\sqrt{1+q^{T}q}=\mathbb{1}+q^{T}\left(\sqrt{1+qq^{T}}-\mathbb{1}\right)(qq^{T })^{-1}q\,, \tag{4.6}\]
which can be checked by squaring the expressions on both sides. The negative power of the matrix in this expression should be understood in the sense of a Taylor expansion, since the matrix \(H^{T}H\) might not be invertible. Based on this parametrization, we can further simplify the expression by introducing the so-called Calabi-Vesentini coordinates [23; 37].
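As a quick numerical cross-check of this parametrization (purely illustrative and not part of the argument; the block size \(l=4\), the random seed and the helper names below are our own choices), one can verify that \(\Lambda\) of (4.3) satisfies \(\Lambda^{T}\eta\Lambda=\eta\) with \(\eta=\mathrm{diag}(1,1,-1,\ldots,-1)\), and that its upper-left block agrees with (4.5):

```python
# Illustrative check (not from the paper): exponentiate the coset generator for a
# random 2 x l block H and compare with the closed form (4.3)-(4.5).
import numpy as np
from scipy.linalg import expm, coshm, sqrtm

l = 4                                   # arbitrary choice for the check
rng = np.random.default_rng(0)
H = 0.5 * rng.normal(size=(2, l))

X = np.block([[np.zeros((2, 2)), H], [H.T, np.zeros((l, l))]])
Lam = expm(X)                           # the coset representative Lambda
eta = np.diag([1.0, 1.0] + [-1.0] * l)

assert np.allclose(Lam.T @ eta @ Lam, eta)                       # Lambda^T eta Lambda = eta
assert np.allclose(Lam[:2, :2], np.real(coshm(sqrtm(H @ H.T))))  # upper-left 2x2 block
print("coset representative checks passed")
```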
The matrix elements can be labeled by \(\Lambda_{I}^{\ A}\), with \(I\) being the row index and \(A\) the column index. All the capital Latin indices take integer values from \(0\) to \(l+1\) (\(I,A=0,1,\ldots,l+1\)), and the metric-preserving property can be written in components as
\[\Lambda_{I}^{\ A}\Lambda_{J}^{\ B}\eta_{AB}=\eta_{IJ}\,. \tag{4.7}\]
The inverse matrix element of \(\Lambda^{-1}\) is denoted as \(\Lambda^{I}_{\ A}\) and satisfies
\[\Lambda_{I}^{\ A}\Lambda^{I}_{\ B}=\delta^{A}_{\ B}\,,\quad\Lambda^{I}_{\ A}= \eta^{IJ}\eta_{AB}\Lambda_{J}^{\ B}\,. \tag{4.8}\]
We can now define
\[\Phi^{A}=\frac{1}{\sqrt{2}}\left(\Lambda_{0}{}^{A}+i\Lambda_{1}{}^{A}\right), \tag{4.9}\]
which can be verified to satisfy
\[\begin{split}\bar{\Phi}^{A}\Phi^{B}\eta_{AB}&=\frac{1 }{2}\left(\Lambda_{0}{}^{A}-i\Lambda_{1}{}^{A}\right)\left(\Lambda_{0}{}^{B}+i \Lambda_{1}{}^{B}\right)\eta_{AB}&=\frac{1}{2}(\eta_{00}+\eta_{1 1})=1\,,\\ \Phi^{A}\Phi^{B}\eta_{AB}&=\frac{1}{2}\left(\Lambda_ {0}{}^{A}+i\Lambda_{1}{}^{A}\right)\left(\Lambda_{0}{}^{B}+i\Lambda_{1}{}^{B} \right)\eta_{AB}&=\frac{1}{2}(\eta_{00}-\eta_{11})=0\,.\end{split} \tag{4.10}\]
A natural Ansatz for \(\Phi^{A}\) satisfying these constraints takes the form
\[\Phi^{A}=\frac{X^{A}}{\sqrt{\overline{X}^{A}X^{B}}\eta_{AB}}\,. \tag{4.11}\]
where \(X^{A}\) are components of a \(l+2\) dimensional complex vector \(\vec{X}\) such that \(\vec{X}^{T}\eta\vec{X}=0\). In terms of \(X^{A}\) the matrix (4.3) can be written as
\[\Lambda=\frac{1}{\sqrt{2\overline{X}^{A}X^{B}}\eta_{AB}}\left(\begin{array}[] {ccc}X^{0}+\bar{X}^{0}&-i(X^{0}-\bar{X}^{0})&\ldots\\ X^{1}+\bar{X}^{1}&-i(X^{1}-\bar{X}^{1})&\ldots\\ \vdots&\vdots&\ast\\ X^{l+1}+\bar{X}^{l+1}&-i(X^{l+1}-\bar{X}^{l+1})\end{array}\right)\,. \tag{4.12}\]
Notice that since \(-i(X^{0}-\bar{X}^{0})=X^{1}+\bar{X}^{1}\), \(\Lambda\) is a real symmetric matrix.
One way to parametrize \(X^{A}\) in terms of \(l\) independent complex scalars is
\[X^{A}=\left(\frac{1+y^{2}}{2},\frac{i}{2}(1-y^{2}),y_{i}\right),\quad i=1, \ldots,l\,, \tag{4.13}\]
where the \(y_{i}\) are complex scalars and \(y^{2}:=y_{i}y_{i}\). Here, and in the rest of the discussion, a summation over all repeated indices is implied. In addition, \(y_{i}\) should satisfy [23]
\[\overline{X}^{A}X^{B}\eta_{AB}>0\quad\Rightarrow\quad 1-2\bar{y}_{i}y_{i}+y^{2}\bar{y}^{2}>0,\quad\bar{y}_{i}y_{i}<1\,, \tag{4.14}\]
which defines a bounded region for the \(y_{i}\), known as Calabi-Vesentini coordinates. In terms of \(y^{i}\),
\[\Lambda=\frac{1}{\sqrt{1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}}}\left[\begin{array} []{ccc}1+\frac{1}{2}\left(y^{2}+\bar{y}^{2}\right)&-\frac{i}{2}(y^{2}-\bar{y}^ {2})&\ldots\\ -\frac{i}{2}(y^{2}-\bar{y}^{2})&1-\frac{1}{2}(y^{2}+\bar{y}^{2})&\ldots\\ y_{1}+\bar{y}_{1}&-i(y_{1}-\bar{y}_{1})&\\ \vdots&\vdots&\ast\\ y_{l}+\bar{y}_{l}&-i(y_{l}-\bar{y}_{l})\end{array}\right]. \tag{4.15}\]
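The conditions underlying (4.14) can also be verified directly from the parametrization (4.13). The following symbolic sketch (illustrative only, written for \(l=2\); the symbols \(\mathtt{yb}_{i}\) stand for the formal conjugates \(\bar{y}_{i}\)) checks the null condition \(X^{A}X^{B}\eta_{AB}=0\) and the value of the \(\eta\)-norm \(\overline{X}^{A}X^{B}\eta_{AB}\):

```python
# Illustrative sympy check (l = 2) of the parametrization (4.13):
# X^T eta X = 0 and Xbar^T eta X = (1 - 2 ybar.y + y^2 ybar^2)/2.
import sympy as sp

y1, y2, yb1, yb2 = sp.symbols('y1 y2 yb1 yb2')   # y_i and their formal conjugates
ysq, ybsq = y1**2 + y2**2, yb1**2 + yb2**2

X = sp.Matrix([(1 + ysq) / 2, sp.I * (1 - ysq) / 2, y1, y2])
Xb = sp.Matrix([(1 + ybsq) / 2, -sp.I * (1 - ybsq) / 2, yb1, yb2])
eta = sp.diag(1, 1, -1, -1)

assert sp.expand((X.T * eta * X)[0]) == 0
norm = sp.expand((Xb.T * eta * X)[0])
assert sp.expand(norm - (1 - 2 * (yb1 * y1 + yb2 * y2) + ysq * ybsq) / 2) == 0
print("null condition and eta-norm verified")
```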
Under the parametrization (4.12), it is easy to see that \(\vec{X}\) is equivalent to \(t\vec{X}\) for any non-zero \(t\in\mathbb{R}\). Also, recall that \(\Lambda\) is a coset representative, i.e. \(\Lambda\sim\Lambda U\) where \(U\) is an \(\mathrm{SO}(2)\times\mathrm{SO}(l)\)
transformation parametrized by a real \(\theta\):
\[\Lambda\sim\Lambda U =\frac{\sqrt{2}}{\sqrt{\vec{X}^{\dagger}\eta\vec{X}}}\left(\text{Re} \left(\vec{X}\right)\,\text{Im}\left(\vec{X}\right)\,*\right)\begin{pmatrix} \cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\\ &U_{\text{SO}(l)}\end{pmatrix} \tag{4.16}\] \[=\frac{\sqrt{2}}{\sqrt{\vec{X}^{\dagger}\eta\vec{X}}}\left(\text{ Re}\left(\vec{X}e^{-i\theta}\right)\,\text{Im}\left(\vec{X}e^{-i\theta} \right)\,*\right)\,,\]
which means that \(\vec{X}\sim\vec{X}e^{-i\theta}\). Combined with the scaling transformation, this leads to the conclusion that \(\vec{X}\) lives in projective space, with \(\vec{X}\sim\alpha\vec{X}\) for an arbitrary non-zero complex number \(\alpha\).
### U\((1)\) connection, gauge transformations and gauge fixing
We are now ready to construct explicitly the composite connection associated with the local U\((1)\) gauge symmetry. It can be expressed in terms of the Maurer-Cartan form [6] as
\[Q=\left(\Lambda^{-1}d\Lambda\right)_{0}^{1},\quad Q_{\mu}=\left(\Lambda^{-1} \partial_{\mu}\Lambda\right)_{0}^{1}\,, \tag{4.17}\]
where \(d\) is the exterior derivative defined on the spacetime manifold and \(\partial_{\mu}\) is the partial derivative with respect to the spacetime coordinates. Using the expression (4.3) of \(\Lambda\) in terms of \(q\), we have
\[(\Lambda^{-1}\partial_{\mu}\Lambda)_{2\times 2}=\sqrt{1+qq^{T}}\partial_{\mu} \sqrt{1+qq^{T}}-q\partial_{\mu}q^{T}\,, \tag{4.18}\]
where the subscript indicates the \(2\times 2\) upper left corner of the matrix \(\Lambda^{-1}\partial_{\mu}\Lambda\). Since \(y_{i}\) are unconstrained variables, expressing \(Q\) in terms of these avoids ambiguities and we have:
\[Q_{\mu}=2i\frac{\bar{y}_{i}-\bar{y}^{2}y_{i}}{1-2\bar{y}_{k}y_{k}+y^{2}\bar{y} ^{2}}\partial_{\mu}y_{i}-\frac{i}{2}\partial_{\mu}\ln\left(1-2\bar{y}_{k}y_{k} +y^{2}\bar{y}^{2}\right)\,. \tag{4.19}\]
Notice that
\[\frac{1}{2}\left(1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}\right)=\bar{X}^{A}X^{B} \eta_{AB}=\vec{X}^{\dagger}\eta\vec{X}\,, \tag{4.20}\]
the denominator is naturally invariant under the transformation \(\vec{X}\to\vec{X}e^{-i\Sigma}\). Besides,
\[\vec{X}^{\dagger}\eta\partial_{\mu}\vec{X} =\left(\frac{1+\bar{y}^{2}}{2},-\frac{i}{2}(1-\bar{y}^{2}),\vec{y }^{\dagger}\right)\begin{pmatrix}\mathbb{1}&0\\ 0&-\mathbb{1}\end{pmatrix}\begin{pmatrix}y_{i}\partial_{\mu}y_{i}\\ -iy_{i}\partial_{\mu}y_{i}\\ \partial_{\mu}\vec{y}\end{pmatrix} \tag{4.21}\] \[=\bar{y}^{2}y_{i}\partial_{\mu}y_{i}-\vec{y}^{\dagger}\partial_{ \mu}\vec{y}\]
allows us to write \(Q_{\mu}\) compactly in terms of \(\vec{X}\),
\[Q_{\mu}=-i\frac{\vec{X}^{\dagger}\eta\partial_{\mu}\vec{X}}{\vec{X}^{\dagger} \eta\vec{X}}-\frac{i}{2}\partial_{\mu}\ln\left(2\vec{X}^{\dagger}\eta\vec{X} \right)\,. \tag{4.22}\]
Under the \(\mathrm{U}(1)\) gauge transformation \(\vec{X}\to\vec{X}^{\prime}=\vec{X}e^{-i\Sigma}\) we have
\[\delta Q_{\mu}=Q^{\prime}_{\mu}-Q_{\mu}=-i\frac{\vec{X}^{\prime}{}^{\dagger}\eta \partial_{\mu}\vec{X}^{\prime}}{\vec{X}^{\prime}{}^{\dagger}\eta\vec{X}^{ \prime}}+i\frac{\vec{X}^{\dagger}\eta\partial_{\mu}\vec{X}}{\vec{X}^{\dagger} \eta\vec{X}}=-\partial_{\mu}\Sigma\,, \tag{4.23}\]
as expected (\(Q^{\prime}_{\mu}\) here denotes the \(U(1)\) transformed connection).
Notice that in the expression for the coset element \(\Lambda\) (4.3) we have chosen the gauge \(\phi=0\), where \(\phi\) represents the variable parametrizing local \(\mathrm{U}(1)\) gauge symmetry. In order to maintain the gauge (\(\phi=0\)), a left action on \(\Lambda\in\mathcal{M}\) by an \(\mathrm{SO}(2,l)\) transformation should be compensated by a right action of a \(\mathrm{SO}(2)\times\mathrm{SO}(l)\) transformation, i.e.
\[\Lambda\to\Lambda^{\prime}=R\Lambda U^{-1},\ U\in\mathrm{SO}(2)\times\mathrm{ SO}(l)\,, \tag{4.24}\]
which leads to
\[\begin{split}\frac{1}{\sqrt{\vec{X}^{\dagger}\eta\vec{X}}}R \left(\mathrm{Re}\left(\vec{X}\right)\;\mathrm{Im}\left(\vec{X}\right)\right) &=\frac{1}{\sqrt{\vec{Y}^{\dagger}\eta\vec{Y}}}\left( \mathrm{Re}\left(\vec{Y}\right)\;\mathrm{Im}\left(\vec{Y}\right)\right) \begin{pmatrix}\cos\Sigma&-\sin\Sigma\\ \sin\Sigma&\cos\Sigma\end{pmatrix}\\ &=\frac{1}{\sqrt{\vec{Y}^{\dagger}\eta\vec{Y}}}\left(\mathrm{Re} \left(\vec{Y}e^{-i\Sigma}\right)\;\mathrm{Im}\left(\vec{Y}e^{-i\Sigma}\right) \right)\,,\end{split} \tag{4.25}\]
where the complex vector \(\vec{Y}\) parametrizes the new coset representative \(\Lambda^{\prime}\). The equation is obviously equivalent to
\[\frac{1}{\sqrt{\vec{X}^{\dagger}\eta\vec{X}}}R\vec{X}=\frac{1}{\sqrt{\vec{Y}^ {\dagger}\eta\vec{Y}}}\vec{Y}e^{-i\Sigma}\,. \tag{4.26}\]
In the next subsection we will proceed to formally solve this equation and obtain an analytic expression for \(\Sigma=\Sigma(R,\vec{X})\). Before doing so, we should recall that in the familiar case of \(\mathrm{SL}(2,\mathbb{R})/\mathrm{U}(1)\), the compensating \(\mathrm{U}(1)\) transformation (the phase factor) is given by (1.5) in terms of the modular variable \(\tau\), which lives in the complex upper-half plane \(\mathbb{H}\). As we shall see, the relation between the \(\mathrm{U}(1)\) anomaly and modular variables is universal.
### Compensating \(\mathrm{U}(1)\) transformation
We start by recalling that the vector \(\vec{X}\), which lives in the projective space, satisfies the constraints
\[\vec{X}^{T}\eta\vec{X}=0,\quad\vec{X}^{\dagger}\eta\vec{X}>0\,. \tag{4.27}\]
This matches the condition (3.12) on the generalized upper-half plane for the group \(\mathrm{O}^{+}(2,l;\mathbb{R})\) (see section 3.2). The Calabi-Vesentini coordinates (4.13) are not very convenient for solving the equation (4.26) and determining \(\Sigma(R,\vec{X})\). Instead, we should rotate to the reference frame with basis already discussed in section 3, and notably use matrices \(M\in\mathrm{O}^{+}(2,l;\mathbb{R})_{S}\). The subscript here emphasizes that the orthogonal group is defined with respect to a metric \(S\). This definition applies throughout our discussion, and we shall often omit the subscript.
Since \(\hat{S}\), defining the metric \(S\) introduced in (3.19), is a symmetric positive-definite real matrix, there must exist an orthogonal matrix \(\hat{P}\) such that \(\hat{P}\hat{S}\hat{P}^{T}=\hat{V}\), where \(\hat{V}\) is a diagonal matrix with positive diagonal elements. One can define the square root of the inverse, \(\sqrt{\hat{V}^{-1}}\), such that
\[\mathbb{1}=\hat{Q}\hat{S}\hat{Q}^{T}\,,\quad\hat{Q}=\sqrt{\hat{V}^{-1}}\hat{P}\,. \tag{4.28}\]
There exists an invertible matrix \(U\) such that \(USU^{T}=\eta\); explicitly, we have
\[U=\begin{pmatrix}\frac{1}{\sqrt{2}}J&\frac{1}{\sqrt{2}}\mathbb{1}_{2}\\ &\hat{Q}&\\ -\frac{1}{\sqrt{2}}J&\frac{1}{\sqrt{2}}\mathbb{1}_{2}\end{pmatrix},\quad J= \begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad UU^{T}=\begin{pmatrix}\mathbb{1}_{2}&\\ &\hat{V}\\ &&\mathbb{1}_{2}\end{pmatrix}\,. \tag{4.29}\]
Inserting \(U\) into the equation (4.26) we have
\[\frac{1}{\sqrt{\vec{Z}^{\dagger}S\vec{Z}}}M\vec{Z}=\frac{1}{\sqrt{\vec{W}^{ \dagger}S\vec{W}}}\vec{W}e^{-i\Sigma},\quad\vec{Z}=U^{T}\vec{X},\quad\vec{W}=U ^{T}\vec{Y}\quad M=U^{T}R(U^{T})^{-1}\,. \tag{4.30}\]
It is not difficult to verify that \(M\in\mathrm{O}^{+}(2,l;\mathbb{R})_{S}\). To further demonstrate that such a choice of coordinates would be realized as the generalized upper-half plane, we explicitly expand the equation and derive the constraints satisfied by \(\vec{Z}\)[38]. After the rotation we have
\[\vec{Z}=U^{T}\vec{X}=\begin{pmatrix}\frac{-i}{2\sqrt{2}}(1-y^{2})-\frac{1}{ \sqrt{2}}y_{s+2}\\ \frac{1}{2\sqrt{2}}(1+y^{2})-\frac{1}{\sqrt{2}}y_{s+1}\\ \hat{Q}^{T}\vec{y}_{s}\\ \frac{1}{2\sqrt{2}}(1+y^{2})+\frac{1}{\sqrt{2}}y_{s+1}\\ \frac{i}{2\sqrt{2}}(1-y^{2})+\frac{1}{\sqrt{2}}y_{s+2}\end{pmatrix}=\begin{pmatrix} \frac{\beta_{0}}{\beta_{1}}\\ \vdots\\ \beta_{s+2}\\ \tilde{\beta}_{s+3}\end{pmatrix}\,, \tag{4.31}\]
satisfying the constraint
\[\begin{cases}\vec{Z}^{T}S\vec{Z}=0&\Rightarrow&2\beta_{0}\beta_{1}+2\beta_{s +2}\beta_{s+3}-\vec{\beta}^{T}_{s}\hat{S}\vec{\beta}_{s}=0,\\ \vec{Z}^{\dagger}S_{1}\vec{Z}>0&\Rightarrow&\overline{\beta_{0}}\beta_{1}+ \overline{\beta_{1}}\beta_{0}-\vec{\beta}^{\dagger}_{s}\hat{S}\vec{\beta}_{s }+\overline{\beta_{s+2}}\beta_{s+3}+\beta_{s+2}\overline{\beta_{s+3}}>0\,,\end{cases} \tag{4.32}\]
where \(\vec{\beta}_{s}\) is the vector with components \((\beta_{2},\beta_{3},\dots,\beta_{s+1})\). First, let us verify that \(\beta_{s+3}\neq 0\). Indeed, assuming \(\beta_{s+3}=0\) would yield
\[\begin{cases}2\beta_{0}\beta_{1}-\vec{\beta}^{T}_{s}\hat{S}\vec{\beta}_{s}=0, \\ \overline{\beta_{0}}\beta_{1}+\overline{\beta_{1}}\beta_{0}-\vec{\beta}^{ \dagger}_{s}\hat{S}\vec{\beta}_{s}>0.\end{cases} \tag{4.33}\]
However, since \(\hat{S}\) is positive definite, the Cauchy-Schwarz inequality gives
\[|2\beta_{0}\beta_{1}|=|\vec{\beta}^{T}_{s}\hat{S}\vec{\beta}_{s}|\leq\vec{ \beta}^{\dagger}_{s}\hat{S}\vec{\beta}_{s}<\overline{\beta_{0}}\beta_{1}+ \overline{\beta_{1}}\beta_{0}\,. \tag{4.34}\]
Since both sides of this inequality are positive, we can square without changing its direction:
\[4\overline{\beta_{0}}\beta_{0}\overline{\beta_{1}}\beta_{1}<\left(\overline{ \beta_{0}}\beta_{1}\right)^{2}+\left(\overline{\beta_{1}}\beta_{0}\right)^{2} +2\overline{\beta_{0}}\beta_{0}\overline{\beta_{1}}\beta_{1}\quad\Longleftrightarrow \quad\left(\mathrm{Im}\,\overline{\beta_{0}}\beta_{1}\right)^{2}<0\,, \tag{4.35}\]
leading to a contradiction. Thus \(\beta_{s+3}\neq 0\) and we can safely normalize the vector \(\vec{Z}\) by dividing by its final component,
\[\vec{Z}=\alpha(Z)\begin{pmatrix}-q_{0}(Z)\\ Z\\ 1\end{pmatrix}=\alpha(Z)Z_{L},\,\alpha(Z)=\beta_{s+3},\,Z\in\mathbb{C}^{s+2},\,Z_{j}=\frac{\beta_{j}}{\beta_{s+3}},\,j=1,\ldots,s+2. \tag{4.36}\]
Here \(q_{0}(Z)=Z^{T}S_{0}Z/2\) as defined in section 3.2. With the definition of the quadratic form \(q(Z_{L})=\frac{1}{2}Z_{L}^{T}SZ_{L}\) and \((A,B)=q(A+B)-q(A)-q(B)\), we can rewrite the constraints (4.32) of \(Z_{L}\) as
\[(Z_{L},Z_{L})=0\,,\quad\big{(}Z_{L},\overline{Z_{L}}\big{)}>0\,, \tag{4.37}\]
so we conclude that \(Z_{L}\in\mathcal{K}\) defined by equation (3.12). Without loss of generality we assume that \(Z_{L}\in\mathcal{K}^{+}\), and one can check that the only constraint on the range of \(Z\) is given by \(q_{0}\left(\text{Im}(Z)\right)>0\). If we assume \(\text{Im}(Z)\) lives in the future light cone of the Minkowski space, \(Z\) indeed lives in the generalized upper-half plane \(\mathbb{H}_{l}\). With this setup we can rewrite the equation (4.30) as
\[\frac{e^{i\hat{\phi}(Z)}}{\sqrt{Z_{L}^{\dagger}SZ_{L}}}MZ_{L}=\frac{e^{i\hat{ \phi}(W)}}{\sqrt{W_{L}^{\dagger}SW_{L}}}W_{L}e^{-i\Sigma}, \tag{4.38}\]
where \(Z_{L}=\left(-q_{0}(Z),Z,1\right)^{T}\), \(W_{L}=\left(-q_{0}(W),W,1\right)^{T}\), and
\[e^{i\hat{\phi}(Z)}=\frac{\alpha(Z)}{|\alpha(Z)|}\,,\quad e^{i\hat{\phi}(W)}=\frac{\alpha(W)}{|\alpha(W)|}\,. \tag{4.39}\]
Recalling the discussion of the action of the orthogonal group on the generalized upper-half plane (equations (3.25) and (3.26)), we conclude that \(W=M\langle Z\rangle\) and
\[e^{-i\Sigma(M,Z)}=e^{i\hat{\phi}(Z)-i\hat{\phi}(W)}\frac{\sqrt{W_{L}^{ \dagger}SW_{L}}}{\sqrt{Z_{L}^{\dagger}SZ_{L}}}\left(-\gamma q_{0}(Z)+d^{T}Z+ \delta\right)\,. \tag{4.40}\]
Recall that
\[MZ_{L}=\begin{pmatrix}-\alpha q_{0}(Z)+a^{T}Z+\beta\\ -bq_{0}(Z)+PZ+c\\ -\gamma q_{0}(Z)+d^{T}Z+\delta\end{pmatrix}=\left(-\gamma q_{0}(Z)+d^{T}Z+ \delta\right)W_{L}\,, \tag{4.41}\]
with the property that the real orthogonal transformation doesn't change the norm, i.e. \(\sqrt{Z_{L}^{\dagger}SZ_{L}}=\sqrt{\left(MZ_{L}\right)^{\dagger}S\left(MZ_{L} \right)}\), we conclude that
\[e^{-i\Sigma(M,Z)}=e^{i\hat{\phi}(Z)-i\hat{\phi}(W)}\frac{-\gamma q_{0}(Z)+d^{T }Z+\delta}{|-\gamma q_{0}(Z)+d^{T}Z+\delta|}=e^{i\hat{\phi}(Z)-i\hat{\phi}(W)} \frac{j(M,Z)}{|j(M,Z)|}\,. \tag{4.42}\]
By choosing the specific gauge, the compensating U(1) transformation is given by
\[e^{-i\Sigma(M,Z)}=\frac{j(M,Z)}{|j(M,Z)|}\,. \tag{4.43}\]
This is the direct generalization of the compensating U(1) transformation for \(\mathrm{SL}(2,\mathbb{R})/\mathrm{U}(1)\) given in (1.5) to the generalized upper-half plane \(\mathbb{H}_{l}\).
## 5 Constructing the counterterm
In this section we shall examine the anomaly cancellation for \(l\geq 3\), while leaving the treatment of \(l=2\) to section 6. As already discussed at the beginning of section 3, eight-dimensional \(\mathcal{N}=1\) theories suffer from a composite U(1) anomaly. The anomalous phase arising in the local U(1) gauge transformation (\(\phi\to\phi+\Sigma\)) is
\[\Delta_{G}=-\int\Sigma X_{8}(R,\mathcal{F})\,. \tag{5.1}\]
A direct way to cancel the anomalous phase is to add the local counterterm
\[\mathcal{S}_{\phi}=\int\phi X_{8}(R,\mathcal{F})\,, \tag{5.2}\]
where \(\phi\) parametrizes the local U(1) gauge symmetry. When we apply the U(1) gauge transformation \(\phi\to\phi+\Sigma\), \(\delta\mathcal{S}_{\phi}\) can cancel the anomalous phase above. The drawback is that the local counterterm is not invariant under \(\mathrm{SO}(2,l;\mathbb{R})\) symmetry transformations, as shown in equation (4.43):
\[\delta_{M}\phi=-\arg\left(j(M,Z)\right)\,. \tag{5.3}\]
Here \(\delta_{M}\) indicates an \(\mathrm{SO}(2,l;\mathbb{R})\) gauge transformation with respect to the element \(M\). Since the compensating U(1) transformation is the argument of the automorphy factor, it is natural to construct the counterterm using modular forms on the generalized upper-half plane, and it takes the form
\[\mathcal{S}=\frac{1}{r}\int\arg(\Psi(Z))X_{8}(R,\mathcal{F})\,, \tag{5.4}\]
where \(\Psi(Z)\) satisfies the modular property (3.32). We have already seen that the continuous symmetry group should be discretized, since there are no suitable functions that can maintain the continuous symmetry. As mentioned in section 3.3, the analogue of \(\mathrm{SL}(2;\mathbb{Z})\) in the \(\mathcal{N}=2\) case is the discrete modular group \(\mathrm{S}\Gamma(L)\) with respect to the lattice \(L\) of signature \((2,l)\). Such a discrete lattice \(L\) will be the root lattice of the gauge group \(G\) (or contain sublattices which are the root lattices of the gauge group \(G\)). Hence the anomaly cancellation may lead to nontrivial restrictions on the lattices \(L\) (on the gauge groups \(G\)).
As discussed in section 3, the Borcherds products provide the necessary tools for constructing \(\Psi(Z)\) with the requisite properties to cancel the anomaly (5.1). As long as a nearly holomorphic modular form of weight \(1-l/2\) with respect to the lattice \(L\) can be found, one can obtain the modular form \(\Psi(Z)\) on the generalized upper-half plane of weight \(r=c(0,0)/2\). However, the counterterm needs to satisfy some natural physical conditions leading to constraints that will be outlined below.9
Footnote 9: Here we recall another time that throughout this discussion we have taken the lattice \(L\) to be even.
\(\bullet\): **The character of the lattice \(L\) (the modular group \(\mathrm{S}\Gamma(L)\))**
Since the modular form \(\Psi(Z)\) satisfies the modular property (3.32), where the weight \(r=c(0,0)/2\), the counterterm is transformed under the \(\mathrm{S}\Gamma(L)\) transformation as
\[\delta_{M}\mathcal{S}=-\delta_{M}\mathcal{S}_{\phi}+\arg\chi(M)\int X_{8}(R, \mathcal{F})\,. \tag{5.5}\]
In order to completely cancel the anomaly without imposing extra conditions on the background manifold, such as integrality of \(\int X_{8}(R,\mathcal{F})\), \(\chi(M)\equiv 1\) for arbitrary \(M\in\mathrm{S}\Gamma(L)\) is required. To the best of our knowledge, the necessary and sufficient condition for the character to be trivial is not known.10 A sufficient condition is known (Theorem B.4). Moreover, it cannot be weakened too much (see the counterexample (Example 1.4) in [39]). More details are given in appendix B. Notably, any lattice that contains an \(A_{2}\) sublattice has \(\chi(M)\equiv 1\).
Footnote 10: In general \(\chi(M)\) is called the multiplier system and is different from character if the weight of the Borcherds product is not integral. Through suitable normalization we can always obtain the Borcherds product of integral weights so we will not consider the cases of rational weights.
* **Rational quadratic divisor (RQD)**
The counterterm is obviously ill-defined at the zeroes or poles of \(\Psi(Z)\). Fortunately, through the Borcherds product (Theorem B.2) we know that all the zeroes and poles lie in the rational quadratic divisors (Definition B.6). To circumvent this issue, one could have required the Borcherds product \(\Psi(Z)\) to be well-defined and have no zeroes on the entire generalized upper-half plane, which is equivalent to requiring \(c(\beta,m)\equiv 0\) if \(m<0\) for all \(m\in\mathbb{Z}+q(\beta)\) and \(\beta\in L^{\prime}/L\). This would in turn mean that the nearly holomorphic modular form \(f(\tau)\) has zero principal part, so \(f(\tau)\) is actually a holomorphic modular form of \(\mathrm{SL}(2,\mathbb{Z})\). However, no nonzero holomorphic modular form of non-positive weight (\(1-l/2\leq 0\) for \(l\geq 2\)) exists, and thus the counterterm will always have ill-defined points in moduli space.
As originally explained in the context of 4D \(\mathcal{N}=2\) theories [25], these points in moduli space, corresponding to symmetry enhancements, are contained in the rational quadratic divisors. This is the set of the orthogonal subspaces determined by the negative-norm vectors \(\ell\in L^{\prime}\) such that the reflections orthogonal to them are symmetries of the lattice. Viewing a general even lattice \(L\) as the momentum lattice excludes some rational quadratic divisors, the reason being that \(\ell\) might not be in \(L\). This means that in the general expansion of the divisors in terms of RQDs (3.36), we should take \(c(\beta,m)=0\) if \(\beta\neq 0\) (\(\beta\in L^{\prime}/L\)).11 By using the shorthand notation \(H(m)=\frac{1}{2}H(0,m)\) (having suppressed the
vector index), the divisor of modular form \(\Psi(Z)\) can be written as
\[(\Psi)=\sum_{m\in\mathbb{Z},\,m<0}c(m)H(m)\,. \tag{5.6}\]
Since the Borcherds product comes from the lifting of the nearly holomorphic modular form \(f(\tau)\), such an \(f(\tau)\) exists if and only if it satisfies Theorem B.3, i.e. the coefficients in the principal part (3.33) need to satisfy
\[\sum_{m\in\mathbb{Z},\,m<0}c(m)a(-m)=0\,. \tag{5.7}\]
Here \(a(-m)=a_{0,-m}\) is the functional that maps the cusp form \(g\in S_{\kappa,L}\) into its \((0,-m)\) Fourier coefficient and \(S_{\kappa,L}\) is the space of the cusp forms of weight \(\kappa=1+l/2\) for the dual Weil representation (more details can be found in the discussion of Theorem B.3).
The simplest solution to this condition is when the cusp form space \(S_{\kappa,L}\) is trivial (there exists no nonzero cusp form of weight \(1+l/2\) of the dual Weil representation). Lattices with this property exist and are called simple lattices. As shown in [40], there are only 15 simple even lattices of signature \((2,l)\), \(l\geq 4\), of square-free level12 up to isomorphism (see Theorem 2 in [40]). For signature \((2,18)\), only the even unimodular lattice \(\Pi_{2,18}\cong\Pi_{1,1}\oplus\Pi_{1,1}\oplus E_{8}(-1)\oplus E_{8}(-1)\) is simple, and for \((2,10)\) only the even unimodular \(\Pi_{2,10}\cong\Pi_{1,1}\oplus\Pi_{1,1}\oplus E_{8}(-1)\) and \(\Pi_{1,1}\oplus\Pi_{1,1}(2)\oplus E_{8}(-1)\) are simple.
Footnote 12: The level of the lattice \(L\) is a positive integer \(p\) such that \(p=\min\{n\in\mathbb{N}|\,nq(\gamma)\in\mathbb{Z}\text{ for all }\gamma\in L^{\prime}\}\).
* **Reflective lattices**
Requiring that the \((2,l)\) lattice \(L\) is reflective is sufficient for finding a solution of (5.7). This condition will be discussed in detail in the next subsection. At this point, we only mention that the reflective symmetries of the lattice are directly linked to the enhancement of the gauge symmetry. There is a finite number of such lattices, and their rank is bounded by \(l=26\). There is a complete classification of reflective lattices of prime level. All these lattices are of even rank and hence should be considered. A complete classification for any level is available for a particular subclass, the 2-reflective lattices that have norm \(-2\) roots.13 Here we find lattices of odd rank, which should be discarded due to the global anomalies. Footnote 13: A bibliographical note: In [41] all strongly reflective modular forms of singular weight on lattices of prime level were classified. A proof that there are only finitely many even lattices with \(l\geq 7\) which admit 2-reflective modular forms, and that the highest rank such lattice is the even unimodular lattice \(\Pi_{2,26}\), is given in [42]. These were subsequently classified in [43]. In [44], all possible reflective lattices of prime level were classified. In our discussion of reflective modular forms (see subsection 5.1), we will adopt the conventions of [44].
Recall that in ten dimensions, anomaly cancellation allows not only for rank-16 theories (with the unimodular lattice \(E_{8}\oplus E_{8}\)), but also for theories with gauge groups \(E_{8}\times\mathrm{U}(1)^{248}\) and \(\mathrm{U}(1)^{496}\). Simple reduction of these theories would produce 8D theories with \(l=258\) and \(l=498\) respectively. The fact that the condition of reflectivity bounds the rank of the lattice to be at most 24 tells us that for these no suitable 8D counterterm can be found (even if they admit a 10D Green-Schwarz term). So it seems these theories can be ruled out purely based on anomaly cancellation, and without swampland considerations.
\(\bullet\): **Counterterms as obstructions to ten-dimensional lifts**
Given the form of a reflective lattice (5.10) it is natural to ask about possible decompactifications to ten dimensions. If such decompactification is possible, i.e. a good "large volume limit" exists, the 8d theory can be considered consistent only if a lifting to the ten-dimensional \(E_{8}\oplus E_{8}\) lattice exists.14
Footnote 14: For \(L\cong\Pi_{1,1}\oplus\Pi_{1,1}\oplus\sum_{i}\hat{L}_{i}\), there always exists a straightforward lift in ten dimensions with \(L(10D)=\sum_{i}\hat{L}_{i}\). Other than for the Narain lattice, all these lifts can be discarded. The CHL lattice \(\Pi_{1,1}\oplus\Pi_{1,1}\oplus D_{8}(-1)\cong\Pi_{1,1}\oplus\Pi_{1,1}(2)\oplus E _{8}(-1)\) also allows for a lift to \(E_{8}\oplus E_{8}\). Very loosely, all lattices that would allow to have central charge \(c_{L}=18\) can be potentially liftable to ten dimensions
We have not done an exhaustive check on which reflective lattices can or cannot be lifted to an \(E_{8}\oplus E_{8}\) lattice in 10D. Any lattice with \(l>18\) clearly does not have such lifting. The rank 8 self-dual lattice \(\Pi_{1,1}\oplus\Pi_{1,1}\oplus E_{8}(-1)\) also does not have such lifting. For such lattices the anomaly cancellation can be validated only if they are "intrinsically eight-dimensional", i.e. if their counterterm obstructs the decompactification to 10D. As we shall see in section 6, we find such example in \(l=2\) case, where the two complex scalars parametrizing the coset cannot be identified with the moduli of a two-torus.15
Footnote 15: Similarly, the function \(F_{1}^{\rm int}\) that appears in the one-loop gravitational couplings in the \({\cal N}=2\) heterotic compactification with two vector multiplets also does not allow such identification [45].
### Reflective modular forms and reflective lattices
Let \(L\) be an even lattice of signature \((2,l)\) and let \(L^{\prime}\) be its dual. The level of \(L\) is the smallest positive integer \(N\) such that \(N(x,x)\in 2{\mathbb{Z}}\) for all \(x\in L^{\prime}\). The discriminant group of \(L\), denoted \(L^{\prime}/L\), can be decomposed into Jordan components, and we denote this decomposition by \(D_{L}\). The genus of \(L\) is the set of lattices which have the same signature and the same discriminant form (up to isomorphism) as \(L\). A holomorphic modular form for the modular group \(\Gamma(L)\) is called reflective if its zeroes are contained in the union of the rational quadratic divisors \(\ell^{\perp}\) associated to roots of \(L\), i.e. vectors \(\ell\) for which the reflection
\[\sigma_{\ell}:\alpha\longmapsto\alpha-\frac{2(\alpha,\ell)}{(\ell,\ell)}\ell\,, \quad\alpha\in L \tag{5.8}\]
belongs to \({\rm O}^{+}(L)\). A lattice is called reflective if it has a reflective modular form. A modular form is called symmetric if it is modular for \({\rm O}^{+}(L)\), and it is known that \(L\) is reflective if and only if \(L\) has a symmetric reflective modular form. Recall the definition of the modular group \(\Gamma(L)\),
\[\Gamma(L)={\rm O}^{+}(L\otimes{\mathbb{R}})\cap{\rm Ker}\left({\rm O}(L) \rightarrow{\rm O}(L^{\prime}/L)\right)\,, \tag{5.9}\]
\(\Gamma(L)={\rm O}^{+}(L)\) (\({\rm S}\Gamma(L)=\Gamma(L)\cap{\rm SO}(L)\)) is the largest modular symmetry group. It is reasonable to require that the local counterterms are constructed from symmetric modular forms, since we want to preserve the discrete symmetry maximally.
We consider the lattices of the same genus \(\Pi_{2,l}(p^{\epsilon_{p}l_{p}})\), where \(l\geq 3\),16 \(p\) is a prime number, \(\epsilon_{p}=-\) or \(+\), \(1\leq l_{p}\leq l/2+1\), and \(\epsilon_{p}\) is completely determined by \(l,p\) and \(l_{p}\). If two lattices of signature \((2,l)\) and prime level \(p\) have the same determinant then they are isomorphic. We refer the readers to [41] for more details. Let \(L\) be such a lattice. By [46], \(L\) can be represented as17
Footnote 16: As we shall see in section 6, the condition of reflectivity is required for the \(l=2\) case as well.
Footnote 17: For some lattices, such as CHL, both representations are possible. Of course the last factor in two different ways of representing the lattice will also be different.
\[\Pi_{1,1}\oplus\Pi_{1,1}(p)\oplus\hat{L}(-1)\quad\text{or}\quad\Pi_{1,1}\oplus\Pi_{1,1}\oplus\hat{L}(-1)\,, \tag{5.10}\]
where \(\Pi_{1,1}\) is a hyperbolic plane as defined above and \(\hat{L}\) is a positive definite lattice. A primitive vector \(v\in L\) is reflective if and only if \((v,v)=-2\), or \((v,v)=-2p\) and \(v/p\in L^{\prime}\). By [47] and the Eichler criterion (see e.g. [48]), all the vectors of norm \(-2\) in \(L\) are in the same \(\mathrm{O}^{+}(L)\)-orbit, and all reflective vectors of norm \(-2p\) in \(L\) are also in the same \(\mathrm{O}^{+}(L)\)-orbit. Therefore, for a symmetric reflective modular form, all \(2\)-reflective divisors (the rational quadratic divisors defined by the vectors \(v\) of norm \(-2\)) have the same multiplicity, which is denoted by \(c_{1}\). All \(2p\)-reflective divisors (the rational quadratic divisors defined by the vectors \(v\) of norm \(-2p\) with \(v/p\in L^{\prime}\)) have the same multiplicity, denoted by \(c_{p}\). A symmetric reflective modular form is called \(2\)-reflective (resp. \(2p\)-reflective) if \(c_{p}=0\) (resp. \(c_{1}=0\)). A lattice \(L\) is called \(2\)-reflective (resp. \(2p\)-reflective) if it has a \(2\)-reflective (resp. \(2p\)-reflective) modular form.
The positions of the zeroes and poles of the modular form \(\Psi(Z)\), where the counterterm (5.4) is ill-defined, should be interpreted as the symmetry enhancement points. These points correspond to the rational quadratic divisors, which are defined as the orthogonal subspaces with respect to some negative-norm vectors (roots of the lattices). The symmetry is enhanced due to the reflective symmetry of the lattice. Requiring that \(\Psi(Z)\) is a symmetric reflective modular form, and thus that the corresponding lattice \(L\) is reflective, ensures that the theory is well-defined and anomaly-free throughout the moduli space. This is a strong constraint on the lattice. As shown in [44], only \(55\) possible types of reflective lattices of genus \(\Pi_{2,l}(p^{\epsilon_{p}l_{p}})\) with \(1\leq l_{p}\leq 1+l/2\) exist for prime level \(p>1\). And only three even unimodular lattices (\(p=1\)), \(\Pi_{2,10}\), \(\Pi_{2,18}\) and \(\Pi_{2,26}\), are reflective. Among these lattices, only those with trivial character (a big majority) can provide suitable counterterms (5.4) and hence lead to theories that are anomaly-free. Further restrictions may be imposed by the consistency of the large volume limits. Since only the Narain and the CHL lattice pass the tests of full quantum consistency, all other lattices which lead to anomaly-free theories constitute the finite swampland of the eight-dimensional minimal supergravity.
### Examples of counterterms
Before turning to specific examples of counterterms, we point out that if we choose the divisor of \(\Psi(Z)\) to be a linear combination of some rational quadratic divisors, \(\Psi(Z)\) must coincide, up to normalization, with the function constructed from the Borcherds product. More precisely (see Theorem 1.2 in [49]), assuming that \(L\cong\Pi_{1,1}\oplus\Pi_{1,1}(N)\oplus\hat{L}(-1)\) for some
positive integer \(N\) and \(l\geq 3\), every meromorphic modular form \(F(Z)\) with respect to \(\Gamma(L)\) whose divisor is a linear combination of special divisors \(H(\beta,m)\) is (up to a non-zero constant factor) the Borcherds product \(\Psi(Z)\) of some \(f\in M^{!}_{1-l/2}\).
We can now discuss examples, which include two fully consistent 8D \(\mathcal{N}=1\) supergravities with \(l=18\) and \(l=10\).
#### \(\bullet\) Signature \((2,18)\)
As already mentioned, reflectivity imposes an upper bound \(l\leq 26\) on the rank of the gauge group. Moreover, for \(l>18\) there are very few reflective lattices with \(l\) even: the self-dual lattice \(\Pi_{2,26}\) and two lattices at level 2 and 3, \(\Pi_{2,22}(2)\) and \(\Pi_{2,20}(3)\) respectively. If these allow a decompactification limit, they can be ruled out.
The \(l=18\) case, in addition to the self-dual (Narain) lattice, includes five different level 2 reflective lattices. These five will necessarily have enhancement points corresponding to norm \(-4\) root vectors. Their modular forms cannot be decomposed into products of 2-reflective and 4-reflective forms, and they have no string theory realization.
For the theory obtained via compactification of 10D heterotic string on a two-torus [50], the momentum lattice structure is given by the Narain lattice
\[L=\Pi_{1,1}\oplus\Pi_{1,1}\oplus E_{8}(-1)\oplus E_{8}(-1)\,, \tag{5.11}\]
while the symmetry enhancement appears when \(p^{2}=-2\).18 The symmetry enhancement points are given by the rational quadratic divisor
Footnote 18: The full list of the allowed enhancements with the corresponding gauge algebras is worked out in [51; 11] with the help of the results of elliptic \(K3\) fibrations [52; 53].
\[H(-1)=\bigcup_{(v,v)=-2,\,v\in L}v^{\perp}\,, \tag{5.12}\]
requiring that the lattice admits a 2-reflective modular form (the lattice \(L\) is 2-reflective). This is the case for the even unimodular lattice \(\Pi_{2,18}\).
A weight 132 modular form \(\Psi_{(2,18)}(Z)\) can be obtained by applying the Borcherds product to the nearly holomorphic modular form [54]
\[f(\tau)=\frac{1728E_{4}}{E_{4}^{3}-E_{6}^{2}}(\tau)=\frac{1}{q}+264+8244q+139520q^{2}+\ldots\,, \tag{5.13}\]
where \(q=e^{2\pi i\tau}\) and \(E_{4},E_{6}\) are Eisenstein series with the constant term normalized to 1
\[\begin{split} E_{4}&=1+240\sum_{n=1}^{\infty}\frac{n^{3}q^{n}}{1-q^{n}}=1+240q+2160q^{2}+\ldots\,,\\ E_{6}&=1-504\sum_{n=1}^{\infty}\frac{n^{5}q^{n}}{1-q^{n}}=1-504q-16632q^{2}+\ldots\,.\end{split} \tag{5.14}\]
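As a quick cross-check of the expansion (5.13) (purely illustrative; the truncation order and helper functions below are our own), the first few Fourier coefficients can be reproduced with elementary series manipulations:

```python
# Check the first coefficients of f = 1728 E4 / (E4^3 - E6^2) = q^{-1} + 264 + ...
N = 6                                    # number of q-powers kept

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

def mul(a, b):                           # truncated product of two q-series
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

Delta = [(u - v) // 1728 for u, v in zip(mul(mul(E4, E4), E4), mul(E6, E6))]
D = Delta[1:] + [0]                      # Delta / q = 1 - 24 q + 252 q^2 - ...
inv = [1] + [0] * (N - 1)                # series inverse of Delta / q
for n in range(1, N):
    inv[n] = -sum(D[k] * inv[n - k] for k in range(1, n + 1))
print(mul(E4, inv)[:4])                  # -> [1, 264, 8244, 139520], coeff. of q^{n-1}
```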
From Theorem B.2 we know that the modular form \(\Psi_{(2,18)}(Z)\) is holomorphic (all the coefficients in the principal part are positive) and only has zeroes at the rational quadratic divisor \(H(-1)\). Moreover, since the lattice \(L\) is even unimodular, the character for the group \(\mathrm{SO}^{+}(L)\) must be trivial.
* **Signature \((2,10)\)** For \(l=10\), if the level of the lattice is prime, there are ten types of reflective lattices. The simplest of these is the self-dual lattice \(L=\Pi_{1,1}\oplus\Pi_{1,1}\oplus E_{8}(-1)\), which is 2-reflective. Requiring that the zeroes of \(\Psi(Z)\) are contained in the rational quadratic divisors defined by \((v,v)=-2\), we should look for a nearly holomorphic modular form of weight \(-4\) as an input into the Borcherds product. Such a function exists, \[f(\tau)=\frac{1}{q}+504+16404q+\ldots\,,\quad q=e^{2\pi i\tau}\,,\] (5.15) and the corresponding Borcherds product \(\Psi(Z)\) is of weight \(252\). Due to the unimodularity, the character for this lattice is, as required, trivial. Comparison of the possible gauge symmetry enhancements allowed by this lattice to [10] would exclude this lattice. Since all other lattices are at level \(p>1\), the enhancement points will correspond not only to short roots (vectors with norm \(-2\)), as for even unimodular lattices. Indeed, for reflective lattices roots are not only vectors with norm \(-2\) but also vectors \(v\) with norm \(-2p\) satisfying \((v,u)=0\mod 2\) for all vectors \(u\in L\). In fact, the last condition is equivalent to saying that \(v/p\) is in the dual lattice \(L^{\prime}\). The most interesting class is for \(p=2\). It contains three lattices, all of which have reflective vectors of norm \(-2\) and \(-4\) (\(p=2\)). The CHL lattice [55] of the form \[L=\Pi_{1,1}\oplus\Pi_{1,1}(2)\oplus E_{8}(-1)\cong\Pi_{1,1}\oplus\Pi_{1,1}\oplus D_{8}(-1)\] (5.16) is among these three. The full list of enhancements and the allowed gauge algebras in the 8D CHL theories [56; 57] is worked out in [12; 58]. By Theorem 4.1 and Theorem 4.2 in [43], \(L\) admits a 2-reflective modular form \(\Psi_{1}\) of weight \(124\) and a 4-reflective modular form \(\Psi_{2}\) of weight \(4\). These two modular forms are both holomorphic and only have zeros respectively on \[(\Psi_{1})=H(-1)=\bigcup_{(v,v)=-2,\,v\in L}v^{\perp}\quad\text{and}\quad(\Psi_{2})=\bigcup_{(v,v)=-4,\,v/2\in L^{\prime}}v^{\perp}\,.\] (5.17) The lattice satisfies the condition in Theorem B.4, thus the character of the modular group is trivial. Hence an anomaly-cancelling counterterm can be constructed by direct multiplication of these functions, \(\Psi_{(2,10)}(Z)=\Psi_{1}(Z)\Psi_{2}(Z)\). These modular forms are closely related to many interesting results about Enriques surfaces [59; 60; 61; 62]. We define the lattice \(L_{E}=\Pi_{1,1}\oplus\Pi_{1,1}(2)\oplus E_{8}(-2)\). Notice that \[L^{\prime}_{E}(2)\cong\Pi_{1,1}\oplus\Pi_{1,1}(2)\oplus E_{8}(-1)\cong L\,,\] (5.18)
the orthogonal group has the following relation
\[\mathrm{O}^{+}(L_{E})\cong\mathrm{O}^{+}(L^{\prime}_{E})\cong\mathrm{O}^{+}(L^{ \prime}_{E}(2))\cong\mathrm{O}^{+}(L)\,. \tag{5.19}\]
Hence the (reflective) modular forms with respect to the group \(\mathrm{O}^{+}(L)\) correspond to those of the group \(\mathrm{O}^{+}(L_{E})\). Note that the reflective vectors defined above are in the lattice \(L\). For a modular form with respect to the group \(\mathrm{O}^{+}(L_{E})\) (lattice \(L_{E}\)), we should check how its reflective vectors and RQDs are related to those of the lattice \(L\). Under the transformation \(L^{\prime}_{E}\to L^{\prime}_{E}(2)\) (one can think of each vector as being rescaled by \(\sqrt{2}\)), the 2-reflective vectors of \(L_{E}\) and the 4-reflective vectors (\(v_{E}\in L_{E}\), \(v_{E}/2\in L^{\prime}_{E}\)) transform into the 4-reflective vectors and the 2-reflective vectors of \(L\) respectively. This correspondence can be summarized as follows:
\[\Psi_{1}(Z):\,\text{2-reflective for lattice $L$} \longrightarrow\text{4-reflective for lattice $L_{E}$}\,,\] \[\Psi_{2}(Z):\,\text{4-reflective for lattice $L$} \longrightarrow\text{2-reflective for lattice $L_{E}$}\,.\]
\(\Psi_{2}(Z)\) is called the Borcherds-Enriques modular form \(\Phi_{4}\), first found in [59] and reconstructed as an example in [26] (see Example 13.7). The discriminant group \(L^{\prime}_{E}/L_{E}\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\), so we can use two bits to label the elements of \(L^{\prime}_{E}/L_{E}\). There exists a nearly holomorphic modular form of weight \(-4\), written as \(f(\tau)=\sum_{\gamma}\mathfrak{e}_{\gamma}f_{\gamma}\), whose components \(f_{\gamma}\) are
\[\begin{split}& f_{00}(\tau)=-f_{10}(\tau)=-f_{01}(\tau)=8\eta^{8}(2\tau)/\eta^{16}(\tau)=8+128q+1152q^{2}+\dots\,,\\ & f_{11}(\tau)=8\eta^{8}(2\tau)/\eta^{16}(\tau)+\eta^{8}(\tau/2)/\eta^{16}(\tau)=q^{-1/2}+36q^{1/2}+402q^{3/2}+\dots\,.\end{split} \tag{5.20}\]
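The integer-power expansion quoted in (5.20) can be cross-checked with a short series computation (illustrative only; we expand just the eta quotient \(8\eta^{8}(2\tau)/\eta^{16}(\tau)\), and the helper functions are our own):

```python
# Check 8 eta(2 tau)^8 / eta(tau)^16 = 8 + 128 q + 1152 q^2 + ...
N = 8

def pochhammer_pow(step, power):
    """q-expansion of prod_{n>=1} (1 - q^{step*n})^power for non-negative power."""
    s = [0] * (N + 1); s[0] = 1
    for n in range(1, N // step + 1):
        for _ in range(power):
            s = [s[k] - (s[k - step * n] if k >= step * n else 0) for k in range(N + 1)]
    return s

def series_div(a, b):
    """Coefficients of a(q)/b(q), assuming b[0] = 1."""
    c = [0] * (N + 1)
    for k in range(N + 1):
        c[k] = a[k] - sum(b[j] * c[k - j] for j in range(1, k + 1))
    return c

num = pochhammer_pow(2, 8)               # prod (1 - q^{2n})^8
den = pochhammer_pow(1, 16)              # prod (1 - q^n)^16
print([8 * c for c in series_div(num, den)][:3])   # -> [8, 128, 1152]
```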
By the Borcherds product (Theorem B.1), the weight of \(\Psi_{2}(Z)\) is 4 (\(c(0,0)=8\)) and it is holomorphic (the coefficient in the principal part is positive). There is only one term in the principal part (\(q^{-1/2}\)), so the RQD is exactly the set of 2-reflective vectors \(v_{E}^{2}=-2\). Hence \(\Psi_{2}\) is a 4-reflective modular form of weight 4 with respect to the lattice \(L\), as we expected. Another equivalent way to construct \(\Psi_{2}(Z)\) is to use the Jacobi lifting [60]. The construction of the weight 124 2-reflective modular form \(\Psi_{1}(Z)\) is more complicated, and we refer to Lemma 5.4 in [62] for more detailed explanations.
For the two other lattices in this class (\(p=2\)), the \(D_{8}\) factor of the CHL lattice in (5.16) is replaced by \(D_{4}\oplus D_{4}\) and \(D^{\prime}_{8}\) respectively.19 The counterterms can again be obtained by a direct multiplication of two different modular forms.20 In these cases, the 2-reflective modular forms are of weight 60 and 28 respectively, and the 4-reflective modular forms are of weight 12 and 28 respectively.
Footnote 19: Notice that further reduction of the CHL strings to seven and six dimensions yields \(\Pi_{1,1}\oplus\Pi_{1,1}\oplus\Pi_{1,1}\oplus D_{4}\oplus D_{4}(-1)\) and \(\Pi_{1,1}\oplus\Pi_{1,1}\oplus\Pi_{1,1}\oplus\Pi_{1,1}\oplus D^{\prime}_{8}(-2)\) lattices respectively [55].
Footnote 20: For any other prime \(p\) the lattices of signature \((2,10)\) do not admit modular forms that can be decomposed into a product of 2-reflective and \(2p\)-reflective modular forms (Theorem 4.3 in [43]). Note that there are four \(l=10\) lattices which are 2-reflective and have non-prime level \(p\)[44]. Only the CHL lattice yields the gauge symmetry enhancement consistent with the swampland considerations.
## 6 Anomaly cancellation for \(l=2\)
In the previous discussion we mainly focused on the case \(l\geq 3\). This restriction allowed us to obtain the modular forms suitable for building counterterms using the lattice decomposition \(L=\Pi_{1,1}\oplus\Pi_{1,1}\oplus\hat{L}(-1)\) and Theorem B.1 to ensure the triviality of the character. The case \(l=2\), where the Borcherds product does not apply in general, needs special considerations.
For the \(l=2\) case, we can take advantage of the two-to-one group homomorphism from \(\mathrm{SL}(2,\mathbb{R})\times\mathrm{SL}(2,\mathbb{R})\) to \(\mathrm{SO}(2,2;\mathbb{R})\), and consider21
Footnote 21: Notice that only one \(\mathrm{SO}(2)\) factor is anomalous, and this identification requires rotation of the two factors in the denominator. As we shall see the moduli \(Z_{1}\) and \(Z_{2}\) cannot be identified as moduli of a two-torus.
\[\mathcal{M}_{l=2}=\frac{\mathrm{SO}(2,2)}{\mathrm{SO}(2)\times\mathrm{SO}(2)} \cong\frac{\mathrm{PSL}(2,\mathbb{R})\times\mathrm{PSL}(2,\mathbb{R})}{\mathrm{ U}(1)\times\mathrm{U}(1)}\,, \tag{6.1}\]
which can be parametrized by a pair of complex scalars \(Z_{1}\) and \(Z_{2}\) with modular-invariant kinetic terms 22
Footnote 22: Notice that in (6.1) only one \(\mathrm{SO}(2)\) factor is anomalous, and this identification requires rotation of the two factors in the denominator. As we shall see the moduli \(Z_{1}\) and \(Z_{2}\) cannot be identified as moduli of a two-torus.
\[\mathcal{L}_{\mathrm{scalars}}=\frac{1}{2}\left(\frac{\partial_{\mu}Z_{1} \partial^{\mu}\overline{Z}_{1}}{|\operatorname{Im}Z_{1}|^{2}}+\frac{\partial_ {\mu}Z_{2}\partial^{\mu}\overline{Z}_{2}}{|\operatorname{Im}Z_{2}|^{2}}\right)\,. \tag{6.2}\]
We can once more use the canonical way of obtaining the generalized upper-half plane developed in section 3.2. When \(l=2\), the matrix \(\hat{S}\) collapses (it has size zero). The discretized structure has not yet emerged, so we can arbitrarily choose a quadratic form of signature \((2,2)\), since all symmetric bilinear forms of the same signature are equivalent on the vector space \(V=L\otimes\mathbb{R}\). A convenient choice is
\[S_{0}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\,,\quad S=\begin{pmatrix}&&1\\ &S_{0}&\\ 1&&\end{pmatrix}\,, \tag{6.3}\]
with the quadratic forms \(q_{0}\) and \(q\) defined with respect to \(S_{0}\) and \(S\) respectively. Recalling the definition of \(\mathbb{H}_{l}\) (3.18), with \(l=2\) we have
\[\mathbb{H}_{l=2}=\{Z=X+iY\in L_{0}\otimes\mathbb{C}|X,Y\in L_{0}\otimes \mathbb{R},Y\in P\}\, \tag{6.4}\]
where \(P\) denotes the future light cone of the Minkowski space \(L_{0}\otimes\mathbb{R}\) with signature \((1,1)\). Denoting \(Z=\left(Z_{1},Z_{2}\right)^{T}\), we have \(q_{0}(Y)=\operatorname{Im}\left(Z_{1}\right)\operatorname{Im}\left(Z_{2} \right)>0\). \(P\) then picks the connected component that \(\operatorname{Im}(Z_{1})>0\) and \(\operatorname{Im}(Z_{2})>0\), i.e. the generalized upper-half plane is exactly the direct product of the usual upper-half planes, \(\mathbb{H}_{l=2}\cong\mathbb{H}\times\mathbb{H}\).
### The kinetic term
We will start the discussion of the kinetic terms (6.2) by recalling the relation between \((Z_{1},Z_{2})\) and \((y_{1},y_{2})\), (4.31) and (4.36),
\[\begin{pmatrix}Z_{1}\\ Z_{2}\end{pmatrix}=\begin{pmatrix}\frac{1+y^{2}-2y_{1}}{i(1-y^{2})+2y_{2}}\\ \frac{1+y^{2}+2y_{1}}{i(1-y^{2})+2y_{2}}\end{pmatrix}=\begin{pmatrix}i\frac{y_ {1}-iy_{2}-1}{y_{1}-iy_{2}+1}\\ i\frac{y_{1}+iy_{2}+1}{y_{1}+iy_{2}-1}\end{pmatrix}\,. \tag{6.5}\]
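The equality of the two forms in (6.5) is a purely algebraic identity, which can be confirmed with a short symbolic check (illustrative only):

```python
# Verify that the two expressions for (Z1, Z2) in (6.5) agree identically.
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
ysq = y1**2 + y2**2

Z1_a = (1 + ysq - 2 * y1) / (sp.I * (1 - ysq) + 2 * y2)
Z1_b = sp.I * (y1 - sp.I * y2 - 1) / (y1 - sp.I * y2 + 1)
Z2_a = (1 + ysq + 2 * y1) / (sp.I * (1 - ysq) + 2 * y2)
Z2_b = sp.I * (y1 + sp.I * y2 + 1) / (y1 + sp.I * y2 - 1)

assert sp.simplify(Z1_a - Z1_b) == 0 and sp.simplify(Z2_a - Z2_b) == 0
print("both forms of (6.5) agree")
```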
A specific form of \(\Lambda\) should be considered first. A benefit of the \(l=2\) case is that \(q\) in the element \(\Lambda\) (4.3) is a square matrix. We further assume that \(q\) is invertible, i.e.23
Footnote 23: Later we will see that the determinant of \(q\) in the denominator cancels out. Hence the result derived under this assumption can be analytically continued to the set of points in the domain where \(\det q=0\).
\[\det q=\frac{4}{1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}}\left[\operatorname{Re}( y_{1})\operatorname{Im}(y_{2})-\operatorname{Re}(y_{2})\operatorname{Im}(y_{1}) \right]\neq 0\,. \tag{6.6}\]
This is equivalent to requiring that \(y_{2}\neq ay_{1}\) for any real number \(a\). Using the formula (4.6) to express the block \(\sqrt{1+q^{T}q}\) in terms of \(y_{i}\) and the invertibility of \(q\), we have
\[\begin{split}\sqrt{1+q^{T}q}&=\mathbb{1}+q^{T}\left( \sqrt{1+qq^{T}}-\mathbb{1}\right)(qq^{T})^{-1}q\\ &=\mathbb{1}+q^{T}\left(\sqrt{1+qq^{T}}-\mathbb{1}\right)(q^{T})^ {-1}=q^{T}\sqrt{1+qq^{T}}(q^{T})^{-1}\,,\end{split} \tag{6.7}\]
where
\[(q^{T})^{-1}=\frac{\sqrt{1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}}}{2\left[ \operatorname{Re}(y_{1})\operatorname{Im}(y_{2})-\operatorname{Im}(y_{1}) \operatorname{Re}(y_{2})\right]}\begin{pmatrix}\operatorname{Im}(y_{2})&- \operatorname{Im}(y_{1})\\ -\operatorname{Re}(y_{2})&\operatorname{Re}(y_{1})\end{pmatrix}\,. \tag{6.8}\]
Direct manipulations yield
\[\sqrt{1+q^{T}q}=\mathfrak{S}\begin{pmatrix}1+y_{1}\bar{y}_{1}-y_{2}\bar{y}_{2 }&\bar{y}_{1}y_{2}+y_{1}\bar{y}_{2}\\ \bar{y}_{1}y_{2}+y_{1}\bar{y}_{2}&1+y_{2}\bar{y}_{2}-y_{1}\bar{y}_{1}\end{pmatrix},\quad\mathfrak{S}=\frac{1}{\sqrt{1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}}}\,, \tag{6.9}\]
and expression for \(\Lambda\) in terms \(y_{i}\):
\[\Lambda(y_{1},y_{2})=\mathfrak{S}\begin{bmatrix}1+\frac{1}{2}(y^{2}+\bar{y}^{ 2})&-\frac{i}{2}(y^{2}-\bar{y}^{2})&y_{1}+\bar{y}_{1}&y_{2}+\bar{y}_{2}\\ -\frac{i}{2}(y^{2}-\bar{y}^{2})&1-\frac{1}{2}(y^{2}+\bar{y}^{2})&-i(y_{1}-\bar {y}_{1})&-i(y_{2}-\bar{y}_{2})\\ y_{1}+\bar{y}_{1}&-i(y_{1}-\bar{y}_{1})&1+y_{1}\bar{y}_{1}-y_{2}\bar{y}_{2}&\bar {y}_{1}y_{2}+y_{1}\bar{y}_{2}\\ y_{2}+\bar{y}_{2}&-i(y_{2}-\bar{y}_{2})&\bar{y}_{1}y_{2}+y_{1}\bar{y}_{2}&1+y_{ 2}\bar{y}_{2}-y_{1}\bar{y}_{1}\end{bmatrix}\,. \tag{6.10}\]
The Maurer-Cartan form has decomposition
\[\Lambda^{-1}\partial_{\mu}\Lambda=\tilde{Q_{\mu}}+\tilde{P}_{\mu}\,,\quad \tilde{Q_{\mu}}\in\mathfrak{so}(2)\oplus\mathfrak{so}(l)\,,\quad\tilde{P}_{ \mu}\in\mathfrak{p}\,, \tag{6.11}\]
where \(\mathfrak{p}\) is the complement of \(\mathfrak{so}(2)\oplus\mathfrak{so}(l)\) (\(\mathfrak{g}=\mathfrak{so}(2,l)=(\mathfrak{so}(2)\oplus\mathfrak{so}(l))\perp \mathfrak{p}\)). We can directly obtain \(\tilde{P}_{\mu}\) through block decomposition (following [6]).
\[\Lambda^{-1}\partial_{\mu}\Lambda=\begin{pmatrix}Q_{\mu}^{\text{SO}(2)}&P_{\mu} \\ P_{\mu}^{T}&Q_{\mu}^{\text{SO}(l)}\end{pmatrix}=\begin{pmatrix}Q_{\mu}^{\text{ SO}(2)}&\\ &Q_{\mu}^{\text{SO}(l)}\end{pmatrix}+\begin{pmatrix}&P_{\mu}\\ P_{\mu}^{T}\end{pmatrix}\,, \tag{6.12}\]
which leads to the scalar Lagrangian
\[\mathcal{L}_{\text{scalars}}=\frac{1}{2}\operatorname{Tr}\left(\tilde{P}_{ \mu}\tilde{P}^{\mu}\right)=\operatorname{Tr}\left(P_{\mu}^{T}P^{\mu}\right)\,, \tag{6.13}\]
with the trace taken over the matrix indices. For \(l=2\) we can use the explicit form of \(\Lambda\) (equation (6.10)) to compute the \(P_{\mu}\) block by the formula:
\[P_{\mu} =\sqrt{1+qq^{T}}\partial_{\mu}q-q\partial_{\mu}\sqrt{1+q^{T}q}\] \[=\mathfrak{S}^{2}\begin{bmatrix}\mathcal{Y}_{1}(\partial_{\mu}y_ {1}+\partial_{\mu}\bar{y}_{1})+\mathcal{Y}_{2}\left(\partial_{\mu}y_{2}- \partial_{\mu}\bar{y}_{2}\right)&\mathcal{Y}_{1}(\partial_{\mu}y_{2}+\partial _{\mu}\bar{y}_{2})-\mathcal{Y}_{2}(\partial_{\mu}y_{1}-\partial_{\mu}\bar{y}_{ 1})\\ i\mathcal{Y}_{1}(-\partial_{\mu}y_{1}+\partial_{\mu}\bar{y}_{1})-i\mathcal{Y} _{2}(\partial_{\mu}y_{2}+\partial_{\mu}\bar{y}_{2})&i\mathcal{Y}_{1}(- \partial_{\mu}y_{2}+\partial_{\mu}\bar{y}_{2})+i\mathcal{Y}_{2}(\partial_{\mu }y_{1}+\partial_{\mu}\bar{y}_{1})\end{bmatrix}\,, \tag{6.14}\]
where \(\mathcal{Y}_{1}=1-y_{k}\bar{y}_{k}\) and \(\mathcal{Y}_{2}=y_{1}\bar{y}_{2}-\bar{y}_{1}y_{2}\). The Lagrangian is given by
\[\operatorname{Tr}\left(P_{\mu}^{T}P^{\mu}\right)=\frac{4(\mathcal{Y}_{1}^{2}- \mathcal{Y}_{2}^{2})\left(\partial_{\mu}y_{1}\partial^{\mu}\bar{y}_{1}+ \partial_{\mu}y_{2}\partial^{\mu}\bar{y}_{2}\right)+8\mathcal{Y}_{2}\mathcal{ Y}_{1}\left(\partial_{\mu}\bar{y}_{1}\partial^{\mu}y_{2}-\partial_{\mu}y_{1} \partial^{\mu}\bar{y}_{2}\right)}{(\mathcal{Y}_{1}^{2}+\mathcal{Y}_{2}^{2})^{ 2}}\,. \tag{6.15}\]
Here we have used that
\[\mathfrak{S}^{4}=\frac{1}{\left(1-2\bar{y}_{k}y_{k}+y^{2}\bar{y}^{2}\right)^{ 2}}=\frac{1}{\left(\mathcal{Y}_{1}^{2}+\mathcal{Y}_{2}^{2}\right)^{2}}\,. \tag{6.16}\]
Regrouping two complex scalars \(y_{1},y_{2}\) into
\[\xi=y_{1}+iy_{2}\,,\quad\varphi=y_{1}-iy_{2}\,, \tag{6.17}\]
the kinetic term further simplifies to
\[\operatorname{Tr}\left(P_{\mu}^{T}P^{\mu}\right)=\frac{2\partial_{\mu}\xi \partial^{\mu}\bar{\xi}}{(1-\xi\bar{\xi})^{2}}+\frac{2\partial_{\mu}\varphi \partial^{\mu}\bar{\varphi}}{(1-\varphi\bar{\varphi})^{2}}\,. \tag{6.18}\]
Recall that the range of \(y_{i}\) is constrained (see (4.14))
\[1-2\bar{y}_{i}y_{i}+y^{2}\bar{y}^{2}>0\,,\quad\bar{y}_{i}y_{i}<1\,. \tag{6.19}\]
This translates into the requirements \(|\xi|<1\) and \(|\varphi|<1\). Thus the two separate terms in (6.18) are naturally given by two Poincare metrics on the unit disk. Finally, recalling the equation (6.5),
\[\xi=\frac{Z_{2}+i}{Z_{2}-i}\,,\quad\varphi=\frac{Z_{1}+i}{i-Z_{1}}\,, \tag{6.20}\]
we arrive at the canonical kinetic term
\[\mathcal{L}_{\text{scalar}}=\text{Tr}\left(P_{\mu}^{T}P^{\mu}\right)=\frac{1}{2} \left(\frac{\partial_{\mu}Z_{1}\partial^{\mu}\overline{Z_{1}}}{|\text{Im}\,Z_{1 }|^{2}}+\frac{\partial_{\mu}Z_{2}\partial^{\mu}\overline{Z_{2}}}{|\text{Im}\,Z_ {2}|^{2}}\right)\,. \tag{6.21}\]
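The last step can be made explicit: under the substitution (6.20), each Poincare-disk term of (6.18) turns into the corresponding upper-half-plane term of (6.21). A minimal symbolic sketch of this identity (illustrative only, for one of the two scalars) is:

```python
# Check 2 |dxi/dZ|^2 / (1 - |xi|^2)^2 = 1 / (2 (Im Z)^2) for xi = (Z + i)/(Z - i).
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)                # Z = x + i y with Im Z = y > 0
Z, Zb = x + sp.I * y, x - sp.I * y

xi = (Z + sp.I) / (Z - sp.I)
xib = (Zb - sp.I) / (Zb + sp.I)                   # complex conjugate of xi
dxi, dxib = sp.diff(xi, x), sp.diff(xib, x)       # equal d(xi)/dZ and its conjugate

expr = 2 * dxi * dxib / (1 - xi * xib)**2
assert sp.simplify(expr - 1 / (2 * y**2)) == 0
print("disk-to-half-plane metric identity verified")
```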
It is worth noting that the \(\mathbb{Z}_{2}\) symmetry exchanging \(Z_{1}\leftrightarrow Z_{2}\) is not present in the eight-dimensional supergravity. Such a symmetry would be implemented by the matrix
\[\mathcal{R}=\begin{pmatrix}1&&\\ &0&1\\ &1&0\\ &&1\end{pmatrix}\,, \tag{6.22}\]
which has determinant \(-1\), and thus it is not in \(\text{SO}(2,2;\mathbb{R})\).
### The counterterm
The next step towards constructing a counterterm is to calculate the explicit form of the compensating \(\mathrm{U}(1)\) transformation (4.43), namely \(\arg\left[j(M,Z_{1},Z_{2})\right]\) for \(M\in\mathrm{SO}(2,2;\mathbb{R})\) and \(Z_{1},Z_{2}\in\mathbb{H}\). The fact that the generalized upper-half plane is isomorphic to the direct product of two copies of the usual complex upper-half plane suggests it should be described in terms of the automorphy factor of \(\mathrm{SL}(2,\mathbb{Z})\). The action of \(\mathrm{SO}(2,2;\mathbb{R})\) on the generalized upper-half plane is (see (3.27))
\[\begin{split}& W=M\langle Z\rangle:=\left(-bq_{0}(Z)+PZ+c \right)\left(-\gamma q_{0}(Z)+d^{T}Z+\delta\right)^{-1}\,,\\ & j(M,Z):=-\gamma q_{0}(Z)+d^{T}Z+\delta\,,\end{split} \tag{6.23}\]
for \(M\in\mathrm{SO}(2,2;\mathbb{R})\) decomposed in the form (3.25). On the other hand, for two \(\mathrm{SL}(2;\mathbb{R})\) matrices,
\[A=\begin{pmatrix}\alpha_{1}&\beta_{1}\\ \gamma_{1}&\delta_{1}\end{pmatrix},\ \alpha_{1}\delta_{1}-\beta_{1}\gamma_{1}=1, \quad B=\begin{pmatrix}\alpha_{2}&\beta_{2}\\ \gamma_{2}&\delta_{2}\end{pmatrix},\ \alpha_{2}\delta_{2}-\beta_{2}\gamma_{2}=1\,, \tag{6.24}\]
we can define the map from \(\text{SL}(2;\mathbb{R})\times\text{SL}(2;\mathbb{R})\) to \(\text{SO}^{+}(2,2;\mathbb{R})\) by [36; 63]
\[\Omega(A,B)=\begin{pmatrix}\alpha_{1}FBF&\beta_{1}FB\\ \gamma_{1}BF&\delta_{1}B\end{pmatrix},\quad F=\begin{pmatrix}-1&0\\ 0&1\end{pmatrix}. \tag{6.25}\]
It is easy to verify that this is a surjective group homomorphism. Moreover, the action of \(\Omega(A,B)\) is
\[\Omega(A,B)\left\langle\begin{pmatrix}Z_{1}\\ Z_{2}\end{pmatrix}\right\rangle=\begin{pmatrix}\frac{\alpha_{1}Z_{1}+\beta_{1 }}{\gamma_{1}Z_{1}+\delta_{1}}\\ \frac{\alpha_{2}Z_{2}+\beta_{2}}{\gamma_{2}Z_{2}+\delta_{2}}\end{pmatrix}, \quad j(M,Z)=(\gamma_{1}Z_{1}+\delta_{1})(\gamma_{2}Z_{2}+\delta_{2})\,. \tag{6.26}\]
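One can check numerically that (6.25) indeed defines a group homomorphism into the orthogonal group preserving \(S\) (here realized, as in (6.3), as the anti-diagonal \(4\times 4\) matrix). The sketch below is illustrative only; the random \(\mathrm{SL}(2,\mathbb{R})\) elements, the seed and the helper names are arbitrary:

```python
# Verify that Omega(A, B) of (6.25) preserves S and is a group homomorphism.
import numpy as np

F = np.diag([-1.0, 1.0])
S = np.fliplr(np.eye(4))                 # the quadratic form of (6.3)

def omega(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    return np.block([[a * F @ B @ F, b * F @ B],
                     [c * B @ F,     d * B]])

def random_sl2(rng):
    t, u, s = rng.normal(size=3)
    s = np.exp(s)                        # keep the diagonal entry positive
    return (np.array([[1, t], [0, 1]]) @ np.array([[1, 0], [u, 1]])
            @ np.array([[s, 0], [0, 1 / s]]))

rng = np.random.default_rng(1)
A1, B1, A2, B2 = (random_sl2(rng) for _ in range(4))
M1, M2 = omega(A1, B1), omega(A2, B2)

assert np.allclose(M1.T @ S @ M1, S)                      # M1 preserves S
assert np.allclose(M1 @ M2, omega(A1 @ A2, B1 @ B2))      # homomorphism property
print("Omega(A, B) checks passed")
```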
By Theorem 3 in [36], the modular group \(\Gamma=\mathrm{SO}^{+}(2,2;\mathbb{Z})\) is formed by the elements \(\Omega(A,B)\) with \(A,B\in\mathrm{SL}(2,\mathbb{Z})\). The factorization of the automorphy factor \(j(M,Z)\) allows us to express the counterterm in terms of \(\mathrm{SL}(2,\mathbb{Z})\) modular forms. Since the two factors \((\gamma_{1}Z_{1}+\delta_{1})\) and \((\gamma_{2}Z_{2}+\delta_{2})\) appear symmetrically, the counterterm must be of the form
\[\mathcal{S}=\frac{1}{r}\int\arg\left(\Psi_{1}(Z_{1})\Psi_{2}(Z_{2})\right)X_{8}\,, \tag{6.27}\]
where \(\Psi_{1,2}\) are \(\mathrm{SL}(2,\mathbb{Z})\) modular forms of the same non-trivial weight \(r\). Demanding once more that the zeros and poles of the function \(\Psi_{1}(Z_{1})\Psi_{2}(Z_{2})\) correspond to the symmetry enhancement points in the moduli space leads to
\[\Psi_{1,2}(Z_{1,2})=E_{4}(Z_{1,2})\,, \tag{6.28}\]
where \(E_{4}\) is the weight-4 Eisenstein series defined in equation (100). \(E_{4}\) has only one simple zero at \(i\) within the fundamental domain, thus at this point (and its modular images under \(\mathrm{SL}(2,\mathbb{Z})\)) the symmetry is enhanced. The maximal symmetry enhancement \(\mathrm{SU}(2)\times\mathrm{SU}(2)\) appears when \(Z_{1}=Z_{2}=i\).
A few comments are in order. The choice (6.28) reflects the knowledge of the moduli spaces of \(l=2\) theories, which notably do not have \(\mathrm{SU}(3)\) enhancement points.
It is worth noting that the Eisenstein series \(E_{4}\to 1\) in (6.28) for large \(\mathrm{Im}(Z_{1})\) or \(\mathrm{Im}(Z_{2})\), so there is no suitable ten-dimensional decompactification limit. Hence \(Z_{1}\) and \(Z_{2}\) cannot be identified as moduli of a two-torus. This is indeed the case for the known \(l=2\) 8D theories, none of which comes from compactifications of the 10D heterotic string [64; 65; 66; 67]. At least for the theory obtained via the perturbative IIB construction [64], one may hope to compute this counterterm (6.27) explicitly.
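For concreteness, the behaviour of \(E_{4}\) at the cusp can be read off from the standard expansion \(E_{4}(\tau)=1+240\sum_{n\geq 1}\sigma_{3}(n)q^{n}\). The following short numerical sketch (ours, added only as an illustration of the statement above) shows \(E_{4}\to 1\) as \(\mathrm{Im}(\tau)\) grows:

```python
# Minimal sketch (ours): E4 via its q-expansion, illustrating E4 -> 1 at the cusp.
import cmath

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4(tau, terms=60):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms))

for y in (1, 2, 5, 10):
    print(y, abs(E4(complex(0.3, y)) - 1))   # decreases rapidly with Im(tau)
```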
Unlike the cases with \(l\geq 3\), this construction is not tied to any particular lattice structure and should apply to both known \(l=2\) theories.
An alternative construction using Hilbert modular forms may be considered. In fact, it leads to an anomaly-cancelling counterterm for the \(l=2\) case. There exists a function \(f\in A_{0}^{+}(5,\chi_{5})\) (see Appendix C)
\[f(\tau)=q^{-1}+5+11q-54q^{4}+O(q^{5})\,,\quad q=e^{2\pi i\tau}\,, \tag{6.29}\]
yielding a Hilbert modular form \(\Psi(Z)\) of weight 5 with trivial multiplier system. However, this construction may allow symmetry enhancements, such as \(\mathrm{SU}(3)\), which should not appear in the 8D \((2,2)\) theories [10], and hence is not physical. We discuss the details of this construction in Appendix C.
## 7 Discussion
The moduli space of the eight-dimensional minimal supergravities coupled to \(l\) Yang-Mills multiplets is given by
\[\mathcal{M}=\frac{\mathrm{SO}(2,l)}{\mathrm{U}(1)\times\mathrm{SO}(l)}\,.\]
The composite \(\mathrm{U}(1)\) connection, under which the fermions of the theory are chirally charged, is anomalous. The gauge fixing translates this anomaly into an anomaly under the discrete
part of the coset denominator, which can be shown to coincide with the discrete modular group of the corresponding lattice. The consistency of the theory requires a suitable counterterm to cancel this discrete anomaly.
The counterterms can be constructed with the use of the Borcherds product of the modular forms on the orthogonal group, \(\Psi(Z)\):
\[\mathcal{S}=\frac{1}{r}\int\arg(\Psi(Z))X_{8}(R,\mathcal{F})\,, \tag{7.1}\]
where \(X_{8}(R,\mathcal{F})\) is the anomaly polynomial and \(r\) is the weight of the modular form satisfying some conditions required by the anomaly cancellation. These conditions can be summarized as
* The character for the modular group \(\Gamma(L)\) (or the lattice \(L\)) must be trivial.
* The zeros and poles of \(\Psi(Z)\) lie on rational quadratic divisors. Requiring that these points can be interpreted as symmetry enhancement points implies that \(\Psi(Z)\) should be a reflective modular form and \(L\) a reflective lattice.
For the \(l=2\) case, the homomorphism from \(\mathrm{SL}(2;\mathbb{R})\times\mathrm{SL}(2;\mathbb{R})\) to \(\mathrm{SO}^{+}(2,2;\mathbb{R})\) can be used in order to construct the local counterterm from the usual \(\mathrm{SL}(2;\mathbb{Z})\) modular forms. There also exists an alternative way to cancel the anomaly using Hilbert modular forms, at the cost of shrinking the symmetry group. However, it would allow for enhanced gauge symmetries that are not consistent with the string-theoretic constructions.
We will conclude by outlining some open questions and directions for further research.
**Relation to the Swampland** It is not surprising that we find a larger set of theories with a mechanism for anomaly cancellation than what is allowed by swampland considerations. It is, however, curious that there is only a finite number of admissible lattices and that they are bounded by 26. In fact, the only two lattices for \(l>2\) that are believed to lead to consistent theories of quantum gravity [66] are even more special and admit 2-reflective modular forms. It would be of great interest to find out if there exist physical requirements that lead to further constraints on the lattice structure.
Notice that we always assume the lattice to be even. This condition enters crucially in the construction of the modular forms on the orthogonal groups, and it is hard to see how a counterterm can be constructed otherwise. We do not know a more direct supergravity (swampland?) argument in support of this condition that arises very naturally in string theory.
**Counterterms and massive sectors** In our \(\mathcal{N}=1\) discussion the precise form of the anomaly polynomial played no role. In fact, (10) is computed only by knowing the massless spectrum. On the other hand, the string amplitudes receive contributions from massive states. For a very recent interesting discussion of the importance of these, see [67]. At the supergravity level one could generate corrections to the counterterm (10) by adding massive states and integrating them out. It is hard to believe that the choices of massive sector are arbitrary, and as discussed in [19] one expects that reduction on \(\mathbb{P}^{1}\) to
six-dimensional \((1,0)\) would impose strong constraints on the possible massive sectors. The question of whether and when a theory admits different consistent massive completions is certainly of great interest.
**\(K3\) reductions and 4D physics**
It is also of interest to explore the implications of the 8D counterterms discussed here for compactifications, particularly 4D couplings. There are very direct parallels between 8D maximally and minimally supersymmetric theories and 4D \(\mathcal{N}=4\) and \(\mathcal{N}=2\) respectively.
4D \(\mathcal{N}=4\) supergravity (coupled to YM) also has composite anomalies, recently discussed in e.g. [68; 69; 70]. The moduli space is given by \(\mathrm{SL}(2)/\mathrm{U}(1)\times\mathrm{SO}(6,n_{V})/\mathrm{SO}(6)\times \mathrm{SO}(n_{V})\). As in the maximally supersymmetric 8D theory and consistently with the supersymmetry algebra, the \(\mathrm{U}(1)\) composite anomaly is also an anomaly of a nonlinear local supersymmetry [70]. Putting the maximally supersymmetric 8D theory on \(K3\) yields a 4D theory coupled to 22 vector multiplets.24 It is not hard to see that the \(\mathrm{SL}(2)/\mathrm{U}(1)\) factor directly descends from 8D. It can be checked that the \(K3\) reduction of the counterterm (14) in the large volume limit agrees with the one computed in [69] for \(n_{V}=22\). The reduction closely follows that of type IIA Chern-Simons couplings to six dimensions [30; 31]. In fact a generic 4D \(\mathcal{N}=2\) supergravity coupled to an arbitrary number of vectors, provided \(n_{V}\geq 2\), can be seen as coming from a torus reduction of the 6D \((1,1)\) theory, with a relation between the 6D CS couplings and 4D counterterms identical to that between their 10D and 8D counterparts as discussed in section 2.
Footnote 24: For other constructions of \(4D\)\(\mathcal{N}=4\) theories from Type II strings see e.g. [71]
The \(K3\) reduction of 8D theory with 16 supercharges to a 4D \(\mathcal{N}=2\) theory parallels the reduction of 10D heterotic strings on \(K3\). There, a separate integration of the Bianchi identity (with the constraints that the instanton numbers should sum up to 24) and of the Green-Schwarz term yield two different four-forms that agree with those obtained in the factorised anomaly polynomial in the resulting 6D \((1,0)\) theory (see e.g. [72]). So one could wonder about similar reduction of the counterterm in 8D.
Choosing an instanton in group \(H\subset G\) (\(\mathrm{rank}(G)=l\)) breaks the gauge group to \(G_{0}\) stabilised by \(H\) in \(G\). The Bianchi identity can be written in general as (following the notation of [10])
\[dH_{3}=\kappa\,\mathrm{tr}\,R^{2}+\ell\cdot\mathrm{tr}\,\mathcal{F}^{2}\]
where \(\kappa\) can take values 1 or 0 (only for \(l=2\)), and \(\ell\) is the level of the current algebra (for a product gauge group, summation over different gauge factors is implied), and hence \(\ell\cdot c_{2}(H)=24\kappa\).25 Denoting \(\mathrm{rank}(H)=h\),
Footnote 25: For \(\kappa=0\), there cannot be nontrivial gauge configurations over \(K3\). The reduction yields a 4D \(\mathcal{N}=2\) theory with three vector multiplets and 20 neutral hypermultiplets.
\[\mathrm{SO}(2,l)\,\longrightarrow\,\mathrm{SO}(2,l-h)\,.\]
But in 4D, \(n_{V}=l-h+1\), and the extra multiplet comprises one of the vectors in 8D gravity multiplet, and the dilaton-axion. Notice that while in 8D the counterterm must have nontrivial modular properties, the 4D threshold corrections \(\sim\mathrm{tr}\,R^{2}\) involve automorphic
functions on \(\mathrm{SO}(2,n_{V})\). The addition of the extra scalar ("conformal compensator" in vector moduli space) should be responsible for this change. It would be of some interest to understand how this works in more detail.
It has been argued that the \(K3\) reduction of \(\mathcal{N}=1\) theories in 8D provides a good framework for studying 4D \(\mathcal{N}=2\) compactifications since it encompasses not only the \(K3\times T^{2}\) but also the heterotic flux backgrounds [73]. Considering the space of all 8D \(\mathrm{SO}(2,l)\) for \(l=2,10,18\) would enlarge this space and hopefully cover all \(\mathcal{N}=2\) theories of heterotic type, i.e. those for which the dilaton is in the vector multiplets. This raises an interesting possibility that all threshold corrections in these theories would in some way be governed and be derivable from the special \(\mathrm{SO}(2,l)\) modular forms from which the counterterms (7.1) are built.
## Acknowledgements
We thank Peng Cheng, Jonathan Heckman, Renata Kallosh, Ilarion Melnikov, Nikita Nekrasov, Valentin Reys, Raffaele Savelli, Yi Shan, Stefan Theisen and Yu-Xiao Xie for useful communications and conversations. Special thanks are due to Jim Liu and Hector Parra De Freitas. The work of RM is partially supported by ERC grants 772408-Stringlandscape and 787320-QBH Structure.
## Appendix A Dedekind eta function, its multiplier system and theta function
In this appendix we collect some relevant facts about the Dedekind eta function \(\eta(\tau)\) and the theta function \(\theta(\tau)\), used in section 2. Under a modular transformation, both pick up a square root of \(c\tau+d\), and the branch of the square root needs to be specified. In the main text we have already defined the argument of \(z\in\mathbb{C}\) as \(\mathrm{Arg}\,z\in[-\pi,\pi)\). Thus the square root for \(z\in\mathbb{C}\) is
\[z^{\frac{1}{2}}=\sqrt{|z|}e^{\frac{i}{2}\mathrm{Arg}\,z}\,, \tag{A.1}\]
and this convention will be used throughout the discussion.
The Dedekind eta function can be written in the form of infinite products,
\[\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n})\,. \tag{A.2}\]
Since two \(\mathrm{SL}(2,\mathbb{Z})\)-matrices generate the whole group, its modular properties can be captured by
\[\eta(T\tau)=\eta(\tau+1)=e^{\frac{\pi i}{12}}\eta(\tau)\,,\quad\eta(S\tau)=\eta\left(-\frac{1}{\tau}\right)=\sqrt{-i\tau}\eta(\tau)\,, \tag{A.3}\]
where
\[T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\,,\quad S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\,. \tag{106}\]
More generally, the modular properties of \(\eta(\tau)\) under \(\mathrm{SL}(2,\mathbb{Z})\) can be written as [74]
\[\eta(M\tau)=\chi_{\eta}(M)(c\tau+d)^{1/2}\eta(\tau) \tag{A.5}\]
with a nontrivial multiplier system \(\chi_{\eta}(M)\). Let \(c\) and \(d\) be integers such that \(\gcd(c,d)=1\), \(d\) is odd and \(c\neq 0\). Let \(\mathrm{sgn}(x)=\frac{x}{|x|}\) be the sign of a real number \(x\neq 0\). Then
\[\left(\frac{c}{d}\right)^{*}=\left(\frac{c}{|d|}\right)\,,\quad\text{and}\quad\left(\frac{c}{d}\right)_{*}=\left(\frac{c}{|d|}\right)\cdot\left(-1\right)^{\frac{1}{4}(\mathrm{sgn}(c)-1)(\mathrm{sgn}(d)-1)}\,, \tag{A.6}\]
where \(\left(\frac{c}{d}\right)\) is the Legendre symbol and we set
\[\left(\frac{0}{1}\right)^{*}=\left(\frac{0}{-1}\right)^{*}=1\,,\quad\left(\frac{0}{1}\right)_{*}=1\,,\quad\left(\frac{0}{-1}\right)_{*}=-1\,. \tag{A.7}\]
For arbitrary element in \(M=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\mathrm{SL}(2;\mathbb{Z})\), the multiplier system of the Dedekind eta function is given by
\[\chi_{\eta}(M)=\begin{cases}\left(\dfrac{d}{c}\right)^{*}q\!\left(\dfrac{1}{24}\left[(a+d)c-bd(c^{2}-1)-3c\right]\right)&\text{if }c\text{ is odd},\\[2mm] \left(\dfrac{c}{d}\right)_{*}q\!\left(\dfrac{1}{24}\left[(a+d)c-bd(c^{2}-1)+3d-3-3cd\right]\right)&\text{if }c\text{ is even},\end{cases} \tag{A.8}\]
where \(q(z)=e^{2\pi iz}\). It should be noted that \(\chi_{\eta}(M)\) does not define a homomorphism from \(\mathrm{SL}(2;\mathbb{Z})\) to \(\mathrm{U}(1)\). Since the transformation \(S=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\) satisfies \(S^{2}=-\mathbb{1}\), we have
\[\eta(\tau)=\eta((-\mathbb{1})\tau)=\chi_{\eta}(-\mathbb{1})(-1)^{1/2}\eta(\tau)=\chi_{\eta}(-\mathbb{1})(-i)\eta(\tau) \tag{A.9}\] \[\qquad\qquad\Rightarrow\chi_{\eta}(-\mathbb{1})=i\neq\chi_{\eta}(S)^{2}=-i\,.\]
In the main text, the congruence subgroup \(\Gamma_{0}(4)\) of \(\mathrm{SL}(2;\mathbb{Z})\) was introduced:
\[\Gamma_{0}(N)=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\Bigg{|}\,c\equiv 0\mod N\right\} \tag{A.10}\]
for any positive integer \(N\). Within the congruence subgroup \(\Gamma_{0}(4)\), weight-\(1/2\) modular forms are well defined [75; 76]. For an element \(M\in\Gamma_{0}(4)\), the transformation of the square of the theta function, given as
\[\theta(\tau)=\sum_{n\in\mathbb{Z}}q^{n^{2}}=1+2q+2q^{4}+\ldots\,, \tag{A.11}\]
takes the form
\[\theta^{2}(M\tau)=\left(\frac{-1}{d}\right)(c\tau+d)\theta^{2}(\tau)\,, \tag{A.12}\]
where \(\left(\frac{-1}{d}\right)\) denotes the Legendre symbol, \(\left(\frac{-1}{d}\right)=(-1)^{\frac{d-1}{2}}\).
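As with the eta function, the transformation (A.12) can be checked numerically; the following sketch (ours) tests it for the single element \(M=\left(\begin{smallmatrix}1&0\\ 4&1\end{smallmatrix}\right)\in\Gamma_{0}(4)\), for which \(\left(\frac{-1}{d}\right)=1\):

```python
# Minimal numerical check (ours) of (A.12) for M = [[1, 0], [4, 1]] in Gamma_0(4).
import cmath

def theta(tau, terms=40):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

tau = complex(0.21, 0.63)
c, d = 4, 1
lhs = theta(tau / (c * tau + d)) ** 2        # theta^2(M tau) with a = 1, b = 0
rhs = (c * tau + d) * theta(tau) ** 2        # (-1/d) = +1 for d = 1
print(abs(lhs - rhs))                        # numerically zero up to rounding
```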
## Appendix B Orthogonal modular forms
Some necessary properties of orthogonal modular forms were reviewed in subsection 3.3. In order to make the paper more self-contained, more background material is collected in this Appendix. Definitions and theorems are given without proofs. Our presentation follows closely [21], which can be consulted for detailed explanations.
Throughout this section, as in the main text, we denote by \(L\) an even lattice of signature \((2,l)\) and assume \(l\geq 3\).
### The Weil representation
We denote the complex upper-half plane \(\mathbb{H}=\{\tau\in\mathbb{C};\,\mathrm{Im}\,\tau>0\}\). \(\tau\) is the standard variable on \(\mathbb{H}\) and we use \(x\) and \(y\) for its real and imaginary parts respectively (\(\tau=x+iy\)). For \(z\in\mathbb{C}\) we define \(e(z)=e^{2\pi iz}\) and denote by \(\sqrt{z}=z^{1/2}\) the principal branch of the square root. For arbitrary \(b\in\mathbb{C}\), we define \(z^{b}=e^{b\mathrm{Ln}\,z}\) where \(\mathrm{Ln}\,z\) denotes the principal branch of the logarithm. We denote by \(\mathrm{Mp}(2;\mathbb{R})\) the metaplectic group, i.e. the double cover of the group \(\mathrm{SL}(2;\mathbb{R})\), realized by the two choices of holomorphic square roots of \(\tau\to c\tau+d\) for an arbitrary element \(M\in\mathrm{SL}(2;\mathbb{R})\),
\[M=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\quad a,b,c,d\in\mathbb{R},\quad\det M=ad-bc=1\,. \tag{B.1}\]
Any element in \(\mathrm{Mp}(2;\mathbb{R})\) can be written as \((M,\phi(\tau))\) where \(M\in\mathrm{SL}(2,\mathbb{R})\) and \(\phi(\tau)^{2}=c\tau+d\). The multiplication in the group \(\mathrm{Mp}(2;\mathbb{R})\) is defined as
\[\left(M_{1},\phi_{1}(\tau)\right)\left(M_{2},\phi_{2}(\tau)\right)=\left(M_{1}M_{2},\phi_{1}(M_{2}\tau)\phi_{2}(\tau)\right)\,, \tag{B.2}\]
where \(M\tau=(a\tau+b)/(c\tau+d)\) denotes the usual action of \(\mathrm{SL}(2;\mathbb{R})\). By fixing the choice \(\phi(\tau)=\sqrt{c\tau+d}\), we actually define a locally isomorphic embedding \(\mathrm{SL}(2;\mathbb{R})\hookrightarrow\mathrm{Mp}(2;\mathbb{R})\)
\[M\mapsto\widetilde{M}=\left(M,\sqrt{c\tau+d}\right). \tag{B.3}\]
\(\mathrm{Mp}(2;\mathbb{Z})\) is generated by two elements \(T,S\)
\[T=\left(\begin{pmatrix}1&1\\ 0&1\end{pmatrix},1\right)\,,\quad S=\left(\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\sqrt{\tau}\right)\,. \tag{B.4}\]
One has the relation \(S^{2}=(ST)^{3}=Z\), where
\[Z=\left(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},i\right) \tag{B.5}\]
is the standard generator of the center of \(\text{\rm Mp}(2;\mathbb{Z})\). For convenience we define \(\Gamma_{1}=\text{\rm SL}(2;\mathbb{Z})\),
\[\begin{split}\Gamma_{\infty}=\left\{\begin{pmatrix}1&n\\ 0&1\end{pmatrix};\,n\in\mathbb{Z}\right\}\leq\Gamma_{1}\,,\\ \widetilde{\Gamma}_{\infty}=\langle T\rangle=\left\{\left(\begin{pmatrix}1&n\\ 0&1\end{pmatrix},1\right);\,n\in\mathbb{Z}\right\}\,,\end{split} \tag{B.6}\]
where \(\langle T\rangle\) denotes the group generated by \(T\).
Suppose \(L\) is an even lattice equipped with a symmetric \(\mathbb{Z}\)-valued bilinear form \((z_{1},z_{2})\) for \(z_{1},z_{2}\in L\) and the associated quadratic form \(q(z)=(z,z)/2\) is integer for arbitrary \(z\in L\). We denote by \(L^{\prime}\) the dual lattice. The quotient \(L^{\prime}/L\) is a finite Abelian group, the so-called discriminant group. Since the quadratic form can be extended to the dual lattice, we can define the quadratic form on \(L^{\prime}/L\), which takes values in \(\mathbb{Q}/\mathbb{Z}\). There is a unitary representation \(\varrho\) of \(\text{\rm Mp}(2;\mathbb{Z})\) on the algebra \(\mathbb{C}[L^{\prime}/L]\). If we denote the standard basis of \(\mathbb{C}[L^{\prime}/L]\) by \(\{\mathfrak{e}_{\gamma}|\gamma\in L^{\prime}/L\}\), then \(\varrho\) can be defined by the action of the generators \(S,T\in\text{\rm Mp}(2;\mathbb{Z})\) as follows
\[\begin{split}\varrho(T)\mathfrak{e}_{\gamma}&=e(q(\gamma))\,\mathfrak{e}_{\gamma}\,,\\ \varrho(S)\mathfrak{e}_{\gamma}&=\frac{\sqrt{i}^{\,b^{-}-b^{+}}}{\sqrt{|L^{\prime}/L|}}\sum_{\delta\in L^{\prime}/L}e(-(\gamma,\delta))\mathfrak{e}_{\delta}\,.\end{split} \tag{B.7}\]
This is the so-called Weil representation. Based on the relation \(S^{2}=Z\), we have
\[\begin{split}\varrho(Z)\mathfrak{e}_{\gamma}&=\frac{i^{b^{-}-b^{+}}}{|L^{\prime}/L|}\sum_{\delta,\lambda\in L^{\prime}/L}e(-(\gamma,\delta))e(-(\delta,\lambda))\mathfrak{e}_{\lambda}\\ &=i^{b^{-}-b^{+}}\mathfrak{e}_{-\gamma}\,.\end{split} \tag{B.8}\]
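A concrete finite-dimensional example may help. The sketch below (ours; it uses the rank-1 positive-definite lattice \(L=\langle 2\rangle\) with \(L^{\prime}/L\cong\mathbb{Z}/2\mathbb{Z}\), \(q(1/2)=1/4\) and \((b^{+},b^{-})=(1,0)\), rather than a signature-\((2,l)\) lattice) builds the two matrices of (B.7) and checks the relations \(\varrho(S)^{2}=(\varrho(S)\varrho(T))^{3}=\varrho(Z)\):

```python
# Minimal sketch (ours): the Weil representation (B.7) for L = <2>, the rank-1
# lattice with Gram matrix (2), discriminant group Z/2Z and (b+, b-) = (1, 0).
import numpy as np

def e(x):
    return np.exp(2j * np.pi * np.asarray(x))

qs = np.array([0.0, 1 / 4])                 # q(gamma) for gamma = 0, 1/2
B = np.array([[0.0, 0.0], [0.0, 1 / 2]])    # bilinear form (gamma, delta) mod 1

rhoT = np.diag(e(qs))
rhoS = (np.sqrt(1j) ** (0 - 1) / np.sqrt(2)) * e(-B)
rhoZ = 1j ** (0 - 1) * np.eye(2)            # i^{b^- - b^+} e_{-gamma}; here -gamma = gamma

assert np.allclose(rhoS @ rhoS, rhoZ)
assert np.allclose(np.linalg.matrix_power(rhoS @ rhoT, 3), rhoZ)
print("S^2 = (S T)^3 = Z holds in the Weil representation of L = <2>.")
```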
We denote by \(\langle\cdot,\cdot\rangle\) the standard product of \(\mathbb{C}[L^{\prime}/L]\), i.e.
\[\left\langle\sum_{\gamma\in L^{\prime}/L}\lambda_{\gamma}\mathfrak{e}_{\gamma},\sum_{\gamma\in L^{\prime}/L}\mu_{\gamma}\mathfrak{e}_{\gamma}\right\rangle=\sum_{\gamma\in L^{\prime}/L}\lambda_{\gamma}\bar{\mu}_{\gamma}\,. \tag{B.9}\]
For \(\gamma,\delta\in L^{\prime}/L\), we can define the representation matrix element \(\varrho_{\gamma\delta}(M,\phi)=\langle\varrho(M,\phi)\mathfrak{e}_{\delta}, \mathfrak{e}_{\gamma}\rangle\).
### Vector-valued modular forms
**Definition B.1**: _(Petersson slash operator) Let \(\kappa\in\frac{1}{2}\mathbb{Z}\) and \(f\) be a \(\mathbb{C}[L^{\prime}/L]\)-valued function on \(\mathbb{H}\). For \((M,\phi)\in\text{\rm Mp}(2;\mathbb{Z})\) we define the Petersson slash operator \(|_{\kappa}(M,\phi)\) by_
\[\left(f|_{\kappa}(M,\phi)\right)(\tau)=\phi(\tau)^{-2\kappa}\varrho(M,\phi)^{-1}f(M\tau)\,. \tag{B.10}\]
We denote by \(\varrho^{*}\) the dual representation of \(\varrho\). If we think of \(\varrho(M,\phi)\) as a matrix with entries in \(\mathbb{C}\), then \(\varrho^{*}(M,\phi)\) is simply the complex conjugate of \(\varrho(M,\phi)\). The "dual operation" of
\(\mathrm{Mp}(2;\mathbb{Z})\) on functions \(f:\mathbb{H}\to\mathbb{C}[L^{\prime}/L]\) is given by
\[\left(f|_{\kappa}^{*}(M,\phi)\right)(\tau)=\phi(\tau)^{-2\kappa}\varrho^{*}(M, \phi)^{-1}f(M\tau)\,.\] (B.11)
Assume that the function \(f:\mathbb{H}\to\mathbb{C}[L^{\prime}/L]\) is holomorphic and invariant under the \(|_{\kappa}^{*}\) operation of \(T\in\mathrm{Mp}(2;\mathbb{Z})\). Since \(f\) can be expanded in the basis \(\mathfrak{e}_{\gamma}\) of \(\mathbb{C}[L^{\prime}/L]\), we have \(f=\sum_{\gamma}f_{\gamma}\mathfrak{e}_{\gamma}\). The invariance is satisfied if and only if
\[\begin{split}& f_{\gamma}(\tau)=f_{\gamma}|_{\kappa}^{*}T(\tau)=e^{ *}(q(\gamma))^{-1}f_{\gamma}(\tau+1)\\ \Leftrightarrow& e(q(\gamma)\tau)f_{\gamma}(\tau)=e \left(q(\gamma)(\tau+1)\right)f_{\gamma}(\tau+1)\,,\end{split}\] (B.12)
which means the invariance of \(f\) under \(T\) implies that the function \(e(q(\gamma)\tau)f_{\gamma}(\tau)\) is periodic with period 1. We can directly Fourier expand \(f\) by
\[f(\tau)e(q(\gamma)\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{n\in\mathbb{Z}}c( \gamma,n)e(n\tau)\mathfrak{e}_{\gamma}\,.\] (B.13)
To have a compact expression, we define \(\mathfrak{e}_{\gamma}(n\tau)=e(n\tau)\mathfrak{e}_{\gamma}\) and write
\[f(\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{n\in\mathbb{Z}-q(\gamma)}c(\gamma, n)\mathfrak{e}_{\gamma}(n\tau)\,,\] (B.14)
with Fourier coefficients
\[c(\gamma,n)=\int_{0}^{1}\langle f(\tau),\mathfrak{e}_{\gamma}(n\bar{\tau}) \rangle dx\,.\] (B.15)
**Definition B.2**: _(holomorphic modular form of dual Weil representation) Let \(\kappa\in\frac{1}{2}\mathbb{Z}\). A function \(f:\mathbb{H}\to\mathbb{C}[L^{\prime}/L]\) is called a modular form of weight \(\kappa\) with respect to \(\varrho^{*}\) and \(\text{Mp}(2;\mathbb{Z})\) if_
* \(f|_{\kappa}^{*}(M,\phi)=f\) _for all_ \((M,\phi)\in\text{Mp}(2;\mathbb{Z})\)_,_
* \(f\) _is holomorphic on_ \(\mathbb{H}\)_,_
* \(f\) _is holomorphic at the cusp_ \(\infty\)_. If_ \(c(\gamma,0)\equiv 0\)_,_ \(f\) _is called a cusp form._
The condition \((iii)\) requires \(f\) has a Fourier expansion of the form
\[f(\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}- q(\gamma)\\ n\geq 0\end{subarray}}c(\gamma,n)\mathfrak{e}_{\gamma}(n\tau)\,.\] (B.16)
The \(\mathbb{C}\)-vector space of modular forms of weight \(\kappa\) with respect to \(\varrho^{*}\) and \(\text{Mp}(2;\mathbb{Z})\) is denoted by \(M_{\kappa,L}\) and the subspace of cusp forms is denoted by \(S_{\kappa,L}\). Similar to the usual complex valued modular form of \(\text{SL}(2;\mathbb{Z})\), the linear space \(M_{\kappa,L}\) is finite dimensional.
### Nearly holomorphic modular forms
**Definition B.3**: _(nearly holomorphic modular form) A function \(f:\mathbb{H}\to\mathbb{C}[L^{\prime}/L]\) is called a nearly holomorphic modular form of weight \(k\) (with respect to \(\varrho\) and \(\text{Mp}(2;\mathbb{Z})\)), if_
* \(f|_{k}(M,\phi)=f\) _for all_ \((M,\phi)\in\text{Mp}(2;\mathbb{Z})\)_,_
* \(f\) _is holomorphic on_ \(\mathbb{H}\)_,_
* \(f\) _has a pole at_ \(\infty\)_, i.e._ \(f\) _has a Fourier expansion of the form_ \[f(\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+q(\gamma)\\ n\gg-\infty\end{subarray}}c(\gamma,n)\mathfrak{e}_{\gamma}(n\tau)\,.\] (B.17)
_The space of these nearly holomorphic modular forms is denoted by \(M^{!}_{k,L}\). The summation over \(n\gg-\infty\) indicates that there exists a finite negative number \(n_{0}\) such that \(c(\gamma,n)=0\) for all \(n<n_{0}\). This condition implies that the pole at the cusp (\(\infty\)) has finite order. The Fourier polynomial_
\[\sum_{\gamma\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+q(\gamma)\\ n<0\end{subarray}}c(\gamma,n)\mathfrak{e}_{\gamma}(n\tau)\] (B.18)
_is called the principal part of \(f\)._
As shown in [21], the space of nearly holomorphic modular forms is generated by the Poincaré series, and is thus finite dimensional. The principal part must satisfy the conditions of Theorem B.3 below.
### Modular forms on generalized upper-half plane
The orthogonal modular forms and Borcherds product were introduced in the main text. Recall the definition of \(j(M,Z)\) in (3.27). More generally we can rewrite it as
\[j(M,Z)=(MZ_{L},z)\,\] (B.19)
where \(z=(1,0,\ldots,0)^{T}\) is the \(l+2\) vector and \(Z_{L}=(-q_{0}(Z),Z,1)^{T}\). Suppose \(L\) is an even lattice of signature \((2,l)\) and \(V=L\otimes\mathbb{R}\). The function \(j(M,Z)\) on \(\mathrm{O}^{+}(V)\times\mathbb{H}_{l}\) is an automorphy factor for \(\mathrm{O}^{+}(V)\), i.e. it satisfies the cocycle relation
\[j(M_{1}M_{2},Z)=j(M_{1},M_{2}\langle Z\rangle)j(M_{2},Z)\,.\] (B.20)
For an arbitrary \(a\in\mathbb{C}\), we have already specified \(\text{Arg}\,a\in[-\pi,\pi)\), which is the principal value of argument of \(a\). We denote by \(\text{Ln}\,\) the logarithm of the principal branch, which is defined as \(\text{Ln}\,a=\ln|a|+i\text{Arg}\,a\). For an arbitrary \(a,b\in\mathbb{C}\), we define \(a^{b}=e^{b\,\text{Ln}\,a}\). Let \(r\in\mathbb{Q}\), if \(M\in\mathrm{O}^{+}(V)\) and \(Z\in\mathbb{H}_{l}\), then \(j(M,Z)^{r}=e^{r\text{Ln}\,j(M,Z)}\). There exists a map \(w_{r}\) from \(\mathrm{O}^{+}(V)\times\mathrm{O}^{+}(V)\) to the set of roots of unity (of order bounded by the denominator of \(r\)) such that
\[j(M_{1}M_{2},Z)^{r}=w_{r}(M_{1},M_{2})j(M_{1},M_{2}\langle Z\rangle)^{r}j(M_{ 2},Z)^{r}\,.\] (B.21)
**Definition B.4**: _(multiplier system) Let \(\Gamma\leq O^{+}(V)\) be a subgroup and \(r\in\mathbb{Q}\) as above. By a multiplier system of weight \(r\) we mean a map_
\[\chi:\Gamma\longrightarrow S^{1}=\{t\in\mathbb{C}\,|\,|t|=1\}\] (B.22)
satisfying_
\[\chi(M_{1}M_{2})=w_{r}(M_{1},M_{2})\chi(M_{1})\chi(M_{2})\,,\quad M_{1},M_{2}\in\Gamma\,. \tag{B.23}\]
_If \(r\in\mathbb{Z}\), then \(\chi\) is actually a character of \(\Gamma\), then \(\chi(M)j(M,Z)^{r}\) is a cocycle of \(\Gamma\)._
**Definition B.5**: _(modular form on generalized upper-half plane) Let \(\Gamma\leq\Gamma(L)\) be a subgroup of finite index and \(\chi\) a multiplier system for \(\Gamma\) of weight \(r\in\mathbb{Q}\). A meromorphic function \(\Psi\) on \(\mathbb{H}_{l}\) is called a meromorphic modular form of weight \(r\) and multiplier system \(\chi\) with respect to \(\Gamma\), if_
\[\Psi(M\langle Z\rangle)=\chi(M)j(M,Z)^{r}\Psi(Z) \tag{B.24}\]
_for all \(M\in\Gamma\). If \(\Psi\) is in addition holomorphic on \(\mathbb{H}_{l}\), then it is called a holomorphic modular form._
The Borcherds product can lift a nearly holomorphic modular form \(f(\tau)=\sum_{\gamma\in L^{\prime}/L}f_{\gamma}\mathfrak{e}_{\gamma}:\mathbb{ H}\to\mathbb{C}[L^{\prime}/L]\) (see Definition B.3) of weight \(1-l/2\) with Fourier expansion
\[f(\tau)=\sum_{\gamma\in L^{\prime}/L}\sum_{n\in\mathbb{Z}+q(\gamma)}c(\gamma,n)\mathfrak{e}_{\gamma}(n\tau)\,, \tag{B.25}\]
to the meromorphic function \(\Psi(Z):\mathbb{H}_{l}\to\mathbb{C}\) of weight \(c(0,0)/2\). The precise theorem is stated as follows.
**Theorem B.1**: _(Theorem 13.3 (1) in [26] or Theorem 3.22 (i) in [21]) Let \(L\) be an even lattice of signature \((2,l)\) with \(l\geq 3\), and \(z\in L\) a primitive isotropic vector. Let \(z^{\prime}\in L^{\prime}\) and \(K=L\cap z^{\perp}\cap z^{\prime\perp}\). Moreover, assume that \(K\) also contains an isotropic vector. Let \(f\) be a nearly holomorphic modular form of weight \(k=1-l/2\) whose Fourier coefficients \(c(\gamma,n)\) are integral for \(n<0\). Then_
\[\Psi(Z)=\prod_{\beta\in L^{\prime}/L}\prod_{\begin{subarray}{c}m\in\mathbb{Z}+q(\beta)\\ m<0\end{subarray}}\Psi_{\beta,m}(Z)^{c(\beta,m)/2} \tag{B.26}\]
_is a meromorphic function on \(\mathbb{H}_{l}\) of (rational) weight \(c(0,0)/2\) for the modular group \(\Gamma(L)\) with some multiplier systems \(\chi\) of finite order. If \(c(0,0)\in 2\mathbb{Z}\), then \(\chi\) is the character of group \(\Gamma(L)\)._
For functions \(\Psi_{\beta,m}(Z)\) see Definition 3.14 in [21].
We can now turn to the zeros and poles of \(\Psi(Z)\). A nowhere-vanishing holomorphic modular form \(\Psi(Z)\) obtained through the Borcherds product cannot exist, since there is no nonzero holomorphic input modular form \(f(\tau)\) of negative weight \(1-l/2\). Before determining the positions of poles and zeroes, it is necessary to explain the concept of rational quadratic divisors (Heegner divisors).
Let \(z\in L\) be a primitive norm \(0\) vector, \(z^{\prime}\in L^{\prime}\) with \((z,z^{\prime})=1\). Let \(N\) be the unique positive integer such that \((z,L)=N\mathbb{Z}\). Then we have \(z/N\in L^{\prime}\). Denote by \(K\) the lattice
\[K=L\cap z^{\perp}\cap z^{\prime\perp}\,. \tag{B.27}\]
\(K\) has signature \((b^{+}-1,b^{-}-1)=(1,l-1)\). For an arbitrary vector \(n\in V=L\otimes\mathbb{R}\), \(n_{K}\) denotes the orthogonal projection \(n\) to \(K\otimes\mathbb{R}\) and
\[n_{K}=n-(n,z)z^{\prime}+(n,z)(z^{\prime},z^{\prime})z-(n,z^{\prime})z\,.\] (B.28)
If \(n\in L^{\prime}\), then \(n_{K}\) lies in the dual lattice \(K^{\prime}\) of \(K\). Let \(\zeta\in L\) be a lattice vector with \((\zeta,z)=N\). Let \(n\in L\), then the vector
\[\tilde{n}=n-(n,z/N)\zeta-(n,z^{\prime})z+(n,z/N)(\zeta,z^{\prime})z\] (B.29)
lies in \(L\), and it is easy to verify that \(\tilde{n}\perp z\) and \(\tilde{n}\perp z^{\prime}\). Hence \(\tilde{n}\in K\) and each element \(n\in L\) can be uniquely decomposed in this way, or equivalently, \(L=K\oplus\mathbb{Z}\zeta\oplus\mathbb{Z}z\). Now let \(\lambda\in L^{\prime}\) be a vector of negative norm, i.e. \(q(\lambda)<0\). Then the orthogonal complement \(\lambda^{\perp}\subset L\otimes\mathbb{R}\) is a rational quadratic space of type \((2,l-1)\). With these settings we can define the rational quadratic divisors.
**Definition B.6**: _(rational quadratic divisor or Heegner divisor) Let \(\lambda\in L^{\prime}\) be a vector of negative norm \(m\), we set_
\[H_{\lambda}=\left\{\left[Z_{L}\right]\in\mathcal{K}^{+}|\left(Z_{L},\lambda \right)=0\right\}\,.\] (B.30)
_Moreover, due to the decomposition \(Z_{L}=(-q(Z)-q(z^{\prime}))z+Z+z^{\prime}\) (recall the equation (3.17)) and \(\lambda=bz+\lambda_{K}+az^{\prime}\), expanding the inner product \((Z_{L},\lambda)\) yields_
\[H_{\lambda}\cong\left\{Z\in\mathbb{H}_{l}|\,aq(Z)-(Z,\lambda_{K})-aq(z^{\prime })-b=0\right\}\] (B.31)
_in coordinates on \(\mathbb{H}_{l}\). This set defines a prime divisor on \(\mathbb{H}_{l}\). Suppose \(\beta\in L^{\prime}/L\) and \(m\) is a negative rational number; the sum_
\[H(\beta,m)=\sum_{\begin{subarray}{c}\lambda\in\beta+L\\ q(\lambda)=m\end{subarray}}H_{\lambda}\] (B.32)
_is called the rational quadratic divisor (or Heegner divisor) of discriminant \((\beta,m)\), which is a \(\Gamma(L)\)-invariant divisor on \(\mathbb{H}_{l}\). When \(\beta=0\), we usually denote \(H(m)=\frac{1}{2}H(0,m)\)._
This definition is suitable for lattices of signature \((2,l)\) with arbitrary Gram matrix. If we specify the Gram matrix of \(L=\Pi_{1,1}\oplus L_{0}\) as defined in the equation (3.19) and the vector \(z,z^{\prime}\), equivalently we have
\[H_{\lambda}=\left\{Z\in\mathbb{H}_{l}|\,aq_{0}(Z)-(Z,\lambda_{K})_{0}-b=0 \right\}\,,\] (B.33)
where the subscript emphasizes that the inner product is associated with the quadratic form \(S_{0}\). With this definition we can describe the position of the zeros and poles by the following theorem.
**Theorem B.2**: _(Theorem 13.3 (2) in [26] or Theorem 3.22 (ii) in [21]) The zeros and poles of \(\Psi(Z)\) lie on the divisor of \(\Psi(Z)\) on \(\mathbb{H}_{l}\), which is a linear combination of Heegner
divisors determined by the principal part of the nearly holomorphic modular form \(f\)_
\[(\Psi)=\frac{1}{2}\sum_{\beta\in L^{\prime}/L}\sum_{\begin{subarray}{c}m\in \mathbb{Z}+q(\beta)\\ m<0\end{subarray}}c(\beta,m)H(\beta,m)\,.\] (B.34)
_The multiplicities of \(H(\beta,m)\) are \(2\), if \(2\beta=0\) in \(L^{\prime}/L\), and \(1\), if \(2\beta\neq 0\) in \(L^{\prime}/L\)._
As we saw from the above theorems, the properties of the Borcherds product \(\Psi(Z)\) are completely captured by the nearly holomorphic modular form \(f(\tau)\), in particular by the principal part of \(f(\tau)\):
\[\sum_{\gamma\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+q(\gamma )\\ n<0\end{subarray}}c(\gamma,n)\mathfrak{e}_{\gamma}(n\tau)\,.\] (B.35)
Pairing the form \(f(\tau)\) with a vector valued cusp form of weight \(1+l/2\) for the dual Weil representation (see Definition B.2) gives a meromorphic elliptic modular form of weight \(2\) for \(\mathrm{SL}(2;\mathbb{Z})\), hence its constant term must vanish by the residue theorem (no nonzero \(\mathrm{SL}(2;\mathbb{Z})\) modular form of weight \(2\)) and this gives the conditions on the principal part on \(f\), stated as the following theorem. By setting \(\kappa=1+l/2\) and denoting the space of the vector valued modular cusp form of weight \(\kappa\) with respect to lattice \(L\) as \(S_{\kappa,L}\), we have
**Theorem B.3**: _(Theorem 1.17 in [21]) There exists a nearly holomorphic modular form \(f\in M^{!}_{k,L}\) with prescribed principal part_
\[\sum_{\beta\in L^{\prime}/L}\sum_{\begin{subarray}{c}m\in\mathbb{Z}+q_{1}( \beta)\\ m<0\end{subarray}}c(\beta,m)\mathfrak{e}_{\beta}(m\tau)\] (B.36)
_(\(c(\beta,m)\in\mathbb{C}\) with \(c(\beta,m)=c(-\beta,m)\)), if and only if the functional_
\[\sum_{\beta\in L^{\prime}/L}\sum_{\begin{subarray}{c}m\in\mathbb{Z}+q_{1}( \beta)\\ m<0\end{subarray}}c(\beta,m)a_{\beta,-m},\] (B.37)
_equals zero in \(S^{*}_{\kappa,L}\). For \(\gamma\in D(L)\) and \(n\in\mathbb{Z}-q(\gamma)\) with \(n>0\), \(a_{\gamma,n}:S_{\kappa,L}\to\mathbb{C}\) denote the functional in the dual space \(S^{*}_{\kappa,L}\) of \(S_{\kappa,L}\) which maps a cusp form \(f\) to its \((\gamma,n)\)-th Fourier coefficient \(a_{\gamma,n}(f)\)._
Obviously this imposes non-trivial condition on the principal part of the nearly holomorphic modular form \(f\).
### Character of the lattice
If the weight of the modular form is integer, which is the case of interest, the multiplier system is actually the character of the modular group \(\Gamma(L)\), or the character of the lattice \(L\). This forms a homomorphism from the modular group to \(\mathrm{U}(1)\). From the well-known Pontryagin duality, the abelianisation \(G^{ab}=G/[G,G]\) of the group \(G\) is isomorphic to the character group \(\mathrm{Hom}(G,\mathbb{C}^{\times})\). Thus to obtain the character we need to consider the
abelianisation of the modular group \(\Gamma(L)\). As discussed in section 5, the anomaly cancellation imposes the triviality of the character for the admissible lattices. To the best of our knowledge, the sufficient and necessary conditions for a lattice of signature \((2,l)\) to have trivial characters are not known. A sufficient condition is known:
**Theorem B.4**: _(Theorem 1.7 in [39]) Let \(L\) be an even integral lattice containing at least two hyperbolic planes (\(\Pi_{1,1}\)), such that \(\text{rank}_{3}(L)\geq\mathfrak{S}^{26}\) and \(\text{rank}_{2}(L)\geq 6\), then the \(\Gamma(L)^{ab}\cong\mathbb{Z}/2\mathbb{Z}\) and \(S\Gamma(L)^{ab}\) is trivial, where \(S\Gamma(L)\) is the modular group intersect with the special orthogonal group of lattice \(L\), i.e. \(S\Gamma(L):=\Gamma(L)\cap SO(L)\)._
An immediate corollary is that if \(L=\Pi_{1,1}\oplus\Pi_{1,1}\oplus\hat{L}\) and \(\hat{L}\) contains a sublattice isomorphic to \(A_{2}\), it satisfies the so-called Kneser conditions [77; 39] and the character for group \(S\Gamma(L)\) is trivial. Notably, if the lattice \(L=\Pi_{1,1}\oplus\Pi_{1,1}\oplus\hat{L}\) is an even unimodular lattice of rank at least 6, we have the same conclusion that the \(\Gamma(L)^{ab}\cong\mathbb{Z}/2\mathbb{Z}\) and \(S\Gamma(L)^{ab}\) is trivial.
## Appendix C Alternative \(l=2\) counterterm from Hilbert modular forms
In this appendix, we will provide a brief overview of the alternative construction for the case \(l=2\) mentioned at the end of section 6, which follows a similar path to the procedure for the \(l\geq 3\) cases. For this we need to introduce Hilbert modular forms, following closely the references [78; 79; 22].
Let \(\mathbb{K}=\mathbb{Q}(\sqrt{p})\), \(p\in\mathbb{N}\), \(p>1\) squarefree, be a real quadratic number field with the ring of integers and discriminant
\[\mathcal{O}_{\mathbb{K}}=\mathbb{Z}+\mathbb{Z}\,\omega_{\mathbb{K}}\,,\quad\omega_{\mathbb{K}}=\begin{cases}(1+\sqrt{p})/2\,,&p\equiv 1\ \mathrm{mod}\ 4\,,\\ \sqrt{p}\,,&\text{else}\,,\end{cases}\qquad d_{\mathbb{K}}=\begin{cases}p\,,&p\equiv 1\ \mathrm{mod}\ 4\,,\\ 4p\,,&\text{else}\,.\end{cases} \tag{C.1}\]
The non-trivial automorphism \(\mathbb{K}\to\mathbb{K}\) is given by
\[\alpha=\alpha_{0}+\alpha_{1}\sqrt{p}\longmapsto\alpha^{*}=\alpha_{0}-\alpha_{ 1}\sqrt{p}\,,\quad\alpha_{0},\alpha_{1}\in\mathbb{Q}.\] (C.2)
The Hilbert modular group is given by \(\Gamma_{\mathbb{K}}=\text{SL}(2;\mathcal{O}_{\mathbb{K}})\). With respect to this group, we can define the Hilbert modular form
**Definition C.1**: _(Hilbert modular form) Let \(\mu:\text{SL}(2;\mathcal{O}_{\mathbb{K}})\to\mathbb{C}\) be a map of finite order (multiplier system). A Hilbert (Blumenthal) modular form for \(\mathbb{K}\) of weight \(r=(r_{1},r_{2})\in\mathbb{Q}^{2}\) with multiplier system \(\mu\) is a holomorphic function \(f:\mathbb{H}^{2}\to\mathbb{C}\) with the properties_
* \(f(M\tau)=\mu(M)(c\tau_{1}+d)^{r_{1}}(c^{*}\tau_{2}+d^{*})^{r_{2}}f(\tau)\) _for all_ \(\tau\in\mathbb{H}^{2},M\in\text{SL}(2;\mathcal{O}_{\mathbb{K}})\)_, where_ \[M\tau:=\left(\frac{a\tau_{1}+b}{c\tau_{1}+d},\frac{a^{*}\tau_{2}+b^{*}}{c^{*}\tau_{2}+d^{*}}\right)\,,\quad\tau=(\tau_{1},\tau_{2})\,,\quad M=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\,.\] (C.3)
_ii)_ \(f\) _is regular at cusps of_ \(\text{SL}(2;\mathcal{O}_{\mathbb{K}})\)_._
_If \(f\) vanishes at all cusps, we call \(f\) a cusp form. If \(f\) has homogeneous weight \(r=(k,k)\in\mathbb{Q}^{2}\) we will also say that \(f\) has weight \(k\in\mathbb{Q}\)._
If we want to use such Hilbert modular forms to cancel the anomaly, several things need to be clarified. First, the action of Hilbert modular group \(\Gamma_{\mathbb{K}}\) on the generalized upper-half plane \(\mathbb{H}_{2}\) is obtained through the homomorphism \(\Omega\) in (6.25). For arbitrary \(M\in\text{SL}(2;\mathbb{K})\) (or \(\Gamma_{\mathbb{K}}\)),
\[\begin{split}\Omega(M,M^{*})\langle Z\rangle&= \left(\frac{\alpha Z_{1}+\beta}{\gamma Z_{1}+\delta},\frac{\alpha^{*}Z_{2}+ \beta^{*}}{\gamma^{*}Z_{2}+\delta^{*}}\right)^{T},\quad M=\begin{pmatrix} \alpha&\beta\\ \gamma&\delta\end{pmatrix}\,,\\ j(M,Z)&=(\gamma Z_{1}+\delta)(\gamma^{*}Z_{2}+\delta^{*})\,.\end{split}\] (C.4)
In other words, the symmetry group is now the Hilbert modular group \(\Gamma_{\mathbb{K}}\), different from the previous \(\text{SL}(2,\mathbb{Z})\times\text{SL}(2,\mathbb{Z})\). Theorem 2 of [63] proves that the group \(\Gamma_{\mathbb{K}}\) is actually isomorphic to the discriminant kernel of the orthogonal group, so the symmetry further shrinks to \(\Gamma_{\mathbb{K}}\).
Furthermore, non-trivial lattice structure emerges. The modular group \(\Gamma_{\mathbb{K}}\) is the spin group \(\text{Spin}(L)\) of lattice \(L\) (section 2.7 of Chapter 2 in [22]), where \(L\) can be written as \(L=\mathbb{Z}\oplus\mathbb{Z}\oplus\mathcal{O}_{\mathbb{K}}\) with quadratic form \(q((a,\nu,b))=ab-\nu\nu^{*}\) for \(a,b\in\mathbb{Z}\) and \(\nu\in\mathcal{O}_{\mathbb{K}}\). For example, suppose \(p\equiv 1\mod 4\), and we write the lattice explicitly in terms of matrices
\[L=\left\{\left.\begin{pmatrix}a&\nu\\ \nu^{*}&b\end{pmatrix}\right|\,a,b\in\mathbb{Z},\,\nu\in\mathcal{O}_{\mathbb{K }}\right\}\,,\quad q(X)=\det(X)\text{ for }X\in L\,.\] (C.5)
The basis of this lattice is easily written in terms of the matrices
\[e_{1}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\quad e_{2}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad e_{3}=\begin{pmatrix}0&\frac{1+\sqrt{p}}{2}\\ \frac{1-\sqrt{p}}{2}&0\end{pmatrix},\quad e_{4}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\,.\] (C.6)
One can easily obtain the Gram matrix in terms of this basis, i.e. \(S_{ij}=(e_{i},e_{j})=q(e_{i}+e_{j})-q(e_{i})-q(e_{j})\), then (we use the symbol \(S\) as defined in the previous sections)
\[S=\begin{pmatrix}&&1\\ &-2&-1\\ &-1&\frac{p-1}{2}\\ 1&&\end{pmatrix}\,.\] (C.7)
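The Gram matrix (C.7) can be recomputed directly from the basis (C.6); the following short sketch (ours) does this for \(p=5\), where \((p-1)/2=2\):

```python
# Minimal sketch (ours): recomputing the Gram matrix (C.7) from the basis (C.6)
# for p = 5, with q(X) = det(X) and (x, y) = q(x + y) - q(x) - q(y).
import numpy as np

p = 5
s = np.sqrt(p)
e1 = np.array([[1.0, 0.0], [0.0, 0.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e3 = np.array([[0.0, (1 + s) / 2], [(1 - s) / 2, 0.0]])
e4 = np.array([[0.0, 0.0], [0.0, 1.0]])
basis = [e1, e2, e3, e4]

q = np.linalg.det
S = np.array([[q(x + y) - q(x) - q(y) for y in basis] for x in basis])
print(np.round(S, 10))   # anti-diagonal 1's and the middle block [[-2, -1], [-1, 2]]
```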
Though \(S\) contains a \(\Pi_{1,1}\) as usual, the rest presents a non-trivial lattice structure, not realized by known string-theoretic constructions.
The next step is to construct the suitable Hilbert modular forms of non-trivial weight. The conditions in section 5 still need to be satisfied, namely:
* The character \(\mu\) of the group \(\Gamma_{\mathbb{K}}\) (of the lattice \(L\)) must be trivial.
* The zeros and poles of the Hilbert modular form correspond to the symmetry enhancement point of the theory.
The first condition can be assured by an appropriate choice of the value of \(p\). However, identifying the rational quadratic divisor in this case is not straightforward. Necessary details can be found in references [78; 79; 22]. It can be verified (see section 1.3 in [79]) that the generalized upper-half plane \(\mathbb{H}_{2}\) (isomorphic to \(\mathcal{K}^{+}\) defined in (3.12)) is exactly of the form of
\[\mathbb{H}_{2}=\left\{\,\delta\left(\begin{matrix}Z_{1}Z_{2}&Z_{1}\\ Z_{2}&1\end{matrix}\right)\Bigg{|}\,\operatorname{Im}(Z_{1})>0,\,\, \operatorname{Im}(Z_{2})>0,\,\,\delta\in\mathbb{C}^{*}\right\}\,.\] (C.8)
Since we are working within the projective space, usually the factor \(\delta\) is ignored. It is also obvious that we can use \((Z_{1},Z_{2})\) to label the element in \(\mathbb{H}_{2}\). In the previous sections we used rational quadratic divisors to describe the position of zeroes and poles of the Borcherds product. The corresponding object here is called Hirzebruch-Zagier divisor [80].
**Definition C.2**: _(Hirzebruch-Zagier divisor) For \((a,h,b)\in L\) and \(Z\in\mathbb{H}\times\mathbb{H}\), we have the inner product_
\[\left(\begin{pmatrix}Z_{1}Z_{2}&Z_{1}\\ Z_{2}&1\end{pmatrix},\begin{pmatrix}a&h\\ h^{*}&b\end{pmatrix}\right)=bZ_{1}Z_{2}-h^{*}Z_{1}-hZ_{2}+a\,.\] (C.9)
_The zero locus of the right hand side defines an analytic divisor on \(\mathbb{H}\times\mathbb{H}\). For a positive number \(m\), in the space \(\mathbb{H}\times\mathbb{H}\) we define the set_
\[T(m)=\bigcup_{\begin{subarray}{c}(a,b,h)\in L^{\prime}/\{\pm 1\}\\ q(a,b,h)=ab-h^{*}h=-m/p\end{subarray}}\left\{(Z_{1},Z_{2})\in\mathbb{H}^{2}|\, aZ_{1}Z_{2}+hZ_{1}+h^{*}Z_{2}+b=0\right\}\,.\] (C.10)
\(T(m)\) _is called Hirzebruch-Zagier divisor of discriminant \(m\)._
Before we extend the Borcherds product (Theorem B.1) to this case, we set up some basic notations. In section 2 the congruence subgroup \(\Gamma_{0}(p)\) is defined. Corresponding modular forms may be defined as well.
**Definition C.3**: _(Modular forms for congruence subgroups) Let \(\mu\) be an abelian character \(\Gamma_{0}(p)\to\mathbb{C}^{*}\) and \(k\in\mathbb{N}_{0}\) a non negative integer. A holomorphic map \(f:\mathbb{H}\to\mathbb{C}\) with the transformation law_
\[f(M\tau)=\mu(M)(c\tau+d)^{k}f(\tau)\quad\text{for all }M\in\Gamma_{0}(p)\,,\] (C.11)
_for which \(f(\infty):=\lim_{\operatorname{Im}(z)\to\infty}f(z)\) and \(f(0):=\lim_{z\to 0}z^{k}f(z)\) exist in \(\mathbb{C}\cup\{\infty\}\) (it has finite order at infinity) is called a nearly holomorphic modular form for \(\Gamma_{0}(p)\) of weight \(k\) with character \(\mu\). If \(f(\infty)\) and \(f(0)\) are complex numbers, then \(f\) is called a holomorphic modular form for \(\Gamma_{0}(p)\) of weight \(k\) with character \(\mu\). If \(f(\infty)=f(0)=0\), then \(f\) is called a cusp form._
We define the spaces
\[A_{k}(p,\mu)\quad\text{nearly holomorphic modular forms for $\Gamma_{0}(p)$ of weight $k$ with character $\mu$}\,,\] \[M_{k}(p,\mu)\quad\text{holomorphic modular forms for $\Gamma_{0}(p)$ of weight $k$ with character $\mu$}\,,\] \[S_{k}(p,\mu)=\{f\in M_{k}(p,\mu)|\,f\text{ cusp form}\}\,\] \[A_{k}^{\pm}(p,\chi_{p})=\left\{\left.f(z)=\sum_{n\in\mathbb{Z}}a( n)e^{2\pi inz}\in A_{k}(p,\chi_{p})\right|\,a(n)=0\text{ for $\chi_{p}(n)=\mp 1$}\right\}\,,\] \[S_{k}^{\pm}(p,\chi_{p})=A_{k}^{\pm}(p,\chi_{p})\cap S_{k}(p,\chi_ {p})\,.\]
One can show that \(A_{k}(p,\chi_{p})=A_{k}^{+}(p,\chi_{p})\oplus A_{k}^{-}(p,\chi_{p})\). If \(f=\sum_{n\in\mathbb{Z}}a(n)q^{n}\) is a modular form in \(A_{k}^{\epsilon}(p,\chi_{p})\), then we call \(\sum_{n<0}a(n)q^{n}\) the principal part of \(f\) (at \(\tau\to\infty\)). For all integers \(n\) we define
\[s(n)=1+\sum_{j=0}^{p-1}\frac{e^{2\pi inj/p}}{p}=2-\left(\frac{n}{p}\right)^{2} =\begin{cases}2,&\text{if $n\equiv 0\mod p$}\\ 1,&\text{if $n\not\equiv 0\mod p$}\end{cases}\,.\] (C.12)
Similar to Theorem B.3, the principal part of \(f\) is subject to non-trivial restrictions (Theorem 6 in [78]). There exists a nearly holomorphic modular form \(f\in A_{k}^{+}(p,\chi_{p})\) with prescribed principal part \(\sum_{n<0}a(n)q^{n}\) (where \(a(n)=0\) if \(\chi_{p}(n)=-1\)), if and only if
\[\sum_{n<0}s(n)a(n)b(-n)=0\] (C.13)
for every cusp form \(g=\sum_{m>0}b(m)q^{m}\) in \(S_{\kappa}^{+}(p,\chi_{p})\), where \(\kappa=2-k\). The case \(k=0\) and \(\kappa=2\) is of particular interest for us. For a prime number \(p\equiv 1\mod 4\), the dimension of \(S_{2}(p,\chi_{p})\) is \(2\left[\frac{p-5}{24}\right]\). Thus the space is empty for the \(p=5\) case. Hence for \(p=5\) there is, for any prescribed principal part \(\sum_{n<0}a(n)q^{n}\), a nearly holomorphic modular form \(f\in A_{0}^{+}(p,\chi_{p})\) with that principal part, provided \(a(n)=0\) for all \(n\) with \(\chi_{p}(n)=-1\). Such a nearly holomorphic modular form is unique [79] and, up to normalization, is the function \(f\) given in (6.29).
**Theorem C.1**: _(Borcherds product for Hilbert modular forms, Theorem 9 in [78]) Let \(f=\sum_{n\in\mathbb{Z}}a(n)q^{n}\in A_{0}^{+}(p,\chi_{p})\) and assume that \(s(n)a(n)\in\mathbb{Z}\) for all \(n<0\). Then there is a meromorphic function \(\Psi\) on \(\mathbb{H}\times\mathbb{H}\) with the following properties:_
* \(\Psi\) _is a meromorphic modular form for_ \(\Gamma_{\mathbb{K}}\) _(the Hilbert modular group defined in section_ 6_) with some multiplier system of finite order. The weight of_ \(\Psi\) _is equal to the constant coefficient_ \(a(0)\) _of_ \(f\)_._
* _The divisor of_ \(\Psi\) _is determined by the principal part of_ \(f\)_. It equals_ \[\sum_{n<0}s(n)a(n)T(-n)\,.\] (C.14)
We can now verify the two required properties. The first is satisfied for \(p=5\), since the multiplier system of Hilbert modular form for \(\Gamma_{\rm K}\) is trivial (Corollary 5.2.1 in [79]). The second condition necessitates a confirmation that the positions of poles and zeros of the Hilbert modular form correspond to the symmetry enhancements. The explicit relationship between these points and the Hirzebruch-Zagier divisor is not yet known. Naively, requiring that the symmetry enhancement appears at the diagonal set \((Z_{1},Z_{2})=\{(\tau,\tau)|\tau\in\mathbb{H}\}\) similarly to the choices in [25], corresponds to the Hirzebruch-Zagier divisor \(T(1)\). There exists a weight 0 modular form \(f\in A_{0}^{+}(5,\chi_{5})\)
\[f(\tau)=q^{-1}+5+11q-54q^{4}+O(q^{5})\,,q=e^{2\pi i\tau}\,,\] (C.15)
that has only one term (\(q^{-1}\)) in the principal part. By Theorem C.1, we arrive at a holomorphic Hilbert modular form of weight 5 vanishing at \(T(1)\). However, as pointed out in the main text, unexpected symmetry enhancements appear. For example, the point \(Z_{1}=Z_{2}=\frac{1}{2}+\frac{\sqrt{3}}{2}i\) may lead to SU(3) symmetry, which does not appear in 8D (2,2) theories. Therefore, Hilbert modular forms do not appear in the counterterms for the \(\mathcal{N}=1\) theories with \(l=2\).
|
2306.11104 | Markovian Embeddings for Coalitional Bargaining Games | We examine the Markovian properties of coalition bargaining games, in
particular, the case where past rejected proposals cannot be repeated. We
propose a Markovian embedding with filtrations to render the states Markovian
and thus, fit into the framework of stochastic games. | Lucia Cipolina-Kun | 2023-06-19T18:13:16Z | http://arxiv.org/abs/2306.11104v1 | # Markovian Embeddings for Coalitional Bar-Gaining Games
###### Abstract
We examine the Markovian properties of coalition bargaining games, in particular, the case where past rejected proposals cannot be repeated. We propose a Markovian embedding with filtrations to render the sates Markovian and thus, fit into the framework of stochastic games.
## 1 Introduction and Related Literature
Coalitional bargaining games CBG are sequential games where one agent at random proposes a coalition formation while the others provide responses on whether to accept or reject the proposal. If a proposal is accepted, the game terminates and the coalition is formed 1. If a proposal is rejected, the game continues with another proposer at random formulating a different coalition proposal. The goal is to find an agreement over the coalition members. A CBG can be framed as a stochastic game where the game states are configured by two sequential actions: the proposals over coalition members and the corresponding responses. The state dynamics are determined by the agents' preferences and the rewards are assigned once a proposed coalition is accepted. The order of coalition proposals is a crucial factor in determining the convergence of CBG Okada (1996); however, it has not received sufficient attention in the existing literature Rubinstein (1982); Okada (1996); Chatterjee et al. (1993) and a thorough analysis is missing. Given that the speed of convergence of the game can vary significantly depending on whether proposals can be repeated, a comprehensive analysis of this aspect is of great importance. Specifically, the repetition of proposals can result in three different scenarios regarding the speed at which an agreement is reached. First, if rejected proposals are allowed to be repeated in future rounds (implying that agents may reconsider their options later), the CBG process turns into a multi-dimensional random walk bouncing back and forth between states, delaying agreement. A second scenario, as in Bachrach et al. (2020), allows for repetition of proposals but introduces learning agents that _learn_ to avoid the repetition of proposals through a reward signal. In this case, the learning aspect converts the CBG process from a pure random walk to a stochastic game with learned transition dynamics. A third scenario is to restrict the proposals of coalitions already rejected in the past. This is the most natural setting and allows the CBG to converge to an agreement in the most efficient way.
Footnote 1: To simplify and without loss of generality, we leave the coalition payoff aside.
The three scenarios have different implications on the Markovian property of the CBG process. This property requires that the transition probability between states depends solely on the current state and action, regardless of the past trajectory of states. This is the case in the first and second scenarios described above; however, in the third scenario, where rejected proposals can no longer be proposed in the future, the CBG process is no longer Markovian, as the probability distribution of the next proposal depends on the entire history (i.e., past proposals) of the game. The Markovian property of coalitional bargaining games is relevant when using multi-agent reinforcement learning (MARL) to approximate the optimal policies of agents in the game. MARL is based on the theoretical framework of stochastic games, which require the transition dynamics between states to be Markovian Littman (1994). Solution concepts commonly used in stochastic games, such as the Markov perfect Nash equilibrium Yang & Wang (2020), require the Markovian property to hold. Thus, understanding the Markovian property of CBG is crucial in designing effective multi-agent reinforcement learning algorithms for optimizing agents' performance.
**Our contributions.** We examine the most natural case of CBG in which proposals, once rejected, cannot be repeated. First, we provide a proof that such a game is non-Markovian. Second, we present a solution to convert it into a Markovian game by proposing a _Markovian embedding_ with _filtrations_. The resulting Markovian game captures the same information as the original non-Markovian game, thus allowing the application of MARL frameworks.
## 2 CBG as a non-Markovian Stochastic Game
Consider a CBG involving a set of agents denoted by \(N=\{1,2,\ldots,n\}\) and a set of possible coalitions denoted by \(c\subseteq 2^{N}\). At time \(t=0\), a proposer \(i\) is chosen uniformly at random from \(N\) and proposes a coalition \(c_{0}\subseteq N\) to the other agents; on the same time step, the agents in \(c_{0}\) reply "accept" or "reject" in turns. If a proposal is accepted, a coalition is formed and the game terminates; otherwise, if a proposal is rejected, the game proceeds to time \(t=1\), and a new proposer is chosen uniformly at random from the agents who have not yet proposed and proposes a new coalition \(c_{1}\subseteq N\). Again, the agents in \(c_{1}\) reply "accept" or "reject" in turns, and the game proceeds in this way until either a proposal is accepted, or all possible coalitions have been proposed and rejected. Consider the state of the game at time \(t\) as defined by Okada (1996); Chatterjee et al. (1993), \(s_{t}=(p_{t},c_{t})\), where \(p_{t}\) is the current proposer and \(c_{t}\) is the proposed coalition. Defined like this, the current state only contains information on the current proposal; however, the probability of the next state \(P(s_{t+1})\) depends on the set of rejected proposals up to time \(t\), and not just the current state. As such, the events \(s_{t+1}\) and \((s_{t-1})_{t>0}\) are not independent of each other, and we have:
\[P(s_{t+1}|s_{0},s_{1},\ldots,s_{t})\neq P(s_{t+1}|s_{t}) \tag{1}\]
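To see the failure of the Markov property concretely, consider the small example below (our illustration with \(N=\{1,2,3\}\)); two different histories that end in the same pair \((p_{t},c_{t})\) leave different sets of admissible next proposals, so the distribution of \(s_{t+1}\) cannot be a function of \(s_{t}\) alone.

```python
# Minimal illustration (ours, not from the paper): the support of the next proposal
# depends on the full set of previously rejected coalitions, not only on (p_t, c_t).
from itertools import combinations

N = (1, 2, 3)
coalitions = [frozenset(c) for r in range(2, len(N) + 1) for c in combinations(N, r)]

def admissible(rejected):
    """Coalitions that may still be proposed once `rejected` have been turned down."""
    return [set(c) for c in coalitions if c not in rejected]

# Two histories ending in the same current state: proposal {1, 2} was just rejected.
history_a = {frozenset({1, 2})}
history_b = {frozenset({1, 3}), frozenset({1, 2})}

print(admissible(history_a))   # [{1, 3}, {2, 3}, {1, 2, 3}]
print(admissible(history_b))   # [{2, 3}, {1, 2, 3}]
```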
## 3 Markovian Embedding of the Coalition Bargaining Game
In the previous section, we showed that the CBG with the restriction on repeating proposals is non-Markovian. However, we can convert this non-Markovian process into a Markovian one by introducing a _Markovian embedding_ using a _filtration_. In this section, we will show how this can be done. A Markovian embedding with _filtrations_ is a probability space \((\Omega,\mathcal{F},P)\) equipped with a sequence of sub-sigma-algebras \((\mathcal{F}_{t})_{t>0}\), where \(\mathcal{F}_{t}\subseteq\mathcal{F}\) captures the _ordered_ history of the game up to time \(t\), including proposals, acceptances, and rejections, i.e., \(\mathcal{F}_{t}=\sigma\big((i_{1},o_{1}),\ldots,(i_{k},o_{k})\mid 1\leq i_{j}<j,\,1\leq j\leq k,\,k\leq t\big)\), where \((i_{j},o_{j})\) denotes the outcome of the \(j\)th proposal, with \(i_{j}\) being the proposer and \(o_{j}\) being the outcome (either accepted or rejected).
Let's now define a new state of the game as \(s_{t}=(c_{t},p_{t},\mathcal{F}_{t})\). This state captures all the relevant information needed to determine the future behavior of the game. Specifically, the next state is obtained by updating the set of proposals \(c_{t}\) based on the action taken and updating the filtration \(\mathcal{F}_{t}\) based on the outcome of the action. The new state is an _adapted_ stochastic process \((s_{t})_{t>0}\) defined on this probability space, such that the Markov property holds. In other words, the conditional distribution of \(s_{t+1}\) given \(\mathcal{F}_{t}\) depends only on \(s_{t}\) and not on any earlier values of the process. With this definition, we can show that the Markov property holds, as follows:
\[P(s_{t+1}\mid\mathcal{F}_{t})=P(s_{t+1}\mid s_{t}) \tag{2}\]
The above Equation 2 holds since the filtration is a sequence of _nested_ sigma-algebras. Hence, the conditional probability given all the sigma-algebras is the same as the conditional probability given the last one in the sequence. A longer proof can be found in Appendix 5.5.
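A minimal implementation sketch of the embedded state (ours; the uniform proposer choice and the random accept/reject responses are placeholders for the agents' actual policies) illustrates how carrying \(\mathcal{F}_{t}\) inside \(s_{t}\) makes the transition a function of the current state only:

```python
# Minimal sketch (ours) of the embedded state s_t = (c_t, p_t, F_t).
# The history F_t travels with the state, so the next state depends only on s_t.
import random
from itertools import combinations

AGENTS = (1, 2, 3)
ALL_COALITIONS = [frozenset(c) for r in range(2, len(AGENTS) + 1)
                  for c in combinations(AGENTS, r)]

def embedded_transition(state):
    """One bargaining round; returns the next embedded state or None if nothing is left."""
    _, _, history = state
    rejected = {c for (c, _, outcome) in history if outcome == "reject"}
    candidates = [c for c in ALL_COALITIONS if c not in rejected]
    if not candidates:
        return None
    proposer = random.choice(AGENTS)                   # placeholder proposer selection
    proposal = random.choice(candidates)               # never re-proposes a rejected coalition
    outcome = random.choice(["accept", "reject"])      # placeholder response model
    return (proposal, proposer, history + ((proposal, proposer, outcome),))

state = (None, None, ())                               # empty filtration F_0
while state is not None:
    state = embedded_transition(state)
    print(state)
    if state is None or state[2][-1][2] == "accept":
        break                                          # coalition formed, or nothing left
```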
## 4 Conclusions and Future Work
We have analyzed the implications of different state definitions on a CBG, showing that while it is natural to avoid repetition of proposals to improve convergence, this can render the game non-Markovian, making it difficult to apply MARL/stochastic game results. We have also shown how to embed the non-Markovian process into a Markovian one using a filtration.
### URM Statement
The authors acknowledge that Lucia Cipolina-Kun meets the URM criteria of the ICLR 2023 Tiny Papers Track.
#### Acknowledgements
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) through a Turing AI Fellowship (EP/V022067/1) on Citizen-Centric AI Systems. ([https://ccais.soton.ac.uk/](https://ccais.soton.ac.uk/)). Lucia Cipolina-Kun is funded by British Telecom.
|
2305.08105 | Blockchain Transaction Fee Forecasting: A Comparison of Machine Learning
Methods | Gas is the transaction-fee metering system of the Ethereum network. Users of
the network are required to select a gas price for submission with their
transaction, creating a risk of overpaying or delayed/unprocessed transactions
in this selection. In this work, we investigate data in the aftermath of the
London Hard Fork and shed insight into the transaction dynamics of the network
after this major fork. As such, this paper provides an update on work previous
to 2019 on the link between EthUSD BitUSD and gas price. For forecasting, we
compare a novel combination of machine learning methods such as Direct
Recursive Hybrid LSTM, CNNLSTM, and Attention LSTM. These are combined with
wavelet threshold denoising and matrix profile data processing toward the
forecasting of block minimum gas price, on a 5-min timescale, over multiple
lookaheads. As the first application of the matrix profile being applied to gas
price data and forecasting we are aware of, this study demonstrates that matrix
profile data can enhance attention-based models however, given the hardware
constraints, hybrid models outperformed attention and CNNLSTM models. The
wavelet coherence of inputs demonstrates correlation in multiple variables on a
1 day timescale, which is a deviation of base free from gas price. A
Direct-Recursive Hybrid LSTM strategy outperforms other models. Hybrid models
have favourable performance up to a 20 min lookahead with performance being
comparable to attention models when forecasting 25/50-min ahead. Forecasts over
a range of lookaheads allow users to make an informed decision on gas price
selection and the optimal window to submit their transaction in without fear of
their transaction being rejected. This, in turn, gives more detailed insight
into gas price dynamics than existing recommenders, oracles and forecasting
approaches, which provide simple heuristics or limited lookahead horizons. | Conall Butler, Martin Crane | 2023-05-14T08:51:44Z | http://arxiv.org/abs/2305.08105v1 | # Blockchain Transaction Fee Forecasting: A Comparison of Machine Learning Methods
###### Abstract
Gas is the transaction-fee metering system of the Ethereum network. Users of the network are required to select a gas price for submission with their transaction, creating a risk of overpaying or of delayed/unprocessed transactions in this selection. In this work, we investigate data in the aftermath of the London Hard Fork and shed insight into the transaction dynamics of the network after this major fork. As such, this paper provides an update on work previous to 2019 on the link between EthUSD/BitUSD and gas price. For forecasting, we compare a novel combination of machine learning methods such as Direct-Recursive Hybrid LSTM, CNN-LSTM, and Attention-LSTM. These are combined with wavelet threshold denoising and matrix profile data processing toward the forecasting of block minimum gas price, on a 5-min timescale, over multiple lookaheads. As the first application of the matrix profile to gas price data and forecasting that we are aware of, this study demonstrates that matrix profile data can enhance attention-based models; however, given the hardware constraints, hybrid models outperformed attention and CNN-LSTM models. The wavelet coherence of inputs demonstrates correlation in multiple variables on a 1-day timescale, as well as a deviation of base fee from gas price. A Direct-Recursive Hybrid LSTM strategy is found to outperform other models, with an average RMSE of 26.08 and R\({}^{2}\) of 0.54 over a 50-min lookahead window compared to an RMSE of 26.78 and R\({}^{2}\) of 0.452 in the best-performing attention model. Hybrid models are shown to have favorable performance up to a 20-min lookahead with performance being comparable to attention models when forecasting 25-50 min ahead. Forecasts over a range of lookaheads allow users to make an informed decision on gas price selection and the optimal window to submit their transaction in without fear of their transaction being rejected. This, in turn, gives more detailed insight into gas price dynamics than existing recommenders, oracles and forecasting approaches, which provide simple heuristics or limited lookahead horizons.
Ethereum; gas; LSTM; CNN-LSTM; Direct-Recursive Hybrid; attention; wavelet denoising; wavelet coherence; matrix profile

Footnote 1: School of Computing, Dublin City University, Glasnevin, Dublin 9, Ireland; [email protected]

Footnote 2: ADAPT Research Centre, Dublin City University, Glasnevin, Dublin 9, Ireland

Footnote 3: Correspondence: [email protected]
## 1 Introduction
Blockchain technologies and their applications such as cryptocurrencies, smart contracts, Non-Fungible Tokens (NFTs) and DeFi (Decentralized Finance) show great potential for disruption and innovation, and they are much discussed. The development of these decentralized applications is enabled through the Ether cryptocurrency, the associated blockchain Ethereum, and the Ethereum Virtual Machine. Ether (ETH) is the second largest cryptocurrency by market cap after Bitcoin. Use of the Ethereum network is growing; daily transactions rose from 500,000 to 2,000,000 between 2018 and 2023 [1].
Ethereum network transactions are cryptographically signed instructions between accounts. These instructions can be as simple as a transfer of ETH or more complex contract deployments that enable a variety of decentralized applications. Gas is the unit of computational work used when processing a transaction on the network. The number of
gas units consumed by a transaction is dependent on the computational complexity of the transaction. Gas has a price per unit in ETH, and the price is submitted by the sender with the transaction [2]. The process of packing transactions into blocks proceeds as follows: many transactions can go into a single block in Ethereum with miners carrying out a number of tasks:
The list of pending transactions, arranged by gas price and hence processing priority, is the first parameter that the miners have to work with. In addition, the number of transactions that miners can add to a block is restricted. After the miners have decided which transactions should be packed, the Proof of Work procedure starts [3]. If the total amount of gas used by all the transactions is greater than the block's upper limit, the block will not be recognized by the Ethereum network. If this is not the case, the transactions can be included in the block and the associated reward is given to the miner who finds the new block first. The selection of transactions by miners has been shown to be almost exclusively based on the submitted gas price [4].
There is risk associated with gas price selection when submitting a transaction; too high will result in unnecessarily high fees, while selecting too low can incur transaction wait times or failure of the transaction to be processed if not selected by miners. High gas fees are seen as a major impediment to applications on the Ethereum network. The impact of gas fees on applications can be seen in cases such as ConstitutionDAO [5].
It was in part to address such issues that the Ethereum London Hard Fork was introduced on 5 August 2021 [6]. One innovation introduced here is the move from Proof of Work to Proof of Stake. The main motivation behind this was rather than processing power being used for voting, users become validators on the basis of the number of staked coins they have. Proof of Stake is designed to allow for better energy efficiency and a lower bar for entry [7].
Prior to the introduction of Ethereum 2.0 (Serenity) [8] and the switch to the Proof of Stake system, it was necessary to make several preparations, and these were introduced in the five further Ethereum Improvement Proposals (EIPs) [9]. One of these, EIP-1559, was designed to make fees more user-friendly and increase the uniformity of the transactions mined in a block. Additionally, this proposal is aimed at reducing overpayments to miners. EIP-1559 offers a variable block size with a 50% target usage. Thus, the majority of the time slots will be only halfway full. There may still be spikes when there are full blocks for a while, but it is more likely to happen for brief intervals. As the dataset used for this paper post-dates this introduction, we cover the details of EIP-1559 briefly below.
Several gas price recommenders (or oracles) currently exist to aid the gas price prediction task. These recommenders use simple heuristics and past data to generate a number of recommendations. Go-Ethereum (Geth) recommends a gas price to submit for the next block based on a percentile of minimum block gas prices for the past number of blocks, defaulting to the 60th percentile of the last 20 blocks [10]. EthGasStation estimates the number of blocks waited when a transaction is submitted at a specified gas price, which is based on a Poisson regression model using the previous 10,000 blocks of data [11]. GasStation-Express estimates the likelihood of a transaction being included in the next block at a gas price based on the proportion of the last 200 blocks with a transaction at that price or lower [12]. The performance of these oracles has, however, not lived up to expectations in many cases (as will be detailed below) [13].
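To ground these heuristics, the following is a minimal sketch (function names and the synthetic data are illustrative, not taken from the cited implementations) of the two simplest recommendation rules described above: the Geth-style percentile of recent block minimum gas prices, and the GasStation-Express-style acceptance-probability estimate.

```python
import numpy as np

def geth_style_recommendation(block_min_prices, n_blocks=20, percentile=60):
    """Geth-style oracle as described above: a percentile of the minimum gas prices
    seen in the last n_blocks blocks (defaults: 60th percentile, 20 blocks)."""
    recent = block_min_prices[-n_blocks:]
    return np.percentile(recent, percentile)

def express_style_acceptance_probability(block_min_prices, candidate_price, n_blocks=200):
    """GasStation-Express-style estimate: share of the last n_blocks blocks that
    included a transaction at candidate_price or lower."""
    recent = np.asarray(block_min_prices[-n_blocks:])
    return float((recent <= candidate_price).mean())

# illustrative usage with synthetic gas price history (gwei)
history = list(np.random.lognormal(mean=3.5, sigma=0.3, size=500))
print(geth_style_recommendation(history))
print(express_style_acceptance_probability(history, candidate_price=40.0))
```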
This paper is related to previous gas price forecasting and recommender work by Mars et al. [14] and Werner et al. [15]. The aim of this study is to first investigate the relation between potential model inputs in blockchain and exchange data, using wavelet coherence as seen in Garrigan et al. [16], Sun and Xu [17] and Qu et al. [18]. The next stage is development of a forecasting model based on these inputs. Previous approaches have applied Long Short-Term Memory (LSTM) and Attention-Gated Recurrent Unit (GRU) models [3]. This study intends to investigate performance over different forecast horizons, using multiple approaches: a direct-recursive hybrid LSTM forecasting approach, inclusion of an attention mechanism with the matrix profile (as seen applied to low-granularity daily COVID data by Liu et al. [19]) and also Convolutional Neural Networks (CNNs) fed
to LSTM architectures, or CNN-LSTMs. A comparison of these methods has been made recently by Chandra et al. [20].
Wavelet denoising will also be investigated, as seen in Dyllon et al. [21] and Qiu et al. [22]. A combination of wavelet transforms, matrix profile and attention-LSTM methods toward time-series forecasting is a novel approach to our knowledge, particularly in the domain of blockchain transaction fees.
We feel that our paper contributes to the literature through:
1. First and foremost, the time period studied is in the aftermath of the so-called Ethereum London Hard Fork, when the immediate aftereffects of this had passed. In particular, we feel that Research Question 3 of our study provides an update on Pierro and Rocha's work of 2019 [23] on the link between EthUSD/BitUSD and gas price.
2. This study is the first that we have found to investigate performance over different forecast horizons. These time horizons are useful, as a user must select between these and potentially be penalized in terms of cost or missed transactions for choosing one over the other. There is thus a real cost penalty for the user in not choosing correctly here.
3. In our study, we use multiple approaches: a direct-recursive hybrid LSTM forecasting approach, inclusion of an attention mechanism with the matrix profile, as seen applied to low-granularity daily COVID data and also Convolutional Neural Networks (CNNs), fed to LSTM architectures, or CNN-LSTMs. In the case of matrix profiles, this is the first incidence that we could find of the use of the method in gas price prediction.
These, we feel, provide an academic and practical justification for why this research is warranted at the current time. Specifically, the Research Questions and aims of this paper are as follows, to be addressed using data from 26 November 2021 to 27 April 2022:
**RQ1.** What is the best method to forecast minimum block price across multiple lookaheads, comparing several modeling approaches?
**RQ2.** Wavelet transforms and the matrix profile are unstudied methods in this area; can these methods improve forecasting metrics or provide insight into gas price mechanics?
**RQ3.** How do blockchain and ETH cryptocurrency exchange data relate to gas price, and can these data be used to improve forecasting metrics?
The sections contained in this paper are: Section 2. Glossary; Section 3. Gas Mechanics Literature Survey; Section 4. Previous Work on Gas Price Prediction; Section 5. Materials and Methods; Section 6. Methods for Data Modeling; Section 7. Results; Section 8. Discussion; Section 9. Conclusions.
## 2 Glossary
_Ethereum Network Terminology [4]_
* Block: Batch of transactions added to the blockchain.
* Contract/Smart Contract: Complex transaction, with clauses and dependencies for operation; not a simple transfer of ETH. Basis of complex applications.
* ETH: Ether, cryptocurrency of the Ethereum network.
* Gas: Unit of computational work completed when processing transaction on the Ethereum network. The gas required to process transactions increases with transaction complexity.
* Gas Price: Fee paid to miners by transaction sender, per unit of gas, to process a transaction and include it in the blockchain. Operates on priority queuing basis: the highest gas price transactions are selected by miners, the gas price is selected by transaction senders. Price is typically quoted in gwei.
* Gwei: The denomination of ETH cryptocurrency. One ETH is equivalent to \(10^{18}\) wei. A giga-wei, or gwei, is equivalent to \(10^{9}\) wei, or \(10^{-9}\) ETH. All gas price values given in this work are in gwei.
* Mempool: Cryptocurrency nodes that function as a way to store data on unconfirmed transactions, acting as a transaction waiting room prior to inclusion in a block.
* Miner: Third party that performs necessary computations for the inclusion of transaction on the blockchain, at a fee.
* Transaction: Cryptographically signed instruction from one Ethereum network account to another, which includes simple ETH transfer and more complex contract deployments that allow for various applications on the network.
## 3 Gas Price Mechanics Literature Survey
### Economics of Ethereum Gas Price
Economic determinants of gas price based on blockchain and cryptocurrency exchange data are investigated by Donmez et al. [24]. A strong non-linear association is found between block utilization and both marginal and median daily gas prices. Gas price is found to be highly influenced by block utilization above 90%, with minimal impact below 90%. ETH transfer transactions are found to be more urgent than smart contract transactions, and a higher proportion of transfers is found to be associated with higher gas prices. Gas price is found to be negatively associated with ETH value. This is consistent with the principle of network users being concerned with network usage costs in terms of real currency value [23].
The inclusion of transactions in the next mined block operates on a priority queuing mechanism and is shown to comply with economic predictions from queueing theory and supply/demand theory [23]. Basic ETH transfer-type transactions are observed to have higher urgency and thus typically higher gas price submission. This is because miners select transactions for inclusion based almost solely on gas price [5]. It is assumed that the observed minimum gas price variable will begin to rise when sufficient numbers of high-priority, higher paying transactions are available to fill mining capacity, and transactions close to the base fee are no longer selected. We can observe that the min-gas price rarely deviates from the base fee; however, cases do occur where there is significant deviation. It is possible that lower and upper percentiles of gas prices within blocks may contain some predictive information as to these events, as mining capacity is gradually filled.
The block base fee, the minimum gas price a submitted transaction must pay in order to be eligible for inclusion in the block, is related to block size through a process known as _tâtonnement_. Blocks have a target size of 15 million gas, and the size is adjusted to meet network demand up to a maximum of 30 million gas worth of transactions per block. The base fee is increased by up to 12.5% of the previous block's base fee when the previous block is above the target, continually increasing until the block size has returned to the target [4]. The process from transaction submission to inclusion in the blockchain is shown in Figure 1.
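A minimal sketch of this base fee adjustment rule follows (the constants mirror the 15 million gas target and 12.5% maximum step quoted above; the real protocol performs the update in integer arithmetic with additional edge-case handling, so this is illustrative only):

```python
TARGET_GAS = 15_000_000          # target block size described above
MAX_CHANGE_DENOMINATOR = 8       # 1/8 = 12.5% maximum per-block adjustment

def next_base_fee(parent_base_fee: float, parent_gas_used: int,
                  target: int = TARGET_GAS) -> float:
    """Base fee update sketch following the EIP-1559 rule summarized above:
    the fee moves by up to 12.5% in proportion to how far the previous block
    was from its target size."""
    delta = (parent_gas_used - target) / target
    return parent_base_fee * (1 + delta / MAX_CHANGE_DENOMINATOR)

# a full block (30M gas) raises the fee by the full 12.5%
print(next_base_fee(100.0, 30_000_000))   # -> 112.5
# a block exactly at the target leaves the fee unchanged
print(next_base_fee(100.0, 15_000_000))   # -> 100.0
```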
### Influencing Factors on Ethereum Gas Price
Influencing factors of gas price, and also the reliability of gas price oracles, are investigated by Pierro et al. [24]. Ref. [24] defines gas price as the Etherchain "Fast" price, which is defined as the price where 90% of the previous 200 mined blocks contain a transaction at this price. Transactions submitted at this price are expected to be processed by miners within 1-2 min. The gas price is indicated to have pairwise Granger causation with miner count and unconfirmed transaction count at \(p\) = 0.05. Both cases have negative Pearson correlation. Gas price was not found to share Granger causality with the other tested variables: hash rate, block time, block difficulty, ETH/US Dollar, and ETH/Bitcoin. Strangely, although some (Werner et al. [15] and Liu et al. [25]) have used the ETH price as an input for their models, and Donmez et al. [24] also talk about a negative association between ETH and gas price, this current work is the first we have found to investigate the relationship between them as opposed to just modeling based on the ETH price. It is to address this, particularly in light of EIP-1559, that we revisit this issue here.
Liu et al. also looked at influencing factors on gas price [25]. They present a Machine Learning Regression (MLR)-based approach to predicting gas prices with the goal of locating the next block's lowest transaction gas price for conducting cost-effective Ethereum transactions. Specifically, they identify five influencing parameters from the Ethereum transaction process (i.e., difficulty, block gas limit, transaction gas limit, ether price, and miner reward) and use a traditional machine learning regression to develop the predictive model. The proposed MLR technique appears to function effectively and can lead to considerable potential savings for all transactions with a 74.9% accuracy, according to their empirical analysis on 194,331 blocks.
### Experiences around The Ethereum Hard Fork
The lack of clarity around Ethereum gas fees was in part the reason that the Ethereum London Hard Fork was introduced on 5 August 2021. Prior to the introduction of Ethereum 2.0 (Serenity) and the switch to the Proof of Stake (PoS) system, it was necessary to make several preparations, and these were introduced in the five further Ethereum Improvement Proposals (EIPs). The element of the Ethereum protocol that establishes the cost for every transaction added to the blockchain is the transaction fee mechanism. Historically, Ethereum employed a first-price auction fee mechanism. EIP-1559 suggested
Figure 1: Ethereum Blockchain Flow (reproduced with permission from Mars et al. [14]).
making several changes to this, e.g., introducing variable-size blocks, a history-dependent reserve price, and the burning of a large part of the transaction fees, to conserve the value of the currency [26]. EIP-1559's influence on user experience and market performance in the immediate aftermath of its launch was assessed by Reijsbergen et al. [27] using on-chain data. Empirical results indicate that although EIP-1559 generally succeeds in achieving its objectives, its short-term behavior is characterized by severe, chaotic oscillations in block sizes (as predicted by the authors' most recent theoretical dynamical system analysis) and sluggish adjustments during demand spikes (such as NFT drops). Unwanted inter-block fluctuation in mining rewards is caused by both occurrences. An alternate base fee adjustment method is suggested that uses an additive increase, multiplicative decrease (AIMD) updating strategy to account for this. Simulations demonstrate that under various demand scenarios, the latter robustly beats EIP-1559. Results show that variable learning rate methods may be a viable alternative to EIP-1559, advancing ongoing talks on the creation of transaction fee marketplaces with higher levels of efficiency.
Liu et al. also looked at the impact of the introduction of EIP-1559 [28], using the available data from the Ethereum blockchain, the Mempool, and exchanges to investigate its causal impact on blockchain transaction cost patterns, transaction waiting times, and consensus security. They found that EIP-1559 enhances user experience by minimizing intra-block variations in gas prices paid and cutting down on user wait times. However, they also discovered that waiting time is substantially longer when Ether's price is more erratic.
Lan et al. [29] propose a machine learning-based method to forecast the gas price of upcoming blocks, paired with a dynamic feature extracted from the Mempool. In particular, they took into account pending transactions and their gas cost in the Mempool and used them for the first time as a machine learning feature. For prediction, they combine the Mempool features with machine learning models, with results showing good prediction ability, particularly on the MAE and RMSE metrics.
## 4 Previous Work on Gas Price Prediction
### The Role and Performance of Gas Price Oracles
The role of gas price recommenders or oracles in the prediction of the gas price has been discussed by a number of authors [30, 31, 32, 33, 34]. In brief, the gas price oracle attempts to predict the future gas price on the basis of previous block utilization. If the oracle indicates a lower than 100% utilization, this tends to show that there was spare capacity and hence there could be an opportunity to reduce gas price bid. Conversely, utilization at more than 100% would indicate that a reduced bid would incur the risk of its transaction not being selected by the miners. To help set the right gas price, the Gas Oracle categorizes the gas price into categories based on the interval of time the user might be willing to wait and for each of them suggests a gas price to set [31].
Empirical analysis of historic gas price data, the proposal of a gas price recommendation algorithm, and the GRU-network-based gas price forecast driving it can be seen in Werner et al. [15]. Implementing an additional wait time of 4.8 blocks (~60 s) with the proposed approach resulted in a saving of 75% on gas fees when compared to the popular Go-Ethereum (Geth) recommender. Forecast evaluation metrics are not discussed. The recommendation algorithm was fed ground truth gas price data, which showed further improvement on the GRU-driven forecast, indicating room for improvement in the forecasting model. Empirical analysis of the gas price data shows high volatility, with the mean maximum gas price exceeding the mean minimum gas price by orders of magnitude and the average block gas price having a mean of 113.96 and a standard deviation of 46.46. The autocorrelation of 1 h interval gas price averages indicates daily seasonality. A pre-processing approach of down-sampling gas price data to 5 min resolution, deletion of outliers above 2 standard deviations, and Fourier transform based denoising is employed [15].
Pierro [31; 32] looked at Gas Oracles' forecasts, finding they are less accurate than claimed and user-defined categories for these prices are incorrect. To evaluate the accuracy of current Gas Oracles, the authors propose a user-oriented model based on two gas price categories that correspond to user preferences and a new way to estimate the gas price. Their method used Poisson regression at more frequent intervals, forecasting the price of gas with a narrower margin of error than the real one, giving users a more useful gas price to set.
Turksonmez et al. [33] developed a new gas prediction accuracy metric to assess oracle performance. They showed that oracles overprice transactions, leading them to reach the delay target but at a larger cost than necessary, as well as underprice transactions, causing them to miss the delay target. The authors compared five gas price oracles with results demonstrating relative accuracy, transaction accept rates, price stability, and discussion of factors that affect oracle accuracy. They noted that the ETHGasStation oracle generated the most precise and consistent pricing forecasts.
In an attempt to improve on oracle performance, particularly during times when transaction volumes are increasing rapidly and gas price oracles can underperform, Chuang and Lee [34] showed that Gaussian process models can accurately forecast the distribution of the lowest price in an upcoming block in the face of such increasing transaction volumes. Using the GasStation-Express and Geth gas price oracles, a hybrid model combining the two was proposed, providing a superior estimate when transaction volume fluctuates significantly.
Several modeling approaches are compared by Mars et al. [14]. Sliding windows of 300 previous blocks are used as input to forecast the next block ahead. GRU and LSTM models are found to have similar performance. Geth recommendations and Facebook Prophet forecasts are found to have similar performance, and they are outperformed by the RNN models. Down-sampling and outlier deletion pre-processing steps as found in Werner et al. are also employed before RNN modelling [15]. It is on the latter forms that we will concentrate for our study.
Laurent et al. provided a system of equations for calculating the probability a transaction is mined in a given period, given a gas price and knowledge of all transactions. The system was extended to predicting the probability of transactions being mined by the inclusion of a model for the arrival of future transactions. The optimal gas price for a transaction to yield a specified probability of the transaction being accepted, in a given time frame, was achieved using a binary search of the transaction position within the set of modeled transactions. The authors state that comparison was difficult with previous works, as the probability estimate is a fundamentally different output to existing oracles or machine learning forecasts [35].
### Time Series Signal Processing and Data Mining
Dyllon et al. demonstrate wavelet transforms for denoising and signal frequency-time density visualization. Of particular relevance is that a wavelet decomposition-based denoising approach is able to reduce noise in a high-granularity, high-noise signal while preserving seasonal elements. The Continuous Wavelet Transform (CWT) is also used to visualize the changing frequency content of the signal over time [21].
Barry and Crane [36] showed that motifs and matrix profiles can be effective in improving the performance of LSTMs in the prediction of Bitcoin, yielding an 8% decrease in RMSE for one test case. Sun and Xu applied wavelet coherence for the analysis of co-movement and lead-lag effects in multiple stock markets. Wavelet coherence allows a three-dimensional analysis of two signals on the axes of time, frequency and strength of correlation. Phase difference analysis is used to provide information on co-movement sign and lead-lag relationships [17]. Wavelets have application to a wide variety of time-series data, as shown by their application to wideband power signals [16], widespread use in financial market studies [17] and geodetic signals [18].
### Deep Learning Models
An Encoder-Decoder LSTM with attention guided by a matrix profile, as seen in Liu et al., can outperform other RNN models on low granularity data [19]. Fajge et al. [37] used a number of machine learning methods to determine if a transaction with offered gas fees is likely to be added to the blockchain within the anticipated period or not. Their results (evaluated on almost one million actual transactions from the Ethereum MainNet) showed that the proposed model outperformed existing ones at the time, achieving 90.18% accuracy and a 0.897 F1-score when the model is trained with Random Forest on the dataset balanced with SMOTETomek. Qiu et al. apply an Attention-LSTM, with the degree of matching of each input element used to generate the attention distribution, and wavelet denoising is also applied; both wavelets and the attention mechanism improve performance compared to a standard LSTM [22].
CNN-LSTM models, LSTM models with pre-LSTM convolution filtering and feature pooling layers have seen widespread use in time-series forecasting. An attractive feature of CNN-LSTM models is the ability to effectively handle multiple inputs. Livieris et al. apply a CNN-LSTM toward gold price forecasting [38]. Widiputra et al. apply a single-headed convolution layer, fed into a two layer LSTM network, toward multiple output predictions of stock indices of Shanghai, Japanese, Singaporean and Indonesian markets [39].
Ferenczi and Badica [40] investigated the prediction of Ethereum gas price with Amazon SageMaker DeepAR [41] and found that the choice of covariates had a large effect on model performance. They found that gas prices were impacted by various factors, including seasonality, volume of transactions, transaction values, number of token transactions and amount of gas used per block.
### Research Gaps and Innovations
We feel that our paper contributes to the literature through the following:
1. While a number of authors have covered the time period following the Ethereum London Fork (e.g., Refs. [26, 27, 28]), cited above, we feel that the relationship between EthUSD/BitUSD and gas price posited in Research Question 3 of our study provides an update on Pierro and Rocha's work of 2019 [24] on the link. This, we think, is an important addition to the corpus of research given the wide fluctuations in the price of cryptocurrencies.
2. Specifically investigating the performance of forecasts over different horizons. These time horizons are useful, as a user must select between these and potentially be penalized in terms of cost or missed transactions for choosing one over the other. There is thus a real cost penalty for the user in not choosing correctly here.
3. In our study, we use multiple approaches: a direct-recursive hybrid LSTM forecasting approach, inclusion of an attention mechanism with the matrix profile, as seen applied to low-granularity daily COVID data and also Convolutional Neural Networks (CNNs) fed to LSTM architectures or CNN-LSTMs.
In the case of matrix profiles, as noted above, this is the first incidence that we could find of this method used in gas price prediction. With the developing work on this method, we feel there is considerable potential for the method to be used to characterize patterns in gas price time series.
## 5 Materials and Methods
### Research Framework and Methodology
The essence of the problem at hand is optimizing costs for a transaction sender. Senders are required to submit the price they pay per unit of gas with their transaction; the risks associated with under/overpaying lie with the sender. Oracles exist to recommend a gas price to address this risk; however, these are limited to simple heuristics.
Previous studies have attempted to improve upon these oracles with time-series forecasting-based approaches; however, these are limited to a short lookahead window. To our knowledge, existing recommenders and studies are limited to short lookaheads on the order of 5 min [14], a single block [25; 29], or a handful of blocks [15].
This study seeks to provide insight into gas prices further into the future than existing oracles and studies. Knowledge of when gas prices will be low or high, and of the magnitude of these movements, is proposed to provide value when planning transactions. For the purpose of generating this insight, the problem is framed as a time-series forecasting (supervised learning) problem. Working within this framework is advantageous, as there is a wealth of available methods within this framework and a large body of existing work to draw from. LSTM models, attention models, and CNN-LSTM models are all identified as powerful modeling approaches toward time-series forecasting [20].
Time-series forecasting methods often make use of several data pre-processing methods before modeling. This study has identified wavelet transforms and the matrix profile as pre-processing and exploratory methods novel to gas price prediction and seeks to contribute understanding as to their applicability.
The presented methodology intends to investigate forecasting performance, across the identified modeling approaches, and pre-processing methods. Additionally, wavelet coherence is investigated as an exploratory tool.
### Description of Dataset
Ethereum blockchain data were collected by query from the publicly available BigQuery database. Data spanning 26 November 2021 to 27 April 2022 were used in final modeling. Data were retrieved on a block-by-block basis with the final modeled dataset consisting of 953,336 blocks, averaging one block every 14 s, with an average of 203 transactions per block. Transactions were grouped by block to determine the block minimum, maximum and percentile gas price data, block transaction and contract counts. The gas used, base fee and size are provided on a block-by-block basis as is on the blockchain. The value of ETH cryptocurrency is known to affect gas prices [16]. Minute-wise tick opening prices of ETH, in US Dollar Tether, a stable coin tied to the price of the US dollar, were retrieved from Binance exchange historic records [42]. There were no missing data in the dataset.
### Wavelet Coherence
Wavelet coherence is a bi-variate framework that probes the interaction of two time series on the basis of a wavelet function, over varying frequency scales, through time [17]. A wavelet \(\psi\) is a time and frequency localized function with zero mean. The popular Morlet wavelet can be defined as in Equation (1), with \(\omega_{0}\) denoting the dimensionless central frequency.

\[\psi(t)=\pi^{-1/4}e^{i\omega_{0}t}e^{-t^{2}/2} \tag{1}\]
In order to compute the wavelet coherence spectrum, we first compute the Continuous Wavelet Transforms (CWTs) and cross-wavelet transform for the two time series. Equation (2) shows the CWT \(W_{x}(\tau,s)\) of time series \(x(t)\). The CWT is yielded by the inner product of \(x(t)\) with a continuous family of "daughter wavelets" \(\psi_{\tau,s}(t)\).
\[W_{x}(\tau,s)=\langle x(t),\psi_{\tau,s}(t)\rangle=\int\limits_{-\infty}^{+\infty}x(t)\,\psi_{\tau,s}^{*}(t)\,dt \tag{2}\]
Equation (3) shows the general form of a daughter wavelet. Daughter wavelets result from stretching the mother wavelet by varying \(|s|\), and translating through time by varying \(\tau\) with \(s,\tau\in R\), \(s\neq 0\). Complex conjugation of the daughter functions is denoted by \(\psi_{\tau,s}^{*}\). Varying \(s,\tau\) in a continuous manner yields the set of daughter wavelets used in the CWT.
\[\psi_{\tau,s}(t)=|s|^{-1/2}\,\psi\!\left(\frac{t-\tau}{s}\right) \tag{3}\]
Equation (4) shows the cross-wavelet transform \(W_{xy}(\tau,s)\), which can be defined in terms of the CWT of the investigated time series \(W_{x}(\tau,s)\) and \(W_{y}^{*}(\tau,s)\). These wavelet transforms can be interpreted as \(\tau\times s\) matrices, indicating amplitude at scale s and time \(\tau\). \(|W_{xy}(\tau,s)|\), the cross-wavelet power, indicates local covariance.
\[W_{xy}(\tau,s)= W_{x}(\tau,s) W_{y}^{*}(\tau,s) \tag{4}\]
Equation (5) shows wavelet coherence \(R_{xy}^{2}(\tau,s)\), which can be estimated by using the cross-wavelet and auto-wavelet power spectrum, as laid out in Torrence et al. [43]. The \(\tau\times s\) wavelet coherence matrix can be viewed as the time and frequency localized correlation of two time series on the basis of a wavelet convolution. The bi-wavelets package used in this project uses a modified version of this equation, as found in Liu et al. [44]. \(S\) is a smoothing operator, which is achieved by convolution in time and scale.
\[R_{xy}^{2}(\tau,s)=\frac{\left|S\left(s^{-1}W_{xy}(\tau,s)\right)\right|^{2}}{S\left(s^{-1}\left|W_{x}(\tau,s)\right|^{2}\right)\,S\left(s^{-1}\left|W_{y}(\tau,s)\right|^{2}\right)} \tag{5}\]
Coherency does not distinguish between positive and negative correlation due to the squaring of terms. Equation (6) shows the phase difference, incorporating the imaginary \(\mathfrak{I}\) and real \(\mathfrak{R}\) parts of the power spectrum, which can be used to differentiate between these movements and give information as to the leading/lagging nature of the correlation.
\[\phi_{xy}=\tan^{-1}\left(\frac{\mathfrak{I}\left[S\left(s^{-1}W_{xy}(\tau,s)\right)\right]}{\mathfrak{R}\left[S\left(s^{-1}W_{xy}(\tau,s)\right)\right]}\right) \tag{6}\]
A phase is visualized as arrows in high-correlation regions. Right-pointing arrows indicate signals in phase, while left-pointing arrows indicate signals in anti-phase. The lead-lag relationship is indicated by arrows pointing right-up for first variable leading and left-down for second variable leading.
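As an illustration of Equations (1)-(3), the sketch below computes a Morlet-based CWT directly in numpy by correlating the signal with a family of daughter wavelets. It is only illustrative: the study itself uses the bi-wavelets package for the coherence computations, and the truncation of the wavelet support and the normalization here are simplifications.

```python
import numpy as np

def morlet(t, omega0=6.0):
    """Morlet mother wavelet of Equation (1)."""
    return np.pi ** -0.25 * np.exp(1j * omega0 * t) * np.exp(-t ** 2 / 2)

def cwt(x, scales, dt=1.0, omega0=6.0):
    """CWT of Equations (2)-(3): correlate x with conjugated daughter wavelets.
    Returns a (len(scales), len(x)) complex matrix of coefficients W_x(tau, s)."""
    x = np.asarray(x, dtype=float)
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + dt, dt)                      # truncated support
        daughter = np.abs(s) ** -0.5 * morlet(t / s, omega0) * dt  # Equation (3)
        # inner product of Equation (2), implemented as a reversed convolution
        out[i] = np.convolve(x, np.conj(daughter)[::-1], mode="same")
    return out

# illustrative usage: 10 days of 5-minute samples with a 1-day (288-point) periodicity
signal = np.sin(2 * np.pi * np.arange(2880) / 288) + 0.1 * np.random.randn(2880)
coeffs = cwt(signal, scales=[36, 72, 144, 288])
print(np.abs(coeffs).shape)   # (4, 2880)
```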
### Wavelet Denoising
The Discrete Wavelet Transform (DWT) is a variation on the wavelet transform that uses a discrete set of mutually orthogonal wavelet scales as opposed to the continuous set found in the CWT. DWT decomposition is typically achieved through use of high/low-pass convolution filter banks. These convolution filters are designed using wavelet basis functions with perfect reconstruction and no-aliasing constraints. High- and low-pass filters are applied to the input to generate Detail \(D_{j}\) and Approximation coefficients \(A_{j}\). The same filters can be recursively applied to these coefficients to yield additional decomposition levels, as shown in Figure 2[45]. DWT can be used to denoise specific frequency bands of a signal by applying a threshold to particular decomposition levels. The signal can then be reconstructed from the thresholded decomposition coefficients, using an additional set of inverse filters, which are orthogonal to the decomposition filters [21]. A hard thresholding approach is used, where values in decomposition level \(D_{j}\) that are below a threshold \(u\) are set to zero. Equations (7)-(9) show calculation of the threshold \(u\) for a given decomposition level \(D_{j}\). The threshold \(u\) is calculated based on the mean absolute deviation of the decomposition level \(MAD\left(D_{j}\right)\), a user-defined denoising factor \(\lambda\), and the number of time points in the decomposition level \(\#D_{j}\).
\[MAD\left(D_{j}\right)=\frac{1}{n}\sum_{i=1}^{n}\left|D_{ji}-\bar{D}_{j}\right| \tag{7}\]
\[\sigma_{D_{j}}=\frac{1}{\lambda}\,MAD\left(D_{j}\right) \tag{8}\]
\[u_{D_{j}}=\sigma_{D_{j}}\sqrt{3\ln(\#D_{j})} \tag{9}\]
The effect of denoising can be evaluated by comparing the performance of models on raw vs. denoised data. However, this approach requires modeling for all denoising parameter sets to be tested, which is computationally expensive. The Signal-to-Noise Ratio (SNR) of the denoised signal and the RMSE of the raw vs. denoised signal can be used to indicate the effectiveness of the denoising approach and degradation of the signal, as seen in Qiu et al. [22]. Figure 3 shows how these RMSE and SNR measures are affected by altering the denoising factor \(\lambda\).
Figure 3: Evaluation of denoising of minimum block gas price using wavelet threshold denoising. Average RMSE and SNR of the top 5 wavelets by SNR shown across a range of denoising factor, \(\lambda\), values. As \(\lambda\) is increased, we see a decrease in RMSE and SNR.
Figure 2: Wavelet Decomposition.
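A minimal sketch of this hard-thresholding scheme using the PyWavelets library is given below (the wavelet name, decomposition level and \(\lambda\) value are illustrative choices; the study evaluates several wavelets, including DB4 and Bior 3.3, and a range of \(\lambda\) values as in Figure 3):

```python
import numpy as np
import pywt

def wavelet_hard_denoise(x, wavelet="db4", level=4, lam=1.0):
    """Hard-threshold denoising following Equations (7)-(9): each detail level D_j
    is thresholded at u = sigma * sqrt(3 * ln(#D_j)), with sigma = MAD(D_j) / lambda."""
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [A_level, D_level, ..., D_1]
    approx, details = coeffs[0], coeffs[1:]
    thresholded = [approx]
    for d in details:
        mad = np.mean(np.abs(d - np.mean(d)))        # Equation (7)
        sigma = mad / lam                            # Equation (8)
        u = sigma * np.sqrt(3 * np.log(len(d)))      # Equation (9)
        thresholded.append(pywt.threshold(d, u, mode="hard"))
    return pywt.waverec(thresholded, wavelet)[: len(x)]

# illustrative usage on a noisy synthetic series
noisy = np.sin(np.arange(1000) / 50.0) + 0.3 * np.random.randn(1000)
denoised = wavelet_hard_denoise(noisy, lam=1.0)
print(denoised.shape)
```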
### Down-Sampling and Normalization
Data are down-sampled to the mean over a 5 min window before modeling. Initial modeling approaches truncated outliers in the block minimum gas price to a max of 2 standard deviations with min/max normalization. Z-score normalization, as shown in Equation (10), with no outlier truncation was used in later approaches. Z-score normalization of datapoint \(x\) to \(x^{\prime}\) involves subtraction of the sample mean \(\mu\) and division by standard deviation, \(\sigma\).
\[x^{\prime}=\frac{x-\mu}{\sigma} \tag{10}\]
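A small pandas sketch of these two steps follows (the DataFrame layout and column name are illustrative):

```python
import pandas as pd

def preprocess(blocks: pd.DataFrame) -> pd.Series:
    """Down-sample block-level data to 5-minute means, then z-score normalize (Equation (10)).
    `blocks` is assumed to be indexed by block timestamp with a 'min_gas_price' column."""
    five_min = blocks["min_gas_price"].resample("5min").mean()
    return (five_min - five_min.mean()) / five_min.std()
```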
### Matrix Profile
The matrix profile is a companion time series [46] that indicates the similarity of subsequences in the parent time series. Given a subsequence size and an input time series, the distance profile indicates the minimum Euclidean distance, in terms of subsequence similarity, to another subsequence of that size; points in the time series with a high matrix profile indicate the start of a discord, which is a subsequence with little to no repetition in the time series, and low values indicate a motif, which is a subsequence that repeats within the time series [46]. It has been shown [36] that motifs and matrix profiles can be effective at improving the performance of LSTMs.
The matrix profile is calculated for minimum gas price data, and used as an additional input in modeling, to indicate the proximity of the nearest discord. The matrix profile is calculated on a rolling basis as the forecasting windows move forward to prevent leakage into the training data. The matrix profile series is always one full window size shorter than the input data. One window size of data is removed from start of the gas price and other variables, after calculation of the matrix profile, to align the size of the inputs model. Figure 4 shows the minimum gas price for a training example with its companion matrix profile. The matrix profile foundation Python package is used for the computation of matrix profiles throughout with a window size of 1 day.
Figure 4: Minimum gas price and matrix profile.
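The sketch below computes such a matrix profile with the stumpy library as an alternative to the matrix profile foundation package used in the study (the window of 288 points corresponds to 1 day at the 5-min resolution; the trimming of other inputs described above is indicated in the comment):

```python
import numpy as np
import stumpy

WINDOW = 288   # 1 day at 5-minute resolution

def matrix_profile(min_gas_price: np.ndarray, window: int = WINDOW) -> np.ndarray:
    """Matrix profile of the minimum gas price series within a training window.
    Column 0 of stumpy.stump is the profile itself; its length is
    len(series) - window + 1, so the other model inputs are trimmed by roughly
    one window before the profile is fed in as an additional feature."""
    result = stumpy.stump(min_gas_price.astype(np.float64), m=window)
    return result[:, 0].astype(np.float64)

# illustrative usage on synthetic gas price data
profile = matrix_profile(np.random.lognormal(3.5, 0.3, size=2000))
print(profile.shape)   # (2000 - 288 + 1,)
```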
## 6 Methods for Data Modeling
### Long Short-Term Memory (LSTM)
Long Short-Term Memory (LSTM) networks feature in a wide range of time-series forecasting applications, including stock market price prediction [22; 38; 39; 47], and have been used in previous gas price forecasting studies [14]. LSTM networks were developed to address the problem of exploding and vanishing gradients in recursive neural networks (RNNs), particularly when information-carrying inputs are found several timesteps from the forecast window.
LSTM networks can be trained using a modified backpropagation algorithm, backpropagation through time, with gradient descent and its variations. The ADAM optimization algorithm is used throughout this project, and _tanh_ activation is used in all cases to allow for GPU support with the cuDNN library.
### Recursive and Hybrid Strategies
There are several forecasting strategies available for tackling the challenge of multi-step forecasting [48]. As shown in Equation (11), the recursive strategy first trains a model \(f\) to predict one timestep ahead \(y_{t+1}\) given an input series of \(N\) observations; \(t\in\{n,\ldots,N-1\}\). Extension of the forecast horizon past a one timestep lookahead is achieved by recursively appending the output to the input series and then feeding this new appended input back into the model.
\[y_{t+1}=f(y_{t},...,y_{t-n+1}) \tag{11}\]
Equation (12) shows a direct strategy, which trains an independent model \(f_{h}\) for each timestep in the lookahead horizon H; \(h\in\{1,\ldots,H\}\)
\[y_{t+h}=f_{h}(y_{t},...,y_{t-n+1}) \tag{12}\]
Equation (13) shows a direct-recursive hybrid strategy, which combines the direct and recursive approaches. An initial model \(f_{0}\) is trained as in the above models. A separate model \(f_{1}\), as in the direct strategy, is then trained using the appended input of \(f_{0}\), as in the recursive strategy. This process is recursively applied, learning \(H\) models \(f_{h}\). This takes advantage of the stochastic dependency of the recursive approach while addressing its tendency for compounding errors with the direct multi-model approach. A hybrid LSTM model was trained to a lookahead of 10 timesteps, using hyperparameters from Mars et al. [14] for the base one-step lookahead model.
\[y_{t+h}=f_{h}(y_{t+h-1},...,y_{t-n+1}) \tag{13}\]
Equation (14) shows a multiple output strategy. This strategy trains a single model \(f\), which outputs \([y_{t+1},...,y_{t+h}]\) given \((y_{t},...,y_{t-n+1})\). This approach allows for the modeling of dependency on future values and addresses the stochastic dependency and compounding error issues found in the single-output mapping models previously discussed. This is of particular concern when considering extended forecast horizons.
\[[y_{t+1},...,y_{t+h}]=f(y_{t},...,y_{t-n+1}) \tag{14}\]
Multiple output models can be further applied in a direct-recursive manner. The \(H\) step horizon can be segregated into several blocks. An initial model is trained to output the first block in the horizon; then, recursive training of new models with the inclusion of the previous models output as input is applied to generate the full horizon.
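A minimal sketch of the direct-recursive hybrid loop of Equation (13) is shown below, with a generic scikit-learn-style regressor factory standing in for the LSTM models used in the study (all names and shapes are illustrative):

```python
import numpy as np

def fit_hybrid(series, n_lags, horizon, make_model):
    """Direct-recursive hybrid strategy of Equation (13): model h is trained on the
    original lags plus the in-sample predictions of models 1..h-1."""
    series = np.asarray(series, dtype=float)
    rows = range(n_lags - 1, len(series) - horizon)
    X = np.array([series[t - n_lags + 1: t + 1][::-1] for t in rows])   # [y_t, ..., y_{t-n+1}]
    Y = np.array([series[t + 1: t + 1 + horizon] for t in rows])
    models, X_aug = [], X
    for h in range(horizon):
        model = make_model()
        model.fit(X_aug, Y[:, h])
        models.append(model)
        yhat = model.predict(X_aug).reshape(-1, 1)
        X_aug = np.hstack([yhat, X_aug])        # recursive step: feed the prediction back in
    return models

def predict_hybrid(models, last_lags):
    """Roll the trained models forward from the most recent n_lags observations."""
    x = np.asarray(last_lags, dtype=float)[::-1].reshape(1, -1)   # newest value first
    forecasts = []
    for model in models:
        yhat = float(model.predict(x)[0])
        forecasts.append(yhat)
        x = np.hstack([np.array([[yhat]]), x])
    return forecasts

# illustrative usage with a linear stand-in for the per-step LSTMs
from sklearn.linear_model import Ridge
y = np.sin(np.arange(500) / 10.0)
models = fit_hybrid(y, n_lags=24, horizon=10, make_model=lambda: Ridge())
print(predict_hybrid(models, last_lags=y[-24:]))
```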
### Encoder-Decoder and Attention Mechanism
Encoder-decoder networks function by first passing inputs into an encoder network. The encoder generates an intermediate representation of the inputs, which contains sufficient information for the decoder network to generate the target output. Encoder-decoder
networks were originally developed to address sequence-to-sequence prediction problems in natural language processing, but they have since been widely adapted to time-series forecasting. The attention mechanism is a development that involves weighting outputs with an alignment of queries and keys [22]. A schematic of this is shown in Figure 5.
Equations (15)-(17) show the generalized attention mechanism, involving three primary components: queries \(Q\), keys \(K\), and values \(V\). The dot product of query vectors \(q\) and key vectors \(k_{i}\), their alignment scores, is passed through a softmax activation to generate weights \(\alpha_{q,k_{i}}\). The final attention score or context vector \(Context(q,K,V)\) is the sum of all weighted value vectors \(\alpha_{q,k_{i}}v_{k_{i}}\).
\[e_{q,\ k_{i}}=q\cdot k_{i} \tag{15}\]
\[\alpha_{q,k_{i}}=softmax\big{(}e_{q,\ k_{i}}\big{)} \tag{16}\]
\[Context(q,K,V)=\sum_{i}\alpha_{q,\,k_{i}}v_{k_{i}} \tag{17}\]
Figure 5: Schematic of Attention Head.
Equations (18)-(20) show the attention mechanism used in this project. \(k_{i}\) and \(v_{k_{i}}\) are both set to the encoder hidden state at timestep \(i\), \(h_{i}\); \(q\) is set to the hidden states of an alignment model \(\tilde{h}\), which takes the encoder output \(h_{f}\) as input. The mechanism essentially trains the alignment model to weight all hidden states \(\left[h_{i}\right]\) of the encoder to generate the context vectors. These weighted hidden states are then passed to a feedforward layer to generate the forecast, or to a second attention layer followed by a feedforward layer in the two-layer models.
\[e_{\tilde{h}_{i}\ h_{i}}=\tilde{h}_{i}\cdot h_{i} \tag{18}\]
\[\alpha_{q,k_{i}}=softmax(e_{\tilde{h}_{i}\ h_{i}}) \tag{19}\]
\[Context(q,K,V)=\alpha_{q,\ k_{i}}\cdot h_{i} \tag{20}\]
A multiheaded approach is applied, with one attention head, and one set of weighted hidden states being constructed for each input, with each head being fed all inputs. In the final model, these context vectors are concatenated and passed to a second layer of attention heads before passing this to a final linear layer. Both encoder and alignment LSTMs are set to 30 units each in the multiheaded models and 200 units each in the single-headed model. Training times were found to be similar for a multiheaded vs. single-head model with these hyperparameters of units.
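A numpy sketch of the attention computation of Equations (15)-(17) (and of its specialization in Equations (18)-(20), where keys and values are the encoder hidden states and the query comes from the alignment model) follows; the shapes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(encoder_states, query):
    """Equations (15)-(17): keys and values are both the encoder hidden states h_i,
    the query is the alignment-model state; the context vector is the
    softmax-weighted sum of the hidden states."""
    scores = encoder_states @ query          # e_{q,k_i} = q . k_i
    weights = softmax(scores)                # alpha_{q,k_i}
    return weights @ encoder_states          # Context(q, K, V)

# illustrative shapes: 30 encoder timesteps, hidden size 16
h = np.random.randn(30, 16)
q = np.random.randn(16)
print(attention_context(h, q).shape)         # (16,)
```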
### CNN-LSTM
Convolutional Neural Networks (CNNs) consist of a bank of convolution filters and pooling layers. Convolutional layers function by scanning a convolution filter kernel across the data to generate new combined features. CNNs were originally developed for image processing, using two-dimensional filters. Multivariate time-series data can be treated in the same manner by representation in structured matrix form or by simply scanning a 1-D filter across each variable independently. Convolution layers are typically followed by a non-linear activation function, such as _tanh_ or ReLU. Pooling layers then aggregate the output of convolution layer, typically taking the minimum, maximum, or average of a number of kernel outputs to generate a single datapoint [38].
A CNN-LSTM then passes the output of the CNN to an LSTM. This combined approach has seen much use in time-series modeling, particularly for financial data with complex multivariate dependencies. This project employs two layers of 1-D convolutional filters with a _tanh_ activation function and no pooling layer. The final model uses a multiheaded architecture with independent convolution layers being fed all inputs. The number of heads is set to the number of inputs. After a grid search on the first month of data, a final filter size of 7, with nine filters per convolutional layer, was used. These were fed to two LSTM layers of 100 units each.
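A Keras sketch of a single head of this architecture is given below (the two convolution layers, filter count, kernel size, tanh activation and 100-unit LSTM layers follow the description above; the input window length, single-output head and compilation settings are illustrative):

```python
import tensorflow as tf

N_TIMESTEPS, N_FEATURES = 288, 1   # illustrative input window

def build_cnn_lstm_head():
    """One CNN-LSTM head as described above: two 1-D convolution layers
    (nine filters, kernel size 7, tanh, no pooling) feeding two 100-unit LSTMs."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
        tf.keras.layers.Conv1D(9, 7, padding="same", activation="tanh"),
        tf.keras.layers.Conv1D(9, 7, padding="same", activation="tanh"),
        tf.keras.layers.LSTM(100, return_sequences=True),
        tf.keras.layers.LSTM(100),
        tf.keras.layers.Dense(1),   # single-step output; the final model uses multiple heads
    ])

model = build_cnn_lstm_head()
model.compile(optimizer="adam", loss="mse")
model.summary()
```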
### Training Strategies
A sliding window of fixed input timesteps followed by a fixed number of forecast timesteps is used to generate training/validation examples. In all models, 70% of these examples are used for training, and 30% are used for validation. A walk-forward approach is desirable; however, the dataset contains over 40,000 timesteps even when down-sampled to a 5 min resolution. It is not feasible to walk forward with every timestep. A daily walk is employed in the univariate, single-step lookahead analysis displayed below in Figure 6. In all other cases, a model, or set of models in the hybrid strategy, is trained and validated on one month of data with metrics averaged over 5 months. Models are trained for 15 epochs in all cases, with callbacks set to save weights for the lowest validation loss model during training.
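A sketch of the sliding-window example generation and the training protocol just described (70:30 split, 15 epochs, lowest-validation-loss checkpointing) is shown below; the stand-in model, window length, synthetic data and file name are illustrative:

```python
import numpy as np
import tensorflow as tf

def make_windows(series, n_in, n_out):
    """Sliding windows of n_in input steps followed by n_out forecast steps."""
    X, Y = [], []
    for start in range(len(series) - n_in - n_out + 1):
        X.append(series[start: start + n_in])
        Y.append(series[start + n_in: start + n_in + n_out])
    return np.array(X)[..., None], np.array(Y)

X, Y = make_windows(np.random.randn(2000).cumsum(), n_in=288, n_out=1)
split = int(0.7 * len(X))                          # 70:30 training/validation split

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(288, 1)),
                             tf.keras.layers.LSTM(32),       # stand-in for the models above
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best.weights.h5", monitor="val_loss",
    save_best_only=True, save_weights_only=True)   # keep lowest-validation-loss weights

model.fit(X[:split], Y[:split],
          validation_data=(X[split:], Y[split:]),
          epochs=15, callbacks=[checkpoint])
```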
## 7 Results
### Wavelet Coherence
Figure 7 shows coherence plots of block minimum gas price versus (a) Block Base Fee, (b) Gas Used, (c) Smart Contract-Type Transaction Counts, and (d) ETH/USDT ticker price. Base fee shows high correlation with signals in phase. Low-correlation areas can be seen in the time scale of 60-1000 min over narrow time periods. As can be seen from Figure 7a, volumes of high-priority, high-tipping transactions are sufficient to deviate the block minimum transaction gas fee selected by miners from the block base fee. The gas used, ETH/USDT and contract count plots display noisy spectra at low time scales. Contract counts show strong anti-phase correlation at 1000 to 2000 min time scales across the majority of time periods. This is consistent with findings on a 1-day timescale in Donmez et al. [24] regarding smart contract-type transactions being of lower urgency, and having lower gas price, than ETH transfers.
Figure 6: Univariate 1-step Walk-Forward Metrics. Blue lines refer to validation metrics for walk-forward, univariate, single-timestep lookahead model. Model is trained on one month of data validated with a 70:30 training: validation split; then, the data training/validation window was walked forward one day. \(x\)-axis represents start of the training period. (**a**) represents RMSE, (**b**) represents MAE, (**c**) represents MAPE, (**d**) represents R\({}^{2}\).
### Single Step Lookahead
Figure 6 shows validation metrics for a univariate LSTM model, predicting one step ahead. The dramatic change in RMSE and R\({}^{2}\) seen in Figure 6a,c, in windows starting in January and March, is associated with extreme minimum gas price values in the validation data. This highlights the volatile nature of the data, the sensitivity of metrics to changes in the data, and the need for a back-testing strategy to account for this behavior.
Table 1 shows validation metrics for an LSTM model using minimum, 5th and 95th percentile gas prices, with additional variables. Hyperparameters as optimized by Mars et al. [14] were used for basic LSTM modeling. Increasing the depth or width of the network did not noticeably improve performance. Additionally, the 10th to 90th block gas price percentiles in increments of 10 were tested as inputs with marginal differences in metrics. This suggests either that this architecture is unable to model the complex dependencies between the variables or that the majority of variance in a one-step lookahead scenario is accounted for by the minimum gas price variable.
### Hybrid Models
Direct-recursive hybrid strategies were employed with univariate and multivariate models. The base one-step lookahead models in the multivariate test showed poor performance metrics on variables aside from the minimum gas price. The ability to accurately predict multiple outputs indicates potential avenues for the development of a multivariate approach.
Tables 2-5 show the performance metrics for hybrid and multiple output models. Modeling strategies are applied to five separate one-month blocks of data; then, monthly metrics are averaged to yield Tables 2-5. Figure 8 shows RMSE and R\({}^{2}\) degradation as the lookahead horizon is extended.
| Variable | RMSE | MAE | MAPE | R\({}^{2}\) |
| --- | --- | --- | --- | --- |
| No Additional Variables | 20.28 | 10.50 | 0.142 | 0.680 |
| Block Size (Gas) | 19.18 | 9.55 | 0.125 | 0.715 |
| Base Fee | 19.89 | 10.28 | 0.132 | 0.693 |
| Transaction Count | 20.00 | 9.94 | 0.129 | 0.687 |
| Block Size (Bytes) | 19.96 | 10.16 | 0.133 | 0.687 |
| ETH/USDT | 20.14 | 10.42 | 0.135 | 0.685 |
| Average Gas Price | 20.11 | 10.46 | 0.142 | 0.683 |
| Maximum Gas Price | 20.42 | 10.75 | 0.140 | 0.674 |
| Smart Contract Count | 20.09 | 10.40 | 0.135 | 0.684 |
| All of Above | 19.35 | 9.74 | 0.126 | 0.711 |

Table 1: Multivariate Single-Lookahead LSTM Error Metrics. Validation metrics for multivariate, single-step lookahead LSTM models. Average of 5 models, each trained/validated on a different month of data.
Figure 7: Wavelet Coherence Plots. Wavelet coherence plots of secondary variables against block minimum gas price. Time scale is in minutes. Period is in Month–Date. Data are down-sampled to a 5 min resolution before plotting. Heat indicates correlation and arrows indicate phase. Results show coherence plots of block minimum gas price versus (**a**) Block Base Fee, (**b**) Gas Used, (**c**) Smart Contract-Type Transaction Counts, and (**d**) ETH/USDT ticker price. Base fee shows high correlation with signals in phase. Low-correlation areas can be seen in the time scale of 60–1000 min over narrow time periods.
| Variable | RMSE | MAE | MAPE | R\({}^{2}\) |
| --- | --- | --- | --- | --- |
| Multi-Att 2 Layer MP Rev | 25.07 | 14.02 | 0.193 | 0.509 |
| Multi-Att 2 Layer Uni MP Rev | 25.54 | 14.17 | 0.194 | 0.501 |

* Model parameter shorthand: Att = Attention; Multi = Multiheaded; MP = Matrix Profile; Uni = Univariate; Rev = MP fed in reverse; DB4 = DB4-denoised gas price; Bior 3.3 = Bior 3.3-denoised gas price.

Table 4: Multiple Output Model: Average of 5 Lookaheads *, All Months. Validation metrics averaged over the first 5 lookaheads, with models trained/validated on each of the 5 one-month blocks of data.
| Variable | RMSE | MAE | MAPE | R\({}^{2}\) |
| --- | --- | --- | --- | --- |
| Att 1 Head | 27.15 | 15.89 | 0.226 | 0.435 |
| Multi-Att 1 Layer | 28.46 | 15.86 | 0.245 | 0.389 |
| Multi-Att 2 Layer | 24.70 | 14.00 | 0.199 | 0.521 |
| Multi-Att 2 Layer MP | 25.63 | 14.33 | 0.206 | 0.486 |
| Multi-Att 2 Layer Uni | 25.74 | 14.47 | 0.190 | 0.484 |
| Multi-Att 2 Layer Uni MP | 27.38 | 15.76 | 0.220 | 0.421 |

* Model parameter shorthand: Att = Attention; Multi = Multiheaded; MP = Matrix Profile; Uni = Univariate; Rev = MP fed in reverse; DB4 = DB4-denoised gas price; Bior 3.3 = Bior 3.3-denoised gas price.

Table 2: Hybrid Model: Average of 5 Lookaheads *, All Months. Validation metrics averaged over the first 5 lookaheads, with models trained/validated on each of the 5 one-month blocks of data.
| Variable | RMSE | MAE | MAPE | R\({}^{2}\) |
| --- | --- | --- | --- | --- |
| CNN | 27.30 | 16.25 | 0.230 | 0.414 |
| CNN MP FWD | 27.68 | 16.42 | 0.238 | 0.414 |
| Multi-Att 2 Layer | 27.00 | 15.60 | 0.217 | 0.436 |
| Multi-Att 2 Layer MP | 28.27 | 17.30 | 0.237 | 0.402 |
| Multi-Att 2 Layer MP DB4 | 27.13 | 15.37 | 0.213 | 0.435 |
| Multi-Att 2 Layer Uni Bior 3.3 | 27.85 | 16.38 | 0.232 | 0.410 |
| Hybrid | 26.08 | 13.09 | 0.171 | 0.5421 |
| Hybrid MP | 27.02 | 14.29 | 0.195 | 0.5166 |
| Hybrid MP DB4 | 27.27 | 14.34 | 0.193 | 0.5082 |

* Model parameter shorthand: Att = Attention; Multi = Multiheaded; MP = Matrix Profile; Uni = Univariate; Rev = MP fed in reverse; DB4 = DB4-denoised gas price; Bior 3.3 = Bior 3.3-denoised gas price.

Table 3: Hybrid Model: Average of 10 Lookaheads *, All Months. Validation metrics averaged over the first 10 lookaheads, with models trained/validated on each of the 5 one-month blocks of data.
Table 5: Multiple Output Model: Average of 10 Lookaheads *, All Months. Validation metrics for multivariate, single-step lookahead LSTM models. Average of 10 models, each trained/validated on a different month of data.

| Variable | RMSE | MAE | MAPE | \(R^{2}\) |
| --- | --- | --- | --- | --- |
| Multi-Att 2 Layer MP Rev | 26.78 | 15.49 | 0.221 | 0.452 |
| Multi-Att 2 Layer MP Rev DB4 | 26.82 | 15.17 | 0.212 | 0.450 |
| Multi-Att 2 Layer MP Rev Bior 3.3 | 27.25 | 15.65 | 0.228 | 0.431 |
| Hybrid MP Rev | 27.33 | 13.92 | 0.184 | 0.509 |
| Hybrid MP Rev DB4 | 27.40 | 13.82 | 0.179 | 0.508 |

\* Model parameter shorthand: Att = Attention; Multi = Multiheaded; MP = Matrix Profile; Uni = Univariate; Rev = MP fed in reverse; DB4 = DB4-denoised gas price; Bior 3.3 = Bior 3.3-denoised gas price.
### CNN-LSTM
The grid search over multiheaded models found that nine filters with a kernel size of seven achieved the lowest validation loss on the first month of data. The final
Figure 8: Validation Forecasts for Different Methods and Lookahead Window Lengths from 5 to 50 min (**a**) Hybrid 5 Min Lookahead; (**b**) Multihead Attention 5 Minute Lookahead; (**c**) Hybrid 50 Min Lookahead; (**d**) Multihead Attention 50 Minute Lookahead. Gas price values are quoted in gwei.
model was then trained with the same inputs as the attention models. Metrics were comparable to the attention models and inferior to the hybrid model. The use of two-dimensional convolution filters has seen use in previous works involving multivariate time-series data, and we would recommend their investigation in future works [38].
### Attention
The attention models were trained with a single-headed architecture, a multiheaded architecture, one and two attention layers, and with wavelet and matrix-profile data preprocessing. The inclusion of multiple heads and multiple layers was found to improve validation metrics. Additionally, multivariate attention models showed better performance than univariate ones, in contrast to the hybrid models. This may be because the more complex architecture is better suited to learning the complex dependencies between variables.
Figure 9 shows the performance of models at different lookaheads. Hybrid models were found to significantly outperform attention models at shorter lookaheads; however, only the univariate hybrid model has comparable metrics to the attention models at longer lookaheads. Averaging over all lookaheads, the best attention and hybrid models had similar RMSE; however, the hybrid model outperformed on other metrics. For reference, Figure 8 shows validation forecasts of hybrid and attention models for the 5 and 50 min lookahead. Comparing Figures 8(a) and 8(b), it is evident that the hybrid model is much better able to track the stochastic movements of the data at the 5 min lookahead.
Figure 9: Performance Metrics (R\({}^{2}\) and RMSE) at different Lookaheads of the various models. (a) refers to R\({}^{2}\) metrics for different lookaheads and (b) refers to RMSE metrics for different lookaheads.
### Matrix Profile
The matrix profile was fed as input to hybrid and attention models in both reverse and forward chronological order. In the case of hybrid models, addition of the matrix profile to the training examples had a negative effect on validation metrics. There is little difference in metrics between the reverse and forward matrix profile models in the hybrid case. This is likely because hyperparameters had been tuned for a univariate model; the model may also not be sufficiently complex to extract the features needed to make use of the matrix profile. Optimizing the base one-step lookahead model with these inputs would be of interest in future works.
The addition of the matrix profile as an input to attention multiple output models showed inconsistent results. Inclusion of the forward matrix profile had a negative effect on the validation metrics in all attention models, and it had a marginal effect on the CNN model. Interestingly, the addition of the reversed matrix profile noticeably improved R\({}^{2}\) in the univariate five-step and all 10-step lookahead attention models as opposed to a decrease seen with the forward matrix profile. A reversed matrix profile did not improve metrics in the five-step lookahead multivariate model; however, the decline in metrics was less pronounced than with the addition of the forward matrix profile; the reversed matrix profile performed better than the forward in all attention models. This behavior could be explained by the introduction of some degree of bi-directionality into the models.
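As a rough illustration of how such a profile can be generated in practice, the sketch below uses the STUMPY library on a toy gas-price series; the window length and the synthetic series are illustrative assumptions rather than the settings used for the reported results.

```python
import numpy as np
import stumpy

# Toy gas-price series (gwei), a stand-in for the real 5-min resolution data.
rng = np.random.default_rng(0)
gas_price = 50 + 10 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 2, 2000)

m = 60  # subsequence window length (illustrative choice)
mp = stumpy.stump(gas_price, m=m)

profile = mp[:, 0].astype(float)  # matrix profile distances
indices = mp[:, 1].astype(int)    # index of each subsequence's nearest neighbour

# The profile (optionally reversed, as tested above) can then be stacked with the
# price series as an extra input channel for the forecasting models.
model_input = np.column_stack([gas_price[: len(profile)], profile])
print(model_input.shape)
```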
### Wavelet Denoising
Wavelet denoising was applied with a Daubechies mother wavelet with scaling function 4, to the second decomposition level, \(\lambda=3\). Denoising with a biorthogonal wavelet with scaling functions (3,3) to the second decomposition level and \(\lambda=10\) was also tested, showing a noticeable decrease in R\({}^{2}\). The biorthogonal wavelet was selected as this wavelet provided the greatest signal to noise ratio gains at \(\lambda=10\), with RMSE of the denoised signal vs. the original of 2.37. The Daubechies wavelet was selected due to its popularity and use in wavelet coherence. In all cases, wavelet denoising was found to have marginal to negative effects on validation metrics. Between the selection of decomposition levels, mother wavelet, and thresholding parameters, the parameter space for wavelet denoising is considerable. A wider search of this parameter space would be of interest in future work.
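A minimal sketch of this denoising step, using PyWavelets with a db4 wavelet, two decomposition levels and \(\lambda=3\); the synthetic signal and the soft-thresholding mode are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
signal = 50 + 10 * np.sin(np.linspace(0, 20, 1024)) + rng.normal(0, 3, 1024)

wavelet, level, lam = "db4", 2, 3.0  # Daubechies-4, 2 decomposition levels, threshold lambda

# Decompose, threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(signal, wavelet, level=level)
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, value=lam, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, wavelet)[: len(signal)]

rmse = np.sqrt(np.mean((denoised - signal) ** 2))
print(f"RMSE of denoised vs. original signal: {rmse:.2f}")
```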
## 8 Discussion
### Research Questions
In terms of Research Question 1, we found hybrid, multiheaded CNN-LSTM and attention approaches to be the best methods to forecast block minimum gas price. These were successfully applied to forecast multiple timesteps ahead, up to 50 min. The hybrid univariate model outperformed other models, particularly at earlier lookaheads. Attention models had comparable RMSE to the hybrid model at longer lookaheads but were outperformed on other metrics.
As regards Research Question 2, whether wavelet transforms and the matrix profile can improve forecasting metrics, or provide insight into gas price mechanics, wavelet denoised and matrix profile data were tested with a variety of modeling approaches, with mixed results. Wavelet denoising was not found to have any beneficial impacts on validation metrics; however, a narrow set of possible parameters was tested, so broad conclusions cannot be drawn as to the utility of the method in this domain. Matrix profile data fed in forward chronological order were not found to improve validation metrics in any
models. However, interestingly, feeding matrix profile data in reverse was found to improve some attention models.
In order to answer Research Question 3, on the relationship between blockchain and ETH cryptocurrency exchange data on the one hand and gas price on the other, and whether these data can be used to improve forecasting metrics, we looked to wavelet coherence for insights. Wavelet coherence demonstrated a tendency for variables to correlate on a 1-day timescale. Smart contract counts were found to have strong anti-phase correlation on a 1-day timescale, which is in agreement with previous works [24]. Additionally, deviation of the base fee from the block minimum gas price can be seen at specific time periods, across a wide range of time scales, indicating periods of high numbers of high-priority transactions. Variability in univariate walk-forward metrics demonstrates the volatile and changing nature of the data, and it is an indication of the challenges modeling these data presents. The utility of additional variables beyond the gas price appears to be dependent on modeling architecture; additional variables had no effect on hybrid/one-step lookahead models but were beneficial in attention models.
### Comparison with Previous Works
Comparison with previous works is difficult; to our knowledge, no previous studies have attempted to forecast on a similar time scale. Additionally, the gas price optimization problem can be framed in several manners; as a forecasting problem, as a transaction selection probability estimate, or the various heuristic approaches found in existing recommenders/oracles. Future works could benefit from a reframing of the problem, such as applying machine learning toward a transaction inclusion probability estimate.
Work by Mars et al. [14] is the most directly comparable. Mars et al. and this work both operate with data down-sampled to a 5 min window, with Z-score normalization, within a supervised learning framework and with similar performance metrics. We can easily compare one-timestep lookahead metrics; however, Mars et al. do not provide forecasts past the first 5 min window. The authors in [14] provide MSE, MAE, RMSE and R\({}^{2}\) metrics, as found in this work. R\({}^{2}\) is most directly comparable as it is dimensionless. Mars et al. achieved an R\({}^{2}\) score of 0.896 on both GRU and LSTM-based forecasts, forecasting the minimum block gas price averaged over all blocks in the next 5 min. This study was able to achieve an R\({}^{2}\) of 0.715 within the same forecasting framework. The difference in performance can be attributed to the more complete hyperparameter search performed by Mars et al. [14] and to modeling on different time periods of data. MAE, MSE and RMSE metrics are quoted on a different scale to those found in this work, so a direct comparison is not possible.
Liu et al. [25] produce forecasts looking one block into the future. Forecasts presented by Liu et al. achieve significantly better metrics than the models presented in this work that looked at 5 min windows; however, it is not clear how directly comparable these are given the timescale difference. This work was also completed before the London Fork. Liu et al. measured the proportion of their forecast values that fall into three categories: below the lowest gas price, and thus failing; above the lowest gas price but below the real gas price, thus succeeding while saving costs; and higher than the real gas price, which succeeds but increases costs. This evaluation could prove useful in future works. Lan et al. [29] also produce similar one-block-ahead forecasts with improvements based on the addition of pending transactions in the Mempool as features. XG-Boost-based models outperform LSTM-based models in both cases and could be of interest in future forecasting studies on extended lookaheads.
Chuang and Lee [34] measure the performance of their model using two metrics: the first is success rate, or the proportion of their recommended transaction prices that are above the minimum gas price of the block, and the second is an Inverse Probability Weight measure (IPW), which increases with predicted gas price and decreases with success rate. IPW is used as the goal is to produce gas price predictions that will result in successful transactions while keeping costs down. It is difficult to compare these metrics with those
produced in this work. The success rate and IPW could be calculated for the forecasts generated in this work in future works.
## 9 Conclusions
### Summary
In summary, this project has furthered forecasting attempts in an understudied area, with a novel combination of techniques, following a major update to the network in question. Gas price has been demonstrably forecasted at extended lookaheads; wavelet coherence has been shown to provide insight into the relation between gas price and blockchain variables, and the inclusion of matrix profile data showed a potential improvement of forecasting metrics. Direct Recursive Hybrid LSTM models were found to perform better than other modeling approaches given the limitations of the study. Further investigation is needed before drawing conclusions as to wavelet threshold denoising due to the limitations of the study.
### Contributions
This study is the first that we have found to investigate gas price forecasting over different forecasting horizons. This study provides a methodology for forecasting gas prices up to 50 min ahead, in windows of 5 min. Forecasts over a range of lookaheads allow users to make an informed decision on gas price selection and the optimal window to submit their transaction in without fear of their transaction being rejected. This methodology provides more detailed and verbose information regarding gas price dynamics, in comparison to existing recommenders, oracles and forecasting approaches, that provide simple heuristics or limited lookahead horizons.
We have investigated multiple approaches toward generating the above-mentioned forecasts: Direct Recursive Hybrid LSTM models, attention models, and CNN fed to LSTM architectures (CNN-LSTM), with matrix profile data and wavelet denoising. This is, to our knowledge, the first application of the matrix profile to gas price data and forecasting. This study also demonstrated the applicability of wavelet coherence toward the analysis of movements in gas price data and related time-series data, providing insight into the co-movements of gas price, block gas used, smart-contract transaction volume and ETH cryptocurrency price.
This study demonstrated that matrix profile data can enhance an attention-based model; however, given the hardware constraints, hybrid models outperformed attention and CNN-LSTM models. The potential for forecasting over extended and varying lookaheads was demonstrated; the utility of these time horizons lies in the fact that a user must select between them and may be penalized in terms of cost or missed transactions for choosing one over the other.
The focus of this study was also to investigate data in the aftermath of the London Hard Fork, and it sheds insight into the transaction dynamics of the network after this major fork. We feel that this time period is of interest, as Research Question 3 of our study provides an update on Pierro and Rocha's work of 2019 [23] on the link between EthUSD/BitUSD and gas price.
### Limitations of the Study
The limitations of this study are primarily related to available computing resources. Model training time was considerable on the available hardware. All data analysis, training and testing were performed on a desktop PC with an AMD Ryzen 5 1600 CPU and an Nvidia 3060 GPU. The robustness of the training and testing strategy could be improved with a more thorough cross-testing method, such as a full implementation of walk-forward validation. The timespan of data considered is also limited due to the training time of models.
Optimizations of pre-processing methods and model hyperparameters were also restricted due to hardware limitations. A more thorough hyperparameter grid search or
Bayesian optimization would be of interest in future studies with more resources available. It is likely that the direct recursive hybrid model, an aggregate of many relatively simple models, outperformed the more complex CNN and attention models due to the above-mentioned limitations. Hybrid model hyperparameters are optimized for the single-timestep lookahead case; optimizing hybrid performance for longer lookaheads is also of interest.
The investigation of wavelet denoising and matrix-profile parameters was also limited by the model training time. Investigating model performance when fed data using different wavelet-denoising approaches, different matrix profile window sizes and thresholds, and different time periods with the varying model architectures previously mentioned would be of interest in future work.
### Future Work
As mentioned in the limitations, future works would address limitations relating to the robustness of training/testing, the time span of data investigated, and the thoroughness of the hyperparameter and pre-processing parameter search. Transformer models have shown promise in natural language and time-series forecasting problems; their investigation would be of interest with sufficient resources for a thorough parameterization [49]. The XG-Boost-based model has also shown good performance in previous studies on this domain [25; 29]. Several studies have investigated the use of Mempool data; investigating these data to improve forecasting performance and the understanding of price dynamics is also of interest [28; 29; 34; 40].
Future works could also take advantage of domain-specific evaluation metrics such as those found in Chuang and Lee [34] and Liu et al. [25] to allow for better comparison of performance and more meaningful measures of performance.
To conclude, this project has furthered forecasting attempts in an understudied area, with a novel combination of techniques. Gas price has been demonstrably forecasted at extended lookaheads; wavelet coherence has been shown to provide insight into the relation between gas price and blockchain variables, and the inclusion of matrix profile data was demonstrated to show a potential improvement of forecasting metrics. Further investigation is needed before drawing conclusions as to wavelet threshold denoising.
Conceptualization, C.B.; Data curation, C.B.; Formal analysis, C.B.; Funding acquisition, M.C.; Investigation, M.C. and C.B.; Methodology, M.C. and C.B.; Project administration, M.C. and C.B.; Resources, C.B.; Software, C.B.; Validation, M.C. and C.B.; Visualization, C.B.; Writing\(-\)original draft, C.B.; Writing\(-\)review and editing, M.C. and C.B. All authors have read and agreed to the published version of the manuscript.
For this research, the author M.C. wishes to acknowledge the support, in part, from the Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre at DCU. ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded by the Science Foundation Ireland through the SFI Research Centres Programme. Both authors acknowledge the support of the Dublin City University Faculty of Engineering and Computing _Faculty Committee for Research_ to meet publication charges.
Data Availability Statement: Data and implementation details are available at [https://github.com/microlisk/Blockchain_Transaction_Fee_Forecasting](https://github.com/microlisk/Blockchain_Transaction_Fee_Forecasting) (accessed on 01 May 2023).
The authors acknowledge the helpful suggestions from the reviewers and the assistance of the Editor. We also acknowledge permission granted by the authors in [14] to reproduce Figure 1 in their work.
The authors declare no conflict of interest.
|
2309.01592 | Les Houches Lectures on Deep Learning at Large & Infinite Width | These lectures, presented at the 2022 Les Houches Summer School on
Statistical Physics and Machine Learning, focus on the infinite-width limit and
large-width regime of deep neural networks. Topics covered include various
statistical and dynamical properties of these networks. In particular, the
lecturers discuss properties of random deep neural networks; connections
between trained deep neural networks, linear models, kernels, and Gaussian
processes that arise in the infinite-width limit; and perturbative and
non-perturbative treatments of large but finite-width networks, at
initialization and after training. | Yasaman Bahri, Boris Hanin, Antonin Brossollet, Vittorio Erba, Christian Keup, Rosalba Pacelli, James B. Simon | 2023-09-04T13:21:18Z | http://arxiv.org/abs/2309.01592v3 | # Les Houches Lectures on Deep Learning at Large & Infinite Width
###### Abstract
These lectures, presented at the 2022 Les Houches Summer School on Statistical Physics and Machine Learning, focus on the infinite-width limit and large-width regime of deep neural networks. Topics covered include various statistical and dynamical properties of these networks. In particular, the lecturers discuss properties of random deep neural networks; connections between trained deep neural networks, linear models, kernels, and Gaussian processes that arise in the infinite-width limit; and perturbative and non-perturbative treatments of large but finite-width networks, at initialization and after training.8
Footnote 8: These are notes from lectures delivered by Yasaman Bahri and Boris Hanin, and a first version was compiled by Antonin Brossollet, Vittorio Erba, Christian Keup, Rosalba Pacelli, and James Simon. Recordings of the lecture series can be found at [https://www.youtube.com/playlist?list=PLElq5bchE3R1QYiNthdjg:JDa4TUzR-Yb](https://www.youtube.com/playlist?list=PLElq5bchE3R1QYiNthdjg:JDa4TUzR-Yb).
###### Contents
* 1 Lecture 1: Yasaman Bahri
* 1.1 Introduction
* 1.2 Setup
* 1.3 Prior in function space
* 1.4 Prior in function space for deep fully-connected architectures
* 1.5 Prior in function space for more complex architectures
* 1.6 Bayesian inference for Gaussian processes
* 1.7 Large-depth fixed points of Neural Network Gaussian Process (NNGP) kernel recursion
* 2 Lecture 2
* 2.1 Introduction
* 2.2 Wick's theorem
* 2.3 Gradient descent dynamics of optimization in the infinite-width limit
* 3 Lecture 3
* 3.1 Introduction
* 3.2 Perturbation theory for dynamics at large but finite width
* 3.3 Large learning rate dynamics at large width: the "catapult" mechanism
* 4 Lecture 4: Boris Hanin
* 4.1 Notation Dictionary
* 4.2 Notation
* 4.3 Main Question: Statement, Answer, and Motivation
* 4.4 Answer to Main Question
* 4.5 Motivations
* 4.6 Intuition for Appearance of \(L/n\)
* 4.7 Summary of Yasaman's Lectures 1 - 3
* 4.8 Formalizing Inter-Neuron Correlations and Non-Gaussian Fluctuations
* 4.9 Proof of Theorem 4.2
* 4.10 Step 2: Decompose the Self-Averaging Observable \(\Sigma^{(\ell)}_{\alpha}\) into a Mean and Fluctuation
* 4.11 Step 3: Expand in Powers of Centered Collective Observables
* 4.12 Step 4: Relating \(k^{(\ell+1)}_{4,\alpha}\) to the Dressed 2 Point Function and Obtaining Its Recursion
* 4.13 Step 5: Solving the 4 point function recursion
* 5 Lecture 5
* 5.1 Introduction
* 5.2 Goal
* 5.3 Formalism For Proof of Theorem 5.1
* 5.4 Formulas for Gradients Using Paths
* 5.5 Deriving \(L/n\) Behavior of Input-Output Jacobian
* 5.6 Open questions and dreams
## 1 Lecture 1: Yasaman Bahri
### Introduction
This lecture series will be focused on the infinite-width limit and large-width regime of deep neural networks. Some of the themes that this series will encompass are:
* exactly solvable models.
* mean-field theory & Gaussian field theory.
* perturbation theory and non-perturbative phenomena.
* dynamical systems.
Lectures 1-3 are due to Yasaman Bahri and Lectures 4-5 are due to Boris Hanin.
### Setup
We are interested in neural networks \(f_{\theta}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{L+1}}\), where \(n_{0},n_{L+1}\) are the input and outputs dimensions and \(\theta\) denotes the collection of neural network parameters (weights and biases for fully-connected networks, for example). A "vanilla" fully-connected (FC) deep neural
network (NN) of hidden layer widths \(n_{l}\) and depth \(L\) is defined by the iterative relationship
\[z_{i}^{l}(x)=b_{i}^{l}+\sum_{j=1}^{n_{l}}W_{ij}^{l}\phi(z_{j}^{l-1}(x))\,,\qquad 1 \leq l\leq L\,,1\leq i\leq n_{l+1}\,, \tag{1}\]
and
\[z_{i}^{0}(x)=b_{i}^{0}+\sum_{j=1}^{n_{0}}W_{ij}^{0}x_{j}\,,\qquad f_{i}(x):=z_ {i}^{L}(x), \tag{2}\]
where \(x\) is the input, \(z^{l}\in\mathbb{R}^{n_{l+1}}\) is the vector of preactivations at layer \(l\), \(\{b_{i}^{l},W_{ij}^{l}\}_{ij}\) are the biases and weights at layer \(l\), and \(\phi\) is a nonlinear function such as tanh or ReLU, i.e. \(\phi(x)=\max(0,x)\). The parameters are initialized independently as
\[b_{i}^{l}\thicksim\mathcal{N}(0,\sigma_{b}^{2})\,,\qquad W_{ij}^{l}\thicksim \mathcal{N}\left(0,\frac{\sigma_{w}^{2}}{n_{l}}\right), \tag{3}\]
where \(\mathcal{N}(\mu,\sigma^{2})\) is the Normal distribution of mean \(\mu\) and variance \(\sigma^{2}\). Note the dependence of the weight variance on the inverse layer width, which will play an important role in subsequent discussions. We refer to the distribution of parameters at initialization as the _prior_. We mainly consider the case of scalar output \(n_{L+1}=1\) in these lectures (the results are straightforward to generalize to the multi-dimensional setting) and uniform hidden layer widths \(n_{l}:=n\) for \(1\leq l\leq L\).
### Prior in function space
It will be fruitful to translate results, where possible, to the space of functions rather than space of NN parameters, particularly for NNs where there can be a large degree of redundancy in the representation. For example, FC NNs have a permutation symmetry associated with a hidden layer,
\[W_{ij}^{l+1},W_{jk}^{l}\to W_{i\pi(j)}^{l+1},W_{\pi(j)k}^{l},\qquad\forall\text{ permutations $\pi$ of $n$ elements}, \tag{4}\]
so that two different collections of parameters correspond to exactly the same function. A first natural question is then what _prior over functions_ is induced by the prior over parameters?
**Definition 1** (Gaussian process).: _A function \(f:\mathbb{R}^{n_{0}}\to\mathbb{R}\) is a draw from a Gaussian process (GP) with mean function \(\mu:\mathbb{R}^{n_{0}}\to\mathbb{R}\) and kernel function \(K:\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{0}}\to\mathbb{R}\) if, for any finite collection of inputs \(\{x_{1},\dots,x_{m}\}\), the vector of outputs \(\{f(x_{1}),\dots,f(x_{m})\}\) is a multivariate Normal random variable with mean \(\mu_{i}=\mu(x_{i})\) and covariance \(K_{ij}=K(x_{i},x_{j})\)._
**Result 1** (See Ref. [1]).: _Consider a FC NN with a single hidden layer (\(L=1\) in our notation) of width \(n\) with parameters drawn i.i.d. as_
\[b_{i}^{0}\thicksim\mathcal{N}(0,\sigma_{b}^{2})\,,\quad W_{ij}^{0}\thicksim \mathcal{N}\left(0,\frac{\sigma_{w}^{2}}{n_{0}}\right)\,,\quad b_{i}^{1} \thicksim\mathcal{N}(0,\sigma_{b}^{2})\,,\quad W_{ij}^{1}\thicksim\mathcal{N} \left(0,\frac{\sigma_{w}^{2}}{n}\right). \tag{5}\]
_Then, in the limit \(n\to\infty\), the distribution of the output \(z_{i}^{1}\), for any \(i=1,...,n_{2}\), is a Gaussian process with a deterministic mean function \(\mu^{1}(x)=0\) and kernel function \(K^{1}\) given by_
\[K^{1}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\mathbb{E}_{u_{1},u_{2} \thicksim\mathcal{N}(0,\Sigma)}\left[\phi(u_{1})\phi(u_{2})\right], \tag{6}\]
_where_
\[\Sigma=\begin{bmatrix}K^{0}(x,x)&K^{0}(x,x^{\prime})\\ K^{0}(x^{\prime},x)&K^{0}(x^{\prime},x^{\prime})\end{bmatrix}, \tag{7}\]
\(K^{0}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\frac{(x,x^{\prime})}{n_{0}}\)_, and different outputs \(z_{i}^{1},z_{j}^{1}\) for \(i\neq j\) are independent._
Proof (informal).: Consider the collection of preactivations, \(S:=\{z_{i}^{1}(x_{a})\}_{a=1\ldots m,\;i=1\ldots n_{2}}\), which are random variables conditioned on the input values \(x_{1},...,x_{m}\), and recall that
\[z_{i}^{1}(x_{a})=b_{i}^{1}+\sum_{j=1}^{n}W_{ij}^{1}\phi\big{(}z_{j}^{0}(x_{a}) \big{)}\,. \tag{8}\]
Notice that each \(z_{i}^{1}(x_{a})\) is a sum of i.i.d. random variables (each, the product of two random variables). By applying the Central Limit Theorem (CLT) to the collection \(S\) in the limit of large \(n\) and noting that the variances and covariances are finite, we find that \(S\) is governed by the multivariate Normal distribution. Since different outputs \(z_{i}^{1},z_{j}^{1}\) with \(i\neq j\) additionally have zero covariance, they are independent. Below, we will drop the reference to \(x_{a}\) with \(a=1...m\) and instead refer to arbitrary \(x,x^{\prime}\). Bear in mind that the source of randomness is entirely from the parameters and not from the inputs.
The covariance function of the GP for arbitrary \(x,x^{\prime}\) is
\[\mathbb{E}[z_{i}^{1}(x)z_{i}^{1}(x^{\prime})] =\mathbb{E}[b_{i}^{1}b_{i}^{1}]+\sum_{j,j^{\prime}=1}^{n}\mathbb{ E}[W_{ij}^{1}W_{ij^{\prime}}^{1}\phi(z_{j}^{0}(x))\phi(z_{j}^{0}(x^{\prime}))] \tag{9}\] \[=\sigma_{b}^{2}+\sum_{j,j^{\prime}=1}^{n}\mathbb{E}[W_{ij}^{1}W_{ ij^{\prime}}^{1}]\,\mathbb{E}[\phi(z_{j}^{0}(x))\phi(z_{j^{\prime}}^{0}(x^{ \prime}))]\] \[=\sigma_{b}^{2}+\frac{\sigma_{w}^{2}}{n}\sum_{j,j^{\prime}=1}^{n }\delta_{jj^{\prime}}\,\mathbb{E}[\phi(z_{j}^{0}(x))\phi(z_{j^{\prime}}^{0}(x^ {\prime}))]\] \[=\sigma_{b}^{2}+\frac{\sigma_{w}^{2}}{n}\sum_{j=1}^{n}\mathbb{E}[ \phi(z_{j}^{0}(x))\phi(z_{j}^{0}(x^{\prime}))]\] \[=\sigma_{b}^{2}+\sigma_{w}^{2}\,\mathbb{E}[\phi(z_{j}^{0}(x))\phi (z_{j}^{0}(x^{\prime}))]\] \[:=K^{1}(x,x^{\prime})\,,\]
where the second-to-last line holds for any \(j\) and we have used the fact that the contributions from different \(j=1...n\) are identical. We can similarly compute the covariance of the preactivations at the previous layer, obtaining
\[\mathbb{E}[z_{i}^{0}(x)z_{i}^{0}(x^{\prime})]=\sigma_{b}^{2}+\sigma_{w}^{2} \bigg{(}\frac{x\cdot x^{\prime}}{n_{0}}\bigg{)}:=K^{0}(x,x^{\prime})\,. \tag{10}\]
Note that the preactivations \(z^{0}\) are also described by a multivariate Normal distribution, but in this case it is due to the Normal distribution on the weights and biases since the sum is over \(n_{0}\) terms, which we keep finite unlike the hidden layer size \(n\). Finally, note the remaining expectation in (9) can be expressed as a function of the kernel \(K^{0}(x,x^{\prime})\). Indeed, because of the Gaussianity of \(z^{0}\) we have
\[K^{1}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\,\mathbb{E}_{u_{1},u_{2} \sim\mathcal{N}(0,\Sigma)}[\phi(u_{1})\phi(u_{2})]\,, \tag{11}\]
where
\[\Sigma=\begin{bmatrix}K^{0}(x,x)&K^{0}(x,x^{\prime})\\ K^{0}(x^{\prime},x)&K^{0}(x^{\prime},x^{\prime})\end{bmatrix}. \tag{12}\]
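As an illustrative numerical check of Result 1, one can estimate the output covariance of a wide one-hidden-layer ReLU network over random parameter draws and compare it with the kernel (6), itself evaluated by Monte Carlo; the widths, variances, and inputs below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n, sw2, sb2 = 3, 4096, 1.5, 0.05
x, xp = rng.normal(size=n0), rng.normal(size=n0)
phi = lambda u: np.maximum(u, 0.0)  # ReLU

# Empirical covariance of the scalar output over many random initializations.
draws = []
for _ in range(2000):
    b0 = rng.normal(0.0, np.sqrt(sb2), n)
    W0 = rng.normal(0.0, np.sqrt(sw2 / n0), (n, n0))
    b1 = rng.normal(0.0, np.sqrt(sb2))
    W1 = rng.normal(0.0, np.sqrt(sw2 / n), n)
    out = lambda inp: b1 + W1 @ phi(b0 + W0 @ inp)
    draws.append([out(x), out(xp)])
draws = np.array(draws)
emp_cov = np.mean(draws[:, 0] * draws[:, 1])

# Kernel prediction: K^1(x, x') = sb2 + sw2 * E[phi(u1) phi(u2)], (u1, u2) ~ N(0, K^0).
K0 = sb2 + sw2 * np.array([[x @ x, x @ xp], [xp @ x, xp @ xp]]) / n0
u = rng.multivariate_normal(np.zeros(2), K0, size=200_000)
K1 = sb2 + sw2 * np.mean(phi(u[:, 0]) * phi(u[:, 1]))

print(emp_cov, K1)  # should agree up to sampling error of a few percent
```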
### Prior in function space for deep fully-connected architectures
We can generalize this last result to finite-depth FC NNs. There are at least two sensible options for taking the infinite-width limit [2, 3]:
* the _sequential_ limit, where the width of each layer \(l\) is taken to infinity one by one, from first to last.
* the _simultaneous_ limit, where the width of each layer \(l\) is taken to infinity at the same time.
In both cases, with the natural extension of the prior (5) to multiple layers, each of the hidden-layer preactivations and the output of the NN are again GPs with zero mean and covariance function \(K^{l}\) that can be computed iteratively as
\[K^{l}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\mathbb{E}_{u_{1},u_{2}\sim \mathcal{N}(0,\Sigma)}\left[\phi(u_{1})\phi(u_{2})\right], \tag{13}\]
where
\[\Sigma=\begin{bmatrix}K^{l-1}(x,x)&K^{l-1}(x,x^{\prime})\\ K^{l-1}(x^{\prime},x)&K^{l-1}(x^{\prime},x^{\prime})\end{bmatrix} \tag{14}\]
and the initial covariance is \(K^{0}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\bigg{(}\frac{x\cdot x^{ \prime}}{n_{0}}\bigg{)}\). We refer readers to the references for proofs of the two cases.
Notice that \(\mathbb{E}_{u_{1},u_{2}\sim\mathcal{N}(0,\Sigma)}\left[\phi(u_{1})\phi(u_{2})\right]\) is a function of the elements of the covariance matrix \(\Sigma\in\mathbb{R}^{2\times 2}\). We will write it generically as
\[\mathcal{F}_{\phi}(\Sigma_{11},\Sigma_{12},\Sigma_{22}):=\mathbb{E}_{u_{1},u_ {2}\sim\mathcal{N}(0,\Sigma)}\left[\phi(u_{1})\phi(u_{2})\right]. \tag{15}\]
This function can in fact be computed in closed-form for certain choices of nonlinearity \(\phi\). For the case of ReLU, \(\phi=\max(x,0)\), one has
\[\mathcal{F}_{\text{ReLU}}(\Sigma_{11},\Sigma_{12},\Sigma_{22})=\frac{1}{2\pi} \sqrt{\Sigma_{11}\Sigma_{22}}\left[\sin\theta+(\pi-\theta)\cos\theta\right], \tag{16}\]
where \(\theta=\arccos(\Sigma_{12}/\sqrt{\Sigma_{11}\Sigma_{22}})\)[4].
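As a minimal sketch, the recursion (13) with the closed form (16) can be iterated numerically; the hyperparameters and inputs below are arbitrary illustrative choices.

```python
import numpy as np

def relu_F(k11, k12, k22):
    """F_phi of Eq. (16) for phi = ReLU."""
    c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(c)
    return np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def nngp_kernel(x, xp, L, sw2=2.0, sb2=0.1):
    """Iterate the NNGP recursion (13)-(14) for a depth-L ReLU network."""
    n0 = len(x)
    k11 = sb2 + sw2 * (x @ x) / n0
    k22 = sb2 + sw2 * (xp @ xp) / n0
    k12 = sb2 + sw2 * (x @ xp) / n0
    for _ in range(L):
        k11, k22, k12 = (
            sb2 + sw2 * relu_F(k11, k11, k11),   # diagonal entries: F gives K/2 for ReLU
            sb2 + sw2 * relu_F(k22, k22, k22),
            sb2 + sw2 * relu_F(k11, k12, k22),
        )
    return k12

x = np.array([1.0, 0.5, -0.2])
xp = np.array([0.3, -1.0, 0.8])
print([nngp_kernel(x, xp, L) for L in (1, 3, 10)])
```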
### Prior in function space for more complex architectures
The convergence of the prior for wide, deep neural networks to GPs extends to other architectures, such as neural networks with convolutional layers [5] or attention layers [6], provided that their weights are initialized i.i.d. with the appropriate inverse scaling of the weight variance with the hidden layer width. The form of the recursion will depend on the nature of the layers.
For example, a simple NN built by stacking one-dimensional convolutional layers is defined by iterating
\[z_{i,\alpha}^{l}=b_{i}^{l}+\sum_{j=1}^{n}\sum_{\beta=-k}^{k}W_{ij,\beta}^{l} \phi(z_{j,\alpha+\beta}^{l-1}(x)), \tag{17}\]
where Latin symbols index into _channels_ running from \(1\) to \(n\); Greek symbols on \(z\) variables index into _spatial_ dimensions running from \(1\) to \(D\), the spatial dimension of the input; and the index \(\beta\) runs over the spatial size \(2k+1\) of the convolutional filters. At initialization, we draw parameters i.i.d. as9
Footnote 9: As before, this is modified appropriately for the parameters of the first layer, since the input dimension is \(n_{0}\).
\[b_{i}^{l}\sim\mathcal{N}(0,\sigma_{b}^{2}),\quad W_{ij,\beta}^{l}\sim \mathcal{N}\left(0,\nu_{\beta}\frac{\sigma_{w}^{2}}{n}\right), \tag{18}\]
where \(\nu_{\beta}\) provides a possibly non-uniform magnitude to different spatial coordinates (in the uniform case, \(\nu_{\beta}=1/(2k+1)\)) [5, 7]. We take the number of hidden-layer channels \(n\rightarrow\infty\) while keeping all other dimensions \(k,D,n_{0}\) fixed. As before, each preactivation and the output of the NN are GPs with zero means, while the covariance function depends on the layer and also acquires spatial components. Indeed
\[K_{\alpha,\alpha^{\prime}}^{l}(x,x^{\prime})=\mathbb{E}\left[z_{ \alpha}^{l}(x)z_{\alpha^{\prime}}^{l}(x^{\prime})\right] =\sigma_{b}^{2}+\sum_{j,j^{\prime}=1}^{n}\sum_{\beta,\beta^{\prime }=-k}^{k}\mathbb{E}[W_{ij,\beta}^{l}W_{ij^{\prime},\beta^{\prime}}^{l}]\mathbb{ E}[\phi(z_{j,\alpha+\beta}^{l-1}(x))\phi(z_{j^{\prime},\alpha^{\prime}+\beta}^{l-1}(x^{ \prime}))]\] \[=\sigma_{b}^{2}+\sigma_{w}^{2}\sum_{\beta=-k}^{k}\nu_{\beta} \mathbb{E}[\phi(z_{j,\alpha+\beta}^{l-1}(x))\phi(z_{j,\alpha^{\prime}+\beta}^ {l-1}(x^{\prime}))]\] \[=\sigma_{b}^{2}+\sigma_{w}^{2}\sum_{\beta=-k}^{k}\nu_{\beta} \mathcal{F}_{\phi}(K_{\alpha+\beta,\alpha+\beta}^{l-1}(x,x),K_{\alpha+\beta, \alpha^{\prime}+\beta}^{l-1}(x,x^{\prime}),K_{\alpha^{\prime}+\beta,\alpha^{ \prime}+\beta}^{l-1}(x^{\prime},x^{\prime})), \tag{19}\]
where \(\mathcal{F}_{\phi}\) is again the one defined in (15), and the base case is
\[K_{\alpha,\alpha^{\prime}}^{0}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\sum_{\beta=-k}^{k}\nu_{\beta}\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}x_{j,\alpha+\beta}\,x_{j,\alpha^{\prime}+\beta}^{\prime}. \tag{20}\]
Often channel and spatial indices are aggregated into a single index before the output. Below we describe two example strategies; here, \(\overline{W}\), \(\overline{b}\), \(\overline{z}\) refer to the output layer variables.
* Aggregation by vectorization -- In this example, we flatten the last hidden-layer preactivations across channel and spatial dimensions together, \[\overline{z}_{i}^{L+1}(x)=\overline{b}_{i}^{L+1}+\sum_{j=1}^{n \cdot D}\overline{W}_{ij}^{L+1}\phi(\text{Vec}[z^{L}(x)]_{j}),\] (21) where \(n\), \(D\) are the incoming channel and spatial dimensions, respectively, and \(\text{Vec}(\cdot)\) is the vectorization operator. We initialize \(\overline{b}\) as before and \(\overline{W}_{ij}\sim\mathcal{N}(0,\frac{\sigma_{w}^{2}}{n\cdot D})\). The covariance of the network output is \[\mathbb{E}[\overline{z}_{i}^{L+1}(x)\,\overline{z}_{i}^{L+1}(x^{ \prime})] =\sigma_{b}^{2}+\sum_{j,j^{\prime}=1}^{n\cdot D}\mathbb{E}[ \overline{W}_{ij}^{L+1}\overline{W}_{ij^{\prime}}^{L+1}]\mathbb{E}[\phi( \text{Vec}[z^{L}(x)]_{j})\phi(\text{Vec}[z^{L}(x^{\prime})]_{j^{\prime}})]\] \[=\sigma_{b}^{2}+\frac{\sigma_{w}^{2}}{D}\sum_{\alpha=1}^{D} \mathcal{F}_{\phi}\left(K_{\alpha,\alpha}^{L}(x,x),K_{\alpha,\alpha}^{L}(x,x^{ \prime}),K_{\alpha,\alpha}^{L}(x^{\prime},x^{\prime})\right).\] In this case, the final covariance depends on the prior layer covariance at the _same_ spatial location of two inputs, neglecting some of the information contained in the full tensor \(K_{\alpha,\alpha^{\prime}}^{L}(x,x^{\prime})\).
* Aggregation over spatial indices -- In this example, we aggregate over spatial indices with a fixed vector of weights \(h_{\alpha}\), \[\overline{z}_{i}^{L+1}(x)=\overline{b}_{i}^{L+1}+\sum_{j=1}^{n}\overline{W}_{ ij}^{L+1}\sum_{\alpha=1}^{D}h_{\alpha}\phi(z_{j,\alpha}^{L}(x)),\] (23)
and similar to the previous computations (taking \(\overline{W}_{ij}\thicksim\mathcal{N}(0,\frac{\sigma_{w}^{2}}{n})\)),
\[\mathbb{E}[\overline{z}_{i}^{L+1}(x)\overline{z}_{i}^{L+1}(x^{\prime})]=\sigma_{b}^{2}+\sigma_{w}^{2}\sum_{\alpha,\alpha^{\prime}=1}^{D}h_{\alpha}h_{\alpha^{\prime}}\mathcal{F}_{\phi}\left(K_{\alpha,\alpha}^{L}(x,x),K_{\alpha,\alpha^{\prime}}^{L}(x,x^{\prime}),K_{\alpha^{\prime},\alpha^{\prime}}^{L}(x^{\prime},x^{\prime})\right). \tag{24}\]
Notice that in this case, even with spatially uniform aggregation \(h_{\alpha}=1/D\), the final covariance receives spatially off-diagonal contributions from the prior layer covariance.
Finally, we note that residual NNs are another architecture that is straightforward to treat. The preactivations take the form
\[z_{i}^{l}(x)=b_{i}^{l}+\sum_{j=1}^{n}W_{ij}^{l}\phi(z_{j}^{l-1}(x))+\gamma^{l} z_{i}^{l-1}(x), \tag{25}\]
where \(\gamma^{l}\) are fixed hyperparameters. In this case, the kernel recursion takes the form
\[K^{l}(x,x^{\prime})=\sigma_{b}^{2}+\sigma_{w}^{2}\,\mathcal{F}_{\phi}\left(K ^{l-1}(x,x),K^{l-1}(x,x^{\prime}),K^{l-1}(x^{\prime},x^{\prime})\right)+( \gamma^{l})^{2}\,K^{l-1}(x,x^{\prime})\,. \tag{26}\]
To summarize, we have seen how compositional kernels and GPs can emerge from taking a natural infinite-width limit of deep NNs in different architectural classes. The quantities we have derived
* can be used directly in kernel ridge regression or Bayesian inference. In some settings, these kernel-based predictors can be as good as or better models than their NN counterparts.
* enable further theoretical understanding of deep NNs at initialization and after training. As one example, understanding the structure of these compositional kernels on realistic data can lend insight into the advantages of different architectures.
### Bayesian inference for Gaussian processes
Consider a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1,\ldots m}\) and suppose we would like to make predictions at a point \(x_{*}\) in a Bayesian manner, using a model \(f_{\theta}(x)\) with learnable parameters \(\theta\). Let \(\vec{x}=[x_{1},\ldots,x_{m}]^{T}\) and \(\vec{y}=[y_{1},\ldots,y_{m}]^{T}\). The distribution of the output \(z_{*}=f_{\theta}(x_{*})\), conditioned on the dataset \(\mathcal{D}\) and \(x_{*}\), is given by
\[p(z_{*}\mid\mathcal{D},x_{*})=\int d\,\theta\,p(z_{*}\mid\theta,x_{*})p( \theta\mid\mathcal{D})\,. \tag{27}\]
A convenient way to rewrite this is to introduce the vector of function values on the training data, \(\vec{z}=[f_{\theta}(x_{1}),\ldots f_{\theta}(x_{m})]\). Then
\[p(z_{*}\mid\mathcal{D},x_{*})=\int d\vec{z}\,p(z_{*}\mid\vec{z},\vec{x},x_{*} )\,p(\vec{z}\mid\mathcal{D}), \tag{28}\]
in which we changed the integral over parameters to an integral over the finite set of function values.
A natural question is under which conditions the conversion from parameter to function space is allowed. In general, one might expect a functional integral over functions that can be represented by the model, i.e. \(\int\mathfrak{D}z\). In our case, we are implicitly assuming that the likelihood depends on the parameters only through the outputs of the model. Note that working in function space might allow certain properties of the model to be constrained more naturally,
such as function smoothness; on the other hand, other forms of regularization (such as \(L_{2}\) regularization on parameters) might be more challenging to write in a simple form.
We would like to now consider a specific likelihood. By Bayes' theorem,
\[p(\vec{z}\mid\mathcal{D})\to p(\vec{z}\mid\vec{y})=\frac{p(\vec{y}\mid\vec{z})p( \vec{z})}{p(\vec{y})}, \tag{29}\]
(we forgo writing the conditioning on inputs where it is understood), and assuming the targets and model are related by zero-mean Gaussian noise of variance \(\sigma_{e}^{2}\),
\[p(\vec{y}\mid\vec{z})\propto\prod_{i=1}^{m}\exp\left[-\frac{(y_{i}-z_{i})^{2} }{2\sigma_{e}^{2}}\right]. \tag{30}\]
The terms \(p(z_{*}\mid\vec{z})\) and \(p(\vec{z})\) combine to yield the prior distribution \(p(z_{*},\vec{z})\), which for GPs is a multivariate Gaussian distribution with mean and covariance that depend on the inputs \((x_{*},\vec{x})\). Assuming zero mean, we have
\[p(z_{*},\vec{z})\propto\exp\left\{-\frac{1}{2}\begin{bmatrix}z_{*}&\vec{z} \end{bmatrix}\begin{bmatrix}K(x_{*},x_{*})&K(\vec{x},x_{*})^{T}\\ K(\vec{x},x_{*})&K(\vec{x},\vec{x})\end{bmatrix}^{-1}\begin{bmatrix}z_{*}\\ \vec{z}\end{bmatrix}\right\}, \tag{31}\]
where \(K(\vec{x},x_{*})_{i}=K(x_{i},x_{*})\) is an \(m\)-dimensional column vector and \(K(\vec{x},\vec{x})_{ij}=K(x_{i},x_{j})\) is an \(m\times m\)-dimensional matrix.
We see that the predictive distribution (28) of the model output \(z_{*}\) at \(x_{*}\) involves an integral with a Gaussian integrand, and thus \(z_{*}|\mathcal{D},x_{*}\sim\mathcal{N}(\mu_{*},\sigma_{*}^{2})\) with
\[\mu_{*} =K(\vec{x},x_{*})^{T}\left(K(\vec{x},\vec{x})+\sigma_{e}^{2}I \right)^{-1}\vec{y}\;, \tag{32}\] \[\sigma_{*}^{2} =K(x_{*},x_{*})-K(\vec{x},x_{*})^{T}\left(K(\vec{x},\vec{x})+ \sigma_{e}^{2}I\right)^{-1}K(\vec{x},x_{*}).\]
The marginal likelihood \(p(\mathcal{D})\) for GPs can be expressed analytically as
\[\log p(\mathcal{D})=-\frac{1}{2}\vec{y}^{T}\left(K(\vec{x},\vec{x})+\sigma_{ e}^{2}I\right)^{-1}\vec{y}-\frac{1}{2}\log\det\left(K(\vec{x},\vec{x})+ \sigma_{e}^{2}I\right)-\frac{m}{2}\log 2\pi. \tag{33}\]
Here the first term accounts for dataset fitting, while the second represents a complexity penalty that favors simpler covariance functions.
In contrast to Bayesian inference for generic models, which often requires approximations because of the integrals involved, Bayesian inference for GPs [8] can be performed exactly. Given the "NNGP" [2] correspondence between infinitely-wide NNs and GPs discussed in prior sections, we can use the resulting compositional kernels to make Bayesian predictions using deep NNs in this limit.
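A minimal sketch of the predictive equations (32) for a generic kernel function; the RBF kernel and synthetic data below are placeholder assumptions, and an NNGP kernel from the previous sections could be substituted directly.

```python
import numpy as np

def gp_predict(kernel, X_train, y_train, x_star, sigma_eps2=1e-2):
    """Posterior mean and variance of Eq. (32) for a zero-mean GP prior."""
    K = np.array([[kernel(xa, xb) for xb in X_train] for xa in X_train])
    k_star = np.array([kernel(xa, x_star) for xa in X_train])
    A = K + sigma_eps2 * np.eye(len(X_train))
    mu = k_star @ np.linalg.solve(A, y_train)
    var = kernel(x_star, x_star) - k_star @ np.linalg.solve(A, k_star)
    return mu, var

# Toy usage with an RBF kernel as a stand-in for an NNGP kernel.
rbf = lambda x, xp: np.exp(-0.5 * np.sum((x - xp) ** 2))
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
print(gp_predict(rbf, X, y, rng.normal(size=3)))
```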
### Large-depth fixed points of Neural Network Gaussian Process (NNGP) kernel recursion
We would now like to investigate the large-depth behavior \(l\to\infty\) of the NNGP kernel recursion
\[K^{l}(x,x^{\prime}) =\sigma_{b}^{2}+\sigma_{w}^{2}\,\mathcal{F}_{\phi}(K^{l}(x,x),K^{ l}(x,x^{\prime}),K^{l}(x^{\prime},x^{\prime})), \tag{34}\] \[K^{0}(x,x^{\prime}) =\sigma_{b}^{2}+\sigma_{w}^{2}\bigg{(}\frac{x\cdot x^{\prime}}{n _{0}}\bigg{)}.\]
As training deep NNs was known to be a challenge in practice, these large-depth limits have been used [9] as proxy metrics to identify regions of hyperparameter space where networks can be trained. (In this example, hyperparameters for which we might desire guidance
on choosing include \(\sigma_{w}\), \(\sigma_{b}\), \(L\), and \(\phi\).) It was hypothesized that for deep NNs to be trainable using backpropagation, forward propagation of information about the inputs through the depth of the network would be needed. In lieu of an information-theoretic approach, a proxy for the information content contained in the forward signal is the covariance between pairs of inputs. Regions of hyperparameter space where the covariance function quickly converges to a structureless limit are to be avoided for choosing architectures and initialization strategies. We will briefly treat the simplest analysis (of forward propagation) in this direction for the case of a fully-connected NN [9]. With further developments in deep learning theory, analogous but comprehensive treatments have been constructed; we refer the reader to the later literature, see e.g. [7, 10, 11].
Let us consider the correlation between a pair of inputs \(x_{\alpha}\) and \(x_{\beta}\). We will need to track recursions for the three quantities \(K_{\alpha\alpha}\), \(K_{\beta\beta}\), and \(K_{\alpha\beta}\). For the diagonal elements,
\[K^{l}_{\alpha\alpha}=\sigma_{b}^{2}+\sigma_{w}^{2}\int Ds\left(\phi\left(\sqrt{K_{\alpha\alpha}^{l-1}}\,s\right)\right)^{2}, \tag{35}\]
where \(Ds\) is the standard Gaussian measure, while for off-diagonal elements
\[K^{l}_{\alpha\beta}=\sigma_{b}^{2}+\sigma_{w}^{2}\int Ds_{1}Ds_{2}\,\phi(u_{1})\phi(u_{2}), \tag{36}\]
with
\[\begin{split} u_{1}&=\sqrt{K_{\alpha\alpha}^{l-1}}\,s_{1},\\ u_{2}&=\sqrt{K_{\beta\beta}^{l-1}}\left(c_{\alpha\beta}^{l-1}s_{1}+\sqrt{1-(c_{\alpha\beta}^{l-1})^{2}}\,s_{2}\right),\\ c_{\alpha\beta}^{l}&=K^{l}_{\alpha\beta}/\sqrt{K^{l}_{\alpha\alpha}K^{l}_{\beta\beta}}.\end{split} \tag{37}\]
Now suppose that the diagonal elements of the kernel approach a fixed point \(q^{*}\) (this occurs for any bounded \(\phi\) and the convergence is rapid with depth, see [9]). In this case, note that \(c^{*}=1\) is always a fixed point of the recursion for off-diagonal covariances, as we recover the condition for the fixed point of the diagonal elements. Is the fixed point stable or unstable to leading order in small deviations? By expanding the map \(c_{\alpha\beta}^{l-1}\to c_{\alpha\beta}^{l}\) around the fixed point, one finds the stability of \(c^{*}=1\) is governed by
\[\chi_{1}=\frac{\partial\,c_{\alpha\beta}^{l}}{\partial\,c_{\alpha\beta}^{l-1}}=\sigma_{w}^{2}\int Ds\left(\phi^{\prime}(\sqrt{q^{*}}s)\right)^{2}. \tag{38}\]
If \(\chi_{1}<1\), then \(c^{*}=1\) is a stable fixed point, while if \(\chi_{1}>1\) it is unstable.
The rate of convergence with depth can be obtained by expanding the recursion relationships to leading order around the fixed points [9]. In the case of the diagonal elements, we define \(\epsilon^{l}:=K^{l}_{\alpha\alpha}-q^{*}\) and obtain
\[\epsilon^{l}=\epsilon^{l-1}\left[\chi_{1}+\sigma_{w}^{2}\int Ds\,\phi^{\prime \prime}(\sqrt{q^{*}}s)\phi(\sqrt{q^{*}}s)\right]+O((\epsilon^{l-1})^{2}) \tag{39}\]
so that, at large \(l\), \(\epsilon^{l}=\epsilon^{0}\exp\left(-l/\xi_{q}\right)\) with characteristic depth scale
\[\xi_{q}^{-1}=-\log\left[\chi_{1}+\sigma_{w}^{2}\int Ds\,\phi^{\prime\prime}( \sqrt{q^{*}}s)\phi(\sqrt{q^{*}}s)\right]. \tag{40}\]
To study off-diagonal elements, we instead examine the correlation \(\epsilon^{l}=c^{l}_{\alpha\beta}-c^{*}\). On the basis that the diagonal elements approach their fixed point \(q^{*}\) more rapidly [9], we substitute \(K^{l}_{\alpha\alpha}=q^{*}\) to obtain
\[\epsilon^{l}=\epsilon^{l-1}\left[\sigma_{w}^{2}\int Ds_{1}\,Ds_{2}\,\phi^{\prime }(u_{1}^{*})\phi^{\prime}(u_{2}^{*})\right]+O((\epsilon^{l-1})^{2}), \tag{41}\]
where
\[\begin{split} u_{1}^{*}&=\sqrt{q^{*}}s_{1}\,,\\ u_{2}^{*}&=\sqrt{q^{*}}\left(c^{*}s_{1}+\sqrt{1-( c^{*})^{2}}s_{2}\right)\,,\end{split} \tag{42}\]
and the characteristic depth is given by
\[\xi_{c}^{-1}=-\log\left[\sigma_{w}^{2}\int Ds_{1}\,Ds_{2}\,\phi^{\prime}(u_{1}^{*})\phi^{\prime}(u_{2}^{*})\right]. \tag{43}\]
We have now three options:
* if \(\chi_{1}<1\), then \(c^{*}=1\) is a stable fixed point, and we term the corresponding region of the \((\sigma_{b},\sigma_{w})\) plane the _ordered phase_. In this phase, on average across random networks two inputs \(x_{\alpha}\), \(x_{\beta}\) will tend to align exponentially fast, with characteristic depth \(\xi_{c}\), as they propagate through layers of the deep NN.
* if \(\chi_{1}>1\), then \(c^{*}=1\) is an unstable fixed point, and the corresponding region of the \((\sigma_{b},\sigma_{w})\) plane is termed a _chaotic phase_. There will be another fixed point \(c^{*}<1\) which will be stable. Two inputs \(x_{\alpha}\), \(x_{\beta}\) will tend towards uniform correlation (possibly vanishing) across all pairs \(\alpha\neq\beta\) exponentially fast in the NN depth, with a characteristic depth scale \(\xi_{c}\).
* if \(\chi_{1}=1\), then \(c^{*}=1\) is marginally stable, and stability is determined by higher-order terms in the expansion around the fixed point. The corresponding region of the \((\sigma_{b},\sigma_{w})\) plane is a _critical line_. In this phase, the correlation between two inputs \(x_{\alpha}\), \(x_{\beta}\) will tend towards a fixed point at a slower rate, algebraically instead of exponentially fast. Indeed, one can show that as \(\chi_{1}\to 1\), \(\xi_{c}\rightarrow+\infty\). It is found that the maximum depth of a NN that can be trained with backpropagation increases as the initialization hyperparameters get closer to this critical line [9].
Figure 1: Phase diagram in the \((\sigma_{b}^{2},\sigma_{w}^{2})\) plane for fixed points of the NNGP recursion relationship with nonlinearity \(\phi=\tanh\), showing _ordered_ and _chaotic_ phases separated by a critical line. Figure reproduced from [12]; see also [9].
Let us consider the case \(\phi=\tanh\) as an example, with phase diagram in Fig. 1 showing ordered and chaotic phases separated by a critical line. The ordered phase is smoothly connected to the regime \(\sigma_{b}\gg\sigma_{w}\); intuitively, the shared bias dominates over the weights acting on the input signals, and two inputs degenerate into a common value as they are passed through deeper layers of the random network (hence, the stability of the \(c^{*}=1\) fixed point). The chaotic phase smoothly connects to the regime \(\sigma_{b}\ll\sigma_{w}\), where randomness from the weights dominates and leads to reduced correlation between the inputs.
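As a numerical sketch, \(q^{*}\) and \(\chi_{1}\) from (35) and (38) can be evaluated by Gauss-Hermite quadrature for \(\phi=\tanh\); scanning \(\sigma_{w}^{2}\) at fixed \(\sigma_{b}^{2}\) then locates the crossing \(\chi_{1}=1\), i.e. the critical line of Fig. 1 (the parameter values below are illustrative).

```python
import numpy as np

# Gauss-Hermite nodes/weights for integrals against the standard Gaussian measure Ds.
s, w = np.polynomial.hermite_e.hermegauss(61)
w = w / np.sqrt(2 * np.pi)

def fixed_point_q(sw2, sb2, iters=200):
    """Iterate the diagonal recursion (35) to its fixed point q*."""
    q = 1.0
    for _ in range(iters):
        q = sb2 + sw2 * np.sum(w * np.tanh(np.sqrt(q) * s) ** 2)
    return q

def chi1(sw2, sb2):
    """Stability of c* = 1, Eq. (38): chi_1 = sigma_w^2 * E[(phi'(sqrt(q*) s))^2]."""
    q = fixed_point_q(sw2, sb2)
    dphi = 1.0 - np.tanh(np.sqrt(q) * s) ** 2  # tanh'(u) = 1 - tanh(u)^2
    return sw2 * np.sum(w * dphi ** 2)

# chi_1 < 1: ordered phase; chi_1 > 1: chaotic phase (cf. Fig. 1).
for sw2 in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"sigma_w^2 = {sw2:.1f}: chi_1 = {chi1(sw2, sb2=0.05):.3f}")
```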
## 2 Lecture 2
### Introduction
In the previous lecture, we treated the properties of deep neural networks at initialization in the limit \(n\to\infty\). We also discussed the predictions arising from Bayesian inference in this limit. In this lecture, we turn our attention to training deep NNs with empirical risk minimization and understanding the optimization dynamics, either by gradient descent or gradient flow, in this same limit of infinitely-wide hidden layers.
Before doing so, we introduce a few tools that enable us to analytically treat leading deviations away from the infinite-width limit in randomly initialized deep NNs. These tools have also been used to construct a perturbation theory for finite-width deep NNs after training [10].
### Wick's theorem
Wick's theorem is a fundamental result about Gaussian random variables that simplifies computations involving expectations of products of such variables.
**Result 2** (Wick's theorem).: _Let \(z\in\mathbb{R}^{n}\) be a centered random Gaussian vector with covariance matrix \(K\), \(z\sim\mathcal{N}\left(0,K\right)\). Then, the expectation of any product of the elements of \(z\) can be expressed as a sum over all possible pairings of indices_ \[\mathbb{E}\big{[}z_{\mu_{1}}\dots z_{\mu_{2m}}\big{]}=\sum_{\text{all pairings}}\mathbb{E}\big{[}z_{\mu_{k_{1}}}z_{\mu_{k_{2}}}\big{]}\dots\mathbb{E}\big{[}z_{\mu_{k_{2m-1}}}z_{\mu_{k_{2m}}}\big{]}=\sum_{\text{all pairings}}K_{\mu_{k_{1}}\mu_{k_{2}}}\dots K_{\mu_{k_{2m-1}}\mu_{k_{2m}}}. \tag{44}\]
\[\mathbb{E}\big{[}z_{\mu_{1}}\dots z_{\mu_{2m}}\big{]}=\sum_{all pairings} \mathbb{E}\big{[}z_{\mu_{k_{1}}}z_{\mu_{k_{2}}}\big{]}\dots\mathbb{E}\big{[} z_{\mu_{k_{2m-1}}}z_{\mu_{k_{2m}}}\big{]}=\sum_{all pairings}K_{\mu_{k_{1}}\mu_{k_{2}}}\dots K _{\mu_{k_{2m-1}}\mu_{k_{2m}}}. \tag{44}\]
(Here, the result is for products containing an even number of elements since odd ones vanish.) We will use this to compute higher-order correlation functions in randomly initialized deep linear networks, illustrating some of the effects of finite-width which carry beyond deep linear networks to nonlinear ones [10].
#### 2.2.1 Two-point correlation function
Using Wick's theorem (44), we compute the covariance between any two preactivations of the same layer for a randomly initialized deep linear neural network, assuming no bias terms for
simplicity [10],
\[\begin{split}\mathbb{E}\big{[}z_{i_{1}}^{l}(x_{\alpha})\,z_{i_{2}}^{l }(x_{\beta})\big{]}&=\sum_{j_{1},j_{2}=1}^{n}\mathbb{E}\big{[}W_{i_ {1}j_{1}}^{l}W_{i_{2}j_{2}}^{l}z_{j_{1}}^{l-1}(x_{\alpha})\,z_{j_{2}}^{l-1}(x_{ \beta})\big{]}\\ &=\sum_{j_{1},j_{2}=1}^{n}\mathbb{E}\big{[}W_{i_{1}j_{1}}^{l}W_{i _{2}j_{2}}^{l}\big{]}\mathbb{E}\big{[}z_{j_{1}}^{l-1}(x_{\alpha})\,z_{j_{2}}^{l -1}(x_{\beta})\big{]}\\ &=\delta_{i_{1}i_{2}}\frac{\sigma_{w}^{2}}{n}\sum_{j_{1},j_{2}=1} ^{n}\delta_{j_{1}j_{2}}\mathbb{E}\big{[}z_{j_{1}}^{l-1}(x_{\alpha})\,z_{j_{2}}^ {l-1}(x_{\beta})\big{]}\\ &=\delta_{i_{1}i_{2}}\frac{\sigma_{w}^{2}}{n}\sum_{j=1}^{n} \mathbb{E}\big{[}z_{j}^{l-1}(x_{\alpha})\,z_{j}^{l-1}(x_{\beta})\big{]}.\end{split} \tag{45}\]
Let us decompose the two-point correlation function for inputs \(x_{\alpha},x_{\beta}\) in layer \(l\) as
\[\mathbb{E}\big{[}z_{i_{1}}^{l}(x_{\alpha})\,z_{i_{2}}^{l}(x_{\beta})\big{]}:= \delta_{i_{1}i_{2}}G_{\alpha\beta}^{l}, \tag{46}\]
where \(G_{\alpha\beta}^{l}\) is defined as
\[G_{\alpha\beta}^{l}=\frac{1}{n}\sum_{j=1}^{n}\mathbb{E}\big{[}z_{j}^{l}(x_{ \alpha})\,z_{j}^{l}(x_{\beta})\big{]}. \tag{47}\]
With these definitions, we can express the recursion (45) in a compact form
\[G_{\alpha\beta}^{l}=\sigma_{w}^{2}G_{\alpha\beta}^{l-1}, \tag{48}\]
leading to the depth-dependent form
\[G_{\alpha\beta}^{l}=\big{(}\sigma_{w}^{2}\big{)}^{l}\,G_{\alpha\beta}^{0}. \tag{49}\]
#### 2.2.2 Four-point correlation function
Similarly, we obtain a recursion for the four-point correlation function,
\[\begin{split}\mathbb{E}\big{[}z_{i_{1}}^{l}\dots\,z_{i_{4}}^{l}\big{]}&=\sum_{j_{1},\dots,j_{4}=1}^{n}\mathbb{E}\big{[}W_{i_{1}j_{1}}^{l}\dots W_{i_{4}j_{4}}^{l}\big{]}\mathbb{E}\big{[}z_{j_{1}}^{l-1}\dots\,z_{j_{4}}^{l-1}\big{]}\\ &=\frac{\big{(}\sigma_{w}^{2}\big{)}^{2}}{n^{2}}\sum_{j_{1},\dots,j_{4}=1}^{n}\big{(}\delta_{i_{1}i_{2}}\delta_{j_{1}j_{2}}\delta_{i_{3}i_{4}}\delta_{j_{3}j_{4}}+\delta_{i_{1}i_{3}}\delta_{j_{1}j_{3}}\delta_{i_{2}i_{4}}\delta_{j_{2}j_{4}}+\dots\big{)}\,\mathbb{E}\big{[}z_{j_{1}}^{l-1}z_{j_{2}}^{l-1}z_{j_{3}}^{l-1}z_{j_{4}}^{l-1}\big{]}\\ &=\big{(}\sigma_{w}^{2}\big{)}^{2}\big{(}\delta_{i_{1}i_{2}}\delta_{i_{3}i_{4}}+\delta_{i_{1}i_{3}}\delta_{i_{2}i_{4}}+\delta_{i_{1}i_{4}}\delta_{i_{2}i_{3}}\big{)}\,\frac{1}{n^{2}}\sum_{j,k=1}^{n}\mathbb{E}\big{[}z_{j}^{l-1}z_{j}^{l-1}z_{k}^{l-1}z_{k}^{l-1}\big{]}\,\end{split} \tag{50}\]
using Wick's theorem to decompose the expectation value into a sum over pairings of indices. (For simplicity, we have treated the case of a single sample \(x_{\alpha}=x\) and dropped reference to the samples, but this calculation can be extended to a general choice of four samples.)
We again factor the correlation function as a term encoding the structure of indices and a scalar function,
\[\mathbb{E}\big{[}z_{i_{1}}^{l}\dots\,z_{i_{4}}^{l}\big{]}:=\big{(}\delta_{i_{1} i_{2}}\delta_{i_{3}i_{4}}+\delta_{i_{1}i_{3}}\delta_{i_{2}i_{4}}+\delta_{i_{1}i_{4}} \delta_{i_{2}i_{3}}\big{)}\,G_{4}^{l}. \tag{51}\]
Using this decomposition the final factor of the recursion (50) can be written as
\[\frac{1}{n^{2}}\sum_{j,k=1}^{n}\mathbb{E}\left[z_{j}^{l-1}z_{j}^{l-1}z_{k}^{l-1}z_ {k}^{l-1}\right]=\frac{1}{n^{2}}\sum_{j,k=1}^{n}\left(\delta_{jj}\delta_{kk}+ \delta_{jk}\delta_{jk}+\delta_{jk}\delta_{jk}\right)G_{4}^{l-1}=\left(1+\frac{2 }{n}\right)G_{4}^{l-1}, \tag{52}\]
which yields the recursion
\[G_{4}^{l}=\left(\sigma_{w}^{2}\right)^{2}\left(1+\frac{2}{n}\right)G_{4}^{l-1}. \tag{53}\]
It is easy to see using (50) that
\[G_{4}^{0}=\left(G_{2}^{0}\right)^{2}, \tag{54}\]
with
\[G_{2}^{0}=\frac{\sigma_{w}^{2}}{n_{0}}\sum_{j=1}^{n_{0}}x_{j}x_{j}. \tag{55}\]
referring to the two-point correlation function. Unrolling the recursion relation we obtain various relationships
\[\begin{split} G_{4}^{l}&=\left(\sigma_{w}^{2} \right)^{2l}\left[\prod_{l^{\prime}=1}^{l}\left(1+\frac{2}{n}\right)\right] \left(G_{2}^{0}\right)^{2}\\ &=\left[\prod_{l^{\prime}=1}^{l}\left(1+\frac{2}{n}\right)\right] \left(G_{2}^{l}\right)^{2}\\ &=\left(1+\frac{2}{n}\right)^{l}\left(G_{2}^{l}\right)^{2}.\end{split} \tag{56}\]
#### 2.2.3 Large-\(n\) expansion
Let us discuss what we learn from these simple applications of Wick's Theorem [10]. In the limit \(n\rightarrow\infty\), the recursion for (56) simplifies to \(G_{4}^{l}=\left(G_{2}^{l}\right)^{2}\) and the correlation function becomes
\[\mathbb{E}\left[z_{i_{1}}^{l}z_{i_{2}}^{l}z_{i_{3}}^{l}z_{i_{4}}^{l}\right]= \left(\delta_{i_{1}i_{2}}\delta_{i_{3}i_{4}}+\delta_{i_{1}i_{3}}\delta_{i_{2} i_{4}}+\delta_{i_{1}i_{4}}\delta_{i_{2}i_{3}}\right)\left(G_{2}^{l}\right)^{2}\, \tag{57}\]
which is what we would obtain if all the preactivations were Gaussian random variables (indeed, we know from the last lecture the preactivations are described by a Gaussian process in this limit). Large but finite \(n\) gives rise to a deviation from Gaussianity which to leading order acquires the form
\[\begin{split} G_{4}^{l}-\left(G_{2}^{l}\right)^{2}=& \left[\left(1+\frac{2}{n}\right)^{l}-1\right]\left(G_{2}^{l}\right)^{2}\\ =&\frac{2l}{n}\left(G_{2}^{l}\right)^{2}+O\left(\frac {1}{n^{2}}\right),\end{split} \tag{58}\]
valid if the depth is not too large. The correction to the four-point correlation function from its infinite-width form is therefore governed by the ratio of the depth to width of the network, \(l/n\). It turns out this ratio also governs the corrections to gradient-based learning in trained finite-width deep NNs [10]. The deviations from Gaussianity at finite width will be discussed further in Lectures 4 and 5.
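The scaling in (58) is easy to probe numerically. Below is a minimal Monte Carlo sketch (NumPy; the widths, depth, and sample counts are illustrative choices) for a deep _linear_ network, where the Wick calculation above is exact: it estimates \(G_{2}^{l}\) and \(G_{4}^{l}\) via (51) and compares \(G_{4}^{l}\) against the prediction \((1+2/n)^{l}\,(G_{2}^{l})^{2}\) of (56).

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n, L, sigma_w2, trials = 10, 50, 4, 1.0, 200_000   # illustrative sizes

x = rng.standard_normal(n0)                              # one fixed input sample

G2_hat = np.zeros(L + 1)   # Monte Carlo estimate of E[(z_i^l)^2]     = G_2^l
G4_hat = np.zeros(L + 1)   # Monte Carlo estimate of E[(z_i^l)^4] / 3 = G_4^l, by (51)

for _ in range(trials):
    z = np.sqrt(sigma_w2 / n0) * rng.standard_normal((n, n0)) @ x    # layer l = 0
    for l in range(L + 1):
        G2_hat[l] += np.mean(z ** 2) / trials
        G4_hat[l] += np.mean(z ** 4) / (3 * trials)
        z = np.sqrt(sigma_w2 / n) * rng.standard_normal((n, n)) @ z  # layer l + 1

prediction = (1 + 2 / n) ** np.arange(L + 1) * G2_hat ** 2           # Eq. (56)
print(np.c_[G4_hat, prediction])   # the two columns agree up to Monte Carlo error
```

With these sizes the relative deviation from Gaussianity at the last layer is roughly \(2L/n\approx 16\%\), comfortably visible above the Monte Carlo noise.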
### Gradient descent dynamics of optimization in the infinite-width limit
We next treat the dynamics of training deep NNs within empirical risk minimization under gradient flow (GF) or gradient descent (GD) in the infinite-width limit. We specialize to the case of square loss, where an analytic closed-form derivation is possible. This setting further develops the rich set of connections between infinitely-wide neural networks, kernel regression, and Gaussian processes [13, 14] which we partly established in the first lecture.
#### 2.3.1 Setting
We consider a fully-connected deep NN of depth \(L\) and width \(n\) represented by \(f_{t}(x):\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}\) with parameters \(\theta_{t}=\left\{W^{l}_{ij}(t),b^{l}_{i}(t)\right\}_{lij}\). We view the NN function and parameters as inheriting a time dependence from optimization and use the notation \(f_{t}(x)=f(x,\theta_{t})\) to emphasize this. The loss function on a dataset \(\mathcal{D}=\{(x_{a},y_{a})\}_{a=1}^{m}\) is
\[\mathcal{L}(\theta)=\frac{1}{m}\sum_{a=1}^{m}\ell\left(f(x_{a},\theta),y_{a} \right), \tag{59}\]
where we take \(\ell\) to be the square loss. We will sometimes write \(\mathcal{L}_{t}=\mathcal{L}(\theta_{t})\).
#### 2.3.2 Gradient descent dynamics for the neural network function
Let us investigate the dynamics on the deep NN function that arises from applying gradient descent to the network parameters. The latter are updated as
\[\theta_{\mu,t+1}=\theta_{\mu,t}-\eta\frac{\partial\mathcal{L}_{t}}{\partial \theta_{\mu}}, \tag{60}\]
where \(\mu\) indexes into the collection of trainable parameters and \(\eta\) is the learning rate. (In what follows, we will use the \(\mu\) index where necessary when it clarifies the structure of contracted derivatives, but will drop it otherwise.) Transcribing from parameter to function space dynamics using the chain rule, we can expand in the limit of small learning rate
\[\begin{split} f_{t+1}(x)=f(x,\theta_{t+1})&=f(x, \theta_{t}-\eta\frac{\partial\mathcal{L}_{t}}{\partial\theta})\\ &=f_{t}(x)-\eta\sum_{\mu}\frac{\partial f_{t}(x)}{\partial \theta_{\mu}}\frac{\partial\mathcal{L}_{t}}{\partial\theta_{\mu}}+\frac{\eta ^{2}}{2}\sum_{\mu,\nu}\frac{\partial^{2}f_{t}(x)}{\partial\theta_{\mu}\partial \theta_{\nu}}\frac{\partial\mathcal{L}_{t}}{\partial\theta_{\mu}}\frac{ \partial\mathcal{L}_{t}}{\partial\theta_{\nu}}+\dots.\end{split} \tag{61}\]
For illustration, we will examine the continuous time limit of these dynamics, but they can straightforwardly be extended to the discrete time setting by keeping higher-order terms in \(\eta\). Letting \(\eta\) tend to zero, the evolution of the function \(f_{t}(x)\) under gradient flow is
\[\frac{\mathrm{d}f_{t}(x)}{\mathrm{d}t}=-\sum_{\mu}\frac{\partial f_{t}(x)}{ \partial\theta_{\mu}}\frac{\partial\mathcal{L}_{t}}{\partial\theta_{\mu}}. \tag{62}\]
It will be useful to separate out the relationship between the NN function and the parameters (this map encodes the structure of the NN) with the relationship between the NN function and the loss, rewriting the gradients as
\[\frac{\partial\mathcal{L}_{t}}{\partial\theta_{\mu}}=\sum_{a\in\mathcal{D}} \frac{\partial\mathcal{L}_{t}}{\partial f(x_{a})}\frac{\partial f_{t}(x_{a})} {\partial\theta_{\mu}}. \tag{63}\]
The function evolution can therefore be rewritten
\[\frac{\mathrm{d}f_{t}(x)}{\mathrm{d}t}= -\sum_{a\in\mathcal{D}}\frac{\partial\mathcal{L}_{t}}{\partial f(x_{a })}\Bigg{[}\sum_{\mu}\frac{\partial f_{t}(x_{a})}{\partial\theta_{\mu}}\frac{ \partial f_{t}(x)}{\partial\theta_{\mu}}\Bigg{]} \tag{64}\] \[= -\sum_{a\in\mathcal{D}}\frac{\partial\mathcal{L}_{t}}{\partial f( x_{a})}\Theta_{t}(x_{a},x).\]
The quantity \(\Theta_{t}(x,x^{\prime})\) is the inner product defined as
\[\Theta_{t}(x,x^{\prime}):=\sum_{\mu}\frac{\partial f_{t}(x)}{\partial\theta_{ \mu}}\frac{\partial f_{t}(x^{\prime})}{\partial\theta_{\mu}}. \tag{65}\]
The \(n\to\infty\) limit of these dynamics was first studied in [13], where the quantity \(\Theta_{t}(x,x^{\prime})\) was introduced. The infinite-width limit of \(\Theta\) in randomly initialized networks was termed the _Neural Tangent Kernel_ (NTK), and it will play a crucial role in our subsequent discussion. We will overload terminology and in what follows also use this term to refer to the dynamical variable \(\Theta_{t}(x,x^{\prime})\), which may be evaluated away from initialization or in finite-sized networks.
For the form of the loss we consider, the term \(\partial\mathcal{L}_{t}/\partial f(x)\) only depends on \(f_{t}(x)\). However, \(\Theta_{t}(x,x^{\prime})\) is, in general, a new variable whose dynamics we need to track to ensure a closed system of equations. Time derivatives arise entirely from the dynamics of parameters, so we can make the substitution for the operator
\[\frac{\mathrm{d}}{\mathrm{d}t}=\sum_{\mu}\frac{\partial\theta_{\mu}}{\partial t }\frac{\partial}{\partial\theta_{\mu}}. \tag{66}\]
The time evolution of the NTK is
\[\frac{\mathrm{d}\Theta_{t}(x,x^{\prime})}{\mathrm{d}t}= \sum_{\mu}\Bigg{[}\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\frac{ \partial f_{t}(x)}{\partial\theta_{\mu}}\bigg{)}\frac{\partial f_{t}(x^{\prime })}{\partial\theta_{\mu}}+\frac{\partial f_{t}(x)}{\partial\theta_{\mu}}\frac{ \mathrm{d}}{\mathrm{d}t}\bigg{(}\frac{\partial f_{t}(x^{\prime})}{\partial \theta_{\mu}}\bigg{)}\Bigg{]} \tag{67}\] \[= \sum_{\mu,\nu}\Bigg{[}\frac{\partial\theta_{\nu}}{\partial t} \frac{\partial}{\partial\theta_{\nu}}\bigg{(}\frac{\partial f_{t}(x)}{ \partial\theta_{\mu}}\bigg{)}\frac{\partial f_{t}(x^{\prime})}{\partial\theta _{\mu}}+\frac{\partial f_{t}(x)}{\partial\theta_{\mu}}\frac{\partial\theta_{\nu}}{\partial t}\frac{\partial}{\partial\theta_{\nu}}\bigg{(}\frac{\partial f_{t}(x^{\prime })}{\partial\theta_{\mu}}\bigg{)}\Bigg{]}\] \[= -\sum_{a\in\mathcal{D}}\frac{\partial\mathcal{L}_{t}}{\partial f( x_{a})}\Bigg{[}\sum_{\mu,\nu}\Bigg{(}\frac{\partial^{2}f_{t}(x)}{\partial \theta_{\mu}\partial\theta_{\nu}}\frac{\partial f_{t}(x_{a})}{\partial\theta_{\nu}}\frac{\partial f_{t}(x^{\prime})}{\partial\theta_{\mu}}+\frac{ \partial^{2}f_{t}(x^{\prime})}{\partial\theta_{\mu}\partial\theta_{\nu}}\frac{ \partial f_{t}(x_{a})}{\partial\theta_{\nu}}\frac{\partial f_{t}(x)}{\partial \theta_{\mu}}\bigg{)}\Bigg{]},\]
where from the second line to the third line, we used the gradient flow of the parameters.
We find that the dynamics of the NTK is therefore governed by the quantity in square brackets in Eq. (67), which involves a contraction of first and second-order derivatives of the network map \(\theta\to f(x)\). The quantity in square brackets may be a new dynamical variable different from \(f,\Theta\) in general, and the full set of equations describing the dynamics in function space is generically not closed at this level (that is, using only Eqs. (64) and (67)), requiring the generation of further equations as we did for \(\Theta\). We will return to this procedure in the next lecture.
#### 2.3.3 Remark on normalization
Constructing and training a neural network comes with a design choice as to the parameterization of the parameters, and their initialization prior to optimization. Our presentation has thus far followed historical development; to maintain consistency with the subsequent
literature, we now use a different parameterization and initialization, which in the literature has been referred to as _NTK parameterization_[13, 14]. In the layer-to-layer transformations, we factor out an explicit \(\sigma_{w}/\sqrt{n_{l}}\) dependence in front of the weights (the important factor is the size-dependence rather than \(\mathcal{O}(1)\) constants) and instead use the initialization scheme \(W_{ij}^{l}\sim\mathcal{N}(0,1)\). In contrast, the scheme used thus far absorbs the appropriate width scaling into the initialization, e.g. \(W_{ij}^{l}\sim\mathcal{N}(0,\sigma_{w}^{2}/n_{l})\); it is referred to as _standard parameterization_ in the literature and has been a common empirical practice in deep learning. Both schemes give rise to the same Gaussian processes in the infinite-width limit, but their dynamics under gradient descent for general networks can be somewhat different. The choice of parameterization also affects the dependence of a suitable choice of hyperparameters, such as the learning rate in gradient descent, on the size of the network. (In standard parameterization, the learning rate will exhibit an implicit dependence on the size of hidden layers.) Hereafter, we will use NTK parameterization since it makes the infinite-width limiting behavior more explicit; however, the important features of our discussion (notably the connection between the infinite-width limit, kernel regression, and Gaussian processes) will be unaffected and so we do this without loss of generality. Further discussion on the difference between NTK and standard parameterization can be found in [14]. We note in passing that, since this earlier work, a rich literature has developed on the topic of parameterizations, hyperparameter selection, and infinite-width limits.
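To make the distinction concrete, here is a small NumPy sketch (the sizes and \(\sigma_{w}\) are arbitrary illustrative choices) contrasting the two schemes for a single weight matrix; both produce preactivations of the same \(\mathcal{O}(1)\) size at initialization, but the parameter gradients, and hence suitable learning rates, scale differently with width.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, sigma_w = 512, 256, 1.0
x = rng.standard_normal(n_in)

# Standard parameterization: the width scaling lives in the initialization.
W_std = rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n_out, n_in))
z_std = W_std @ x

# NTK parameterization: O(1) weights, explicit sigma_w / sqrt(n_in) in the forward pass.
W_ntk = rng.normal(0.0, 1.0, size=(n_out, n_in))
z_ntk = (sigma_w / np.sqrt(n_in)) * (W_ntk @ x)

print(z_std.std(), z_ntk.std())   # comparable O(1) preactivations in both schemes
# dz/dW entries are O(1) in standard parameterization but O(1/sqrt(n_in)) in NTK
# parameterization, which is why the learning rate picks up an extra 1/n in the former.
```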
#### 2.3.4 Example: single hidden-layer neural network
Let us examine a concrete example: a single hidden-layer NN. We set the biases to zero for simplicity. The preactivations in the hidden layer and the output are
\[\begin{split} z_{i}^{0}(x)=&\sum_{k=1}^{n_{0}} \sigma_{w}\frac{W_{ik}^{0}}{\sqrt{n_{0}}}x_{k}\\ f(x)=&\sum_{i=1}^{n}\sigma_{w}\frac{W_{i}^{1}}{ \sqrt{n}}\phi\left(z_{i}^{0}(x)\right).\end{split} \tag{68}\]
The Neural Tangent Kernel for this network is
\[\Theta(x,x^{\prime})=\frac{\sigma_{w}^{2}}{n}\sum_{i=1}^{n}\phi\left(z_{i}^{0} (x)\right)\phi\left(z_{i}^{0}(x^{\prime})\right)+\left(\frac{\sigma_{w}^{2}}{ n}\sum_{i=1}^{n}\left(W_{i}^{1}\right)^{2}\phi^{\prime}\left(z_{i}^{0}(x)\right) \phi^{\prime}\left(z_{i}^{0}(x^{\prime})\right)\right)\left(\frac{\sigma_{w}^{ 2}}{n_{0}}\sum_{j=1}^{n_{0}}x_{j}x_{j}^{\prime}\right). \tag{69}\]
The training dynamics of the preactivations obey
\[\frac{\mathrm{d}z_{i}^{0}(x)}{\mathrm{d}t}=-\sum_{\alpha\in D}\frac{\partial \mathcal{L}_{t}}{\partial f(x_{\alpha})}\frac{\sigma_{w}W_{i}^{1}\phi^{\prime }\left(z_{i}^{0}(x_{\alpha})\right)}{\sqrt{n}}\left(\frac{\sigma_{w}^{2}}{n_ {0}}\sum_{j=1}^{n_{0}}x_{\alpha,j}x_{j}\right), \tag{70}\]
and the dynamics of the weights in the last layer are
\[\frac{\mathrm{d}W_{i}^{1}}{\mathrm{d}t}=-\sum_{\alpha\in D}\frac{\partial \mathcal{L}_{t}}{\partial f(x_{\alpha})}\frac{\sigma_{w}\phi\left(z_{i}^{0}(x _{\alpha})\right)}{\sqrt{n}}. \tag{71}\]
A back-of-the-envelope estimate shows that at initialization, these two quantities vanish as \(n\to\infty\); indeed \(\left|\frac{\mathrm{d}z_{i}^{0}}{\mathrm{d}t}\right|_{t=0}\sim\mathcal{O}( \frac{1}{\sqrt{n}})\) and \(\left|\frac{\mathrm{d}W_{i}^{1}}{\mathrm{d}t}\right|_{t=0}\sim\mathcal{O}( \frac{1}{\sqrt{n}})\). These terms contribute to the dynamics of \(\Theta_{t}\), and a simple calculation suggests that a similar vanishing occurs for the NTK evolution estimated at initialization, as \(n\to\infty\),
\[\left|\frac{\mathrm{d}\Theta_{t}(x,x^{\prime})}{\mathrm{d}t}\right|_{t=0} \xrightarrow[n\to\infty]{}0. \tag{72}\]
On the other hand, we calculated \(\Theta_{t}\) above, and it is \(\mathcal{O}(1)\) at initialization as \(n\to\infty\). Thus, a suggestive picture based on these estimates at initialization as \(n\to\infty\), _assuming they continue to hold true during training_, is that individual parameters and hidden-layer preactivations do not evolve under dynamics in this limit, and the NTK remains at its initial value.
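As a quick numerical check of (69), the following NumPy sketch (illustrative sizes, a \(\tanh\) nonlinearity, and zero biases as above) evaluates \(\Theta(x,x^{\prime})\) both from the definition (65), using explicit parameter gradients for this architecture, and from the closed form (69); the two expressions agree to floating-point precision.

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n, sigma_w = 5, 2000, 1.0
phi, dphi = np.tanh, lambda z: 1.0 / np.cosh(z) ** 2

W0 = rng.standard_normal((n, n0))   # NTK parameterization: O(1) weights
W1 = rng.standard_normal(n)
x, xp = rng.standard_normal(n0), rng.standard_normal(n0)

def preact(v):                       # z_i^0(v) from (68)
    return sigma_w * W0 @ v / np.sqrt(n0)

def grads(v):
    """Explicit parameter gradients of f(v) for this single-hidden-layer network."""
    z = preact(v)
    df_dW1 = sigma_w * phi(z) / np.sqrt(n)                       # shape (n,)
    df_dW0 = (sigma_w * W1 * dphi(z) / np.sqrt(n))[:, None] \
             * (sigma_w * v / np.sqrt(n0))[None, :]              # shape (n, n0)
    return np.concatenate([df_dW1, df_dW0.ravel()])

# NTK from its definition (65) ...
theta_def = grads(x) @ grads(xp)

# ... and from the closed form (69)
z, zp = preact(x), preact(xp)
theta_69 = (sigma_w**2 / n) * phi(z) @ phi(zp) \
         + (sigma_w**2 / n) * (W1**2 * dphi(z) * dphi(zp)).sum() \
           * (sigma_w**2 / n0) * (x @ xp)

print(theta_def, theta_69)   # identical up to floating-point round-off
```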
#### 2.3.5 Neural Tangent Kernel in the infinite-width limit
We will give a physicist's treatment of the behavior of the NTK in the infinite-width limit, both at initialization and after training. The NTK is in general a random variable when the network parameters are themselves drawn from a distribution. However, certain properties become deterministic due to the law of large numbers as \(n\to\infty\). The first main result states that the NTK at initialization approaches a deterministic quantity as \(n\to\infty\), with a recursion relation that parallels the recursion we derived in Lecture 1 for the NNGP. The second main result considers the dynamics of the NTK as \(n\to\infty\): surprisingly, the NTK stays constant during the course of training. Both of these results were hinted at in the last section for a single hidden-layer NN, based on our back-of-the-envelope estimates at initialization: specifically, we found that \(\Theta_{t=0}\sim\mathcal{O}(1)\) and \(d\Theta_{t=0}/dt\to 0\).
The constancy of the NTK enables us to solve for the network evolution analytically and gives a connection between deep NN learning and kernel regression.
#### Initialization
**Result 3** ([13]).: _For a network of depth \(L\) at initialization with nonlinearity \(\phi\), and in the limit as the layer width \(n\to\infty\) sequentially, the NTK \(\Theta^{L}\) converges to a deterministic limiting kernel:_
\[\Theta^{L,kj}\to\Theta_{0}^{L}\cdot\delta_{kj}, \tag{73}\]
_where we treat the general setting of non-scalar maps \(f:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{L+1}}\),_
\[\Theta^{L,kj}(x,x^{\prime}):=\sum_{\mu}\frac{\partial f_{k}(x)}{\partial\theta _{\mu}}\frac{\partial f_{j}(x^{\prime})}{\partial\theta_{\mu}}, \tag{74}\]
_and \(\Theta_{0}^{L}:\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{0}}\to\mathbb{R}\) is a kernel whose recursion relation we derive below._

We now touch on the main ideas behind the proof of this result; we refer the reader to [13] for complete technical details. We can understand how the result arises by induction in the depth of the network. As we sequentially take each hidden layer to be of infinite size, we will leverage the fact that the distribution of preactivations originating from that layer is described by a Gaussian process with a covariance function given by the NNGP kernels discussed in Lecture 1.

To derive the recursion relation, we split the parameters into two groups corresponding to those from the last layer and those from earlier in the network,

\[\begin{split}\Theta^{L,kj}(x,x^{\prime})&=\sum_{ \mu}\frac{\partial f_{k}(x)}{\partial\theta_{\mu}}\frac{\partial f_{j}(x^{ \prime})}{\partial\theta_{\mu}}\\ &=\sum_{\begin{subarray}{c}\mu\in\text{ last layer}\\ L\end{subarray}}\frac{\partial f_{ k}(x)}{\partial\theta_{\mu}}\frac{\partial f_{j}(x^{\prime})}{\partial \theta_{\mu}}+\sum_{\begin{subarray}{c}\mu\in\text{ earlier layers}\\ 1,\dots,L-1\end{subarray}}\frac{\partial f_{k}(x)}{\partial\theta_{\mu}} \frac{\partial f_{j}(x^{\prime})}{\partial\theta_{\mu}}.\end{split} \tag{75}\]

Working in NTK parameterization for the layer-to-layer transformation,
\[f_{k}(x)=z_{k}^{L}(x)=\sigma_{b}\,b_{k}^{L}+\sum_{i=1}^{n}\sigma_{w}\frac{W_{ ki}^{L}}{\sqrt{n}}\phi\left(z_{i}^{L-1}(x)\right), \tag{76}\]
the NTK takes the form
\[\Theta^{L,kj}(x,x^{\prime})=\delta_{kj}\sigma_{b}^{2}+\delta_{kj} \frac{\sigma_{w}^{2}}{n}\sum_{i=1}^{n}\phi\left(z_{i}^{L-1}(x)\right)\phi\left(z_ {i}^{L-1}(x^{\prime})\right)\\ +\sigma_{w}^{2}\sum_{i,s=1}^{n}\frac{W_{ki}^{L}W_{js}^ {L}}{n}\phi^{\prime}\left(z_{i}^{L-1}(x)\right)\phi^{\prime}\left(z_{s}^{L-1}( x^{\prime})\right)\sum_{\begin{subarray}{c}\mu\in\text{ earlier layers}\\ 1,\dots,L-1\end{subarray}}\frac{\partial z_{i}^{L-1}(x)}{\partial\theta_{\mu}}\frac{ \partial z_{s}^{L-1}(x^{\prime})}{\partial\theta_{\mu}}. \tag{77}\]
Using the induction hypothesis we can simplify the last term
\[\Theta^{L,kj}(x,x^{\prime})=\delta_{kj}\bigg{[}\sigma_{b}^{2}+ \frac{\sigma_{w}^{2}}{n}\sum_{i=1}^{n}\phi\left(z_{i}^{L-1}(x)\right)\phi\left( z_{i}^{L-1}(x^{\prime})\right)\\ +\frac{\sigma_{w}^{2}}{n}\sum_{i=1}^{n}(W_{ki}^{L})^{2}\phi^{ \prime}\left(z_{i}^{L-1}(x)\right)\phi^{\prime}\left(z_{i}^{L-1}(x^{\prime}) \right)\Theta^{L-1}(x,x^{\prime})\bigg{]}. \tag{78}\]
The second and third terms are averages of i.i.d. random variables in the infinite-width limit. Thus, by the law of large numbers, they concentrate to their mean when \(n\to\infty\). Since the distribution on \(z^{L-1}\) is given by a Gaussian process, we can further simplify the expression. Revisiting the discussion in Sec. 1.4, we let
\[\mathcal{F}_{\phi}(\Sigma)=\mathbb{E}_{(u,v)\sim\mathcal{N}(0, \Sigma)}\left[\phi(u)\phi(v)\right] \tag{79}\] \[\widetilde{\mathcal{F}}_{\phi}(\Sigma)=\mathbb{E}_{(u,v)\sim \mathcal{N}(0,\Sigma)}\left[\phi^{\prime}(u)\phi^{\prime}(v)\right],\]
where
\[\Sigma=\left(\begin{array}{c|c}K_{11}&K_{12}\\ \hline K_{21}&K_{22}\end{array}\right). \tag{80}\]
The first and second terms concentrate to
\[\sigma_{b}^{2}+\sigma_{w}^{2}\,\mathbb{E}\left[\phi\left(z_{i}^{L -1}(x)\right)\phi\left(z_{i}^{L-1}(x^{\prime})\right)\right]=\sigma_{b}^{2}+ \sigma_{w}^{2}\,\mathcal{F}_{\phi}(K^{L-1}(x,x),K^{L-1}(x,x^{\prime}),K^{L-1}( x^{\prime},x^{\prime}))\\ =K^{L}(x,x^{\prime}). \tag{81}\]
The third term as \(n\to\infty\) becomes
\[\sigma_{w}^{2}\,\mathbb{E}\left[\left(W_{ki}^{L}\right)^{2}\right]\mathbb{E} \left[\phi^{\prime}\left(z_{i}^{L-1}(x)\right)\phi^{\prime}\left(z_{i}^{L-1}( x^{\prime})\right)\right]\Theta^{L-1}(x,x^{\prime})=\sigma_{w}^{2}\widetilde{ \mathcal{F}}_{\phi}(K^{L-1}(x,x),\dots)\Theta^{L-1}(x,x^{\prime}). \tag{82}\]
Altogether, in a randomly initialized infinitely-wide deep NN, we have the following recursion for the NTK,
\[\Theta^{L,kj}(x,x^{\prime})=\delta_{kj}\bigg{(}K^{L}(x,x^{\prime})+\sigma_{w} ^{2}\widetilde{\mathcal{F}}_{\phi}(K^{L-1}(x,x),K^{L-1}(x,x^{\prime}),K^{L-1}( x^{\prime},x^{\prime}))\cdot\Theta^{L-1}(x,x^{\prime})\bigg{)}. \tag{83}\]
Hence the NTK depends both on the two-point correlation function of forward-propagated signal (i.e. \(K^{L}\)) and on back-propagated signal (such as the integral involving the derivative of \(\phi\), which can sometimes be computed in closed-form).
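For concreteness, here is a sketch of the coupled NNGP/NTK recursions for a deep ReLU network, for which the Gaussian averages in (79) have the standard arc-cosine closed forms \(\mathcal{F}_{\phi}=\frac{\sqrt{K_{11}K_{22}}}{2\pi}\left(\sin\theta+(\pi-\theta)\cos\theta\right)\) and \(\widetilde{\mathcal{F}}_{\phi}=\frac{\pi-\theta}{2\pi}\) with \(\cos\theta=K_{12}/\sqrt{K_{11}K_{22}}\). The base case \(\Theta^{1}=K^{1}\) and the depth bookkeeping below are our own conventions for the sketch.

```python
import numpy as np

def relu_gauss_averages(k11, k12, k22):
    """F_phi = E[phi(u)phi(v)] and F~_phi = E[phi'(u)phi'(v)] for ReLU phi,
    with (u, v) centered Gaussian with covariance [[k11, k12], [k12, k22]]."""
    rho = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(rho)
    F = np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    F_dot = (np.pi - theta) / (2 * np.pi)
    return F, F_dot

def nngp_and_ntk(x, xp, depth, sigma_w2=2.0, sigma_b2=0.0):
    """Iterate the NNGP recursion and the NTK recursion (83) for a ReLU network."""
    n0 = len(x)
    K = {  # first-layer kernels
        "xx": sigma_b2 + sigma_w2 * (x @ x) / n0,
        "xp": sigma_b2 + sigma_w2 * (x @ xp) / n0,
        "pp": sigma_b2 + sigma_w2 * (xp @ xp) / n0,
    }
    Theta = K["xp"]                                     # base case Theta^1 = K^1
    for _ in range(depth - 1):
        F_xp, Fd_xp = relu_gauss_averages(K["xx"], K["xp"], K["pp"])
        F_xx, _ = relu_gauss_averages(K["xx"], K["xx"], K["xx"])
        F_pp, _ = relu_gauss_averages(K["pp"], K["pp"], K["pp"])
        K_next = sigma_b2 + sigma_w2 * F_xp
        Theta = K_next + sigma_w2 * Fd_xp * Theta       # recursion (83)
        K = {"xx": sigma_b2 + sigma_w2 * F_xx,
             "xp": K_next,
             "pp": sigma_b2 + sigma_w2 * F_pp}
    return K["xp"], Theta

x, xp = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(nngp_and_ntk(x, xp, depth=4))    # (NNGP kernel, NTK) for the pair (x, x')
```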
#### Training
In examining the single-hidden layer NN in Sec. 2.3.4, we saw how the size-dependent parameterization (or initialization) factors of \(1/\sqrt{n}\) resulted in dynamical variables such as
individual weight matrix elements or individual pre-activations in a layer acquiring a vanishingly small rate of change, with respect to optimization time, at initialization as \(n\to\infty\). This originated from the combination of inverse-\(n\) dependent factors and other quantities remaining \(\mathcal{O}(1)\); it then resulted in the vanishing of the time derivative of the NTK at initialization. In fact, with certain losses (such as square loss as we are considering), this vanishing time derivative continues to hold during training [13], so that macroscopic variables such as \(\Theta_{t}(x,x^{\prime})\), as well as individual parameters and preactivations, are frozen at their initial values in the infinite-width limit. (While these dynamical variables stay at their initial values during optimization as \(n\to\infty\), they collectively still enable the NN function to adapt and fit the training data). To summarize this informally,
**Result 4** ([13]).: _Under gradient flow as \(n\to\infty\), the NTK stays constant during training and equal to its initial value,_
\[\Theta_{t}^{L,kj}\to\Theta_{0}^{L}\delta_{kj}. \tag{84}\]
Consequently, the differential equation for the NN function takes the simple form
\[\frac{\mathrm{d}f_{t}(x)}{\mathrm{d}t}= -\sum_{\alpha\in\mathcal{D}}\frac{\partial\mathcal{L}_{t}}{ \partial f(x_{\alpha})}\Theta_{0}(x_{\alpha},x)=-\sum_{\alpha\in\mathcal{D}} \left(f_{t}(x_{\alpha})-y_{\alpha}\right)\Theta_{0}(x_{\alpha},x), \tag{85}\]
which can be solved exactly.
#### 2.3.6 Closed-form solution for dynamics and equivalent linear model
We can derive an explicit solution for \(f_{t}(x)\) from the linear ordinary differential equation in (85). Before doing so, however, we discuss an equivalent formulation for the dynamics that lends perspective to the complexity of the model that is learned in this infinite-width limit and yields a parameter-space description. As we state in the next section (Result 5), the optimization dynamics of the NN function in the infinite-width limit is equivalent to the function realizing a first-order Taylor expansion with respect to the NN parameters [14]; more precisely, it realizes the specific linear model
\[f_{t}^{\mathrm{lin}}(x):=f_{0}(x)+\nabla_{\theta}f_{0}(x)\big{|}_{\theta= \theta_{0}}\cdot\omega_{t}\, \tag{86}\]
where \(\omega_{t}=\theta_{t}-\theta_{0}\) is the change in the parameters during training from their initial value. Note that this model is still nonlinear with respect to inputs \(x\). Hence, we can also study parameter space dynamics in the infinite-width limit, obtaining a linear ODE for the NN parameters \(\theta_{t}\) in analogy to (85). Let \(\mathcal{X}\) and \(\mathcal{Y}\) denote the collection of training inputs and targets vectorized over the sample dimension \(a=1,\dots,m\). Solving the ODEs in closed-form yields
\[\omega_{t}=-\nabla_{\theta}f_{0}(\mathcal{X})^{\top}\cdot\Theta_ {0}^{-1}\cdot\big{(}I-e^{-\Theta_{0}t}\big{)}\cdot(f_{0}(\mathcal{X})- \mathcal{Y}), \tag{87}\] \[f_{t}^{\mathrm{lin}}(\mathcal{X})=\big{(}I-e^{-\Theta_{0}t} \big{)}\mathcal{Y}+e^{-\Theta_{0}t}f_{0}(\mathcal{X}), \tag{88}\]
where \(\Theta_{0}\equiv\Theta_{0}(\mathcal{X},\mathcal{X})\). The value of the NN function in the infinite-width limit (equivalently, the value of the linear model (86)) is
\[f_{t}(x)=\underbrace{\Theta_{0}(x,\mathcal{X})\cdot\Theta_{0}^{-1}\cdot\big{(} I-e^{-\Theta_{0}t}\big{)}\cdot\mathcal{Y}}_{\mu_{t}(x)}+\underbrace{f_{0}(x)- \Theta_{0}(x,\mathcal{X})\cdot\Theta_{0}^{-1}\cdot\big{(}I-e^{-\Theta_{0}t} \big{)}\cdot f_{0}(\mathcal{X})}_{\gamma_{t}(x)}, \tag{89}\]
where we grouped all the terms depending on the initial function in \(\gamma_{t}(x)\). While we have solved the dynamics for a particular instantiation of an infinite-width random network, if we
consider the distribution on \(f_{t}(x)\) that arises from the initial distribution on \(f_{0}\) (namely, \(f_{0}(x)\) is a sample from a GP), we find \(f_{t}(x)\) is also described by a GP whose mean and covariance functions can be calculated from (89). (We separated the terms into \(\mu_{t}(x)\) and \(\gamma_{t}(x)\) to hint that they contribute to the mean and variance of \(f_{t}(x)\), respectively.) This GP can be contrasted with the one arising from Bayesian inference in the infinite-width limit (32). The GP arising from empirical risk minimization and gradient flow has mean and variance [14]
\[\begin{split}\mu(x)&=\Theta_{0}(x,\mathcal{X}) \cdot\Theta_{0}^{-1}\cdot\mathcal{Y},\\ \sigma^{2}(x)&=K(x,x)+\Theta_{0}(x,\mathcal{X}) \cdot\Theta_{0}^{-1}\cdot K\cdot\Theta_{0}^{-1}\cdot\Theta_{0}(\mathcal{X},x)- (\Theta_{0}(x,\mathcal{X})\cdot\Theta_{0}^{-1}\cdot K(\mathcal{X},x)+\\ & K(x,\mathcal{X})\cdot\Theta_{0}^{-1}\cdot\Theta_{0}(\mathcal{X},x)).\end{split} \tag{90}\]
(Recall that \(\Theta_{0},K\) without arguments refers to the \(m\times m\) matrix constructed by evaluating on training samples \(\mathcal{X}\).)
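These expressions are straightforward to evaluate numerically. The sketch below (NumPy; `theta_fn` and `k_fn` stand in for any NTK and NNGP kernel functions, e.g. the recursions sketched above, and \(\Theta_{0}(\mathcal{X},\mathcal{X})\) is assumed positive definite) computes the mean and variance of \(f_{t}(x)\) at a test point directly from (89), reducing to (90) as \(t\to\infty\).

```python
import numpy as np

def ntk_gp_prediction(theta_fn, k_fn, X, Y, x_test, t=np.inf):
    """Mean and variance of f_t(x_test) under gradient-flow training, from Eq. (89).

    theta_fn, k_fn: callables returning the NTK / NNGP kernel for a pair of inputs.
    X, Y: lists/arrays of training inputs and targets.
    """
    m = len(X)
    Theta = np.array([[theta_fn(a, b) for b in X] for a in X])      # Theta_0(X, X)
    K     = np.array([[k_fn(a, b)     for b in X] for a in X])      # K(X, X)
    theta_x = np.array([theta_fn(x_test, a) for a in X])            # Theta_0(x, X)
    k_x     = np.array([k_fn(x_test, a)     for a in X])            # K(x, X)

    # Theta_0^{-1} (I - e^{-Theta_0 t}) via the eigendecomposition of Theta_0
    lam, U = np.linalg.eigh(Theta)
    decay = 1.0 - np.exp(-lam * t) if np.isfinite(t) else np.ones(m)
    A = U @ np.diag(decay / lam) @ U.T

    mean = theta_x @ A @ Y                                           # mu_t(x)
    var = (k_fn(x_test, x_test)
           + theta_x @ A @ K @ A.T @ theta_x
           - 2 * theta_x @ A @ k_x)                                  # reduces to (90) at t -> inf
    return mean, var
```

At \(t=\infty\) this reproduces the familiar kernel-regression mean predictor \(\Theta_{0}(x,\mathcal{X})\,\Theta_{0}^{-1}\,\mathcal{Y}\).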
Rather surprisingly, we have found that the infinite-width limit under optimization leads to exactly solvable dynamics for deep neural networks. In principle, the result could have been quite complicated, and with infinitely-many parameters the learned function might have been rather ill-behaved. Instead, the dynamics have a relatively simple description: it is captured by the kernel \(\Theta_{0}\) associated with the deep NN and which is computable via recursion relations. We reiterate how this simplicity came about due to the way in which NN parameters are commonly represented (either through explicit or implicit factors involving the hidden-layer size) in deep learning.
#### 2.3.7 Aside: linear model equivalence in two parameterizations
In Sec. 2.3.6, we mentioned how gradient flow dynamics at infinite width realizes a linear relationship between the NN function and parameters during the course of training (86), and that this is equivalent to the Neural Tangent Kernel \(\Theta\) staying constant at its initial value \(\Theta_{0}\) as \(n\to\infty\). While we have focused our discussion on gradient flow, these equivalences between infinite-width deep NN dynamics, linear models, kernel regression, and Gaussian processes hold under gradient descent up to a maximum learning rate. Below, we state these results informally [14], highlighting the value of the maximum learning rate, and contrasting how the results appear in NTK and standard parameterization.
**Result 5** ([14]).: _Assume that the smallest eigenvalue of the NTK at initialization is positive \(\lambda_{min}>0\) and let \(\lambda_{max}\) be the largest eigenvalue. Under gradient descent with a learning rate \(\eta<\eta_{critical}\) where \(\eta_{critical}=\frac{2}{\lambda_{min}+\lambda_{max}}\), we have (in NTK parameterization),_
\[\begin{split}\sup_{t\geq 0}\left\|f_{t}(x)-f_{t}^{lin}(x) \right\|_{2}&=O\left(\frac{1}{\sqrt{n}}\right)\\ \sup_{t\geq 0}\frac{\|\theta_{t}-\theta_{0}\|_{2}}{\sqrt{n}}& =O\left(\frac{1}{\sqrt{n}}\right)\qquad\text{ as }n\to\infty\\ \sup_{t\geq 0}\left\|\Theta_{t}-\Theta_{0}\right\|_{F}& =O\left(\frac{1}{\sqrt{n}}\right).\end{split} \tag{91}\]
In standard parameterization (cf. Sec. 2.3.3), it is necessary to have \(\eta_{0}<\eta_{critical}\), and the learning rate used in gradient descent is instead \(\eta:=\eta_{0}/n\). In this parameterization, we define the Neural Tangent Kernel as
\[\Theta=\frac{1}{n}\sum_{\mu}\frac{\partial f(x)}{\partial\theta_{\mu}}\frac{ \partial f(x^{\prime})}{\partial\theta_{\mu}}, \tag{92}\]
and the analogous scalings are
\[\begin{split}\sup_{t\geq 0}\left\|f_{t}(x)-f_{t}^{\text{lin}}(x) \right\|_{2}&=O\left(\frac{1}{\sqrt{n}}\right)\\ \sup_{t\geq 0}\left\|\theta_{t}-\theta_{0}\right\|_{2}& =O\left(\frac{1}{\sqrt{n}}\right)\qquad\text{ as }n\to\infty\\ \sup_{t\geq 0}\left\|\Theta_{t}-\Theta_{0}\right\|_{F}& =O\left(\frac{1}{\sqrt{n}}\right).\end{split} \tag{93}\]
We see that the primary differences between the two parameterizations in the infinite-width limit are the bound on the \(L_{2}\) parameter distance moved during optimization and the form of the maximum learning rate.
## 3 Lecture 3
### Introduction
In this lecture, we go beyond the exactly solvable infinite-width limit to discuss both perturbative and non-perturbative corrections that are visible at large but finite width. One aspect of the exactly solvable limit discussed in Lecture 2 is that it exhibits no "feature learning"; rather, the model relies on a fixed set of random features from initialization for prediction. Equivalently, the Neural Tangent Kernel does not change during the course of training. Finite-size hidden layers in a deep neural network instead give rise to "weak" or "strong" amounts of feature learning, and one goal of this lecture is to illustrate two theoretical descriptions of such feature learning.

We begin by revisiting the function space description we alluded to in Sec. 2.3.2 which gives rise to a hierarchy of coupled differential equations necessary for closure. This hierarchy can be truncated to compute leading order corrections arising from finite width [15, 16].10 We then give a contrasting example of a minimal model whose learning (at large \(n\)) is quite different from the exactly solvable kernel limit and its perturbative corrections, a phenomenon termed "catapult dynamics" [17]. This phenomenon arises from using a learning rate in gradient descent that is larger than the critical value (Result 5).
Footnote 10: While we do not discuss it here, capturing the effect of depth is treated in [10].
### Perturbation theory for dynamics at large but finite width
In Sec. 2.3.2, we derived ODEs for the evolution of the NN function and the dynamical Neural Tangent Kernel under gradient flow. (Here, we use the abbreviation \(R_{a}:=\frac{\partial\mathcal{L}_{t}}{\partial f(x_{a})}=f_{t}(x_{a})-y_{a}\) for the residual originating from the loss.) These were
\[\frac{df_{t}(x)}{dt}=-\sum_{a\in\mathcal{D}}\underbrace{\frac{\partial\mathcal{L}_{ t}}{\partial f(x_{a})}}_{\equiv R_{a,t}}\Theta_{t}(x_{a},x), \tag{94}\]
and
\[\frac{d\Theta_{t}(x,x^{\prime})}{dt}=-\sum_{a\in\mathcal{D}}R_{a,t}\underbrace {\left[\sum_{\mu,\nu}\frac{\partial f_{t}(x_{a})}{\partial\,\theta_{\mu}}\frac{ \partial^{2}f_{t}(x)}{\partial\,\theta_{\mu}\partial\,\theta_{\nu}}\frac{ \partial f_{t}(x^{\prime})}{\partial\,\theta_{\nu}}+(x\leftrightarrow x^{ \prime})\right]}_{\mathcal{O}_{3}(x,x^{\prime},x_{a})}. \tag{95}\]
where we use \((x\leftrightarrow x^{\prime})\) to denote the expression obtained from exchanging \(x\) and \(x^{\prime}\) in the preceding term appearing in square brackets (hence, note that \(\mathcal{O}_{3}\) is symmetric under exchange of arguments \(x,x^{\prime}\)).
#### 3.2.1 Hierarchy of coupled ODEs
While the specific form of these ODEs and new dynamical variables (such as \(\mathbb{O}_{3}\)) will depend on the particular NN, generically the system of coupled ODEs may not be closed at this order. Hence we continue generating new equations in the hierarchy by computing time derivatives of the new variables that appear. Altogether we obtain a hierarchy of coupled ODEs for dynamical variables \(\mathbb{O}_{s}(x_{1},...,x_{s},t)\) that involve particular types of contractions, over NN parameters, of high and low-order derivatives of the NN function with respect to parameters. We refer to this hierarchy of coupled ODEs as a function space description since it references dynamical variables whose arguments are all on sample space \(x\in\mathbb{R}^{n_{0}}\) and the NN parameters are summed over in the description of the new variables.11
Footnote 11: These ODEs were first introduced and studied at a physics-level of rigor in [15] for deep linear and ReLU networks, which we follow here, and then analyzed from a mathematically rigorous perspective in [16]. A related set of variables is introduced and studied in [10].
Continuing the procedure described, we derive the evolution of \(\mathbb{O}_{3}\) in terms of a new variable \(\mathbb{O}_{4}\),
\[\frac{d\mathbb{O}_{3,t}(x,x^{\prime},x_{\alpha})}{dt}=-\sum_{\beta\in\mathcal{ D}}R_{\beta,t}\mathbb{O}_{4,t}(x,x^{\prime},x_{\alpha},x_{\beta}), \tag{96}\]
and so on. To write a compact expression, we define
\[\begin{split}&\mathbb{O}_{1}(x_{1}):=f(x_{1})\\ &\mathbb{O}_{2}(x_{1},x_{2}):=\Theta(x_{1},x_{2})\\ &\mathbb{O}_{s}(x_{1},...,x_{s}):=\sum_{\mu}\frac{\partial\, \mathbb{O}_{s-1}}{\partial\,\theta_{\mu}}\frac{\partial f(x_{s})}{\partial\, \theta_{\mu}},\,s\geq 3,\end{split} \tag{97}\]
and they obey an associated hierarchy of coupled ODEs
\[\frac{d\mathbb{O}_{s,t}(x_{1},...,x_{s})}{dt}=-\sum_{\alpha\in\mathcal{D}}R_{ \alpha,t}\mathbb{O}_{s+1,t}(x_{1},...,x_{s},x_{\alpha}). \tag{98}\]
This system of equations has an appealing structure that is, at a high level, reminiscent of the BBGKY (Bogoliubov-Born-Green-Kirkwood-Yvon) hierarchy in statistical physics, where we might interpret \(x_{1},x_{2},...\) as interacting particles. While we will not pursue this correspondence further, we note that natural physical constraints can enable closure of the BBGKY hierarchy. Similarly, to make further progress we must find some natural means for closure of this system for deep NN dynamics.
It turns out, as derived in [15], that the "higher-order" (in \(s\)) variables \(\mathbb{O}_{s}\) have a natural scale at initialization that is suppressed in inverse width (for deep NNs with specific choices of nonlinearities). Specifically, for a function \(F_{t}(x)\) different from the \(\mathbb{O}_{s}\) variables,
\[\mathbb{E}_{\theta_{t}}\left[\mathbb{O}_{s,t}(x_{1},...,x_{s})F_{t}(x)\right] =\begin{cases}\mathcal{O}(n^{-\frac{s-2}{2}}),&s\text{ even}\\ \mathcal{O}(n^{-\frac{s-1}{2}}),&s\text{ odd}.\end{cases} \tag{99}\]
In a randomly initialized NN in NTK parameterization, this scaling of expectation values can be derived by counting the number of sums and derivatives. (For deep linear networks, this would be a straightforward application of Wick's Theorem.) The scaling of expectation values holds during training as well, since the dynamical corrections to the \(\mathbb{O}_{s}\) variables are governed by suppressed variables with larger \(s\). If the training loss (tied to the contribution of \(R_{\alpha,t}\) variables) decreases fast enough, the changes to the scaling of expectation values can be neglected compared to the scaling at initialization. (In particular, we know from the exactly
solvable limit that in its vicinity, the training loss and hence \(R_{a}\) decrease exponentially in time, further suppressing the accumulation of corrections if \(n\) is large.)
Therefore, we find that the contribution of higher-order variables \(\mathcal{O}_{s}\) to the dynamics of the NN function \(f_{t}(x)\) is suppressed in the coupled ODEs, and we can truncate the hierarchy at finite \(s\) if the width \(n\) is large and gradient flow is valid.
#### 3.2.2 Dynamics with leading order \(1/n\) correction from finite width
From the scaling of the expectation values in (99), we have that \(\mathcal{O}_{3},\mathcal{O}_{4}\sim\mathcal{O}(1/n)\), while \(\mathcal{O}_{s\geq 5}\sim\mathcal{O}(1/n^{2})\). We aim to calculate finite-width NN dynamics correct to \(1/n\) but dropping terms of higher order. As our focus is on the correction to _dynamics_ rather than the discrepancy between infinite and finite-width that exists already _at initialization_, we will base our integration of the ODEs from a randomly initialized NN that is at large but _finite_\(n\). Hence, in this section \(\mathcal{O}_{s,0}\) variables (and in particular the NTK \(\Theta_{0}\)) refer to the initial values of these variables in a randomly initialized _finite-width_ network.
Now, since
\[\frac{d\mathcal{O}_{4,t}(\cdot)}{dt}=-\sum_{a\in\mathcal{D}}R_{a,t}\mathcal{O} _{5,t}(\cdot,x_{a})\sim\mathcal{O}\left(\frac{1}{n^{2}}\right), \tag{100}\]
based on the scaling of average values, we set the right-hand side to zero and take \(\mathcal{O}_{4,t}(\cdot)=\mathcal{O}_{4,0}(\cdot)\), i.e. equal to its initial value. (We use the symbol \(\cdot\) here as substitute for the same set of arguments on both sides of the equation.) Examining next the preceding equation in the hierarchy,
\[\frac{d\mathcal{O}_{3,t}(\cdot)}{dt}\approx-\sum_{a\in\mathcal{D}}\underbrace {\left(f_{t}(x_{a})-y_{a}\right)}_{a_{0}+a_{1}/n+a_{2}/n^{2}+\ldots}\underbrace{ \mathcal{O}_{4,t}(\cdot,x_{a})}_{b_{1}/n+b_{2}/n^{2}+\ldots}, \tag{101}\]
we consider the variables on the right-hand side as having a power-series expansion in inverse width (with coefficients \(\{a_{i}\}\) and \(\{b_{i}\}\)), with the expansion for the residual and for \(\mathcal{O}_{4,t}\) beginning at \(1/n^{0}\) and \(1/n\), respectively. Hence, to compute \(\mathcal{O}_{3,t}\) correct to \(\mathcal{O}(1/n)\) we only need the \(1/n^{0}\) contribution from the residual \(R_{a,t}=f_{t}(x_{a})-y_{a}\), which is precisely the exactly solvable exponential-in-time dynamics we derived in Lecture 2 (89) (albeit interpreting the quantities as originating from a randomly initialized, finite-width network). After substitution and integration, we obtain
\[\mathcal{O}_{3,t}(\tilde{x})=\mathcal{O}_{3,0}(\tilde{x})-\sum_{\alpha,\beta\in \mathcal{D}}\mathcal{O}_{4,0}(\tilde{x},x_{\alpha})\left[\Theta_{0}^{-1}\left(1-e^{-t \Theta_{0}}\right)\right]_{\alpha\beta}\left(f_{0}(x_{\beta})-y_{ \beta}\right), \tag{102}\]

where we used the shorthand \(\tilde{x}=(x_{1},x_{2},x_{3})\) for three of the sample arguments and explicitly denote the matrix elements of \(\Theta_{0}^{-1}\left(1-e^{-t\Theta_{0}}\right)\) that are needed. Note that the time-dependent term in (102) implicitly scales as \(\sim 1/n\) due to this scaling in \(\mathcal{O}_{4,0}\).
Our next step is to use the correction to \(\mathcal{O}_{3,t}\) to correct the dynamical Neural Tangent Kernel. We write this as \(\Theta_{t}=\Theta_{0}+\Theta_{t}^{(1)}+\mathcal{O}(1/n^{2})\), keeping in mind our overloaded notation so that \(\Theta_{0}\) is extracted from a randomly initialized _finite-width network_. Computing \(\Theta_{t}^{(1)}\) is analytically tractable since it requires an integration against exponentials; to highlight the structure of the result, we perform it in the eigenbasis of \(\Theta_{0}\), with eigenvalues \(\{\lambda_{i}\}\) and eigenvectors
\(\{\hat{e}_{i}\}\):
\[\begin{split}\Theta_{t}^{(1)}(x_{1},x_{2})\approx&- \underbrace{\Theta_{0}^{(1)}(x_{1},x_{2})}_{=0}-\int_{0}^{t}\mathrm{d}t^{\prime} \sum_{a\in D}\underbrace{\left(f_{t}(x_{a})-y_{a}\right)}_{a_{0}+a_{1}/n+\ldots }\underbrace{\mathbb{O}_{3,t}(x_{1},x_{2},x_{a})}_{c_{1}/n+c_{2}/n^{2}+\ldots} \\ =&-\int_{0}^{t}\mathrm{d}t^{\prime}\sum_{i}\left( \mathbb{O}_{3,0}(\vec{x})\cdot\hat{e}_{i}\right)e^{-\lambda_{i}t^{\prime}} \left(R_{0}\cdot\hat{e}_{i}\right)\\ &+\int_{0}^{t}\mathrm{d}t^{\prime}\sum_{ij}\left(\hat{e}_{i}\cdot \mathbb{O}_{4,0}(\vec{x})\cdot\hat{e}_{j}\right)\cdot\frac{1}{\lambda_{j}} \left(1-e^{-\lambda_{j}t^{\prime}}\right)\left(R_{0}\cdot\hat{e}_{j}\right)e ^{-t^{\prime}\lambda_{i}}\left(R_{0}\cdot\hat{e}_{i}\right)\\ =&-\sum_{i}\left(\mathbb{O}_{3,0}(\vec{x})\cdot\hat{ e}_{i}\right)\left(R_{0}\cdot\hat{e}_{i}\right)\left(\frac{1-e^{-\lambda_{i}t}}{ \lambda_{i}}\right)\\ &+\sum_{ij}\left(\hat{e}_{i}\cdot\mathbb{O}_{4,0}(\vec{x})\cdot \hat{e}_{j}\right)\frac{\left(R_{0}\cdot\hat{e}_{i}\right)\left(R_{0}\cdot \hat{e}_{j}\right)}{\lambda_{j}}\left[\frac{1-e^{-\lambda_{i}t}}{\lambda_{i}} -\frac{1-e^{-(\lambda_{i}+\lambda_{j})t}}{\lambda_{i}+\lambda_{j}}\right]. \end{split} \tag{103}\]
We have used the shorthand \(\vec{x}=(x_{1},x_{2})\) for two of the sample arguments and defined the vector \(R_{0}\) with elements \(R_{a,0}=f_{0}(x_{a})-y_{a}\); inner products with \(\hat{e}_{i},\hat{e}_{j}\) involve contracting the entries of these vectors with the sample degrees-of-freedom that are not explicitly referenced (e.g. \(\mathbb{O}_{3,0}(\vec{x})\cdot\hat{e}_{i}=\sum_{a\in D}\mathbb{O}_{3,0}(\vec{x },x_{a})(\hat{e}_{i})_{a}\)).
Finally, we can use the correction to the Neural Tangent Kernel above to compute the correction to the function learned by the NN. For the NN function values evaluated on the training set \(x_{a}\in D\), we obtain
\[f_{t}(x_{a})=y_{a}+\left[e^{-\Theta_{0}t}\left(1-\int_{0}^{t}\mathrm{d}t^{ \prime}\,e^{\Theta_{0}t^{\prime}}\Theta_{t^{\prime}}^{(1)}e^{-\Theta_{0}t^{ \prime}}\right)\left(f_{0}-y\right)\right]_{a}+\mathcal{O}(1/n^{2}), \tag{104}\]
and we can similarly derive an expression for the function value \(f_{t}(x)\) at an arbitrary point \(x\).
#### Corrected dynamics at late times as \(t\to\infty\)
For late times, these expressions predict exponential-in-time dynamics with an effective kernel \(\Theta_{0}+\Theta_{\infty}^{(1)}\),
\[f(t)\to y+e^{-(\Theta_{0}+\Theta_{\infty}^{(1)})t}(f_{0}-y), \tag{105}\]
with the late-time correction
\[\begin{split}\Theta_{\infty}^{(1)}&:=\lim_{t\to \infty}\Theta_{t}^{(1)}\\ &=-\sum_{i}\left(\mathbb{O}_{3,0}(\vec{x})\cdot\hat{e}_{i}\right) \frac{R_{0}\cdot\hat{e}_{i}}{\lambda_{i}}+\sum_{ij}\frac{\left(R_{0}\cdot\hat{e }_{i}\right)\left(R_{0}\cdot\hat{e}_{j}\right)}{\lambda_{i}(\lambda_{i}+ \lambda_{j})}\left(\hat{e}_{i}\cdot\mathbb{O}_{4,0}(\vec{x})\cdot\hat{e}_{j} \right).\end{split} \tag{106}\]
Recall that these are \(\mathcal{O}(1/n)\) corrections since both \(\mathbb{O}_{3,0},\mathbb{O}_{4,0}\thicksim 1/n\). While this is a valid theoretical description of feature learning (that is, the Neural Tangent Kernel changes from its initial value \(\Theta_{0}\)), we regard this as a regime of "weak" feature learning since the change is small in comparison to the value of \(\Theta_{0}\). Nonetheless, it is challenging to derive closed-form expressions for feature learning that maintain generality across architectures and datasets (the derivation above is essentially model and data agnostic, except for pathological settings), and it is intriguing to have such an expression for further analysis.
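To see this "weak" feature learning directly, one can train the single hidden-layer network of Sec. 2.3.4 at several widths and track how much the empirical NTK moves. The following NumPy sketch (our own toy data, step count, and learning rate; biases set to zero and \(\sigma_{w}=1\)) runs full-batch gradient descent on square loss and reports the relative change \(\|\Theta_{t}-\Theta_{0}\|_{F}/\|\Theta_{0}\|_{F}\) on the training set, which shrinks roughly like \(1/n\) as the width grows.

```python
import numpy as np

def train_and_kernel_shift(n, steps=200, lr=0.1, seed=0):
    """Train the single-hidden-layer net of Sec. 2.3.4 with full-batch GD on square
    loss and return the relative change of the empirical NTK on the training set."""
    rng = np.random.default_rng(seed)
    n0, m = 3, 5
    X, Y = rng.standard_normal((m, n0)), rng.standard_normal(m)
    W0, W1 = rng.standard_normal((n, n0)), rng.standard_normal(n)
    phi, dphi = np.tanh, lambda z: 1.0 / np.cosh(z) ** 2

    def forward(W0, W1):
        Z = X @ W0.T / np.sqrt(n0)           # hidden preactivations, shape (m, n)
        return Z, phi(Z) @ W1 / np.sqrt(n)   # outputs, shape (m,)

    def ntk(W0, W1):
        Z, _ = forward(W0, W1)
        dW1 = phi(Z) / np.sqrt(n)                                    # df/dW1, (m, n)
        dW0 = (dphi(Z) * W1 / np.sqrt(n))[:, :, None] \
              * (X / np.sqrt(n0))[:, None, :]                        # df/dW0, (m, n, n0)
        return dW1 @ dW1.T + np.einsum('aik,bik->ab', dW0, dW0)

    Theta0 = ntk(W0, W1)
    for _ in range(steps):
        Z, f = forward(W0, W1)
        R = (f - Y) / m                                              # loss gradient in f
        gW1 = phi(Z).T @ R / np.sqrt(n)
        gW0 = ((dphi(Z) * W1 / np.sqrt(n)).T * R) @ X / np.sqrt(n0)
        W0, W1 = W0 - lr * gW0, W1 - lr * gW1
    return np.linalg.norm(ntk(W0, W1) - Theta0) / np.linalg.norm(Theta0)

for n in (64, 256, 1024, 4096):
    print(n, train_and_kernel_shift(n))   # relative kernel movement shrinks roughly like 1/n
```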
### Large learning rate dynamics at large width: the "catapult" mechanism
The connection between infinite-width deep NNs, linear models, kernels, and GPs which was the subject of Lecture 2 holds up to a maximum learning rate \(\eta_{\text{crit}}\) used in gradient descent. In fact, empirically one finds that a large but finite-width NN can be optimized to convergence at learning rates larger than this value [17]. Is it possible to understand some aspects of this regime theoretically?
Indeed, consider a minimal NN model consisting of a single hidden-layer with no nonlinearities,
\[f(x)=\frac{1}{\sqrt{n}}v^{\mathsf{T}}ux, \tag{107}\]
with parameters \(v\in\mathbb{R}^{n}\), \(u\in\mathbb{R}^{n\times n_{0}}\), \(m\) samples \((x_{\alpha},y_{\alpha})\) with \(x_{\alpha}\in\mathbb{R}^{n_{0}},y_{\alpha}\in\mathbb{R}\), and trained with gradient descent on square loss in NTK parameterization. To illustrate the main features before returning to the more general case, we consider an even further simplified setting for this model: training on a single sample \((x,y)=(1,0)\) with \(n_{0}=1\). We wish to understand the dynamics of
\[\mathcal{L}_{t}=\frac{f_{t}^{2}}{2}\quad\text{with}\quad f_{t}=\frac{1}{\sqrt{n} }v_{t}^{\mathsf{T}}u_{t}. \tag{108}\]
Gradient descent dynamics on the parameters is given by
\[u_{t+1}=u_{t}-\frac{\eta}{\sqrt{n}}f_{t}\cdot v_{t}\qquad v_{t+1}=v_{t}-\frac{\eta}{\sqrt{n}}f_{t}\cdot u_{t}, \tag{109}\]
and the NTK is just a scalar, \(\Theta_{t}(1,1)=\frac{1}{n}(\|u_{t}\|_{2}^{2}+\|v_{t}\|_{2}^{2}):=\lambda_{t}\). Note that both \(f_{0},\Theta_{0}\thicksim\mathcal{O}(1)\) at initialization. Instead of analyzing the dynamics in parameter space, we work in function space and - in analogy with the construction of the hierarchy of coupled ODEs in Sec. 3.2.1 - write down an evolution for the function, Neural Tangent Kernel, and any other dynamical variables:
\[f_{t+1}=f_{t}\left(1-\eta\lambda_{t}+\frac{\eta^{2}f_{t}^{2}}{n}\right) \qquad\lambda_{t+1}=\lambda_{t}+\frac{\eta f_{t}^{2}}{n}\left(\eta \lambda_{t}-4\right). \tag{110}\]
Surprisingly, for this simplified setting we can close (the discrete time version of) the hierarchy (98) exactly in terms of the variables \(f_{t}\) and \(\lambda_{t}\) alone. This is in contrast to more complex settings where a truncation scheme is required to close the system.
Let us analyze (110) in different regimes. In the \(n\to\infty\) limit, we have
\[f_{t+1}= f_{t}(1-\eta\lambda_{0}) \lambda_{t}=\lambda_{0}, \tag{111}\]
so that the NTK is constant and the function value (and hence loss) converges exponentially in time as long as \(|1-\eta\lambda_{0}|<1\). Consequently, for learning rates \(\eta<\frac{2}{\lambda_{0}}:=\eta_{\text{crit}}\), we obtain NTK dynamics. Backing off slightly from the limit while keeping \(\eta<\eta_{\text{crit}}\), we will obtain \(\mathcal{O}(1/n)\) corrections to the dynamics, analogous to the perturbative corrections we investigated in 3.2.
For learning rates \(\eta>\frac{4}{\lambda_{0}}\), the last term in (110) is positive, causing \(\lambda_{t}\) to increase with time and eventually diverge, along with the loss. In contrast, an interesting regime exists for \(\frac{2}{\lambda_{0}}\leq\eta\leq\frac{4}{\lambda_{0}}\). Initially, the function and loss start to increase in magnitude,
\[f_{t+1}= \overbrace{f_{t}\left(1-\eta\lambda_{t}+\frac{\eta^{2}f_{t}^{2} }{n}\right)}^{\geq 1\text{ for }t=0} \tag{112}\] \[\lambda_{t+1}= \lambda_{t}+\frac{\eta f_{t}^{2}}{n}\underbrace{(\eta\lambda _{t}-4)}_{<0\forall t}.\]
To see this, note that we can initially ignore the term \(\eta^{2}f_{t}^{2}/n\) in the dynamics of \(f_{t}\), as \(n\) is large, and since \(|1-\eta\lambda_{0}|>1\), \(|f_{t}|\) grows with time. However, once \(|f_{t}|\sim\mathcal{O}(\sqrt{n})\), the second term in the dynamics of \(\lambda_{t}\) yields \(\mathcal{O}(1)\) contributions that enable \(\lambda_{t}\) to decrease and dynamically adjust to the large learning rate. This in turn enables \(|1-\eta\lambda_{t}|<1\) eventually and (combined with the \(\eta^{2}\) term in the dynamics of \(f_{t}\)) results in the convergence of \(f_{t}\) and the loss to a finite value. The mechanism at play here is that the local curvature (essentially captured by \(\lambda_{t}\)) adjusts dynamically to the larger learning rate, and optimization "catapults" to a different region of the high-dimensional landscape in parameters \(u,v\) than its initial condition. This catapult effect, enabling \(|f_{t}|\sim\mathcal{O}(\sqrt{n})\), occurs on a time scale \(t_{*}\sim\mathcal{O}(\log(n))\).
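The three regimes are easy to reproduce numerically. The sketch below (NumPy, with an illustrative width and random initialization) iterates the updates (110) for learning rates below, inside, and above the catapult window; in the middle case the loss first grows and then converges while \(\lambda_{t}\) drops to a value with \(\eta\lambda_{t}<2\).

```python
import numpy as np

def catapult_run(eta_over_crit, n=4096, steps=500, seed=0):
    """Iterate the updates (110) for the uv-model trained on the single sample (x, y) = (1, 0)."""
    rng = np.random.default_rng(seed)
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    f = u @ v / np.sqrt(n)
    lam = (u @ u + v @ v) / n                  # scalar NTK lambda_t; approximately 2 at init
    eta = eta_over_crit * 2.0 / lam            # learning rate in units of eta_crit = 2 / lambda_0
    losses = []
    for _ in range(steps):
        losses.append(0.5 * f ** 2)
        f, lam = (f * (1 - eta * lam + eta ** 2 * f ** 2 / n),
                  lam + eta * f ** 2 / n * (eta * lam - 4))
        if not np.isfinite(f) or abs(f) > 1e12:
            break                              # diverging run; stop before overflow
    return np.array(losses), lam

for r in (0.5, 1.5, 2.5):   # below eta_crit, inside the catapult window, beyond 4 / lambda_0
    losses, lam_final = catapult_run(r)
    status = "diverged" if losses[-1] > 1 else "converged"
    print(f"eta = {r} eta_crit: max loss {losses.max():.3g}, "
          f"final loss {losses[-1]:.3g}, final lambda {lam_final:.3f} ({status})")
```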
The transition between the "NTK regime" and "catapult regime" that occurs at \(\eta_{\text{crit}}=\frac{2}{\lambda_{0}}\) and becomes progressively sharper as \(n\to\infty\) is reminiscent of a phase transition in dynamics. Indeed, there are measurable quantities that exhibit divergences near this transition. For example, the optimization time \(t_{\epsilon}(\eta)\) needed to reach a loss of \(\mathcal{O}(\epsilon)\) behaves as
\[t_{\epsilon}(\eta)\sim\frac{1}{|\eta_{\text{crit}}-\eta|}, \tag{113}\]
with exponent \(\nu=1\) (and dropped constants) in the vicinity of \(\eta_{\text{crit}}\), approached from below or above.
What is additionally surprising about this phenomenology is that, although we have studied a drastically simplified model, the catapult regime is empirically observed in a diverse range of realistic settings, including different datasets, NN architectures, and precise optimization choices (e.g. stochasticity in gradient descent and choice of standard vs. NTK parameterization) [17]. To reiterate these empirical observations, one finds three regimes of dynamics in large width, deep NNs trained using stochastic gradient descent with learning rate \(\eta\) and square loss:12
Footnote 12: \(\lambda_{0}\) refers to the maximum eigenvalue of the Neural Tangent Kernel at initialization.
1. When \(\eta\lesssim 2/\lambda_{0}\), the "NTK" regime holds, namely the change in the dynamical Neural Tangent Kernel, \(\Delta\Theta_{t}\overset{t\to\infty}{\longrightarrow}0\), vanishes as \(n\) gets larger. We can understand this regime with perturbative corrections discussed earlier in this lecture. The loss decreases fairly monotonically during optimization.
2. When \(2/\lambda_{0}\lesssim\eta\lesssim\eta_{\text{max}}\), the dynamical Neural Tangent Kernel changes by a nonvanishing amount as \(n\to\infty\), \(\Delta\Theta_{t}\overset{t\to\infty}{\longrightarrow}\mathcal{O}(1)\), exhibiting a "strong" form of feature learning. Here \(\eta_{\text{max}}=c/\lambda_{0}\), with \(c\) being a \(\mathcal{O}(1)\) constant. For the minimal model, \(c=4\); while this value is approximately observed in deep NNs with certain nonlinearities, in general \(c\) is a non-universal constant. The loss behaves non-monotonically during optimization, with an initial increase early in training on a time scale \(t\sim\mathcal{O}(\log(n))\). Optimization converges to a region with flatter curvature (as evidenced by the effect on the eigenvalues of the Neural Tangent Kernel).
3. When \(\eta\geq\eta_{\text{max}}\), optimization diverges.
Let us return to the model (107) with the more general setting of \(m\) samples and dimensionality \(n_{0}\)[17]. Gradient descent on parameters takes the form
\[u_{ia,t+1}=u_{ia,t}-\frac{\eta}{m\sqrt{n}}\sum_{\alpha\in\mathcal{D}}v_{i,t}x_{\alpha a}R _{\alpha,t}\qquad v_{i,t+1}=v_{i,t}-\frac{\eta}{m\sqrt{n}}\sum_{\alpha\in\mathcal{D}}\sum_{a=1}^{n_{0}}u_ {ia,t}x_{\alpha a}R_{\alpha,t}, \tag{114}\]
where we use \(R_{a}=f_{a}-y_{a}\) as before. The Neural Tangent Kernel evaluated on the training data has matrix elements \(\Theta_{a\beta}=\frac{1}{nm}\bigg{(}|v|^{2}x_{a}^{T}x_{\beta}+x_{a}^{T}u^{T}ux_{ \beta}\bigg{)}\). Tracking the dynamics in the natural variables on function space (the residual \(R_{a}\) is directly related to \(f_{a}\)) yields
\[\begin{split} R_{a,t+1}&=\sum_{\beta\in\mathcal{D} }(\delta_{a\beta}-\eta\Theta_{a\beta,t})R_{\beta,t}+\frac{\eta^{2}}{nm}(x_{a}^{ T}\zeta_{t})(f_{t}^{T}R_{t})\\ \Theta_{a\beta,t+1}&=\Theta_{a\beta,t}-\frac{\eta} {nm}\bigg{[}(x_{\beta}^{T}\zeta_{t})f_{a,t}+(x_{a}^{T}\zeta_{t})f_{\beta,t}+ \frac{2}{m}(x_{a}^{T}x_{\beta})(R_{t}^{T}f_{t})\bigg{]}+\\ &\frac{\eta^{2}}{n^{2}m}\Big{[}|v_{t}|^{2}(x_{a}^{T}\zeta_{t})(x _{\beta}^{T}\zeta_{t})+(\zeta_{t}^{T}u_{t}^{T}u_{t}\zeta_{t})(x_{a}^{T}x_{ \beta})\Big{]},\end{split} \tag{115}\]
where we have defined the vector \(\zeta=\sum_{a\in\mathcal{D}}R_{a}x_{a}/m\in\mathbb{R}^{n_{0}}\). This system of discrete time equations is not closed, unlike the version we considered in the simpler setting, and its closure does not arise naturally with the consideration of higher-order variables analogous to \(\mathbb{O}_{s}\) for \(s\geq 2\). However, we can approximately extract a two-variable closed system of equations that is reminiscent of the simpler system (110). Consider the dynamics of the Neural Tangent Kernel projected onto the residual,
\[R_{t}^{T}\Theta_{t+1}R_{t}=R_{t}^{T}\Theta_{t}R_{t}+\frac{\eta}{n}\zeta_{t}^{ T}\zeta_{t}\bigg{(}\eta R_{t}^{T}\Theta_{t}R_{t}-4f_{t}^{T}R_{t}\bigg{)}. \tag{116}\]
Due to the form of the dominant term in the dynamics of the function (or residual), namely \(\delta_{a\beta}-\eta\Theta_{a\beta,t}\), we might be inclined to approximate \(R_{t}\) as becoming well-aligned with the maximum eigenvector of \(\Theta\) at initialization, denoted \(\hat{e}_{\max}\). (Particularly in the catapult regime, the function and the residual grow exponentially fast, and this occurs along the \(\hat{e}_{\max}\) direction.) Hence, as a naive approximation we take \(f_{t}\approx R_{t}\approx(\hat{e}_{\max}\cdot R_{t})\hat{e}_{\max}\). This allows us to approximately simplify the equation for the projected kernel to an equation for the top NTK eigenvalue,
\[\lambda_{t+1}\approx\lambda_{t}+\frac{\eta}{n}\zeta_{t}^{T}\zeta_{t}(\eta \lambda_{t}-4) \tag{117}\]
which bears similarity to the simpler (110). Hence, we can understand how, despite the lack of closure in (115), it contains within it the mechanisms and universal phenomenology of (110), giving rise to distinct regimes of NN dynamics.
## 4 Lecture 4: Boris Hanin
Lectures 4 and 5 are due to Boris Hanin. They continue the trajectory of Yasaman Bahri's Lectures 1-3, focusing on asymptotic and perturbative calculations of the prior distribution of fully-connected neural networks. Lecture 4 derives perturbative corrections to the NNGP. Lecture 5 changes tack and discusses exact prior calculations specific to ReLU networks.
### Notation Dictionary
From now on, there will be a change of notation, which we summarize here: the nonlinearity \(\phi\) becomes \(\sigma\); the initialization variances \(\sigma_{w}^{2},\sigma_{b}^{2}\) become \(C_{W},C_{b}\); layers are indexed by \(\ell\), with preactivations \(z_{i;\alpha}^{(\ell)}\) carrying an explicit sample index \(\alpha\); and the infinite-width two-point function is written \(K^{(\ell)}\).
### Notation
Fix \(L\geq 1\), \(n_{0},\ldots,n_{L+1}\geq 1\), and \(\sigma:\mathbb{R}\to\mathbb{R}\). We will consider a fully connected feed-forward network, which to an input \(x_{a}\in\mathbb{R}^{n_{0}}\) associates an output \(z_{a}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) as follows:
\[z_{i;a}^{(\ell+1)}=\begin{cases}b_{i}^{(\ell+1)}+\sum_{j=1}^{n_{\ell}}W_{ij}^{( \ell+1)}\sigma\left(z_{j;a}^{(\ell)}\right),&\ell\geq 1\\ b_{i}^{(1)}+\sum_{j=1}^{n_{0}}W_{ij}^{(1)}x_{j;a},&\ell=0\end{cases}. \tag{118}\]
We will have occasion to compute a variety of Gaussian integrals and will abbreviate
\[\left\langle f(z_{a})\right\rangle_{K^{(\ell)}}=\int_{\mathbb{R}}f(z_{a})\exp \left[-\frac{z_{a}^{2}}{2K_{aa}^{(\ell)}}-\frac{1}{2}\log(2\pi K_{aa}^{(\ell) })\right]dz_{a}\]
and more generally
\[\left\langle f(z_{a},z_{\beta})\right\rangle_{K^{(\ell)}}=\int_{\mathbb{R}^{2 }}f(z_{a},z_{\beta})\exp\left[-\frac{1}{2}\sum_{\gamma,\delta\in\{a,\beta \}}\left(K^{(\ell)}\right)_{\gamma\delta}^{-1}z_{\gamma}z_{\delta}-\frac{1}{ 2}\log\det(2\pi K^{(\ell)})\right]dz_{a}dz_{\beta}\]
for Gaussian integrals in which \((z_{a},z_{\beta})\) is a Gaussian vector with mean \(0\) and covariance
\[K^{(\ell)}=\left(\begin{array}{cc}K_{aa}^{(\ell)}&K_{a\beta}^{(\ell)} \\ K_{\beta a}^{(\ell)}&K_{\beta\beta}^{(\ell)}\end{array}\right).\]
### Main Question: Statement, Answer, and Motivation
#### 4.3.1 Precise Statement of Main Question
Fix \(L\geq 1,n_{0},\ldots,n_{L+1}\geq 1,\sigma:\mathbb{R}\to\mathbb{R}\) as well as constants \(C_{b}\geq 0,C_{W}>0\). Suppose
\[W_{ij}^{(\ell)}\sim\mathcal{N}(0,C_{W}/n_{\ell-1}),\quad b_{i}^{(\ell)}\sim \mathcal{N}(0,C_{b})\qquad\text{independent}. \tag{119}\]
We seek to understand the distribution of the field
\[x_{a}\in\mathbb{R}^{n_{0}}\;\mapsto\;z_{a}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\]
when the hidden layer widths are large but finite:
\[n_{1},\ldots,n_{L}\simeq n\gg 1.\]
### Answer to Main Question
We will endeavor to show that the statistics of \(z_{\alpha}^{(L+1)}\) are determined by
* The universality class of the non-linearity \(\sigma\) (determined by the large \(\ell\) behavior of infinite width networks with this non-linearity).
* The effective depth (or effective complexity) \[\frac{1}{n_{1}}+\cdots+\frac{1}{n_{L}}\simeq\frac{L}{n}.\]
Specifically, we'll see:
* At init, \(L/n\) measures both the correlations between neurons and the fluctuations in values and gradients. (this lecture)
* \(L/n\) measures the deviation from the NTK regime in the sense that the change in the NTK from one step of GD scales like \(L/n\). Thus, the (frozen) NTK regime corresponds to the setting in which the effective depth \(L/n\) tends to 0. Moreover, the extent of feature learning, in the sense of figuring out how much the network Jacobian changes at the start of training, is measured by \(L/n\). (next lecture)
* \(L/n\) measures the extent of feature learning in the sense that the entire network function at the end of training scales like the NTK answer plus \(L/n\) plus errors of size \((L/n)^{2}\) (see Chapter \(\infty\) in [10]).
This suggests an interesting phase diagram (see Figure 2: a partial phase diagram for fully connected networks with NTK initialization).
### Motivations
Before attempting to make precise our answer in §4.4, we give several motivations for studying our main question:
1. Our first motivation is ML-centric. Namely, to use a neural network in practice requires choosing many hyperparameters, including
* width \(n\)
* depth \(L\)
* non-linearity \(\sigma\)
* initialization variances \(C_{b},C_{W}\)
* learning rates \(\eta\)
* batch sizes \(|\mathcal{B}|\)
* (\(\ell_{1}\) or \(\ell_{2}\)) regularization strength
Doing direct hyperparameter search is very costly. By studying random networks, we can understand in which combinations these hyperparameters appear in the distribution of \(z_{a}^{(L+1)}\) and, in particular, how to choose them in a coordinated manner so that \(z_{a}^{(L+1)}\) is non-degenerate (say near the start of training) at large values of \(L,n\) and training steps.
2. Our second motivation is mathematical/theoretical. Namely, random fully connected neural networks are non-linear generalizations of random matrix products. Indeed by taking \(n_{\ell}\equiv n\), \(\sigma(t)=t\), \(C_{b}=0,C_{W}=1\), we see that \[z_{\alpha}^{(L+1)}=W^{(L+1)}\cdots W^{(1)}x_{\alpha}\] is simply a linear statistic of a product of \(L+1\) iid random matrices. Products of random matrices appear all over the place. When \(L=1\) (or more generally \(L\) is fixed and finite) and \(n\to\infty\), this is like Wigner's (or Wishart's) random matrix theory. In contrast, when \(n\) is fixed and \(L\to\infty\), this is the study of the long time behavior of random dynamical systems. This is the world of the multiplicative ergodic theorem and is used for example in studying Anderson localization in \(1d\). A key point is that these two regimes are very different and what happens when both \(n,L\) are large is relatively poorly understood, even for this random matrix model.
3. The final motivation is again ML-centric. As Yasaman showed in her lectures, when \(L\) is fixed and \(n\to\infty\), fully connected networks with the initialization (119) are in the (frozen) NTK regime. In this setting, the entire training dynamics (at least on MSE with vanishingly small learning rates) are determined by the behavior at initialization. Thus, it is the properties of neural networks at init that allow us to describe the generalization behavior and training dynamics. In particular, by doing perturbation theory _directly for the end of training_, it is possible to understand (see Chapter \(\infty\) of [10]) training in the near-NTK regime in which the NTK changes to order \(1/n\) (really \(L/n\)).
### Intuition for Appearance of \(L/n\)
Before proceeding to explain how to compute finite width corrections to the statistics of random neural networks, we pause to elaborate a simple intuition for why it is \(L/n\), rather than some other combination of \(L\) and \(n\), that should appear. For this, let us consider the very simple case of random matrix products
\[n_{\ell}\equiv n,\,\sigma(t)=t,\,C_{b}=0,C_{W}=1\]
so that
\[z_{\alpha}^{(L+1)}(x)=W^{(L+1)}\cdots W^{(1)}x_{\alpha},\qquad W_{ij}^{(\ell) }\sim\mathcal{N}(0,1/n)\,\,iid.\]
Assuming for convenience that \(||x_{\alpha}||=1\), let's try to understand what is perhaps the simplest random variable
\[X_{n,L+1}:=\big{|}\big{|}z_{\alpha}^{(L+1)}\big{|}\big{|}\]
associated to our matrix product. In order to understand its distribution recall that for any \(k\geq 1\) a chi-squared random variable with \(k\) degrees of freedom is given by
\[\chi_{k}^{2}\stackrel{{ d}}{{:=}}\sum_{j=1}^{k}X_{j}^{2},\qquad X_{j} \thicksim\mathcal{N}(0,1)\;iid.\]
Recall also that for any unit vector \(u\) we have that if \(W\in\mathbb{R}^{n\times n}\) is a matrix with \(W_{ij}\thicksim\mathcal{N}(0,1/n)\) then
\[Wu\stackrel{{ d}}{{=}}\mathcal{N}(0,\frac{1}{n}\mathrm{I}_{n}), \qquad||Wu||^{2}\stackrel{{ d}}{{=}}\frac{1}{n}\chi_{n}^{2}, \qquad||Wu||\perp\frac{Wu}{||Wu||},\]
where \(\perp\) denotes conditional independence. To use this let's write
\[X_{n,L+1}=\left||W^{(L+1)}\cdots W^{(1)}x_{a}\right||=\left||W^{(L+1)}\cdots W ^{(2)}\frac{W^{(1)}x_{a}}{\left||W^{(1)}x_{a}\right||}\right||\left||W^{(1)}x _{a}\right||.\]
Note that
\[\left(\frac{1}{n}\chi_{n}^{2}\right)^{1/2}\stackrel{{ d}}{{=}} \left||W^{(1)}x_{a}\right||\perp\frac{W^{(1)}x_{a}}{\left||W^{(1)}x_{a} \right||}\in S^{n-1}.\]
Thus, in fact the presentation above allows us to write \(X_{n,L+1}\) as a product of two independent terms! Proceeding in this way, we obtain the following equality in distribution:
\[X_{n,L+1}\stackrel{{ d}}{{=}}\exp\left[\sum_{\ell=1}^{L+1}Y_{ \ell}\right],\qquad Y_{\ell}\thicksim\frac{1}{2}\log\left(\frac{1}{n}\chi_{n}^ {2}\right)\;iid.\]
**Exercise.** Show that
\[\mathbb{E}\left[\frac{1}{2}\log\left(\frac{1}{n}\chi_{n}^{2}\right)\right]=- \frac{1}{4n}+O(n^{-2}),\qquad\mathrm{Var}\left[\frac{1}{2}\log\left(\frac{1}{ n}\chi_{n}^{2}\right)\right]=\frac{1}{4n}+O(n^{-2}).\]
Thus, we see that
\[X_{n,L+1}\stackrel{{ L\gg 1}}{{\approx}}\exp\left[\mathcal{N}\left(-\frac{L}{4n},\frac{L}{4n}\right)\right]\]
and that **taking \(n\) large in each layer tries to make each factor \(e^{Y_{\ell}}\) close to \(1\) but with errors of size \(1/n\). When we have \(L\) such errors, the total size of the error is on the order of \(L/n\)**.
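Here is a small numerical sketch of this point (in Python with NumPy; it is an illustration, not part of the original notes): sample the matrix product directly and compare the mean and variance of \(\log X_{n,L+1}\) with the \(-L/4n\) and \(L/4n\) predictions above.

```python
# Minimal Monte Carlo sketch: X_{n,L+1} = ||W^{(L+1)} ... W^{(1)} x|| with
# W_ij ~ N(0, 1/n) iid and ||x|| = 1; log X should have mean ~ -(L+1)/(4n)
# and variance ~ (L+1)/(4n).
import numpy as np

rng = np.random.default_rng(0)
n, L, trials = 50, 100, 1000

log_norms = np.empty(trials)
for t in range(trials):
    v = np.zeros(n)
    v[0] = 1.0                                              # unit input vector
    for _ in range(L + 1):
        W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # W_ij ~ N(0, 1/n)
        v = W @ v
    log_norms[t] = np.log(np.linalg.norm(v))

print("mean of log X :", log_norms.mean(), " vs -(L+1)/(4n) =", -(L + 1) / (4 * n))
print("var  of log X :", log_norms.var(),  " vs  (L+1)/(4n) =", (L + 1) / (4 * n))
```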
### Summary of Yasaman's Lectures 1 - 3
We summarize part of Yasaman's lectures in one long theorem.
**Theorem 4.1** (GP + NTK Regime for Networks at Fixed Depth and Infinite Width).: _Fix \(L,n_{0},n_{L+1},\sigma\). Suppose that at the start of training we initialize as in (119)._
1. _GP at Init._ _As_ \(n_{1},\ldots,n_{L}\to\infty\)_, the field_ \(x\mapsto z^{(L+1)}(x)\) _converges weakly in distribution to a free (Gaussian) field with a vanishing one point function_ \[\lim_{n_{1},\ldots,n_{L}\to\infty}\mathbb{E}\left[z_{i;\alpha}^{(L+1)}\right]=0\] _and a two point function that factorizes across neurons_ \[\lim_{n_{1},\ldots,n_{L}\to\infty}\mathrm{Cov}\left(z_{i;\alpha}^{(L+1)},z_{j;\beta}^{(L+1)}\right)=\delta_{ij}K_{\alpha\beta}^{(L+1)}.\]
_Moreover, the two point function is given by the following recursion_ \[K_{\alpha\beta}^{(\ell+1)}=\begin{cases}C_{b}+C_{W}\left\langle\sigma(z_{\alpha})\sigma(z_{\beta})\right\rangle_{K^{(\ell)}},&\ell\geq 1\\ C_{b}+\frac{C_{W}}{n_{0}}x_{\alpha}\cdot x_{\beta},&\ell=0\end{cases},\] (120) _If_ \(C_{b}\)_,_ \(C_{W}\) _are chosen by "tuning to criticality" (e.g._ \(C_{b}=0\)_,_ \(C_{W}=2\) _for ReLU or_ \(C_{b}=0\)_,_ \(C_{W}=1\) _for_ \(\tanh\)_) in the sense that_ \[\exists K_{*}\geq 0\quad\text{s.t.}\quad K_{*}=C_{b}+C_{W}\left\langle\sigma^{2}\right\rangle_{K_{*}}\] \[\frac{\partial K_{\alpha\alpha}^{(\ell+1)}}{\partial K_{\alpha\alpha}^{(\ell)}}\bigg{|}_{K_{\alpha\alpha}^{(\ell)}=K_{*}}=\chi_{\parallel;\alpha}^{(\ell)}=\frac{C_{W}}{2}\left\langle\partial^{2}\sigma^{2}\right\rangle_{K_{*}}=1\] \[\frac{\partial K_{\alpha\beta}^{(\ell+1)}}{\partial K_{\alpha\beta}^{(\ell)}}\bigg{|}_{K_{\alpha\alpha}^{(\ell)}=K_{\beta\beta}^{(\ell)}=K_{*}}=\chi_{\perp}^{(\ell)}=C_{W}\left\langle(\sigma^{\prime})^{2}\right\rangle_{K_{*}}=1,\] _then_ \[K_{\alpha\alpha}^{(\ell)}\simeq\ell^{-\delta_{1}},\qquad\delta_{1}\in[0,1]\] _and_ \[\operatorname{Corr}_{\alpha\beta}^{(\ell)}:=\frac{K_{\alpha\beta}^{(\ell)}}{\left(K_{\alpha\alpha}^{(\ell)}K_{\beta\beta}^{(\ell)}\right)^{1/2}}\simeq 1-C_{\sigma}\ell^{-\delta_{2}},\qquad\delta_{2}\in[1,2].\] (121)
2. _Equivalence to Linear Model in Small LR Optimization with MSE._ _If_ \(\theta=\left\{W^{(\ell)},b^{(\ell)}\right\}\) _is initialized to be_ \(\theta_{0}\) _as in (_119_) and is optimized by gradient flow (or GD with learning rate like_ \(n^{-1/2}\)_) on empirical mean squared error over a fixed dataset, then as_ \(n_{1},\ldots,n_{L}\to\infty\) _optimization is equivalent to first linearizing_ \[z_{\alpha}^{(L+1)}(\theta)\quad\mapsto\quad z_{\alpha}^{(L+1)}(\theta_{0})+ \nabla_{\theta}z_{\alpha}^{(L+1)}(\theta_{0})\left(\theta-\theta_{0}\right)\] _and then performing gradient flow on the same loss. The corresponding (neural tangent) kernel_ \[\Theta_{\alpha\beta}^{(L+1)}:=\nabla_{\theta}z_{\alpha}^{(L+1)}(\theta_{0})^{T} \nabla_{\theta}z_{\beta}^{(L+1)}(\theta_{0})\in\mathbb{R}^{n_{L+1}\times n_{L+1}}\] _satisfies a recursion similar to (_120_)._
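The diagonal part of the recursion (120) and the ReLU criticality condition \(C_{b}=0\), \(C_{W}=2\) can be checked numerically. The sketch below (Python with NumPy; an illustration, not part of the original lecture) iterates \(K_{\alpha\alpha}^{(\ell+1)}=C_{b}+C_{W}\langle\sigma^{2}\rangle_{K_{\alpha\alpha}^{(\ell)}}\) using Gauss-Hermite quadrature for the Gaussian expectation: at \(C_{W}=2\) the kernel stays fixed, while nearby values flow to \(0\) or \(\infty\).

```python
# Iterate the diagonal kernel recursion K^{(l+1)} = C_b + C_W * E_{z~N(0,K^{(l)})}[sigma(z)^2]
# and check that C_b = 0, C_W = 2 is the ReLU fixed point ("criticality").
import numpy as np

_nodes, _weights = np.polynomial.hermite.hermgauss(80)

def gauss_mean(f, K):
    # E[f(z)] for z ~ N(0, K) via Gauss-Hermite quadrature
    return np.dot(_weights, f(np.sqrt(2.0 * K) * _nodes)) / np.sqrt(np.pi)

def next_K(K, sigma, C_b, C_W):
    return C_b + C_W * gauss_mean(lambda z: sigma(z) ** 2, K)

relu = lambda z: np.maximum(z, 0.0)

for C_W in (1.8, 2.0, 2.2):
    K = 1.0                              # K^{(1)} = (C_W/n_0)*||x||^2, set to 1 here
    for _ in range(50):
        K = next_K(K, relu, C_b=0.0, C_W=C_W)
    print(f"C_W = {C_W}: K_aa after 50 layers ~ {K:.3e}")
# Expected: C_W = 2.0 stays at 1 (critical); 1.8 decays; 2.2 blows up.
```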
### Formalizing Inter-Neuron Correlations and Non-Gaussian Fluctuations
To formulate our main result for this lecture define the normalized connected 4 point function:
\[\kappa_{4;a}^{(\ell)}=\frac{1}{3}\kappa\left(z_{i;a}^{(\ell)}z_{i;a}^{(\ell)} z_{i;a}^{(\ell)}z_{i;a}^{(\ell)}\right)=\frac{1}{3}\left(\mathbb{E}\left[ \left(z_{i;a}^{(\ell)}\right)^{4}\right]-3\mathbb{E}\left[\left(z_{i;a}^{(\ell )}\right)^{2}\right]^{2}\right).\]
Note that \(\kappa_{4;a}^{(\ell)}\) captures both fluctuations
\[\operatorname{Var}\left[\left(z_{i;a}^{(\ell)}\right)^{2}\right]=3\kappa_{4;a }^{(\ell)}+2\mathbb{E}\left[\left(z_{i;a}^{(\ell)}\right)^{2}\right]^{2}\]
and non-Gaussianity (in the sense that if \(z_{i;a}^{(\ell)}\) is Gaussian, then \(\kappa_{4;a}^{(\ell)}=0\)).
**Exercise.** Show that
\[\kappa_{4;\alpha}^{(\ell)}=\operatorname{Cov}\left(\left(z_{i;\alpha}^{(\ell)}\right)^{2},\left(z_{j;\alpha}^{(\ell)}\right)^{2}\right),\qquad i\neq j,\]
allowing us to interpret \(\kappa_{4;\alpha}^{(\ell)}\) as a measure of inter-neuron correlations.
Since as \(n_{1},\ldots,n_{L}\to\infty\), neurons are independent and Gaussian, we have that
\[\lim_{n_{1},\ldots,n_{L}\to\infty}\kappa_{4;\alpha}^{(\ell)}=0.\]
Our main purpose in this lecture is to obtain the following characterization of \(\kappa_{4;\alpha}^{(\ell)}\).
**Theorem 4.2**.: _Fix \(L,n_{0},n_{L+1},\sigma\). Suppose that the weights and biases are chosen as in (119) and that_
\[n_{1},\ldots,n_{L}\simeq n\gg 1.\]
_The four point function is of order \(O(n^{-1})\) and satisfies the following recursion:_
\[\kappa_{4}^{(t+1)}=\frac{C_{W}^{2}}{n_{t}}\mathrm{Var}_{K^{(t)}}\left[\sigma^ {2}\right]+\left(\chi_{||;a}^{(t)}\right)^{2}\kappa_{4}^{(t)}+O(n^{-2}).\]
_Thus, at criticality and uniform width (\(n_{t}=n\)), we have_
\[\frac{\kappa_{4;\alpha}^{(L+1)}}{\left(K_{\alpha\alpha}^{(L+1)}\right)^{2}}=C_{\sigma}\frac{L}{n}+O_{L,\sigma}(n^{-2}).\]
_Moreover, for any fix \(m\geq 1\) and any "reasonable" function \(f:\mathbb{R}^{m}\to\mathbb{R}\) we may write_
\[\mathbb{E}\left[f\left(z_{i;\alpha}^{(\ell)},\,i=1,\ldots,m\right)\right] =\left\langle f\left(z_{i;\alpha},\,i=1,\ldots,m\right)\right\rangle_{G^{(\ell)}}\] \[+\frac{\kappa_{4;\alpha}^{(\ell)}}{8}\left\langle\bigg{(}\sum_{j=1}^{m}\partial_{z_{j;\alpha}}^{4}+\sum_{\begin{subarray}{c}j_{1},j_{2}=1\\ j_{1}\neq j_{2}\end{subarray}}^{m}\partial_{z_{j_{1};\alpha}}^{2}\partial_{z_{j_{2};\alpha}}^{2}\bigg{)}f\left(z_{i;\alpha},\,i=1,\ldots,m\right)\right\rangle_{K^{(\ell)}}\] \[+O(n^{-2}).\]
_Here, \(G^{(\ell)}\) is the dressed two point function_
\[G^{(\ell)}_{\alpha\beta}=\mathbb{E}\left[z_{i;\alpha}^{(\ell)}z_{i;\beta}^{(\ell)}\right]=K_{\alpha\beta}^{(\ell)}+O(n^{-1}).\]
This Theorem is originally derived in a physics way in the breakthrough paper of Yaida [18]. It was then rederived, again at a physics level of rigor in Chapter 4 of [10]. Finally, it was derived in a somewhat different, and more mathematical, way in [19].
### Proof of Theorem 4.2
#### 4.9.1 A Bit of Background
To study a general non-Gaussian random vector \(z=(z_{1},\ldots,z_{m})\), we will understand its characteristic function
\[\widehat{p}_{z}(\xi):=\mathbb{E}\big{[}e^{-iz\cdot\xi}\big{]}=\int_{\mathbb{R}^{m}}e^{-i\sum_{j=1}^{m}z_{j}\xi_{j}}p(z)dz.\]
Its utility is:
1. For any reasonable \(f\) we can write the expectation of \(f(z)\) using the characteristic function by taking a Fourier transform: \[\mathbb{E}\left[f(z)\right]=\int_{\mathbb{R}^{m}}f(z)p(z)dz=\int_{\mathbb{R}^ {m}}\widehat{f}(\xi)\widehat{p}_{z}(\xi)d\xi\]
2. A Gaussian with mean \(0\) and covariance \(K\) has the simplest characteristic function: \[z\sim\mathcal{N}(0,K)\quad\Rightarrow\quad\widehat{p}_{z}(\xi_{1},\ldots,\xi_{m})=\exp\left[-\frac{1}{2}\sum_{j_{1},j_{2}=1}^{m}K_{j_{1},j_{2}}\xi_{j_{1}}\xi_{j_{2}}\right].\]
3. Multiplication of \(\widehat{f}(\xi)\) by \(\xi\) corresponds to differentiation: \[\xi_{j}\widehat{f}(\xi)=\widehat{-i\partial_{j}f}(\xi).\]
We will need the following
**Proposition 4.3**.: _Let \(W=(W_{1},\ldots,W_{n})\sim N(\mu,\Sigma)\). Then, for any independent (e.g. constant) matrix \(A\in\mathbb{R}^{k\times n}\), we have_
\[AW\sim\mathcal{N}(A\mu,A\Sigma A^{T}).\]
#### 4.9.2 First Step: Reduce to Collective Observables
**Definition**.: For any \(f:\mathbb{R}^{m}\to\mathbb{R}\) we will say that
\[\mathcal{O}_{f}^{(\ell)}=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}}f(z_{j;\alpha}^{(\ell)},\,\alpha=1,\ldots,m)\]
is a _collective observable_.
Collective observables play a crucial role in our analysis. Indeed, our first step is to rewrite all the quantities in Theorem 4.2 in terms of such objects. Let us fix any \(m\geq 1\). We have
\[\mathbb{E}\left[f\left(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m \right)\right]=\int_{\mathbb{R}^{m}}\widehat{f}(\xi)\mathbb{E}\left[\exp\left[ -i\sum_{j=1}^{m}\xi_{j}z_{j;\alpha}^{(\ell+1)}\right]\right]d\xi.\]
We begin by applying Proposition 4.3 to simplify the characteristic function of \((z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\).
**Lemma 4.4**.: _Conditional on \(z_{\alpha}^{(\ell)}\),_
\[\left(z_{\alpha}^{(\ell+1)}\right)_{i=1}^{n_{\ell+1}}\text{ is a Gaussian with mean $0$ and covariance }\Sigma_{\alpha}^{(\ell)}\cdot\mathrm{I},\]
_where_
\[\Sigma_{\alpha}^{(\ell)}=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{ n_{\ell}}\sigma\left(z_{j;\alpha}^{(\ell)}\right)^{2}\]
_is a collective observable. In particular, for each \(\xi=(\xi_{1},\ldots,\xi_{m})\), we have_
\[\mathbb{E}\left[e^{-i\sum_{i=1}^{m}\xi_{i}z_{i;\alpha}^{(\ell+1)}}\right]=\mathbb{E}\left[e^{-\frac{1}{2}\|\xi\|^{2}\Sigma_{\alpha}^{(\ell)}}\right]\]
_and, moreover,_
\[\kappa_{4}^{(\ell+1)}=\mathrm{Var}\left[\Sigma_{\alpha}^{(\ell) }\right].\]
Proof.: We have
\[z_{i;\alpha}^{(\ell+1)}=(\sigma(z_{\alpha}^{(\ell)})\ 1)(W^{(\ell+1)}\ b^{(\ell+1) })^{T}.\]
Thus, if we are given \(\sigma(z_{\alpha}^{(\ell)})\), then \(\left(z_{i;\alpha}^{(\ell+1)}\right)_{i=1}^{n_{\ell+1}}\) are iid Gaussian with mean \(0\) and
\[\Sigma_{\alpha}^{(\ell)}=\mathbb{E}\left[\left(z_{i;\alpha}^{(\ell+1)}\right)^ {2}\right]=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\left(\sigma\left( z_{j;\alpha}^{(\ell)}\right)\right)^{2}.\]
In particular,
\[\mathbb{E}\left[e^{-i\sum_{j=1}^{m}\xi_{j}z_{j;\alpha}^{(\ell+1)}}\right]= \mathbb{E}\left[\mathbb{E}\left[e^{-i\sum_{j=1}^{m}\xi_{j}z_{j;\alpha}^{(\ell+ 1)}}\ \Big{|}\ z_{\alpha}^{(\ell)}\right]\right]=\mathbb{E}\left[e^{-\frac{1}{2} \|\xi\|^{2}\Sigma_{\alpha}^{(\ell)}}\right].\]
We have therefore found that for any reasonable \(f\),
\[\mathbb{E}\left[f(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\right]=\int_{\mathbb{ R}^{m}}\widehat{f}\left(\xi\right)\mathbb{E}\left[e^{-\frac{1}{2}\|\xi\|^{2} \Sigma_{\alpha}^{(\ell)}}\right]d\xi.\]
### Step 2: Decompose the Self-Averaging Observable \(\Sigma_{\alpha}^{(\ell)}\) into a Mean and Fluctuation
Since \(\Sigma_{\alpha}^{(\ell)}\) is a collective observable, it makes sense to consider
\[G_{\alpha}^{(\ell)}:=\mathbb{E}\big{[}\Sigma_{\alpha}^{(\ell)}\big{]},\qquad \Delta_{\alpha}^{(\ell)}:=\Sigma_{\alpha}^{(\ell)}-\mathbb{E}\big{[}\Sigma_{ \alpha}^{(\ell)}\big{]}.\]
The scalar \(G_{\alpha}^{(\ell)}\) is sometimes referred to as a dressed two point function.
**Exercise**.: Show for any observables of the form
\[\mathcal{O}_{f}^{(\ell)}=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}}f(z_{j;\alpha} ^{(\ell)})\]
that
\[\mathbb{E}\left[\prod_{j=1}^{q}\left(\mathcal{O}_{f_{j}}^{(\ell)}-\mathbb{E}\Big{[}\mathcal{O}_{f_{j}}^{(\ell)}\Big{]}\right)\right]=O_{q,f,\ell,\sigma}\left(n^{-\lceil q/2\rceil}\right).\]
Hint: do this in several steps:
1. First check this when \(\ell=1\). This is easy because neurons are independent in the first layer.
2. Next assume that \(\sigma\) is a polynomial and show that if you already know that the result holds at layer \(\ell\), then it must also hold at layer \(\ell+1\). This is not too bad but requires some book-keeping.
3. Show that the full problem reduces to the case of polynomial activations. This is somewhat tricky.
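As a sanity check on the \(q=2\) case of the exercise above, concentration of collective observables is easy to see numerically: the variance over initializations of \(\mathcal{O}_{f}^{(\ell)}\) decays like \(1/n\). The sketch below (Python with NumPy, with an arbitrary choice of \(f\) and \(\sigma=\tanh\); it is an illustration only) estimates \(n\cdot\mathrm{Var}[\mathcal{O}_{f}^{(\ell)}]\) for a few widths and checks that it is roughly width-independent.

```python
# Var over random inits of a collective observable O_f = (1/n) sum_j f(z_j^{(l)})
# at a fixed input scales like 1/n.
import numpy as np

rng = np.random.default_rng(1)
f = lambda z: np.tanh(z) ** 2                 # any reasonable per-neuron function

def var_collective(n, L=3, trials=2000):
    x = np.ones(n)                            # fixed input, n_0 = n
    obs = np.empty(trials)
    for t in range(trials):
        z = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n)) @ x   # layer 1 (C_b = 0, C_W = 1)
        for _ in range(L - 1):
            W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
            z = W @ np.tanh(z)
        obs[t] = f(z).mean()
    return obs.var()

for n in (32, 64, 128):
    print(n, n * var_collective(n))           # n * Var should be roughly constant in n
```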
### Step 3: Expand in Powers of Centered Collective Observables
We have
\[\mathbb{E}\Big{[}f(z^{(t)}_{i;a},i=1,\ldots,m)\Big{]} =\int_{\mathbb{R}^{m}}\widehat{f}(\xi)\,\mathbb{E}\Big{[}e^{-\frac{1}{2}\|\xi\|^{2}\Sigma^{(t)}_{a}}\Big{]}d\xi\] \[=\int_{\mathbb{R}^{m}}\widehat{f}(\xi)\,e^{-\frac{1}{2}\|\xi\|^{2}G^{(t)}_{a}}\mathbb{E}\Big{[}e^{-\frac{1}{2}\|\xi\|^{2}\Delta^{(t)}_{a}}\Big{]}d\xi.\]
Applying the exercise above we may actually Taylor expand to find a power series expansion in \(1/n\):
\[\mathbb{E}\Big{[}e^{-\frac{1}{2}\|\xi\|^{2}\Delta^{(t)}_{a}}\Big{]}=\sum_{q \geq 0}\frac{(-1)^{q}}{2^{q}q!}\,\|\xi\|^{2q}\,\mathbb{E}\Big{[}\big{(}\Delta^{( t)}_{a}\big{)}^{q}\Big{]}=1+\frac{1}{8}\,\|\xi\|^{4}\,\mathbb{E}\Big{[}\big{(} \Delta^{(t)}_{a}\big{)}^{2}\Big{]}+O(n^{-2}).\]
Putting this all together yields
\[\mathbb{E}\Big{[}f(z^{(t)}_{i;a},i=1,\ldots,m)\Big{]}=\int_{\mathbb{R}^{m}}\left(1+\frac{1}{8}\,\|\xi\|^{4}\,\mathbb{E}\Big{[}\big{(}\Delta^{(t)}_{a}\big{)}^{2}\Big{]}\right)\widehat{f}(\xi)\,e^{-\frac{1}{2}\|\xi\|^{2}G^{(t)}_{a}}d\xi+O(n^{-2}).\]
In particular, we obtain
\[\mathbb{E}\Big{[}f(z^{(t)}_{i;a},i=1,\ldots,m)\Big{]}=\langle f\rangle_{G^{(t)}_{a}}+\frac{1}{8}\mathbb{E}\Big{[}\big{(}\Delta^{(t)}_{a}\big{)}^{2}\Big{]}\Bigg{\langle}\left(\sum_{j=1}^{m}\partial_{z^{(t)}_{j;a}}^{2}\right)^{2}f\Bigg{\rangle}_{G^{(t)}_{a}}+O(n^{-2}).\]
**Exercise**. Show that
\[\langle f\rangle_{G^{(t)}_{a}}=\langle f\rangle_{K^{(t)}_{aa}}+O(n^{-1}).\]
Hint: define
\[S^{(t)}_{a}:=G^{(t)}_{a}-K^{(t)}_{aa}.\]
We already know that \(\mathbb{E}\Big{[}\big{(}\Delta^{(t)}_{a}\big{)}^{2}\Big{]}=O(n^{-1})\). Now obtain a recursion for \(S^{(t)}_{a}\) using the perturbative expansion above and check that the solution is of order \(O(n^{-1})\).
### Step 4: Relating \(\kappa^{(\ell+1)}_{4;\alpha}\) to the Dressed 2 Point Function and Obtaining Its Recursion
Recall that in Lemma 4.4 we saw that
\[\kappa^{(\ell+1)}_{4;\alpha}=\mathbb{E}\Big{[}\big{(}\Delta^{(\ell)}_{\alpha}\big{)}^{2}\Big{]}.\]
Moreover,
\[\mathbb{E}\Big{[}\big{(}\Delta^{(t)}_{a}\big{)}^{2}\Big{]}=\frac{1}{n_{\ell}} \mathbb{E}\Big{[}\big{(}X^{(\ell)}_{1;a}\big{)}^{2}\Big{]}+\left(1-\frac{1}{n _{\ell}}\right)\mathbb{E}\Big{[}X^{(\ell)}_{1;a}X^{(\ell)}_{2;a}\Big{]},\]
where
\[X^{(\ell)}_{j;a}:=C_{W}\left(\sigma(z^{(\ell)}_{j;a})^{2}-\mathbb{E}\Big{[} \sigma(z^{(\ell)}_{j;a})^{2}\Big{]}\right).\]
Applying the result of Step 3 (and a previous exercise) yields
\[\frac{1}{n_{\ell}}\mathbb{E}\Big{[}\big{(}X^{(\ell)}_{1;a}\big{)}^{2}\Big{]}=\frac{1}{n_{\ell}}C_{W}^{2}\left\langle\left(\sigma^{2}-\big{\langle}\sigma^{2}\big{\rangle}_{K^{(\ell)}_{aa}}\right)^{2}\right\rangle_{K^{(\ell)}_{aa}}+O(n^{-2})=\frac{C_{W}^{2}}{n_{\ell}}\mathrm{Var}_{K^{(\ell)}}[\sigma^{2}]+O(n^{-2}).\]
Finally, note that
\[0=\mathbb{E}\left[X_{i;\alpha}^{(t)}\right]=\left\langle X_{i;\alpha}^{(t)} \right\rangle_{G_{\alpha}^{(t)}}+O(n^{-2}).\]
Hence,
\[\mathbb{E}\left[X_{1;\alpha}^{(t)}X_{2;\alpha}^{(t)}\right] =\kappa_{4;\alpha}^{(t)}\left\langle\left(\frac{1}{8}\sum_{j=1}^{ 2}\partial_{z_{j}\alpha}^{4}+\frac{1}{4}\partial_{z_{1;\alpha}}^{2}\partial_{ z_{2;\alpha}}^{2}\right)X_{1;\alpha}^{(t)}X_{2;\alpha}^{(t)}\right\rangle_{K_{aa}^{(t )}}+O(n^{-2})\] \[= \left(\frac{C_{W}}{2}\left\langle\partial^{2}\sigma^{2}\right\rangle _{K_{aa}^{(t)}}\right)^{2}\kappa_{4;\alpha}^{(t)}+O(n^{-2})\] \[= \left(\chi_{||;\alpha}^{(t)}\right)^{2}\kappa_{4;\alpha}^{(t)}+O (n^{-2}).\]
### Step 5: Solving the \(4\) point function recursion
In this section, we solve the four point function recursion
\[\kappa_{4}^{(t+1)}=\frac{C_{W}^{2}}{n_{t}}\mathrm{Var}_{K^{(t)}}[\sigma^{2}] +\left(\chi_{||;\alpha}^{(t)}\right)^{2}\kappa_{4;\alpha}^{(t)}+O(n^{-2})\]
in the special case when
\[\sigma(t)=\mathrm{ReLU}(t)=t\,1_{t>0}.\]
First of all, as Yasaman showed, we have
\[K_{aa}^{(t+1)}=C_{b}+C_{W}\left\langle\sigma^{2}(z_{\alpha}) \right\rangle_{K_{aa}^{(t)}}=C_{b}+\frac{C_{W}}{2}K_{aa}^{(t)}\] \[\chi_{||;\alpha}^{(t+1)}=\frac{\partial K_{aa}^{(t+1)}}{\partial K _{aa}^{(t)}}=\frac{C_{W}}{2}.\]
So we are at criticality only if
\[C_{b}=0,\qquad C_{W}=2.\]
With this, we have
\[\chi_{||;\alpha}^{(t)}\equiv 1,\qquad K_{aa}^{(t)}\equiv\frac{2}{n_{0}}\left||x _{a}\right||^{2}.\]
Thus,
\[\mathrm{Var}_{K^{(t)}}[\sigma^{2}]=\left\langle\sigma^{4}\right\rangle_{K_{aa}^{(t)}}-\left(\left\langle\sigma^{2}\right\rangle_{K_{aa}^{(t)}}\right)^{2}=\frac{3}{2}\left(K_{aa}^{(t)}\right)^{2}-\frac{1}{4}\left(K_{aa}^{(t)}\right)^{2}=\frac{5}{4}\left(K_{aa}^{(t)}\right)^{2}.\]
So we find
\[\frac{\kappa_{4}^{(t)}}{\left(K_{aa}^{(t)}\right)^{2}}=\sum_{\ell^{\prime}=1 }^{\ell-1}\frac{5}{n_{\ell^{\prime}}}+O(n^{-2}).\]
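The ReLU answer above is straightforward to check by direct simulation. The following Monte Carlo sketch (Python with NumPy; an illustration rather than part of the notes) estimates \(\kappa_{4}^{(\ell+1)}=\mathrm{Var}[\Sigma_{\alpha}^{(\ell)}]\) for a critical ReLU network of uniform width and compares \(\kappa_{4}/K^{2}\) with the leading-order prediction.

```python
# For a critical ReLU network (C_b = 0, C_W = 2) of uniform width n,
# kappa_4^{(depth+1)} = Var[Sigma^{(depth)}] and kappa_4 / K^2 ~ 5 * depth / n.
import numpy as np

rng = np.random.default_rng(2)
n0, n, depth, trials = 32, 100, 5, 4000      # input dim, width, number of hidden layers

x = rng.normal(size=n0)                      # fixed nonzero input
Sigma = np.empty(trials)
for t in range(trials):
    z = rng.normal(0.0, np.sqrt(2.0 / n0), size=(n, n0)) @ x          # z^{(1)}
    for _ in range(depth - 1):
        W = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))
        z = W @ np.maximum(z, 0.0)                                     # z^{(2)}, ..., z^{(depth)}
    Sigma[t] = (2.0 / n) * np.sum(np.maximum(z, 0.0) ** 2)             # Sigma^{(depth)}

kappa4, K = Sigma.var(), Sigma.mean()        # kappa_4^{(depth+1)}, K^{(depth+1)} up to O(1/n)
print("measured  kappa_4 / K^2 :", kappa4 / K**2)
print("predicted 5 * depth / n :", 5 * depth / n)
```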
**Exercise.** Redo this analysis for \(\sigma(t)=\tanh(t)\) to find that if \(n_{\ell}\equiv n\) we have
\[\frac{\kappa_{4}^{(t)}}{\left(K_{aa}^{(t)}\right)^{2}}=\frac{2\ell}{3n}\left( 1+o_{\ell}(1)\right)+O_{\ell}(n^{-2}).\]
Hint: you should start by deriving the asymptotics
\[K_{aa}^{(t)}=\frac{1}{2\ell}+O(\log(\ell)/\ell^{2}),\]
using this to compute the form of the coefficients in the recursion for \(\kappa_{4}^{(t)}\), and then solve this recursion to leading order in \(\ell\).
## 5 Lecture 5
### Introduction
As in the last lecture, let us fix \(L\geq 1\), \(n_{0},\ldots,n_{L+1}\geq 1\), and \(\sigma:\mathbb{R}\to\mathbb{R}\). We will continue to consider a fully connected feed-forward network, which to an input \(x_{\alpha}\in\mathbb{R}^{n_{0}}\) associates an output \(z_{\alpha}^{(L+1)}\in\mathbb{R}^{n_{L+1}}\) as follows:
\[z_{i;\alpha}^{(\ell+1)}=\begin{cases}\sum_{j=1}^{n_{\ell}}W_{ij}^{(\ell+1)} \sigma\left(z_{j;\alpha}^{(\ell)}\right),&\ell\geq 1\\ \sum_{j=1}^{n_{0}}W_{ij}^{(1)}x_{j;\alpha},&\ell=0\end{cases} \tag{122}\]
Note that we have set the biases to be 0. We will mainly be interested in the setting where
\[n_{1},\ldots,n_{L}\simeq n\gg 1\]
and we have tuned to criticality:
\[W_{ij}^{(\ell+1)}=\sqrt{\frac{2}{n_{\ell}}}\widetilde{W}_{ij}^{(\ell+1)}, \qquad\widetilde{W}_{ij}^{(\ell+1)}\sim\mu,\qquad b_{i}^{(\ell+1)}=0,\]
where \(\mu\) is any distribution on \(\mathbb{R}\) with:
* \(\mu\) is symmetric around 0 with no atoms
* \(\mu\) has variance 1 and finite (but otherwise arbitrary) higher moments.
### Goal
The goal of this lecture is to introduce a combinatorial formalism for studying the important special case of a ReLU network at a single input:
\[\sigma(t)=\text{ReLU}(t)=t1_{t>0},\qquad x_{\alpha}\neq 0\in\mathbb{R}^{n_{0}} \text{ fixed}.\]
The main results will illustrate the following
**Theorem 5.1** (Meta-Claim).: _The behavior of a random ReLU network with iid random weights at a single input is exactly solvable and is determined by the inverse temperature_
\[\beta:=5\left(\frac{1}{n_{1}}+\cdots+\frac{1}{n_{L}}\right)\simeq\frac{5L}{n}.\]
_Specifically,_
* _The distribution of the squared entries_ \(\left(\partial_{x_{p;\alpha}}z_{q;\alpha}^{(L+1)}\right)^{2}\) _of the input-output Jacobian is log-normal with inverse temperature_ \(\beta\)_:_ \[\left(\partial_{x_{p;\alpha}}z_{q;\alpha}^{(L+1)}\right)^{2}\simeq\exp\left[\mathcal{N}(-\frac{\beta}{2},\beta)\right]\] _We will derive this result shortly._
* _The fluctuations of the NTK_ \(\Theta_{aa}^{(L+1)}\) _evaluated at a single input at initialization are exponential in_ \(\beta\) _:_ \[\mathbb{E}\left[\Theta_{aa}^{(L+1)}\right]\sim L,\qquad\frac{\mathbb{E}\left[\left(\Theta_{aa}^{(L+1)}\right)^{2}\right]}{\mathbb{E}\left[\Theta_{aa}^{(L+1)}\right]^{2}}\sim\exp\left[5\beta\right].\]
* _The relative change in the NTK from one step of GD is_ \[\frac{\mathbb{E}\left[\Delta\Theta_{aa}^{(L+1)}\right]}{\mathbb{E}\left[ \Theta_{aa}^{(L+1)}\right]}\sim\frac{L}{n}\exp\left[5\beta\right]\]
### Formalism For Proof of Theorem 5.1
The purpose of this section is to introduce a combinatorial approach to understanding essentially any statistic of a random ReLU network that depends on its values at a single input. I developed this point of view in the articles [20, 21, 22].
To explain the setup let us fix \(L\geq 1\) as well as \(n_{0},\ldots,n_{L+1}\geq 1\) and a random ReLU network \(x\in\mathbb{R}^{n_{0}}\mapsto z^{(L+1)}(x)\in\mathbb{R}^{n_{L+1}}\) defined recursively by
\[z^{(\ell+1)}(x)=\begin{cases}W^{(1)}x+b^{(1)}\in\mathbb{R}^{n_{1}},&\ell=0\\ W^{(\ell+1)}\sigma(z^{(\ell)}(x))+b^{(\ell+1)}\in\mathbb{R}^{n_{\ell+1}},&\ell \geq 1\end{cases}.\]
We assume that \(W^{(\ell)}=(W^{(\ell)}_{ij}\), \(i=1,\ldots,n_{\ell}\), \(j=1,\ldots,n_{\ell-1})\) are independent:
\[W^{(\ell)}_{ij}:=\left(\frac{2}{n_{\ell-1}}\right)^{1/2}\widehat{W}^{(\ell)}_ {i,j},\qquad\widehat{W}^{(\ell)}_{i,j}\sim\mu,\,\text{iid},\]
where \(\mu\) is any fixed probability measure on \(\mathbb{R}\) satisfying
* \(\mu\) has a density \(d\mu(x)\) relative to Lebesgue measure.
* \(\mu\) is symmetric around \(0\) in the sense that \(d\mu(x)=d\mu(-x)\) for all \(x\)
* \(\mu\) has variance \(1\) in the sense that \(\int_{\mathbb{R}}x^{2}d\mu(x)=1\).
The key result which allows for a specialized combinatorial analysis of ReLU networks evaluated at a single input is the following:
**Proposition 5.2** (Exact Matrix Model Underlying Random ReLU Networks).: _The values of a random ReLU network at init evaluated at a single input is equal in distribution to a deep linear network with dropout \(p=1/2\):_
\[z_{a}^{(L+1)}\stackrel{{ d}}{{=}}W^{(L+1)}D^{(L)}W^{(L)}\cdots D^ {(1)}W^{(1)}x_{a},\]
_where_
\[D^{(\ell)}=\operatorname{Diag}\left(\xi_{1},\ldots,\xi_{n_{\ell}}\right), \quad\xi_{i}\sim\operatorname{Bernoulli}(1/2).\]
Sketch of Proof.: We always have
\[z_{a}^{(L+1)}\stackrel{{ d}}{{=}}W^{(L+1)}\widehat{D}^{(L)}_{a}W^ {(L)}\cdots\widehat{D}^{(1)}_{a}W^{(1)}x_{a},\]
where
\[\widehat{D}^{(\ell)}_{a}=\operatorname{Diag}\left(\mathbf{1}_{\left\{z_{i;a}^{(\ell)}>0\right\}},\,i=1,\ldots,n_{\ell}\right).\]
Conditional on \(z_{a}^{(\ell)}\), the neuron pre-activations \(z_{i;a}^{(\ell+1)}\) are independent. Moreover, since the distribution of \(W^{(\ell+1)}_{ij}\) is symmetric around \(0\), we have
\[\mathbf{1}_{\left\{z_{ia}^{(\ell+1)}>0\right\}}\stackrel{{ d}}{{=}} \operatorname{Bernoulli}(1/2).\]
However, this distribution is independent of \(z_{a}^{(\ell)}\) and hence is also the unconditional distribution of the variables \(\mathbf{1}_{\left\{z_{i;a}^{(\ell+1)}>0\right\}}\). This proves that they are independent. Finally, by symmetrizing the signs of all network weights, we have that, on the one hand, the distribution of any function that is even in the network weights is unchanged and, on the other hand, that the collection \(\mathbf{1}_{\left\{z_{i;a}^{(\ell+1)}>0\right\}}\) runs through all possible configurations in \(\left\{0,1\right\}^{\#\text{neurons}}\). Thus, they are independent.
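Proposition 5.2 is easy to test empirically: at a fixed input, the moments of the ReLU network output should match those of the corresponding linear network with Bernoulli(1/2) masks. The sketch below (Python with NumPy, using Gaussian weights as one admissible choice of \(\mu\); it is an illustration only) compares the second and fourth moments of a scalar output under the two constructions.

```python
# Moment check of Proposition 5.2: a ReLU network at a single fixed input vs.
# a linear network with iid Bernoulli(1/2) diagonal masks ("dropout").
import numpy as np

rng = np.random.default_rng(3)
n0, n, L, trials = 16, 32, 3, 10000          # L = number of hidden layers
x = rng.normal(size=n0)

def relu_output():
    z = np.sqrt(2.0 / n0) * rng.normal(size=(n, n0)) @ x
    for _ in range(L - 1):
        z = np.sqrt(2.0 / n) * rng.normal(size=(n, n)) @ np.maximum(z, 0.0)
    return np.sqrt(2.0 / n) * rng.normal(size=n) @ np.maximum(z, 0.0)

def dropout_output():
    z = np.sqrt(2.0 / n0) * rng.normal(size=(n, n0)) @ x
    for _ in range(L - 1):
        z = np.sqrt(2.0 / n) * rng.normal(size=(n, n)) @ (rng.integers(0, 2, size=n) * z)
    return np.sqrt(2.0 / n) * rng.normal(size=n) @ (rng.integers(0, 2, size=n) * z)

a = np.array([relu_output() for _ in range(trials)])
b = np.array([dropout_output() for _ in range(trials)])
for q in (2, 4):
    print(f"E[z^{q}]: ReLU net {np.mean(a**q):.3f}  vs  linear+dropout {np.mean(b**q):.3f}")
```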
In order to study random ReLU networks we will make use of the following notation.
**Definition 2**.: _The space of paths in a ReLU network with layer widths \(n_{0},\ldots,n_{L+1}\) is_
\[\Gamma:=[n_{0}]\times\cdots\times[n_{L+1}],\]
_where for any \(n\geq 1\) we have \([n]=\{1,\ldots,n\}\). A path \(\gamma=(\gamma(0),\ldots,\gamma(L+1))\in\Gamma\) determines weights and pre-activations:_
\[W_{\gamma}^{(t)}:=W_{\gamma(t-1)\gamma(t)}^{(t)},\qquad z_{\gamma;a}^{(t)}:=z_{ \gamma(t);a}^{(t)}.\]
These paths are useful because of the following well-known formula
\[z_{q;\alpha}^{(L+1)}=\sum_{p=1}^{n_{0}}x_{p;\alpha}\sum_{\gamma\in\Gamma_{p,q}}W_{\gamma}^{(L+1)}\prod_{\ell=1}^{L}W_{\gamma}^{(\ell)}\xi_{\gamma;\alpha}^{(\ell)},\qquad\xi_{\gamma;\alpha}^{(\ell)}:=\mathbf{1}_{\{z_{\gamma;\alpha}^{(\ell)}>0\}}, \tag{123}\]
**Exercise.** Check that this formula is valid.
Note that Proposition 5.2 allows us to assume
\[\xi_{\gamma;a}^{(t)}\thicksim\text{Bernoulli}(1/2)\,iid.\]
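The sum-over-paths formula (123) can be verified directly on a small network. The sketch below (Python with NumPy; purely illustrative) builds a two-hidden-layer ReLU network with a scalar output and checks that the forward pass and the explicit sum over paths agree for a fixed draw of the weights.

```python
# Brute-force check of the sum-over-paths formula on a tiny ReLU network.
import numpy as np

rng = np.random.default_rng(4)
n0, n1, n2 = 3, 4, 4                          # two hidden layers, scalar output
x  = rng.normal(size=n0)
W1 = np.sqrt(2.0 / n0) * rng.normal(size=(n1, n0))
W2 = np.sqrt(2.0 / n1) * rng.normal(size=(n2, n1))
W3 = np.sqrt(2.0 / n2) * rng.normal(size=(1, n2))

# Forward pass (ReLU on hidden layers only, no biases).
z1 = W1 @ x
z2 = W2 @ np.maximum(z1, 0.0)
z3 = W3 @ np.maximum(z2, 0.0)

# Sum over paths p -> j1 -> j2 -> output, weighted by the firing indicators xi.
xi1, xi2 = (z1 > 0).astype(float), (z2 > 0).astype(float)
path_sum = sum(
    x[p] * W1[j1, p] * xi1[j1] * W2[j2, j1] * xi2[j2] * W3[0, j2]
    for p in range(n0) for j1 in range(n1) for j2 in range(n2)
)
print(z3[0], path_sum)                        # the two numbers agree to machine precision
```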
### Formulas for Gradients Using Paths
In this section, we will record, in the form of exercises, some formulas for gradients.
**Exercise.** Show that
\[\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}}=\sum_{\gamma\in\Gamma_{p,q}}W_{\gamma}^{(L+1)}\prod_{\ell=1}^{L}W_{\gamma}^{(\ell)}\xi_{\gamma;a}^{(\ell)}.\]
Conclude that the distribution of \(\partial z_{q;a}^{(L+1)}/\partial x_{p;a}\) is the same for all \(x_{a}\neq 0\).
**Exercise.** Show that
\[\frac{\partial z_{1}^{(L+1)}(x)}{\partial\widetilde{W}_{ij}^{(\ell)}}=\sum_{p=1}^{n_{0}}x_{p}\sum_{\begin{subarray}{c}\gamma\in\Gamma_{p,1}\\ \gamma(\ell-1)=j,\,\gamma(\ell)=i\end{subarray}}\left(\frac{C_{W}}{n_{\ell-1}}\right)^{1/2}\frac{W_{\gamma}^{(L+1)}\prod_{\ell^{\prime}=1}^{L}W_{\gamma}^{(\ell^{\prime})}\xi_{\gamma;a}^{(\ell^{\prime})}}{W_{ij}^{(\ell)}}\]
and hence also that
\[\mathbb{E}\left[\left(\frac{\partial z_{1}^{(L+1)}(x)}{\partial\widetilde{W}_{ij}^{(\ell)}}\right)^{2}\right]\] \[\qquad=\sum_{p_{1},p_{2}=1}^{n_{0}}x_{p_{1}}x_{p_{2}}\sum_{\begin{subarray}{c}\gamma_{1}\in\Gamma_{p_{1},1},\,\gamma_{2}\in\Gamma_{p_{2},1}\\ \gamma_{k}(\ell-1)=j,\,\gamma_{k}(\ell)=i,\,k=1,2\end{subarray}}\frac{C_{W}}{n_{\ell-1}}\mathbb{E}\left[\frac{\prod_{k=1}^{2}W_{\gamma_{k}}^{(L+1)}\prod_{\ell^{\prime}=1}^{L}W_{\gamma_{k}}^{(\ell^{\prime})}\xi_{\gamma_{k};a}^{(\ell^{\prime})}}{\big{(}W_{ij}^{(\ell)}\big{)}^{2}}\right].\]
Assume \(n_{L+1}=1\) and use this to derive a sum-over-paths formula for the on-diagonal NTK
\[\Theta_{\alpha\alpha}^{(L+1)}=\sum_{\ell=1}^{L+1}\sum_{i=1}^{n_{\ell}}\sum_{j=1}^{n_{\ell-1}}\left(\frac{\partial z_{1;\alpha}^{(L+1)}}{\partial\widetilde{W}_{ij}^{(\ell)}}\right)^{2}.\]
### Deriving \(L/n\) Behavior of Input-Output Jacobian
The purpose of this section is to prove that
\[\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}}\right)^{2 }\right]=\frac{2}{n_{0}},\qquad\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{( L+1)}}{\partial x_{p;a}}\right)^{4}\right]=\frac{\text{const}}{n_{0}^{2}}\exp \left[5\sum_{\ell=1}^{L}\frac{1}{n_{\ell}}+O\left(\frac{L}{n^{2}}\right)\right].\]
#### 5.5.1 Second Moment Computation
We will start with the second moment and will do it in several (unnecessarily many) steps to illustrate the general idea we'll need for the fourth moment. First, note that
\[\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}}\right)^{2}\right] =\mathbb{E}\left[\sum_{\gamma_{1},\gamma_{2}\in\Gamma_{p,q}}\prod_{k=1}^{2}W_{\gamma_{k}}^{(L+1)}\prod_{\ell=1}^{L}W_{\gamma_{k}}^{(\ell)}\xi_{\gamma_{k};a}^{(\ell)}\right]\] \[=\sum_{\gamma_{1},\gamma_{2}\in\Gamma_{p,q}}\mathbb{E}\left[\prod_{k=1}^{2}W_{\gamma_{k}}^{(L+1)}\right]\prod_{\ell=1}^{L}\mathbb{E}\left[\prod_{k=1}^{2}W_{\gamma_{k}}^{(\ell)}\right]\mathbb{E}\left[\prod_{k=1}^{2}\xi_{\gamma_{k};a}^{(\ell)}\right].\]
Next, note that
\[\mathbb{E}\left[\prod_{k=1}^{2}W_{\gamma_{k}}^{(\ell)}\right]=\frac{C_{W}}{n_{\ell-1}}\delta_{\gamma_{1}(\ell-1)\gamma_{2}(\ell-1)}\delta_{\gamma_{1}(\ell)\gamma_{2}(\ell)}.\]
In other words, the paths \(\gamma_{1},\gamma_{2}\) have to "collide" in every layer. In particular, since they have the same starting and ending points, they must agree at all layers. In particular, since
\[\mathbb{E}\left[\xi_{\gamma;a}^{(\ell)}\right]=\frac{1}{2},\]
we find
\[\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}}\right)^{2}\right]=\sum_{\gamma\in\Gamma_{p,q}}\frac{2}{n_{L}}\prod_{\ell=1}^{L}\frac{2}{n_{\ell-1}}\cdot\frac{1}{2}=2\prod_{\ell=0}^{L}\frac{1}{n_{\ell}}\sum_{\gamma\in\Gamma_{p,q}}1.\]
Note that
\[\left|\Gamma_{q,p}\right|=\prod_{\ell=1}^{L}n_{\ell}.\]
Hence, we may actually re-write
\[\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}} \right)^{2}\right]=\frac{2}{n_{0}}\mathcal{E}\left[1\right],\]
where \(\mathcal{E}\left[\cdot\right]\) denotes the expectation operator over the choice of a uniformly random path \(\gamma=(\gamma(0),\ldots,\gamma(L+1))\in\Gamma_{p,q}\) in which every neuron in every layer is chosen uniformly at random:
\[\gamma(0)=p,\quad\gamma(L+1)=q,\quad\gamma(\ell)\sim\text{Unif}(\{1,\ldots,n_ {\ell}\})\;iid.\]
Finally, since the average of \(1\) is \(1\), we conclude
\[\mathbb{E}\left[\left(\frac{\partial z_{q;a}^{(L+1)}}{\partial x_{p;a}}\right) ^{2}\right]=\frac{2}{n_{0}}, \tag{124}\]
as desired.
#### 5.5.2 Fourth Moment Computation
For simplicity, we will add a \(\sigma\) to the network output (it only changes things by a factor of 2). We have
\[\mathbb{E}\left[\left(\frac{\partial\sigma(z_{q;a}^{(L+1)})}{\partial x_{p;a}}\right)^{4}\right] =\mathbb{E}\left[\sum_{\gamma_{1},\ldots,\gamma_{4}\in\Gamma_{p,q}}\prod_{k=1}^{4}\prod_{\ell=1}^{L+1}W_{\gamma_{k}}^{(\ell)}\xi_{\gamma_{k};a}^{(\ell)}\right]\] \[=\sum_{\gamma_{1},\ldots,\gamma_{4}\in\Gamma_{p,q}}\prod_{\ell=1}^{L+1}\mathbb{E}\left[\prod_{k=1}^{4}W_{\gamma_{k}}^{(\ell)}\right]\mathbb{E}\left[\prod_{k=1}^{4}\xi_{\gamma_{k};a}^{(\ell)}\right].\]
Just as in the 2nd moment case, we find that all weights must appear an even number of times, so let us write
\[\Gamma_{p,q}^{\text{\tiny{4,even}}}:=\left\{\gamma_{1},\ldots,\gamma_{4}\in \Gamma_{p,q}\mid\forall\ell,\text{ the multi-set }\left\{W_{\gamma_{k}}^{(\ell)},\,k=1,\ldots,4\right\}\text{ has even multiplicity}\right\}.\]
Thus,
\[\mathbb{E}\left[\left(\frac{\partial\sigma(z_{q;a}^{(L+1)})}{\partial x_{p;a}}\right)^{4}\right] =\sum_{(\gamma_{1},\ldots,\gamma_{4})\in\Gamma_{p,q}^{\text{\tiny{4,even}}}}\prod_{\ell=1}^{L+1}\mathbb{E}\left[\prod_{k=1}^{4}W_{\gamma_{k}}^{(\ell)}\right]\mathbb{E}\left[\prod_{k=1}^{4}\xi_{\gamma_{k};a}^{(\ell)}\right].\]
Let us now define the collision events
\[C^{(\ell)}=C^{(\ell)}(\gamma_{1},\ldots,\gamma_{4}):=\left\{\gamma_{1}(\ell)= \cdots=\gamma_{4}(\ell)\right\}.\]
Thus,
\[\mathbb{E}\left[\prod_{k=1}^{4}W_{\gamma_{k}}^{(\ell)}\right]\mathbb{E}\left[ \prod_{k=1}^{4}\xi_{\gamma_{k};a}^{(\ell)}\right]=\frac{1}{n_{\ell-1}^{2}} \left(1+\mathbf{1}_{C^{(\ell)}}+\mathbf{1}_{C^{(\ell)}C^{(\ell-1)}}2(\mu_{4}-1 )\right)\left(1+\delta_{\ell 1}\right),\]
where
\[\mu_{4}=\int_{\mathbb{R}}x^{4}d\mu(x)\]
and we have made use of the fact that \(\mathbf{1}_{C^{(\ell)}}\mathbf{1}_{C^{(\ell)}C^{(\ell-1)}}=\mathbf{1}_{C^{( \ell)}C^{(\ell-1)}}\). Putting this all together yields
\[\mathbb{E}\left[\left(\frac{\partial\sigma(z_{q;a}^{(L+1)})}{\partial x_{p;a}}\right)^{4}\right]=2\left(\prod_{\ell=1}^{L+1}\frac{1}{n_{\ell-1}^{2}}\right)\sum_{(\gamma_{1},\ldots,\gamma_{4})\in\Gamma_{p,q}^{\text{\tiny{4,even}}}}\prod_{\ell=1}^{L+1}\left(1+\mathbf{1}_{C^{(\ell)}}+\mathbf{1}_{C^{(\ell)}C^{(\ell-1)}}2(\mu_{4}-1)\right).\]
The trick is now to change variables in the sum from four paths with even numbers of weights to 2 paths.
**Exercise**. Given \(\gamma_{1}^{\prime},\gamma_{2}^{\prime}\in\Gamma_{p,q}\) show there are exactly
\[\xi^{\text{\tiny{4,cells}}}=6\mathbb{Z}_{\ell=1}^{\ell+1}\,\mathbf{1}_{C^{( \ell)}}\]
collections \((\gamma_{1},\ldots,\gamma_{4})\in\Gamma_{p,q}^{\text{\tiny{4,even}}}\) which give rise to the same weight configurations (but doubled).
We therefore find
\[\mathbb{E}\left[\left(\frac{\partial\sigma(z_{q;\alpha}^{(L+1)})}{\partial x_{p;\alpha}}\right)^{4}\right] =\frac{2}{n_{0}^{2}}\left(\prod_{\ell=1}^{L}\frac{1}{n_{\ell}^{2}}\right)\sum_{\gamma_{1},\gamma_{2}\in\Gamma_{p,q}}\prod_{\ell=1}^{L+1}\left(1+5\mathbf{1}_{C^{(\ell)}}+\mathbf{1}_{C^{(\ell)}C^{(\ell-1)}}6(\mu_{4}-3)\right)\] \[=\frac{2}{n_{0}^{2}}\mathcal{E}\left[\prod_{\ell=1}^{L+1}\left(1+5\mathbf{1}_{C^{(\ell)}}+\mathbf{1}_{C^{(\ell)}C^{(\ell-1)}}6(\mu_{4}-3)\right)\right],\]
where the expectation \(\mathcal{E}\) is now over the choice of two iid paths \(\gamma_{1},\gamma_{2}\in\Gamma_{p,q}\) in which every neuron in every layer is selected uniformly:
\[\gamma_{k}(\ell)\sim\text{Unif}\left(\{1,\ldots,n_{\ell}\}\right)\quad\text{ iid.}\]
Finally, we'll be a bit heuristic: note that
\[\mathcal{P}[C^{(l)}]=\frac{1}{n_{\ell}},\qquad\mathcal{P}[C^{(l-1)},\,C^{(l)} ]\approx\frac{1}{n_{\ell}n_{\ell-1}}=O(n^{-2}).\]
In particular, we find approximately
\[\mathbb{E}\left[\left(\frac{\partial\sigma(z_{q;\alpha}^{(L+1)})}{\partial x_{p;\alpha}}\right)^{4}\right]\approx\frac{12}{n_{0}^{2}}\mathcal{E}\left[\prod_{\ell=1}^{L}\left(1+\frac{5}{n_{\ell}}+O(n^{-2})\right)\right]=\frac{12}{n_{0}^{2}}\exp\left[5\sum_{\ell=1}^{L}\frac{1}{n_{\ell}}+O\left(\frac{L}{n^{2}}\right)\right],\]
as desired.
**Exercise.** Make the reasoning in the previous computation precise.
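Both moment formulas are simple to probe by simulation. Using Proposition 5.2, the ReLU indicators along a fixed input direction can be replaced by iid Bernoulli(1/2) masks, so a Jacobian entry is just a masked random matrix product. The sketch below (Python with NumPy and Gaussian weights; the reference value \(\approx 3\,e^{5L/n}\) for the fourth-moment ratio is my rough reading of the computation above for this special case, not a formula quoted from the notes) compares measured moments with \(2/n_{0}\) and \(3\exp(5L/n)\).

```python
# Monte Carlo sketch: moments of a single input-output Jacobian entry for a
# ReLU network, using the Bernoulli(1/2) mask representation of Prop. 5.2.
import numpy as np

rng = np.random.default_rng(5)
n0, n, L, trials = 8, 48, 6, 20000           # L = number of hidden (mask) layers

def jacobian_entry():
    v = np.zeros(n0)
    v[0] = 1.0                                # differentiate with respect to x_1
    v = np.sqrt(2.0 / n0) * rng.normal(size=(n, n0)) @ v
    for _ in range(L - 1):
        v = np.sqrt(2.0 / n) * rng.normal(size=(n, n)) @ (rng.integers(0, 2, size=n) * v)
    return np.sqrt(2.0 / n) * rng.normal(size=n) @ (rng.integers(0, 2, size=n) * v)

J = np.array([jacobian_entry() for _ in range(trials)])
print("E[J^2] :", np.mean(J**2), " vs 2/n_0 =", 2 / n0)
print("E[J^4]/E[J^2]^2 :", np.mean(J**4) / np.mean(J**2) ** 2,
      " vs ~3*exp(5L/n) =", 3 * np.exp(5 * L / n))
```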
#### 5.5.3 Analogous calculations for the NTK
This concludes our analysis of the input-output Jacobian (i.e. the derivative of the network output with respect to the input). Virtually the same analysis can be applied to instead study the parameter-output Jacobian (i.e. the derivative of the network output with respect to a particular parameter), which will then, with a sum over all parameters, yield the NTK at a particular input. Similar calculations to those worked above quickly give the 2nd and 4th moments of the NTK. These yield that the partial derivatives of the output with respect to each parameter are independent (i.e. they have zero covariance), and so the mean NTK is simple and given by its infinite-width value, while the fluctuations about that mean scale as \(e^{\beta}-1\).
### Open questions and dreams
We have shown that various quantities of interest are straightforwardly calculable for random ReLU networks with a single input vector. We conclude with various open questions regarding related quantities.
1. How smooth is the random function at initialization? Can one obtain a Lipschitz constant? Is it adversarially attackable?
2. Can we obtain a clear picture of feature learning (even at a single step) beyond the mere fact that it occurs? How does this feature learning relate to those of e.g. mean-field networks [23] or the large-learning-rate regime of the NTK regime discussed in Yasaman's lectures?
3. Can we profitably perform similar path-counting arguments with activation functions besides ReLU? |
2307.09462 | NUTRIG: Towards an Autonomous Radio Trigger for GRAND | One of the major challenges for large-scale radio surface arrays, such as the
Giant Radio Array for Neutrino Detection (GRAND), is the requirement of an
autonomous online trigger for radio signals induced by extensive air showers.
The NUTRIG project lays the foundations for the development of a pure,
efficient, and scalable trigger in the context of GRAND. For this purpose, a
GRAND prototype setup of four detection units has been deployed at Nan\c{c}ay,
France, which currently serves as the main testing facility for the deployment
of this autonomous trigger. This work provides a detailed description of the
GRAND@Nan\c{c}ay setup, and a first analysis of background data gathered on
site. Initial tests of signal recovery in laboratory conditions are also
presented. Finally, near-future plans are outlined to scale NUTRIG to larger
pathfinder arrays such as GRANDProto300. | Pablo Correa | 2023-07-18T17:44:23Z | http://arxiv.org/abs/2307.09462v1 | # NUTRIG: Towards an Autonomous Radio Trigger for GRAND
###### Abstract:
One of the major challenges for large-scale radio surface arrays, such as the Giant Radio Array for Neutrino Detection (GRAND), is the requirement of an autonomous online trigger for radio signals induced by extensive air showers. The NUTRIG project lays the foundations for the development of a pure, efficient, and scalable trigger in the context of GRAND. For this purpose, a GRAND prototype setup of four detection units has been deployed at Nancay, France, which currently serves as the main testing facility for the deployment of this autonomous trigger. This work provides a detailed description of the GRAND@Nancay setup, and a first analysis of background data gathered on site. Initial tests of signal recovery in laboratory conditions are also presented. Finally, near-future plans are outlined to scale NUTRIG to larger pathfinder arrays such as GRANDProto300.
## 1 Introduction
The Giant Radio Array for Neutrino Detection (GRAND) [1, 2] is a proposed surface array that primarily targets the detection of ultra-high-energy (UHE; \(>\)100 PeV) neutrinos. In particular, Earth-skimming UHE tau neutrinos can induce very-inclined extensive air showers with zenith angles near 90\({}^{\circ}\). It is the transient (\(\lesssim\)100 ns) radio emission (coherent between \(\sim\)10 MHz up to several hundreds of MHz) produced by these very-inclined air showers via geomagnetic and Askaryan effects that GRAND aims to detect.
Due to the low expected flux of UHE neutrinos, GRAND is planned to instrument a total surface area of 200,000 km\({}^{2}\) with detection units (DUs). A DU consists of a three-armed butterfly antenna and its front-end electronics. The inter-DU spacing will be relatively sparse (of the order of 1 km) since the radio footprint of very-inclined air showers is typically of the order of several 10 km\({}^{2}\)[3]. More concretely, in its final stage, GRAND is envisaged to consist of 20 complementary arrays of 10,000 DUs each, spread across the globe for maximal sky coverage. Currently, the GRANDProto300 (China) and GRAND@Auger (Argentina) prototype arrays serve as pathfinders for GRAND10k arrays in the Northern and Southern Hemispheres, respectively.
A cost-efficient deployment of gigantic radio arrays such as GRAND strongly relies on the development of an autonomous online radio trigger for air-shower signals, without the usage of external particle detectors. Previous efforts by CODALEMA [4], AERA [5, 6], and TREND [7] have investigated the feasibility of utilizing an autonomous radio trigger at the antenna level. Although some positive results were achieved using this autonomous technique, the data-acquisition (DAQ) systems of these projects were not designed to handle the high trigger rates due to transient radio-frequency interference (RFI) experienced at their experimental sites.
The NUTRIG project presented in this work lays the foundation for the development of an autonomous radio trigger for GRAND. It is a joint effort between KIT (Germany), and the GRAND Paris group (LPNHE and IAP, France). Both the principle and strategy of NUTRIG are outlined in Section 2, as well as a description of the dedicated GRAND@Nancay prototype (France). A preliminary analysis of on-site (transient) RFI at GRAND@Nancay is also presented in Section 3. Finally, an outlook of short-term NUTRIG plans is given in Section 4.
## 2 NUTRIG
### Principle
The NUTRIG principle is based upon the realization of three major objectives. These objectives, which are interconnected and being developed in parallel, are outlined below:
* **Radio-emission model**. A detailed modeling of the radio emission of very-inclined (neutrino-induced) air showers is required in order to exploit its features at the trigger level. Efforts to model this very-inclined air-shower emission have previously been performed in the 30-80 MHz frequency band [8, 9]. However, our signal model needs to be expanded to the 30-230 MHz range in which GRAND operates. Moreover, the signal model needs to be adapted to the specific location of a GRAND (prototype) array, since it will, amongst other aspects, depend on the local geomagnetic field, atmospheric conditions, and the altitude of the site.
* **First-level trigger (FLT)**. At the DU level, an air-shower signal will manifest itself as a transient voltage pulse in the recorded time trace. Such an air-shower pulse is expected to have specific characteristics (e.g. time structure and signal polarization) which will be exploited by the FLT. Doing so, we aim at improving the background-rejection efficiency compared to a signal-over-threshold trigger, where the threshold is a few times above the stationary-noise level. Furthermore, to handle the limitations posed by the DAQ and the data-communication bandwidth, the target rate of the FLT can be no more than 100 Hz. For the same signal-selection efficiency as the threshold trigger currently used in GRAND prototypes, which saturates the DAQ at a trigger rate of 1 kHz, the FLT would therefore yield up to a factor 10 improvement in signal purity.
* **Second-level trigger (SLT)**. At the array level, the SLT will use data from the DUs where the FLT condition has been satisfied. Using the dedicated radio-emission model of air showers described above, we aim to significantly increase the signal purity by reducing the contamination of anthropogenic RFI and thermal noise. The quantity of information passed on from the FLT to the SLT will depend on the available communication bandwidth between the DUs and the DAQ system. Data of events that fulfill the SLT requirements will be recorded on disk for further offline analysis (see also [10]).
### GRAND@Nancay
GRAND@Nancay is a prototype array primarily dedicated to the NUTRIG project. It consists of four GRAND-prototype DUs, and it is located at the Nancay Radio Observatory, deep in the forest of the French Sologne region. This protected radio-quiet environment was previously home to the CODALEMA experiment, which managed to successfully detect radio emission of cosmic-ray air showers [11]. Although GRAND@Nancay is not designed to identify air-shower signals, its relatively radio-quiet location provides an excellent setting for the on-site development of NUTRIG, and the FLT in particular.
The GRAND@Nancay setup and layout are illustrated in Fig. 1. Each of the four prototype DUs operate using a butterfly-antenna design with three orthogonal arms, which are oriented along the East-West axis (\(X\)), the North-South axis (\(Y\)), and upwards (\(Z\)). For each of the antenna arms, a captured signal is first amplified with a low-noise amplifier (LNA) located inside the antenna nut. Subsequently, the amplified signal is sent via coaxial cables from the LNA to the front-end electronics board (FEB) placed at the foot of the antenna.
Inside the FEB, both a band-pass filter of 30-230 MHz and a band-stop filter1 in the FM band (87-108 MHz) are applied to the signal. After that, the signal passes through a variable gain amplifier (VGA; with an adjustable gain up to 23.5 dB) before arriving at a 14-bit analog-to-digital converter (ADC). This ADC has four channels, three of which are labeled \(X\), \(Y\), and \(Z\) and connected to the corresponding antenna arms, while the fourth serves as a floating channel. The signal is digitized by the ADC at a rate of 500 Msamples/s, and it is finally processed by the systems-on-chip (SoC) consisting of one field-programmable gate array (FPGA) and four central processing units (CPUs), which are used to implement the trigger logic and to build events at the DU level. In addition, four notch filters are implemented at the FPGA level.
DU events are transmitted to a computer in the central data-acquisition (DAQ) center via optical fibers2. The DAQ computer not only allows us to store data, but also to configure various FEB components, such as the ADC, VGA, and trigger logic. In addition, the central DAQ center also hosts an adjustable DC-power supply, which powers the FEBs via coax cables. Note that the LNA is powered via the FEB through the cable connected to the \(Z\) channel.
Footnote 2: In GRANDProto300 and GRAND@Auger, the DU-DAQ communication is performed via a WiFi connection. However, this is not possible for GRAND@Nancy, where we would otherwise pollute the other experiments at the Nancay Radio Observatory. The optical fibers and FM band-stop filter are the only modifications to the FEB design deployed at GRANDProto300 and GRAND@Auger.
### Trigger-Implementation Strategy
The testing and optimization of the FLT and SLT algorithms requires a detailed characterization of both background RFI and expected air-shower signals at the DU level. For the description of the background, we use experimental data taken at the GRAND-prototype sites (GRAND@Nancay in particular), while air-shower signals are described using simulations. These simulations are currently based on the CoREAS and ZHaireS frameworks [12, 13], and also contain a complete description of the antenna response and front-end electronics chain. As such, we simulate the ADC voltage expected for air-shower pulses, which are superposed to recorded background data to obtain realistic signal-plus-background voltage traces. Such traces are currently being used to investigate different implementations of the FLT, such as machine-learning (see [14] for more details), template-fitting, and wavelet-analysis techniques.
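As a concrete illustration of the superposition step, the following sketch (Python with NumPy; the array names, the toy pulse shape, and the noise level are hypothetical and not taken from the GRAND analysis code) injects a simulated pulse, already expressed in ADC counts, at a random position of a measured background trace.

```python
# Illustrative sketch: superpose a simulated air-shower pulse (in ADC counts)
# onto a measured background trace; names and numbers are hypothetical.
import numpy as np

def superpose(background_trace, simulated_pulse, rng):
    trace = background_trace.astype(float).copy()
    # random insertion point that keeps the full pulse inside the trace
    start = rng.integers(0, trace.size - simulated_pulse.size)
    trace[start:start + simulated_pulse.size] += simulated_pulse
    return trace, start

rng = np.random.default_rng(0)
background = rng.normal(0.0, 20.0, size=2016)   # toy stationary noise, 2016 samples at 500 MS/s
pulse = 300.0 * np.exp(-0.5 * ((np.arange(60) - 30) / 5.0) ** 2)   # toy pulse shape
signal_plus_bg, t0 = superpose(background, pulse, rng)
```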
Whereas the SLT algorithm will be implemented at the DAQ level of a GRAND array, the FLT algorithm will be encoded on the SoC of a FEB. Next, a first test of the online performances of this FLT algorithm will be performed with dedicated test bench at the LPNHE in Paris. Using a custom-wave function generator, simulated air-shower pulses are fed to the FEB, such that candidate FLT algorithms can be tested under controlled laboratory conditions.
Figure 1: Schematic of the GRAND@Nancay prototype, which is dedicated to the development of NUTRIG. The setup currently consists of three active DUs (labeled 96, 97, and 100) with a fourth DU (indicated in light orange) planned to be deployed over the Summer of 2023. The DAQ center contains a DC power supply as well as a computer for data readout and storage. Data transfer and power supply to the four DUs is performed via optical fibers and coaxial cables, respectively (solid brown lines). The inset on the right shows a more detailed illustration of a single DU, where each of the three butterfly-antenna arms is connected to its corresponding channel on the FEB. Note that the antenna arm oriented along the \(X\) axis is not shown for illustrative purposes.
To validate our test-bench setup at the LPNHE, we use a simulated air-shower pulse for a DU located at the GRANDProto300 site. The electric field of the air shower at the DU position is generated with ZHaireS3, convoluted with the antenna response function for the \(X\) arm, and further processed through the front-end electronics (see [14] for more details on the complete simulation chain). First, we process the air-shower pulse through the complete chain up to the ADC, as shown in the left panel of Fig. 2. Note that electronic noise produced by the front-end components is not yet included in the simulation. Next, we compute the pulse shape for the same signal at the output of the antenna nut. This intermediate pulse shape is then injected with our custom-wave function generator to an FEB, with which we record the measured ADC voltage, as shown in the right panel of Fig. 2. We find that we can successfully measure the injected air-shower pulse produced with our custom-wave function generator. In addition, we find that our current simulation of the electronic chain after the antenna-nut output provides a reasonable description of the real electronics.
Footnote 3: For our test, we use a \(10^{18}\) eV proton primary with zenith and azimuth angles of \(85.5^{\circ}\) and \(253^{\circ}\), respectively.
Subsequently, four FEBs with the updated FLT algorithms will be deployed at GRAND@Nanoay to be tested in the field. This will also allow us to determine which parameters to pass on from the FLT to the SLT algorithm, which will also utilize the dedicated radio-emission model for very-inclined air showers. Finally, the implementation of the complete FLT and SLT setup will be tested at GRANDProto300, which currently consists of 13 active DUs with 70 more ready to be deployed in the coming months. When completed, it will consist of 300 DUs which will allow to test the scalability of our trigger functionality. In any case, the GRANDProto300 pathfinder will be pivotal to ensure the scalability of NUTRIG to GRAND10k arrays.
## 3 Analysis of GRAND@Nancay Background Data
Between 26-27 June 2023, we used the setup described in Section 2.2 to perform a preliminary characterization of the RFI background at GRAND@Nancay. The data for this analysis was taken
Figure 2: _Left_: Time trace of a simulated air-shower pulse that has been processed through the entire front-end-electronics chain up to the ADC. Electronic noise is not included. _Right_: Time trace of the same air-shower pulse recorded by an FEB at the LPNHE test bench in Paris, after being simulated at the antenna-output level and injected to the FEB by a custom-wave function generator.
between roughly 00:49 and 09:33 local time (UTC+2). Every ten seconds, each DU in the setup was forced to trigger the acquisition of data. This data-taking mode has the benefit that it allows us to test the stability of our setup during relatively long data-taking runs.
Figure 3 shows the typical mean power spectral density (PSD) observed at GRAND@Nancay, in this case recorded by DU 100; the spectrum was obtained by averaging the PSDs of all traces collected during the run. A first observation is that the PSD of channel \(Z\) is up to a factor 50 higher than the PSD of channels \(X\) and \(Y\), most notably at frequencies below \(\sim\)70 MHz. This could be a consequence of the antenna transfer function, which is different for the \(Z\) arm than for the symmetric \(X\) and \(Y\) arms. In addition, the LNA currently used at GRAND@Nancay has a different design for channel \(Z\) compared to the other two channels, because the corresponding antenna arm is monopolar. An updated LNA is currently being tested at GRANDProto300 [15].
In the features of the PSD in Fig. 3, we can clearly observe the effect of both the 30-230 MHz band-pass filter and FM band-stop filter. However, short waves below 30 MHz and FM lines from nearby radio stations are still detected despite the filters. The two spectral lines at 72 MHz and 79 MHz are well-known radio emitters of the Nancay Radio Observatory, while the 120-140 MHz band is used for aeronautic communications. Finally, the peaks detected at 178 MHz and 210 MHz correspond to digital audio broadcasting lines. Nevertheless, we note that the notch filter implemented on the FPGA of the FEB will allow us to filter out constant wave emitters.
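For reference, a mean PSD of the kind shown in Fig. 3 can be estimated along the following lines (Python with NumPy/SciPy; this is an illustrative sketch rather than the actual GRAND@Nancay analysis code, and the `traces` array is assumed to hold the ADC counts of one channel, one trace per row).

```python
# Sketch: mean power spectral density of a set of ADC traces sampled at 500 MS/s.
import numpy as np
from scipy.signal import periodogram

def mean_psd(traces, fs=500e6):
    # traces: array of shape (n_traces, n_samples), ADC counts of one channel
    freqs, psd = periodogram(traces, fs=fs, axis=-1)
    return freqs, psd.mean(axis=0)            # average the per-trace spectra

# freqs, psd = mean_psd(traces)               # psd in (ADC counts)^2 / Hz
```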
One of the main quantities relevant to NUTRIG is the ambient rate of transient RFI at GRAND@Nancay, which will be the main background for the FLT. To roughly estimate this rate, we define a transient pulse as follows (a minimal implementation sketch of this criterion is given below):
* At least one sample of a recorded time trace must exceed \(\pm 5\sigma\), with \(\sigma\) the standard deviation of the trace, in either of the three channels.
* If two 5\(\sigma\) crossings occur in the same trace, but are more than 50 samples (100 ns) apart, they correspond to two different pulses.

Figure 3: Mean PSD observed by DU 100 of the GRAND@Nancay setup. The spectra of channels \(X\), \(Y\), and \(Z\) are shown in blue, magenta, and red, respectively. The various features of the PSD are discussed in the text.
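A minimal implementation of this criterion for a single channel could look as follows (Python with NumPy; an illustrative sketch, not the on-site analysis code). The three channels of a DU would simply be scanned independently and their pulses combined.

```python
# Sketch of the pulse-search criterion: find 5-sigma excursions in one trace
# and merge crossings closer than 50 samples (100 ns) into a single pulse.
import numpy as np

def find_pulses(trace, n_sigma=5.0, gap=50):
    trace = np.asarray(trace, dtype=float)
    threshold = n_sigma * trace.std()
    crossings = np.flatnonzero(np.abs(trace) > threshold)
    if crossings.size == 0:
        return []
    pulses = []
    start = prev = crossings[0]
    for idx in crossings[1:]:
        if idx - prev > gap:              # gap larger than 50 samples -> new pulse
            pulses.append((start, prev))
            start = idx
        prev = idx
    pulses.append((start, prev))
    return pulses                         # list of (first, last) crossing indices
```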
A typical time trace of a transient RFI pulse recorded at GRAND@Nancay is shown in Fig. 4. It is worth noting that in this example, the pulse is wider than that expected for air-shower signals, which have more short-lived peaked signatures. In our complete data-taking run, which suffers from limited statistics, only a handful of RFI transients were recorded. Given that each recorded trace spans 4032 ns (2016 samples), and that six traces were recorded per minute during our data-taking run (spanning almost 9 hours), the total recorded exposure amounts to only about 10 ms; a handful of pulses in this exposure thus corresponds to a transient-background rate of the order of several 100 Hz.
## 4 Summary and Outlook
The goal of the NUTRIG project is to develop an autonomous online radio trigger for GRAND. This work presented the principle and strategy of NUTRIG, which consists of the development of a first-level trigger (FLT) at the detection-unit (DU) level, a second-level trigger (SLT) at the array level, and a dedicated radio-emission model of very-inclined air showers. A detailed description was given of the GRAND@Nancay prototype, which is dedicated to the NUTRIG project. In particular, it will be used to test the trigger algorithms of the FLT in field conditions.
A brief analysis was performed to characterize the RFI noise at the GRAND@Nancay site. The observed spectrum showed typical features of short waves, FM, aeronautic communications, digital audio broadcasting, and local emitters. In addition, a handful of RFI transients were recorded, defined as pulses that exceed five standard deviations of a time trace. However, only a rough estimate of the overall RFI-pulse rate could be made; it is of the order of several 100 Hz.
Figure 4: Time trace of a transient RFI event at GRAND@Nancay recorded by DU 96. The top, middle, and bottom panels show the ADC voltages recorded in channels \(X\), \(Y\), and \(Z\), respectively. In each panel, the dashed lines correspond to \(\pm 5\sigma\), where \(\sigma\) is the standard deviation of the trace. In this example, there are six \(\pm 5\sigma\) crossings for channel \(X\), and one \(\pm 5\sigma\) crossing for channel \(Z\). Note that the RFI transient is also slightly visible in channel \(Y\), even though it does not cross the \(\pm 5\sigma\) threshold.
The next step for the NUTRIG project is to further develop candidate FLT algorithms. These algorithms will first be tested at the LPNHE in Paris, where we already showed that we can feed simulated air-shower pulses to an FEB and recover the pulses in the recorded data. Subsequently, they will be tested in GRAND@Nancay, and further optimized. Doing so, we will also determine which FLT information to provide to the SLT algorithm. The complete FLT+SLT methodology will be tested in the mid-term future at GRANDProto300.
## Acknowledgments
This work is part of the NUTRIG project, supported by the Agence Nationale de la Recherche (ANR-21-CE31-0025; France) and the Deutsche Forschungsgemeinschaft (DFG; Projektnummer 490843803; Germany). In addition, this work is supported by the CNRS Programme Blanc MITI (2023.1 268448 GRAND; France) and the Programme National des Hautes Energies (PNHE; France) of CNRS/INSU with INP and IN2P3, co-funded by CEA and CNES. Computations were performed using the resources of the CCIN2P3 Computing Center (Lyon/Villeurbanne, France), a partnership between CNRS/IN2P3 and CEA/DSM/Irfu.
|
2304.10463 | Quantized Hall current in topological nodal-line semimetal | Photocurrent acts as one of measurable responses of material to light, which
has proved itself to be crucial for sensing and energy harvesting. Topological
semimetals with gapless energy dispersion and abundant topological surface and
bulk states exhibit exotic photocurrent responses, such as novel quantized
circular photogalvanic effect observed in Weyl semimetals. Here we find that
for a topological nodal-line semimetal (NLSM) with nodal ring bulk states and
drumhead surface states (DSS), a significant photocurrent can be produced by an
electromagnetic (EM) wave by means of the quantum Hall effect. The Hall current
is enabled by electron transfer between Landau levels (LLs) and triggered by
both the electric field and magnetic field components of an EM wave. This Hall
current is physically connected to an unusually large quantum-Hall conductivity
of the zeroth LLs resulting from quantized DSS. These LLs are found to be
highly degenerate due to the unique band-folding effect associated with
magnetic-field-induced expansion of a unit cell. Furthermore, we observe that
the Hall current induced solely by an in-plane linearly-polarized EM wave
becomes a quantized entity which allows for possible direct measurement of the
DSS density in a topological NLSM. This work paves a way toward designing
high-magnetic-field-sensitivity detection devices for industrial and space
applications, such as the development of self-detection of
current-surge-induced overheating in electronic devices and accurate Earth's
magnetic-anomaly maps for guiding a self-navigating drone or an aircraft. | Po-Hsin Shih, Thi-Nga Do, Godfrey Gumbs, Danhong Huang, Hsin Lin, Tay-Rong Chang | 2023-04-20T17:08:44Z | http://arxiv.org/abs/2304.10463v1 | # Quantized Hall current in topological nodal-line semimetal
###### Abstract
Photocurrent acts as one of measurable responses of material to light, which has proved itself to be crucial for sensing and energy harvesting. Topological semimetals with gapless energy dispersion and abundant topological surface and bulk states exhibit exotic photocurrent responses, such as novel quantized circular photogalvanic effect observed in Weyl semimetals. Here we find that for a topological nodal-line semimetal (NLSM) with nodal ring bulk states and drumhead surface states (DSS), a significant photocurrent can be produced by an electromagnetic (EM) wave by means of the quantum Hall effect. The Hall current is enabled by electron transfer between Landau levels (LLs) and triggered by both the electric field and magnetic field components of an EM wave. This Hall current is physically connected to an unusually large quantum-Hall conductivity of the zeroth LLs resulting from quantized DSS. These LLs are found to be highly degenerate due to the unique band-folding effect associated with magnetic-field-induced expansion of a unit cell. Furthermore, we observe that the Hall current induced solely by an in-plane linearly-polarized EM wave becomes a quantized entity which allows for possible direct measurement of the DSS density in a topological NLSM. This work paves a way toward designing high-magnetic-field-sensitivity detection devices for industrial and space applications, such as the development of self-detection of current-surge-induced overheating in electronic devices and accurate Earth's magnetic-anomaly maps for guiding a self-navigating drone or an aircraft.
+
Footnote †: Corresponding author: _E-mail_: [email protected]
The response of solids to external fields, such as the photocurrent, has been a central topic in solid-state physics. Photocurrent can be triggered by photon absorption through various photoelectric effects such as the photovoltaic [1] and photogalvanic [2] effects. The optical quantum Hall effect (OQHE) has recently emerged as an alternative mechanism for photocurrent generation. The optical quantum Hall current is generated by charge pumping in the magnetic-field-induced Landau levels (LLs) in systems with finite Hall conductance [3; 4; 5]. Up to now, quantum Hall plateaus in the terahertz regime have been investigated in a two-dimensional electron gas system [3] and in graphene [4; 5]. However, these OQHE realizations require an external magnetic field in addition to the incident light. Although an OQHE engendered entirely by an electromagnetic (EM) wave is within reach of recent advances in materials fabrication and terahertz light-source technology, probing the OQHE still remains challenging because the induced current response is relatively weak and sensitive to the EM oscillation. Seeking materials with efficient EM wave-to-current conversion is therefore highly desirable for sensing and energy harvesting.
Topological semimetals (TSM) are promising materials for engendering exotic photocurrents due to their special topological surface and bulk states. TSM are characterized by band crossings in the Brillouin zone (BZ) at or in the vicinity of the Fermi level (\(E_{F}\)). The conventional TSM can be classified into three different categories, namely, Dirac semimetal (DSM), Weyl semimetal (WSM), and nodal-line semimetal (NLSM) [6]. The crossings of the bulk conduction and valence subbands of these systems form, respectively, the Dirac points [6; 7; 8], Weyl points [6; 8; 7], and one-dimensional (1D) nodal lines [6; 8; 9; 10] in the BZ. The TSM support unique surface states, identified as the Fermi-arc surface states in DSM [11] and WSM [12], and the drumhead surface states (DSS) in NLSM [13; 14; 15]. The photocurrent of TSM has been demonstrated to be unique, especially since the discovery of the quantized circular photogalvanic effect in WSM [2], for which the photocurrent depends only on fundamental constants and the monopole charge of a Weyl node.
Magnetic quantization is an important phenomenon which can be exploited to achieve an essential understanding of the topological behaviors of materials. This feature in TSM has enabled the realization of Dirac fermions [16; 17; 18] and of the chiral anomaly in DSM and WSM [19; 20; 21]. Although the magnetic quantization of NLSM was previously predicted [22; 23] and discovered [24], it has been limited to bulk LLs. Meanwhile, the DSS LLs, which could bear crucial topological fingerprints of the system, remain largely unknown. Magnetic quantization is the part of the QHE that describes the high-order response of a system to an external magnetic field. In general, a full theory of the electron response to higher order in external fields can be established via the Berry curvature and a first-order correction to the band energy due to the orbital magnetic moment. The derivation of the second-order field correction to the Berry curvature of Bloch electrons in external EM fields [25] is essential for the study of response functions. So far, enormous attention has been devoted to the nonlinear effects of high-order electric fields, such as the nonlinear Hall effect in the absence of a magnetic field up to second order [26] and third order [27]. It is now natural to ask whether the high-order responses to both magnetic and electric fields can be addressed, and whether they lead to any novel physical phenomena. Here, we answer these questions by presenting the optical quantum Hall current of high-order magnetic and electric fields in NLSM.
The main achievements of this paper are threefold: (1) We observe unique quantization phenomena of the NLSM DSS, with an unusually large, field-dependent surface quantum Hall conductivity (QHC) that has never been observed in the explored materials. By introducing a concept dealing with field-induced band folding, we uncover a fundamental understanding of the magnetic quantization mechanism in solids. We find that the degeneracies of LLs directly reflect the distribution of DSS within the folded BZ. (2) We derive semiclassically a general formula for the high-order current density and discover its relationship with the Berry curvature. This formula serves as an important basis for the investigation of the photocurrent of materials. (3) We find a novel quantized signature in physics, namely the optical Hall current induced on the DSS by an applied linearly polarized EM field. The current response is determined only by the fundamental constants and the area of the DSS.
We model the bulk NLSM by single atoms with two orbitals in the simple cubic unit cell, as seen in Fig. 1a. The surface states can be formed in a slab containing multi-layers along the [001] axis. Figure 1b shows the bulk BZ with high symmetry points on the \(k_{z}=0\) plane [\(\Gamma\), X, S, Y] and \(k_{z}=\pi\) plane [Z, U, R, T] as well as the (001)-projected surface BZ. We assume that the system has spin degeneracy. The minimum tight-binding Hamiltonian \(2\times 2\) matrix within the \((s,p_{z})\) basis for the bulk system can be written as
\[H=\begin{bmatrix}t(f_{1}+f_{2})+t_{z}f_{3}^{+}+\epsilon_{0}&t_{z^{\prime}}f_{3 }^{-}\\ -t_{z^{\prime}}f_{3}^{-}&t(f_{1}+f_{2})+t_{z}f_{3}^{+}-\epsilon_{0}\end{bmatrix} \tag{1}\]
In Eq. (1), \(f_{1}=e^{i\mathbf{k}.\mathbf{r_{1}}}+e^{-i\mathbf{k}.\mathbf{r_{1}}}\), \(f_{2}=e^{i\mathbf{k}.\mathbf{r_{2}}}+e^{-i\mathbf{k}.\mathbf{r_{2}}}\), and \(f_{3}^{\pm}=e^{i\mathbf{k}.\mathbf{r_{3}}}\pm e^{-i\mathbf{k}.\mathbf{r_{3}}}\) are the phase terms with \(\mathbf{k}\) the wave vector and \(\mathbf{r_{1,2,3}}\) the unit vectors along \(x\), \(y\), \(z\) directions. The hopping integral \(t=-2\) eV is for the horizontal interactions, while \(t_{z}=2\) eV and \(t_{z^{\prime}}=1\) eV are for the interactions between the same and different orbital domains in the vertical direction, respectively. \(\epsilon_{0}=10\) eV is the site energy.
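As an illustration of how Eq. (1) can be evaluated numerically, the following minimal Python/numpy sketch builds the \(2\times 2\) Bloch Hamiltonian with the parameters quoted above and diagonalizes it along a \(k\)-path; taking a unit lattice constant (so that \(f_{1}+f_{2}=2(\cos k_{x}+\cos k_{y})\), \(f_{3}^{+}=2\cos k_{z}\), \(f_{3}^{-}=2i\sin k_{z}\)) is an assumption made only for this sketch.

```python
import numpy as np

# Parameters quoted in the text: t = -2 eV, t_z = 2 eV, t_z' = 1 eV, eps0 = 10 eV
t, tz, tzp, eps0 = -2.0, 2.0, 1.0, 10.0

def H_bulk(kx, ky, kz):
    """2x2 Bloch Hamiltonian of Eq. (1) in the (s, p_z) basis."""
    d0 = 2.0 * t * (np.cos(kx) + np.cos(ky)) + 2.0 * tz * np.cos(kz)
    f3m = 2.0j * np.sin(kz)                      # f_3^- = 2i sin(k_z)
    return np.array([[d0 + eps0,  tzp * f3m],
                     [-tzp * f3m, d0 - eps0]])

# Band energies along the S-X line of the k_z = 0 plane, S = (pi, pi, 0), X = (pi, 0, 0)
kys = np.linspace(np.pi, 0.0, 201)
bands = np.array([np.linalg.eigvalsh(H_bulk(np.pi, ky, 0.0)) for ky in kys])
```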
The calculated bulk band structure along the high symmetry points is shown in Fig. 1c. The intersection of the conduction and valence bands forms a nodal ring encircling S on the \(k_{z}=0\) plane, giving the band crossing along the S-X and S-\(\Gamma\) directions in Fig. 1c. On the contrary, the conduction and valence bands disperse apart elsewhere in the whole first BZ. The Dirac lines feature four-fold degeneracies, associated with the band crossing and equivalence between spin states. The slab band structure, shown in Fig. 1d, consists of DSS around the \(\bar{S}\) point and numerous highly dispersive bulk bands. The DSS are bounded by the projected nodal ring and almost dispersionless energy band. They are four-fold degenerate where both the electron and hole surface states coexist with spin degeneracy.
When a 2D condensed-matter system is subjected to a perpendicular magnetic field \(\mathbf{B}=(0,0,B)\), electrons follow quasi-classical cyclotron motion and the electronic states are quantized into LLs. The application of the magnetic field changes the lattice periodicity so that the primitive unit cell is extended along the \(x\) direction. In particular, the field-induced Peierls phase of the form \(G_{R}=(2\pi/\Phi_{0})\int\limits_{R}^{r}\mathbf{A}\cdot d\mathbf{\ell}\) needs to be included in the Hamiltonian [28]. Here, \(\mathbf{A}=(0,Bx,0)\) is the vector potential in the Landau gauge and \(\Phi_{0}=h/e\) is the flux quantum. The Peierls phases repeat periodically along the extended unit cell of the lattice when the total magnetic flux equals \(\Phi_{0}\).
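To make the Peierls substitution concrete, the sketch below builds the magnetic Bloch Hamiltonian of a generic square lattice (not the full slab model of this work) at a rational flux \(\Phi/\Phi_{0}=p/q\) per plaquette in the Landau gauge; the hopping amplitude and lattice are illustrative assumptions, chosen only to show how the Peierls phase enters the hoppings and how the unit cell is extended to \(q\) columns.

```python
import numpy as np

def magnetic_bloch_hamiltonian(p, q, kx, ky, t=1.0):
    """Square-lattice hopping Hamiltonian at flux p/q per plaquette,
    Landau gauge A = (0, Bx, 0): y-hoppings of column m acquire the Peierls
    phase exp(i 2*pi*(p/q)*m); the magnetic unit cell spans q columns."""
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        # y-hoppings: diagonal in the column index, carry the Peierls phase
        H[m, m] += -2.0 * t * np.cos(ky + 2.0 * np.pi * p / q * m)
        # x-hoppings connect neighbouring columns; the wrap-around bond carries
        # the Bloch phase accumulated across the q-column magnetic unit cell
        phase = np.exp(1j * kx * q) if m == q - 1 else 1.0
        H[m, (m + 1) % q] += -t * phase
        H[(m + 1) % q, m] += -t * np.conj(phase)
    return H

# Spectrum at the small flux Phi = Phi_0/56 used as an example in the text
energies = np.linalg.eigvalsh(magnetic_bloch_hamiltonian(1, 56, kx=0.0, ky=0.0))
```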
The \(\Phi\)-dependent spectrum of DSS LLs is demonstrated in Fig. 2. \(\Phi=B\mathcal{S}\) is the magnetic flux per unit cell with \(\mathcal{S}\) being the area of the primitive unit cell in real space. We observe that the quantized LLs of DSS behave similarly to the zeroth LLs of graphene and the surface states of a topological insulator (TI). For these Dirac systems [16; 29; 30], it
was previously shown that the flat zeroth LLs at the Dirac-point energy are independent of magnetic field strength. The LLs at higher energy, which arise from the linear band, acquire a square-root dependence on both the LL index and the magnetic field. For NLSM, the magnetic quantization of DSS yields a group of field-independent and non-dispersive zeroth LLs, which only exist at \(E_{F}=0\). Note that the surface LLs of NLSM are mainly produced from DSS, in contrast to the zeroth LLs of graphene and the surface states of TI, which are quantized partially from nearby states. With the increase of magnetic field, the zeroth LLs gradually deform into bulk LLs at critical fields \(\Phi_{N_{L}}=\frac{A_{DSS}}{A_{0,BZ}}\frac{\Phi_{0}}{N_{L}-1/2}\), in which \(N_{L}\) is the number of degeneracies of the zeroth LLs and \(A_{DSS}/A_{0,BZ}\) defines the ratio of the DSS area over the BZ area at zero \(\mathbf{B}\) field. Consequently, the DSS LL degeneracies decrease by the number of peeled-off zeroth LLs. In fact, the surface LLs become highly degenerate only at low fields. This characteristic is unique for the DSS which, to our knowledge, has never been observed in any other explored materials, and it plays a critical role in transport properties. The LL merging mechanism can be well understood through the band-folding effect [31; 32], which is accompanied by an enlarged real-space unit cell and a reduced size of the first BZ (details in Section I of Supplementary materials).
The probability function \(|\Psi|^{2}\), defined as the square of the magnitude of the wave function, is useful for identifying LLs. Figure 3a depicts \(|\Psi|^{2}\) for both surface and bulk LLs for \(\Phi=\Phi_{0}/56\) on both \(\alpha\) and \(\beta\) orbitals. Here, \(|\Psi|^{2}\) exhibits well-behaved oscillatory modes, and the number of zero nodes determines the corresponding LL index \(n\). Since each pair of conduction and valence LLs acquires the same \(|\Psi|^{2}\), there are only four different oscillation modes, labeled by \(n=0,\,1,\,2,\,3\) for the eight-fold-degenerate zeroth LLs. Based on \(|\Psi|^{2}\) of the LLs at zero energy, it is clear that these modes are dominated by the outer-three layers of the slab, i.e., they are the surface states. At higher energies,
Figure 1: **a,** Crystal structure of a NLSM. **b,** First Brillouin zone with high symmetry points for the bulk (lower) and slab (upper). For a 3D system, the first BZ is a cubic centered at \(\Gamma\) point, while it changes to a square centered at \(\bar{\Gamma}\) for a 2D slab. The red circles indicate the Dirac nodal rings of energy bands. **c,** Bulk band structure along the high symmetry points. **d,** Band structure of a 25-layer slab consists of (001) surface bands at zero energy and nearby bulk bands.
LLs (e.g., the \(n=4\) LL) have nonvanishing \(|\Psi|^{2}\) on all layers, implying their bulk character. Therefore, the distribution of \(|\Psi|^{2}\) can be utilized to separate the surface LLs from the bulk LLs in the system. This is considered a key step in studying the QHC of the DSS.
The QHE, one of the most essential electronic transport signatures of topological materials, has a robust connection with magnetic quantization. The QHC is well quantized when \(E_{F}\) lies in the gap between two LLs. The \(E_{F}\)-dependent QHC, shown in Fig. 3b, is calculated by employing the Kubo formula in the form
\[\sigma_{xy}=\frac{ie^{2}\hbar}{S}\sum_{n}\sum_{n^{\prime}\neq n}(f_{n}-f_{n^{ \prime}})\frac{\langle\Psi_{n}|\hat{\mathbf{u}}_{x}|\Psi_{n^{\prime}}\rangle \langle\Psi_{n^{\prime}}|\hat{\mathbf{u}}_{y}|\Psi_{n}\rangle}{(E_{n}-E_{n^{ \prime}})^{2}+\Gamma_{0}^{2}}\,. \tag{2}\]
In this notation, \(E_{n}\) is the LL energy and \(|\Psi_{n}\rangle\) is the corresponding \(n\)th-LL wave function. They are evaluated from the tight-binding Hamiltonian in Eq. (1) and illustrated in Fig. 3a. \(\hat{\mathbf{u}}_{x,y}\) are the velocity operators, \(f_{n}\) is the
Figure 3: **a,** Selected spatial dependence of probability functions for LLs at low energies in a 25-layer slab. Here, \(\alpha\) and \(\beta\) label two different orbitals. **b,**\(E_{F}\) dependence of QHC at various \(\Phi\)’s.
Figure 2: The magnetic-flux-dependent LL energy spectrum for a 25-layer slab. The surface and bulk states are illustrated by yellow and blue dots, respectively. The surface spectral weights are defined as the states dominated by outer three layers of slab.
Fermi-Dirac distribution function, and \(\Gamma_{0}\) (\(\sim 1\) meV) is the broadening factor. The calculated QHC displays step features in which the plateaus correspond to vertical transitions from occupied to unoccupied LLs. We find an unusually large QHC step for the highly degenerate zeroth LLs via the relationship \(\sigma_{xy}=C(e^{2}/h)=2N_{L}(e^{2}/h)\), which implies a huge Chern number \(C\) as well as an enormous Berry curvature. On the contrary, steps of \(2e^{2}/h\) are obtained for the bulk LLs due to the two-fold spin degeneracy. Such a substantial variation of the QHC over a certain energy range has never been observed in the explored materials. For NLSM, the unique distribution of the DSS leads to the relation
\[N_{L}\cong\frac{A_{DSS}}{A_{B,BZ}}=\frac{A_{DSS}}{A_{0,BZ}}\frac{\Phi_{0}}{B \mathcal{S}}, \tag{3}\]
in which, \(A_{B,BZ}\) is the folded BZ area under \(\mathbf{B}\) field. This approximation is made within the limit of weak \(\mathbf{B}\) field (details in Section I of Supplementary Material). In fact, the occupation of DSS in the first BZ can be manipulated by tuning the tight-binding parameters. Explicitly, by increasing the vertical hopping terms \(t_{z}\) and \(t_{z^{\prime}}\), the area of DSS is enhanced accordingly, leading to the change of critical fields for LLs merging and QHC steps. Our modeling and computations reveal similar features in energy bands, LL spectra, and QHC at \(E_{F}=0\) for various sets of chosen parameters.
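The Kubo expression of Eq. (2) can be evaluated by direct matrix algebra once the LL energies and eigenvectors are known. The sketch below is a zero-temperature illustration that assumes the velocity operators \(\hat{\mathbf{u}}_{x,y}\) are supplied as dense matrices in the same basis as the eigenvectors; it returns the double sum times \(i/S\), which must still be multiplied by \(e^{2}\hbar\) to reproduce the prefactor of Eq. (2).

```python
import numpy as np

def hall_conductivity_sum(E, psi, ux, uy, EF, S, gamma=1e-3):
    """Double sum of Eq. (2) at zero temperature.
    E: LL energies, psi: eigenvectors as columns, ux/uy: velocity operators
    as matrices, EF: Fermi energy, S: system area, gamma: broadening Gamma_0.
    Multiply the returned value by e^2 * hbar to obtain sigma_xy."""
    f = (E < EF).astype(float)            # Fermi-Dirac occupation at T = 0
    Ux = psi.conj().T @ ux @ psi          # <Psi_n| u_x |Psi_n'>
    Uy = psi.conj().T @ uy @ psi          # <Psi_n| u_y |Psi_n'>
    dE = E[:, None] - E[None, :]
    df = f[:, None] - f[None, :]          # vanishes for n = n'
    return ((1j / S) * np.sum(df * Ux * Uy.T / (dE**2 + gamma**2))).real
```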
When a 2D system is subject to a linearly polarized EM field, a current flows across the material as a result of the OQHE or photoelectric effects [1; 2; 3; 4; 5]. The high-order anomalous equilibrium (Berry-Hall) current component can be expressed semiclassically as (see Section II of Supplementary Materials)
\[\mathbf{j}_{1}(t\,|\,E,B) = \frac{1}{\hbar}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}f_{0}[\varepsilon _{n}^{(0)}(\mathbf{k}\,|\,B)-\mu_{e}] \tag{4}\] \[\times \{\mathbf{\nabla}_{\mathbf{k}}[\varepsilon_{n}^{(0)}(\mathbf{k}\,|\,B)+e\mathbf{ E}(t)\cdot\mathbf{\mathcal{A}}_{n}^{(0)}(\mathbf{k}\,|\,B)\] \[+e\mathbf{E}(t)\cdot[\overleftrightarrow{\mathbf{\nabla}_{n}}(\mathbf{k}\,| \,B)\cdot\mathbf{E}(t)]]+e\mathbf{E}(t)\times\mathbf{\Omega}_{n}^{(0)}(\mathbf{k}\,|\,B)\] \[+e\mathbf{E}(t)\times[\mathbf{\nabla}_{\mathbf{k}}\times\overleftrightarrow{ \mathbf{\partial}_{n}}(\mathbf{k}\,|\,B)\cdot\mathbf{E}(t)]\}.\]
Here, \(\varepsilon_{n}^{(0)}(\mathbf{k}\,|\,B)\) represents the LL energies, \(\overleftrightarrow{\mathbf{\partial}_{n}}(\mathbf{k}\,|\,B)\) is the Berry-connection polarizability tensor, \(\mathbf{\mathcal{A}}_{n}^{(0)}(\mathbf{k}\,|\,B)\) is the unperturbed Berry connection, and \(\mathbf{\Omega}_{n}^{(0)}(\mathbf{k}\,|\,B)\) is the unperturbed Berry curvature of Bloch electrons. The anomalous thermal-equilibrium current includes both the parallel (\(\mathbf{J}_{\mathbf{L}}\)) and perpendicular (\(\mathbf{J}_{\mathbf{T}}\)) components with respect to the direction of the EM field, as illustrated in Fig. 4. They are associated with the quantized transverse conductivity and the continuous longitudinal conductivity, for unique all-electron thermal-equilibrium transport. Note that both \(\varepsilon_{n}^{(0)}(\mathbf{k}\,|\,B)\) and \(\overleftrightarrow{\mathbf{\partial}_{n}}(\mathbf{k}\,|\,B)\) are independent of \(k\), thus the first, third, and fifth terms vanish (see Supplementary Material). The second term cannot be physically observable since the Berry connection \(\mathbf{\mathcal{A}}_{n}^{(0)}(\mathbf{k}\,|\,B)\) is a gauge-dependent variable. In general, the substantial \(B\)-dependent Berry curvature \(\mathbf{\Omega}_{n}^{(0)}(\mathbf{k}\,|\,B)\) plays the key role in determining the current response of the system. On the other hand, \(\mathbf{\Omega}_{n}^{(0)}(\mathbf{k}\,|\,B)\) only contributes to the longitudinal current. As a matter of fact, the \(B\)-dependent anomalous optical Hall current flowing in NLSM is dominated by \(\mathbf{J}_{\mathbf{L}}\), which can be written as (details in Section III of Supplementary Materials)
\[\mathbf{J}_{\mathbf{L}}=\sigma_{xy}E\vec{n}\cong 2\frac{A_{DSS}}{A_{0,BZ}}\frac{\Phi_{0 }}{\mathcal{S}}\frac{ce^{2}}{h}\vec{n}. \tag{5}\]
In this notation, \(\vec{n}\) is the direction of the incident EM field. It is clear that \(\mathbf{J}_{\mathbf{L}}\) depends only on the fundamental constants (\(e\), \(h\), \(c\), \(\Phi_{0}\)) and the intrinsic characteristics of the NLSM sample. In other words, the EM wave-generated Hall current in NLSM is a quantized signature. So far, the quantized current response has also been predicted in
Figure 4: Visual illustration of longitudinal (\(\mathbf{J}_{\mathbf{L}}\)) and transverse (\(\mathbf{J}_{\mathbf{T}}\)) currents in a NLSM under an EM field.
WSM, in particular, the injection current depends only on the fundamental constants and the topological charge of Weyl nodes [2]. Such current is induced by the circular photogalvanic effect under a circularly polarized light. The condition for quantization of current response in WSM is the breaking of inversion and mirror symmetries, different from the time-reversal symmetry breaking in NLSM under QHE.
Experimentally, a current density of about \(\frac{A_{DSS}}{A_{0,BZ}}\frac{ec}{\mathcal{S}}\) can be observed in NLSM when a suitable sample is placed under an EM field. This is up to five orders of magnitude larger than that of graphene under the same conditions [33]. Such a topologically protected quantized Hall current remains unchanged regardless of the incident in-plane linearly polarized EM field, as long as the DSS is quantized into LLs. For the experimental setup, the sample needs to be sufficiently large for the \(\mathbf{B}\)-induced cyclotron motion of electrons to form within the magnetic length \(l_{B}=\sqrt{\hbar/eB}\). For example, an EM field with \(E=3\ \times\ 10^{7}\) (V/m) and \(B=0.1\) T requires a minimum lattice sample of 400 \(\times\) 400 nm\({}^{2}\) for the observation of the current [34]. From an application perspective, the significant and quantized optical Hall current can be exploited in technological applications and in designing high-sensitivity detection devices. The robust connection between the current density and the DSS enables the direct measurement of the density of DSS in a topological NLSM. Furthermore, the strong dependence of the surface QHC on the \(\mathbf{B}\) field paves a way toward the development of \(\mathbf{B}\)-sensitive detectors for industrial and space applications. In particular, such a concise physics picture can be employed for developing anomalous-Hall-effect-based compact and ultra-sensitive magnetometers [35] for measuring a weak magnetic field. Consequently, anomaly measurements of Earth's magnetic field, aided by subsequent machine-learning processing for the enhancement of edge contrast, enable extracting ground-surface profile data extensively, quickly, and accurately. This process can then be followed by matching the obtained ground-surface profile to accurate magnetic-anomaly maps, guiding a self-navigating drone or aircraft with reasonable accuracy in areas without accessible GPS signals; the same sensitivity is also applicable to the self-detection of overheating due to current surges in automobiles or power-supply networks based on the Faraday effect.
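As a quick numerical check of the sample-size estimate quoted above, the magnetic length can be evaluated directly (SI constants; illustrative sketch only):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

def magnetic_length(B):
    """l_B = sqrt(hbar / (e B)) in metres."""
    return np.sqrt(hbar / (e * B))

# For B = 0.1 T this gives l_B of roughly 81 nm, so the quoted
# 400 x 400 nm^2 sample spans about five magnetic lengths per side.
print(magnetic_length(0.1) * 1e9)   # in nm
```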
In conclusion, we have shown that the novel EM wave-induced QHE of the topological NLSM is a quantized response based on the novel magnetic quantization of DSS and its connection with field-induced band folding. We found the unusually large, field-dependent surface QHC induced by the extremely high LL degeneracy. We set an important groundwork for the study of photocurrent by deriving a general semiclassical formula for the high-order photocurrent. This work has established a new hallmark for studies of NLSM, which could play a critical role in next-generation technology and high-performance device applications.
## Acknowledgements
D.H. would like to acknowledge the financial support from the Air Force Office of Scientific Research (AFOSR). G.G. would like to acknowledge the support from the Air Force Research Laboratory (AFRL) through Grant No. FA9453-21-1-0046. T.-R.C. was supported by the Young Scholar Fellowship Program from the MOST in Taiwan, under a MOST grant for the Columbus Program, No. MOST110-2636-M-006-016, NCKU, Taiwan, and the National Center for Theoretical Sciences, Taiwan. Work at NCKU was supported by the MOST, Taiwan, under Grant No. MOST107-2627-E-006-001 and the Higher Education Sprout Project, Ministry of Education to the Headquarters of University Advancement at NCKU. T.-N.D. would like to thank the MOST of Taiwan for the support through Grant No. MOST111-2811-M-006-009.
|
2310.04589 | Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on
Linux | We present Shufflecake, a new plausible deniability design to hide the
existence of encrypted data on a storage medium making it very difficult for an
adversary to prove the existence of such data. Shufflecake can be considered a
``spiritual successor'' of tools such as TrueCrypt and VeraCrypt, but vastly
improved: it works natively on Linux, it supports any filesystem of choice, and
can manage multiple volumes per device, so to make deniability of the existence
of hidden partitions really plausible.
Compared to ORAM-based solutions, Shufflecake is extremely fast and simpler
but does not offer native protection against multi-snapshot adversaries.
However, we discuss security extensions that are made possible by its
architecture, and we show evidence why these extensions might be enough to
thwart more powerful adversaries.
We implemented Shufflecake as an in-kernel tool for Linux, adding useful
features, and we benchmarked its performance showing only a minor slowdown
compared to a base encrypted system. We believe Shufflecake represents a useful
tool for people whose freedom of expression is threatened by repressive
authorities or dangerous criminal organizations, in particular: whistleblowers,
investigative journalists, and activists for human rights in oppressive
regimes. | Elia Anzuoni, Tommaso Gagliardoni | 2023-10-06T21:06:06Z | http://arxiv.org/abs/2310.04589v3 | # Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux
###### Abstract
We present Shufflecake, a new plausible deniability design to hide the existence of encrypted data on a storage medium making it very difficult for an adversary to prove the existence of such data. Shufflecake can be considered a "spiritual successor" of tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and can manage multiple volumes per device, so to make deniability of the existence of hidden partitions really plausible. Compared to ORAM-based solutions, Shufflecake is extremely fast and simpler but does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that are made possible by its architecture, and we show evidence why these extensions might be enough to thwart more powerful adversaries.
We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance showing only a minor slowdown compared to a base encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes.
Keywords: Shufflecake, TrueCrypt, VeraCrypt, plausible deniability, privacy, forensics, disk encryption, LUKS, dm-crypt, cryptsetup

A 15-page abstract of this work appears (with the same title) in the proceedings of the _ACM Conference on Computer and Communications Security (CCS) 2023_. This is the authors' full version. This document supersedes any previous versions.
**Table of Contents**
* 1 Introduction
* 1.1 Motivation
* 1.2 Previous Work
* 1.3 Limitations of Existing Solutions
* 1.4 Our Contribution
* 1.5 Acknowledgements
* 2 Preliminaries
* 2.1 Cryptographic Primitives
* 2.2 Full Disk Encryption
* 2.3 Plausible Deniability
* 3 TrueCrypt and VeraCrypt
* 3.1 Design
* 3.2 Operational Model
* 3.3 Security
* 3.4 Other Limitations
* 4 The Shufflecake Scheme
* 4.1 Design
* 4.2 Operational Model
* 4.3 Security
* 5 Implementation and Benchmarks
* 5.1 Structure of the Implementation
* 5.2 Space Utilisation
* 5.3 Benchmarks
* 6 Conclusions and Future Directions
* 6.1 Crash Consistency
* 6.2 Multi-Snapshot Security
* 6.3 Shufflecake "Lite"
* 6.4 Corruption Resistance
* 6.5 Use of Disk Metadata
* 6.6 Reclaiming Unused Slices
* 6.7 Unbounded Number of Volumes
* 6.8 Hidden Shufflecake OS
## Introduction
Privacy of personal and sensitive data is, now more than ever, a topic of major public interest. In today's heavily interconnected world where data are, often by default, entrusted to an online third party, the last bastion of data confidentiality is local storage, as the physical disconnection greatly helps in reducing the room for abusive access. Even there, however, some protection measures need to be implemented, to guard against adversaries who might close that gap. The most trivial example of such an adversary is a thief stealing a user's personal hard disk and reading its raw contents; this is a very simple and well-studied threat model, for which many robust disk encryption solutions exist.
However, disk encryption alone is not enough to handle adversaries empowered by repressive laws or other, less legal methods (e.g., "rubber-hose"). Unlike the previous scenario, these adversaries gain more than simple "offline" access to the disk: they are in a position of power, which they can use to directly and aggressively confront the user about the contents of the protected storage, and by means of (physical, legal, psychological) coercion, they can obtain the encryption keys to any encrypted content identifiable on the user's device. The security goal in this scenario, then, becomes to still retain secrecy of some selected, "crucial" data on the disk, by making the presence of such data not even identifiable, thus allowing the user to make up a credible lie about the storage contents. This is exactly the aim of _plausible deniability (PD)_, a powerful security property, enabling users to hide the existence of sensitive information on a system under inspection by overreaching or coercive adversaries.
In the context of secure storage, PD refers to the ability of a user to plausibly deny the existence of certain data stored on a device even under interrogation or forensic inspection of the physical device. The underlying idea is that, if the adversary cannot conclude anything about the existence of hidden sensitive data, they have no motivation to further continue the coercion, thereby (hopefully) limiting the damage for the user. PD was first proposed in 1998 [1] and, since then, many different PD solutions have emerged, attempting to balance the security-efficiency trade-off. One of the most popular PD solutions was TrueCrypt [46], first released in 2004, discontinued in 2014, and replaced by its backward-compatible and technically similar successor VeraCrypt [49].
TrueCrypt and VeraCrypt remain the most well-known PD disk encryption tools available, probably because of the large open source community around them and their good performance, but they suffer from many drawbacks that have been left unaddressed for many years now, both in terms of security and operational model, such as: the possibility of only having _one_ layer of extra secrecy, limited filesystem support, and limited functionalities on Linux. In this work we present Shufflecake, a novel disk PD tool that aims at solving these and other limitations while still achieving a good security-performance tradeoff.
### Motivation
Albeit less extensively covered by the existing literature, coercive adversarial attacks unfortunately represent many diverse real-world situations. Most of the demonstrable facts, in this regard, concern national security provisions in countries like the USA, France, or the UK, that allow prosecutors to legally oblige citizens to disclose the passwords to their encrypted storage devices [21, 25, 38, 40, 48], under threats of harsh legal or economic penalties for non-compliance. These provisions are reported to have been often misused, sometimes to the point where people's rightful privacy has been arguably trampled on [2, 4, 8, 50].
The relative abundance of such reports coming from Western countries, however, is likely due to the comparatively attentive and critical public oversight on the government's operations. In countries with a less-developed system of checks and balances, the precise extent to which the sensitive data of activists, journalists, dissidents, and oppressed minorities are violated, can only be hinted at by the few sporadic cases that occasionally make it to the international headlines [7, 44]. These kinds of coercive attacks are not merely "tinfoil-hat paranoia", but a real-world concern.
### Previous Work
Different approaches to PD on storage have been proposed, starting from the layer one chooses to intervene at. Digital storage, in fact, is composed of many stacked layers, from the topmost "logical" layer (the filesystem) down to the more physical one. Such a layered structure complicates the security analysis, as different PD solutions focus on different layers, each of them leading to different approaches with pros and cons. Certain solutions work at the filesystem layer [35, 28]; they have to implement a rich interface, made of complex file- and directory-oriented methods (fileOpen, fileRead, mkDir...). Other schemes choose instead to go at the lowest level and modify the FTL [23] (flash-translation layer, for SSDs), but this approach clearly leads to highly vendor-specific solutions. Security of solutions designed for a specific layer might be defeated by adversaries with access to lower layers.
A versatile approach for a robust PD solution is arguably to operate at the _block layer_, whereby the scheme exposes a _block device_ interface (a common abstraction layer used by many operating systems to represent storage devices as arrays of fixed-size data _blocks_), providing just a bRead and a bWrite method. This is the approach used by solutions like TrueCrypt, and also by Shufflecake; in the remainder of this work, we will only focus on block-layer solutions. In this framework, a single underlying _disk_ or _device_ is formatted to host one or more _volumes_ (logical storage units, usually represented as virtual block devices), each encrypted with a different (usually password-derived) symmetric key. When confronted with the adversary, the user surrenders the passwords to some of these volumes, called _decoy_ volumes (because in some use cases they might contain deceptively innocent data): PD security guarantees that, even after these passwords have been given up, inspection of the disk by the adversary still yields
no clue whatsoever hinting at the presence of some further, _hidden_ volumes. The intuition behind the formal definition of these guarantees, is that the adversary cannot distinguish between the case where all the passwords have been surrendered, and the case where there is still undisclosed secret information left.
Security modelsEarly PD literature has focused mostly on _single-snapshot adversaries_, who are assumed to only check the storage device once. This was considered a natural assumption: in the typical scenario, an activist or journalist is stopped at a checkpoint or arrested and interrogated one time, and her electronics confiscated and analyzed. Provided she manages to escape, she will be on high alert for future investigation, and in particular she will _refresh_ the PD protection in use through some specific procedure (e.g., by reformatting the hard disk, or by buying a new device). So, in case of a second check, the adversary will de facto face a completely new _instance_ of PD storage, therefore falling back to the single-snapshot case. This is the threat model addressed by solutions like TrueCrypt and VeraCrypt.
The safety of the single snapshot threat model, however, has been questioned in the literature over time, not only because it relies on good user security practice, but also because the technological evolution of storage brought a new issue on the table. In modern devices, especially solid-state disks (SSDs), overwriting a logical sector often results in the underlying physical sector being simply marked as "unused" rather than being really overwritten, thereby leaving "traces", or "snapshots" of the data content at previous points in time. This in turn can (in theory) allow the adversary to break plausible deniability even with a _single_ inspection, because by analysing these traces that are left on the device one can see that content at certain locations has changed; since empty, unused space should not change over time, the presence of hidden information therein can be betrayed. This is the scenario considered in _multi-snapshot security_ models.
One could argue that multi-snapshot attacks are likely to be very complex, to the point that 100% evidence of the presence of a hidden volume based only on past sector traces is unlikely to be reached, and an accusation in this sense might not stand in court. In fact, we are not aware of a _single_ case in public literature of a conviction due to multi-snapshot attacks. On the contrary, there are many documented cases [36, 37, 47] where even a simple system such as TrueCrypt was enough to grant acquittal of a suspect. This is not to say that single- and multi-snapshot security are equivalent: the latter is stronger. However, one should question what price it is reasonable to pay (in terms of performance, etc) to achieve this stronger security.
ORAMsMulti-snapshot attacks are a well-known issue in PD systems (TrueCrypt and derivatives are also vulnerable in this sense) and designing countermeasures turns out to be challenging. As of today, only a few constructions [10] achieve multi-snapshot security, but at a hefty performance cost that makes them not practical for most use cases. Most of these solutions are based on _oblivious random access machines (ORAMs)_. ORAMs are cryptographic schemes that aim at hiding the access patterns (in addition to the data content itself) of a trusted
agent accessing an untrusted storage. The connection between ORAMs and PD has been investigated since 2014 with the HiVE construction [6]. In a nutshell, the idea is that if we use an ORAM to access a device, then nobody, not even a run-time backdoor in the device firmware, can know which (logical) location we access and how, thereby providing a solid method for implementing PD. However, ORAMs are extremely slow: it is known [26] that the bandwidth overhead of any secure ORAM of size \(n\) is \(\Omega(\log(n))\). The HiVE paper circumvented this problem with the following observation: If we are not worried by run-time backdoors in the device firmware, but are only concerned about "traditional" multi-snapshot adversaries, i.e. post-arrest investigation of the device physical layer, then we do not need a fully-fledged ORAM, because read operations do not change the state of the device. So all we need is a _"write-only" ORAM (WoORAM)_ that only obfuscates write requests. The advantage is that there is no known efficiency bound for WoORAMs, and in fact existing WoORAM constructions seem to be slightly better than full ORAMs. The WoORAM approach sparked a whole new line of research in multi-snapshot resistant PD solutions [9, 39], and it has been proven [10] that resistance to multi-snapshot security under certain assumptions is _equivalent_ to the use of WoORAMs.
### Limitations of Existing Solutions
So far, the landscape of available PD solutions presents many gaps, both in usability and in security, a fact also hinted at by the relatively scarce adoption of such solutions. By far the most widespread today is VeraCrypt, which comes with many limitations. WoORAM-based techniques have been studied in the last few years as promising alternatives to address TrueCrypt and VeraCrypt's security issues. However, it is important to stress that even the most performant WoORAM-based schemes are still very slow or wasteful. To put this in perspective: HiVE has a slowdown of roughly 200x I/O throughput and wastes 50% of the disk space, while some recent constructions such as DetWoOram [39] reach a slowdown of "only" 5x but at the cost of wasting 75% of the disk space. This leaves us with the dilemma of either choosing single-snapshot, efficient solutions with limited security, or WoORAM solutions with unacceptable performance loss and arguably stronger security.
Moreover, WoORAM solutions themselves might not be bulletproof. In fact, we believe that the idea that read requests do not change the underlying state of the physical device is a somewhat strong assumption, and hard to justify with modern, complex SSDs that might, for example, cache read requests in some undocumented memory area of the firmware, or store read data in an ad-hoc buffer to improve performance, etc.
Another big problem of many plausible deniability solutions (including TrueCrypt) is that the OS itself (or other applications installed therein) can unintentionally leak to an adversary the presence of hidden data when a hidden volume is unlocked. This can happen, for example, through the OS logging disk events, search agents indexing files within the hidden volume when it is unlocked, or even applications such as image galleries or document readers caching previews
of opened documents. Customizing the OS' behavior in such a way to avoid these pitfalls is an almost hopeless task [11]. A proposed solution to this problem is to have the OS itself _inside_ a hidden volume, which is the idea that led to the concept of "hidden OS" on TrueCrypt. However, as far as we know, TrueCrypt (and VeraCrypt) remain the only implementation of this idea, limited to the Windows OS. Overall, we can say that a versatile PD solution able to balance security and usability has been sorely missing for years, especially for Linux, where no really practical solution exists.
### Our Contribution
In this work we present _Shufflecake_, a novel PD scheme that aims at striking a balance between the efficiency of TrueCrypt and the security of WoORAM-based solutions. Shufflecake operates at the block device layer, like TrueCrypt, but with important improvements:
1. It offers a virtually unlimited number of hidden volumes per-device, arranged hierarchically: for the user it is sufficient to unlock one volume, and all the "less secure" volumes will be unlocked automatically. This improves user experience and operational security, as we will see in Section4.2.
2. Unlike TrueCrypt, Shufflecake is _filesystem-agnostic_, meaning that all its features are available regardless of the filesystem chosen by the user.
3. It works natively on Linux, and can be integrated with the kernel for use at boot time and for _hidden operating systems_.
4. Unlike WoORAM-based solutions, Shufflecake is extremely fast (achieving only a minor slowdown compared to a bare, non-PD system) and wastes less than 1% of the disk space.
Moreover, Shufflecake not only achieves (provable) single-snapshot security, but also implements features that could make possible in the future to achieve a form of "operational" (i.e., weak) multi-snapshot security. These features are Shufflecake's hierarchical design and atomic block-rerandomisation, which are not available in tools such as TrueCrypt. We discuss this in Section6.
We implemented Shufflecake [45] in the C language, and we released it as a free software under the GNU General Public License v2+.
### Acknowledgements
We are grateful to Edouard Bugnion from EPFL for support and insightful discussions on the Shufflecake scheme, and in particular on the topic of crash consistency. We are also grateful to Vero Estrada-Galinanes from EPFL for insightful discussions on the topic of volume corruption. Part of this work was done by E.A. in the context of an EPFL M.Sc. thesis work in the Research Team of Kudelski Security, under official supervision of Edouard Bugnion and technical supervision of T.G..
## Preliminaries
In this section we give the required preliminaries that are going to be used in the rest of this work. In the following, we use "iff" as "if and only if". Array indices start from 1. By (efficient) _algorithm_ or _procedure_ we mean a uniform family of circuits of depth and width polynomial in the index of the family. We implicitly assume that all algorithms take the index of the family as a first input, so we will often omit this. In the case of cryptographic algorithms, we call such index a _security parameter_, and we denote it by \(\lambda\). We will often label algorithms with names that reflect their role, e.g. "adversary", "distinguisher", etc. If an algorithm \(A\) is deterministic, we denote its output \(y\) on input \(x\) as \(y:=A(x)\), while if it is randomized we use \(y\gets A(x)\); when derandomising an algorithm we look at the deterministic algorithm obtained when considering explicitly the internal randomness \(r\) as an additional auxiliary input, and we write \(y:=A(x;r)\). We will call _negligible_ (and denote by \(\mathsf{negl}(x)\)) a function that grows more slowly than any inverse polynomial in \(x\), and _overwhelming_ a function which is 1 minus a negligible function. Given an event \(E\), we denote by \(\bar{E}\) its negation. Finally, we will write \(x\,{\stackrel{{\mathsf{s}}}{{\leftarrow}}}\,X\) if \(x\) is sampled uniformly at random from a set \(X\).
### Cryptographic Primitives
We assume familiarity with elementary cryptographic constructions and we refer the reader to, e.g., [24] for a more in-depth dive. Here we just recap informally the most relevant concepts.
#### 2.1.1 Cryptographic security.
We will often define the security of a cryptographic scheme \(\Pi\) in terms of a _game_, or _experiment_, that captures the 'difficulty in breaking the scheme', leading to so-called _game-based security_1. This usually entails comparing the probability of the adversary \(\mathcal{A}\) (modeled as an efficient algorithm) in winning a Game, that is, breaking the scheme, versus the baseline probability of 'winning by pure chance', for example by guessing randomly. We call this difference of probabilities the _advantage_ of the adversary winning the game for \(\Pi\), and we define (computational) security by requiring that this advantage is negligible in \(\lambda\) for any (computationally bounded) adversary.
Footnote 1: Other frameworks exist, such as simulation-based, but as a first approximation game-based security notions are very convenient for their intuitivity and simplicity.
\[\mathbf{Adv}_{\Pi}^{\textsf{Game}}(\mathcal{A}):=\Big{|}\mathsf{Pr}\left[ \mathcal{A}\left(\Pi,\textsf{Game}\right)\rightarrow\textit{win}\right]- \mathsf{Pr}\left[\textsf{Guess}\left(\Pi,\textsf{Game}\right)\rightarrow \textit{win}\right]\Big{|}\leq\mathsf{negl}.\]
#### 2.1.2 Hash functions and KDFs.
A _hash function_ is an algorithm that maps strings of arbitrary length into strings of fixed length (e.g., 256 or 512 bits). The most relevant security property for hash functions in our case is _collision resistance_, meaning that it is computationally difficult to find two distinct input strings that map to the same hash value. In order to add resistance against _pre-computation
attacks_[32], most implementations of hash functions use an additional parameter, called _salt_ (usually a non-secret, per-application string of 96-256 bits), to further randomize their mapping. Typical hash algorithms for cryptographic use are SHA256 [16], SHA-3 [12], and BLAKE2 [42].
Hash functions are designed to be very fast and efficient in terms of required computational resources. This might actually be an undesirable property when using the function to store images of user-chosen passwords, because it allows for faster adversarial brute-force. In these cases, a _key derivation function (KDF)_ should be used. KDFs are functionally similar to hash functions, but are designed in such a way as to be _uniformly expensive to compute_ on a broad range of computing devices, for example by requiring not only many CPU cycles but also a large amount of memory and low latency. Typical KDFs for cryptographic use are Argon2id [5] and Scrypt [34].
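As a minimal illustration of password-to-key derivation with a memory-hard KDF (a sketch only, not the construction used by any specific tool), one can use the scrypt function from Python's standard library; the cost parameters below are illustrative assumptions and should be tuned to the target hardware.

```python
import hashlib
import os

# Illustrative scrypt cost parameters: n = CPU/memory cost, r = block size,
# p = parallelism. The salt is non-secret but must be unique per instance.
salt = os.urandom(16)
key = hashlib.scrypt(b"user-chosen passphrase", salt=salt,
                     n=2**14, r=8, p=1, dklen=32)   # 256-bit derived key
```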
**Symmetric-key encryption and authentication.** Regarding encryption, the most fundamental primitive is _symmetric-key encryption (SKE)_, also called _secret-key encryption_. An SKE scheme is a pair of algorithms (one for encryption and one for decryption) which define a bijection between a domain (_plaintext space_) and a co-domain (a subset of the _ciphertext space_) of strings. An additional input, the _secret key_, fixes the bijection across the set of all possible ones, and correctness of the SKE ensures that, if the same secret key is used, then the bijection offered by the decryption algorithm is the inverse of that offered by encryption (with overwhelming probability in the case of non-deterministic decryption). The size of a typical secret key in many real-world applications is 128 or 256 bits. This allows indexing at most \(2^{128}\) or \(2^{256}\) unique bijections, which is generally much smaller than the number of possible bijections as the domain space gets larger. For this reason, it is generally impossible to require coverage of all possible bijections as a security property of SKEs. Instead, security is usually given in terms of _indistinguishability games_, with the strength of the resulting notions depending on the additional power granted to the adversary. One of the most common security notions for SKEs is _indistinguishability under chosen plaintext attack_ (or IND-CPA in short). In such a game, the adversary's goal is to distinguish between the encryption of two messages of her choice, given additional access to an encryption oracle (for the same, unknown secret key used in the game). The scheme is called IND-CPA secure iff no efficient adversary \(\mathcal{A}\) can successfully win with probability more than negligibly better than guessing at random. Notice that this, in particular, requires that the SKE be _randomised_, i.e., encrypting the same plaintext twice with the same key will generally yield two different ciphertexts. We will denote encryption (resp., decryption) of a plaintext \(p\) (resp., ciphertext \(c\)) with a key \(k\) (and, optionally, a randomness \(r\)) as \(\mathtt{Encrypt}(p,k;r)\) (resp., \(\mathtt{Decrypt}(c,k;r)\)). The randomness \(r\) is generally not needed for decryption, so we will omit it in that case.
If, in addition to privacy, _authenticity_ of a message is also required, then SKEs are not enough, and _authenticated encryption (AE) schemes_ must be used instead. An AE scheme works in a similar way to a SKE, but decryption of a
given ciphertext _fails_ if the secret key used for decryption is not the same one used to encrypt the original plaintext. When this happens we write that the decryption procedure returns \(\bot\). This allows to check that a ciphertext has not been altered or replaced by a malicious adversary, thereby granting authenticity and integrity of the message. A typical way to implement AE is to append a _message authentication code (MAC)_ to a ciphertext. A MAC is a random-looking bitstring, for example computed through the _encrypt-then-MAC_ procedure with a hash function on a combination of ciphertext and secret key. MACs are useful, among other properties, to check whether a provided key is the correct one to decrypt a ciphertext (without having to actually decrypt the ciphertext first).
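As a minimal sketch of this behaviour (using the third-party Python `cryptography` package; an illustration only, not the construction used by any particular tool), an AE scheme such as AES-GCM rejects decryption under a wrong key:

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                    # 96-bit nonce, unique per message
ct = AESGCM(key).encrypt(nonce, b"block payload", None)   # ciphertext + tag

try:
    wrong_key = AESGCM.generate_key(bit_length=256)
    AESGCM(wrong_key).decrypt(nonce, ct, None)
except InvalidTag:
    pass   # decryption fails, i.e. returns the failure symbol described above
```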
#### Block ciphers.
As a building block for SKEs, _block ciphers_ are widely used. These are algorithms that typically offer two different interfaces, one for encryption and one for decryption. In encryption mode, they take as input a block of plaintext of fixed size \(B\) (the _block size_) and an encryption key of \(K\) bits (the _key size_) and return a block of ciphertext, also of size \(B\). This mapping is undone in decryption mode, provided the same key is used. In other words, block ciphers implement a subset of size at most \(2^{K}\) of the space of all possible \((2^{B})!\) permutations (and their inverses) over \(B\)-bit strings. Among the most widely used block ciphers are those from the AES family [20], identified as AES-\(K\) (with a block size of 128 bits, and a key size \(K\) of 128, 192, or 256 bits).
In order to turn a block cipher into a generic SKE, a _mode of operation_ is required. This is a deterministic procedure that describes how to split input plaintexts or ciphertexts of arbitrary length into fixed-size blocks, iteratively applying the block cipher on these blocks. Typical modes of operation are ECB, CBC, and CTR [43]. Among these modes, CTR is usually preferred for its better characteristics. In order to achieve randomisation as a protection against known-plaintext attacks, many modes of operation also include an additional input called _initialization vector (IV)_, typically a string of a fixed size, like 64, 96 or 128 bits, not necessarily secret but unpredictable and variable according to the message. Block ciphers can also be used to build AE, but with different modes of operation than the ones used for encryption only, such as GCM [29]. For a given block cipher \(\mathcal{B}\) with keysize \(K\) and a given mode of operation \(\mathcal{M}\), the resulting SKE is usually denoted as \(\mathcal{B}\)-\(\mathcal{M}\)-\(K\), for example AES-CTR-192 or AES-GCM-256.
### Full Disk Encryption
_Full disk encryption (FDE)_ is a security technique that protects the content of a digital storage device (such as a hard drive or SSD) by using encryption 'on the fly'. This can include applications, user files, and even the OS itself. The primary purpose of FDE is to prevent unauthorized access to sensitive information in the event of device theft, loss, or unauthorized physical access.
FDE works by employing a cipher (usually a block cipher) to encrypt data at rest on the storage device. A user (or the device manufacturer) must first _initialise_ the storage device, by providing an encryption key or passphrase to create
and write on the device a special metadata structure that represents an (initially empty) state encrypted with the provided key. In order to protect against space analysis attacks, a very first step before initialisation consists of completely overwriting the device with random noise. Then, every time the system is powered on or the device accessed, the user must provide the valid key or passphrase to decrypt and read/write data on the device. This key is not stored on the device itself and must be entered each time the device is prepared for use (_opening_), usually cached in a volatile and protected area of memory, and thereby erased when the device stops being used (_closing_) or the system is shut down. Except for the one-time initialisation phase (which can be quite slow depending on the device size), the encryption process is typically invisible to the user, as the OS handles the encryption and decryption of data as it is read from or written to the disk. Once the correct key or passphrase is provided, the user can interact with the device normally, without having to manually encrypt or decrypt files, as the OS only exposes a _virtualized_ device that looks unencrypted to the user.
FDE can be implemented using hardware-based or software-based solutions. Hardware-based FDE is typically performed by a dedicated encryption chip, while software-based FDE is achieved using encryption software that runs at the operating system level. Implementation is usually done using standard block ciphers contructions and modes of operations like AES-CTR-256. Some examples of software-based FDE solutions include BitLocker for Windows, FileVault 2 for macOS, and LUKS for Linux [15, 30]. All these implementations have typically only negligible impact on performance compared to a non-FDE system, also thanks to the widespread presence of dedicated CPU instructions to speed up AES computation on personal devices such as smartphones and laptops.
Notice that if the whole OS is protected this way with a software-based solution, then there is a _bootstrapping problem_, because the OS itself cannot natively run while encrypted. This is addressed by either having a small, unencrypted bootloader which launches a minimal FDE application before the rest of the OS can start, or with a lower-level solution usually provided through hardware support such as a _Trusted Platform Module (TPM)_.
#### Cipher modes for FDE.
For block ciphers used in disk encryption, the XTS mode of operation [22] is the most widely adopted because of its performance and security. It avoids the need of explicitly writing IVs for every block on disk by deriving these IVs _pseudodeterministically_ from a global IV and sector-dependent metadata. Using CTR mode in a similar way would be a serious security mistake, unless care is taken in refreshing (and storing) IVs at every data write, which usually has an impact on performance and space usage. The latter approach, however, has a potential advantage: it gives the possibility of _re-randomising_ blocks, i.e., changing the ciphertext without changing the underlying plaintext.
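The CTR-with-stored-IV approach mentioned above, which is what enables block re-randomisation, can be sketched as follows (using the third-party Python `cryptography` package; an illustration of the concept only, not the scheme used by any particular FDE tool): re-encrypting the same sector with a fresh IV yields a different ciphertext.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_sector(key, plaintext):
    """Encrypt one sector with AES-CTR under a fresh random IV.
    The IV must be stored next to the sector for later decryption; calling
    this again on the same plaintext re-randomises the ciphertext."""
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return iv, enc.update(plaintext) + enc.finalize()

def decrypt_sector(key, iv, ciphertext):
    dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
    return dec.update(ciphertext) + dec.finalize()
```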
#### Caveats.
It's important to note that FDE primarily protects data at rest, meaning it is most effective when the device is powered off or in a locked state. It does not provide protection against unauthorized access or data breaches while
the system is running and the encryption key or passphrase has been entered. In particular, FDE is arguably less effective on devices such as smartphones, which stay most of the time in an "on" state, and offers no protection against malware such as _keyloggers_, which might intercept the password entered by the user, or even access the unencrypted content directly. For comprehensive security, FDE should be combined with other security measures, such as strong authentication, secure boot processes, and proper access control policies.
### Plausible Deniability
In this section, we present a formal game-based definition of PD security. It is worth noting that almost every paper in the field has given _its own_ security definition, always slightly different from the others; valid attempts have been made to unify them into a single framework [10], but here we will follow the arguably more intuitive one given in [6], which is well suited for the block-layer scenario we work in. In this setting, a user employs a PD scheme to multiplex a single storage device into \(\ell\) independent volumes \(V_{1},V_{2},\ldots V_{\ell}\), each \(V_{i}\) being associated to a different password \(P_{i}\). The PD scheme supports up to \(\mathsf{max}\) volumes per device (so \(1\leq\ell\leq\mathsf{max}\)); the value of \(\mathsf{max}\) is publicly known. Both the volumes and the underlying device are block-addressable, meaning that the \(\mathsf{read}\) and \(\mathsf{write}\) operations they support have the granularity of a block. The semantics of a scheme \(\Pi\) is given as follows.
Definition 1 (PD Scheme): Let \(\ell\leq\mathsf{max}\), and \(P_{1},\ldots P_{\ell}\) user-provided passwords. A _PD scheme_\(\Pi\) is a tuple of algorithms:
* \(\Pi.\mathsf{setup}(P_{1},\ldots,P_{\ell})\to\Sigma\): Initialises the disk to host \(\ell\) volumes \(V_{1},\ldots V_{\ell}\), encrypted with passwords \(P_{1},\ldots P_{\ell}\); returns a device instance description \(\Sigma\) which encapsulates everything.
* \(\Pi.\mathsf{read}(\Sigma,i,B)\to d\): Reads data \(d\) in block address \(B\) from volume \(V_{i}\) (we assume \(\mathsf{read}\)s to not modify the instance)4. Footnote 4: Note this is also the case for WoORAM-based constructions, but not necessarily for ORAM-based or other, arbitrary, PD schemes constructions.
* \(\Pi.\mathsf{write}(\Sigma,i,B,d)\to\Sigma^{\prime}\): Writes data \(d\) into block address \(B\) of volume \(V_{i}\), and updates the instance.
The following correctness requirement applies: for any fixed block \(B\) and volume \(V_{i}\), if \(\Pi.\mathsf{write}(\Sigma,i,B,d)\) is the most recent write query which precedes a query \(\Pi.\mathsf{read}(\Sigma^{\prime},i,B)\), then \(\Pi.\mathsf{read}(\Sigma^{\prime},i,B)\to d\) (for simplicity, we consider operations atomic).
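To fix ideas, Definition 1 can be rendered as the following Python interface sketch; the class and method names are ours, chosen purely for illustration, and do not correspond to Shufflecake's actual API.

```
from abc import ABC, abstractmethod
from typing import List

class PDScheme(ABC):
    """Sketch of the interface from Definition 1; names and the bound below are illustrative."""

    MAX_VOLUMES = 15  # the publicly known bound 'max'

    @abstractmethod
    def setup(self, passwords: List[bytes]) -> None:
        """Initialise the device to host len(passwords) <= MAX_VOLUMES volumes V_1..V_ell."""

    @abstractmethod
    def read(self, i: int, block: int) -> bytes:
        """Return the data at logical block address `block` of volume V_i (must not modify state)."""

    @abstractmethod
    def write(self, i: int, block: int, data: bytes) -> None:
        """Write `data` at logical block address `block` of volume V_i, updating the instance."""
```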
#### 2.3.1 Access patterns.
Let us define an _access_ as the tuple \(o=(\mathsf{op},i,B,d)\), with \(\mathsf{op}\in\{\,\mathsf{read},\mathsf{write}\,\}\) (if \(\mathsf{op}=\mathsf{read}\), then \(d\) is the return value), and \(i\) being the index of the volume targeted by the access. Let us also define an _access pattern_ as a (chronologically) ordered sequence of accesses \(O=<o_{1},\ldots,o_{n}>\). An empty access \(o=\bot\) is also defined, which is simply ignored by the instance \(\Sigma\).
#### 2.3.2 PD security.
The security game for PD inherits some high-level concepts from the IND-CPA game (_ciphertext indistinguishability under chosen-plaintext attack_) for secret-key encryption. The adversary is a distinguisher, and is challenged with deducing whether she is interacting with a \(\Sigma\) encapsulating \(\ell\) or \(\ell-1\) volumes. Also, she is allowed to choose the read or write operations to be executed, to capture the idea that indistinguishability must hold no matter the accesses performed on the volumes, hence including adversarial ones. A secret bit \(b\) within the game determines whether \(\Sigma_{0}\) (containing \(\ell\) volumes) or \(\Sigma_{1}\) (containing \(\ell-1\) volumes) is first instantiated in the game and made to interact with the adversary. In both cases, we allow the adversary to choose the first \(\ell-1\) passwords5, and her goal is to guess \(b\).
Footnote 5: This represents the most unfavourable situation for the user, as we consider these passwords compromised anyway.
**Experiment 2** (PD game, generic): _For a PD scheme \(\Pi\) and an adversary \(\mathcal{A}\) the plausible deniability experiment \(\mathsf{PD}(\Pi,\mathcal{A})\) is defined as follows:_
1. \(\mathcal{A}\) _chooses_ \(\ell\leq\mathsf{max}\) _and chooses_ \(\ell-1\) _passwords_ \(P_{1},\ldots,P_{\ell-1}\)_._
2. _A secret random bit_ \(b\xleftarrow{s}\{0,1\}\) _is drawn. If_ \(b=0\)_, then an additional secret high entropy password is sampled_ \(P_{\ell}\xleftarrow{s}\{0,1\}^{\lambda}\)_, where_ \(\lambda\) _is the security parameter_6_._ Footnote 6: We abuse notation by representing a password as a binary string, but w.l.o.g. it is equivalent to the case of a user-chosen password with \(\lambda\) bits of entropy. We assume that the user will choose a high entropy password at least for the hidden volume.
3. _The game creates_ \(\ell-b\) _volumes:_ \(\Pi.\mathsf{setup}(P_{1},\ldots,P_{\ell-b})\to\Sigma_{b}\)__
4. \(\mathcal{A}\) _performs interactive rounds of queries. Every query works as follows:_ 1. \(\mathcal{A}\) _chooses access patterns_ \(O_{0}\) _and_ \(O_{1}\)_, where_ \(O_{0}\) _is, in the adversary's intentions, aimed at_ \(\Sigma_{0}\) _(thus potentially containing some operations on_ \(V_{\ell}\)_) and_ \(O_{1}\) _is aimed at_ \(\Sigma_{1}\) _(and so only contains operations on_ \(V_{1},\ldots,V_{\ell-1}\)_). She also chooses a bit_ \(v\)_, signalling whether she wishes a snapshot of the disk at the end of this round._ _These adversarial choices are subject to constraints, which we discuss in the next paragraph._ 2. _The game only executes_ \(O_{b}\) _(on_ \(\Sigma_{b}\)_, the only instance that was created in step 3). If requested, it sends the resulting disk snapshot_ \(D\) _to_ \(\mathcal{A}\)_._
5. _At the end of all rounds, the adversary outputs a bit_ \(b^{\prime}\)_._
6. _The game outputs_ \(1\) _iff_ \(b=b^{\prime}\)_,_ \(0\) _otherwise._
Here we have omitted the constraints that the adversary is subject to in step 4(a), when choosing the access patterns and when choosing the snapshot bit \(v\); we will present them now. Without any such constraints, we will see that security would be impossible to achieve; also, the exact set of constraints will modulate the induced threat model.
_Constraints on the bit \(v\)._ This constraint governs the snapshotting capabilities of the adversary, and thus her power. We only define two extreme cases:
1. Arbitrary - No constraint: the adversary is allowed to set \(v=1\) in all of the interactive rounds. This is the strongest form of multi-snapshot security, as the adversary can obtain a snapshot any time she desires.
2. One-Time. The adversary is single-snapshot, i.e. can only set \(v=1\) for one of the interactive rounds.
_Constraints on the access patterns._ These constraints define the adversary's goal, by specifying which two exact situations she has to distinguish between: if a PD scheme is secure (i.e., the adversary cannot distinguish) under the game enforcing such a constraint, the implication is that a user, having performed some access pattern \(O_{0}\) including some operations on \(V_{\ell}\), can plausibly claim to instead have executed a corresponding \(O_{1}\), which only accesses the volumes \(V_{1},\ldots,V_{\ell-1}\) (whose passwords have already been surrendered).
_Discussion on the constraints._ Let us first clarify that some constraint is necessary in order to have any hope of security in the PD game. Otherwise, the adversary could submit \(O_{0}\) and \(O_{1}\) containing completely different (logical) write accesses to the decoy volumes \(V_{1},\ldots,V_{\ell-1}\), and there would be no way of making the two outcomes indistinguishable: since the adversary holds the passwords to those volumes, she could trivially verify which of the two patterns was executed. This suggests the need for a minimal rule, stating that, whether \(O_{0}\) or \(O_{1}\) is executed, the resulting logical contents of the decoy volumes \(V_{1},\ldots,V_{\ell-1}\) must be the same. From the user's perspective, this basic requirement means that we do not try to disguise the writes to decoy volumes as something else, both because we do not need to, and because there would be no way of doing it even if we wanted to.
Furthermore, we notice that many PD solutions (including those based on WoORAMs) treat write requests in a completely different way than read requests, i.e. a write request could _trigger allocation or reshuffling_ of a certain volume sector. This might happen without breaking the minimal rule described above: if an adversary first reads at a previously unallocated position \(B\), obtaining data \(d\), and then writes the same data \(d\) at the same location \(B\), this might cause a change of state in the instance without changing its logical content (as mandated by the minimal rule above). This, in turn, would enable an attack where the adversary merely checks whether this state change happened or not. However, we must consider this attack trivial: detecting this kind of state change on a decoy volume would actually not compromise security. Therefore, in order to capture this concept, we also demand that the set of all blocks that are touched by write requests be the same for both \(O_{0}\) and \(O_{1}\), for all volumes \(V_{1},\ldots V_{\ell-1}\). We stress that this extra constraint is also very minimal and does not weaken the security guarantees offered in practice; in fact, most schemes demand even stricter constraints, for example by demanding that \(O_{0}\) and \(O_{1}\) are of the same length, which we do not require here.
#### 2.3.3 Single-snapshot security.
In the single-snapshot case, the security game can be simplified. More formally, if the "One-Time" constraint is enforced on bit \(v\), then Experiment 2 is equivalent to the same game where the number of interactive rounds is set to 1 instead of being chosen by the adversary. This is because the access patterns submitted _before_ obtaining the (only) disk snapshot might as well be all concatenated into one, as they cannot be chosen adaptively; moreover, all the access patterns submitted _after_ obtaining the snapshot can be disregarded altogether, as their effect will not be detectable by the adversary.
Security in this one-round game can then be rephrased as follows:
**Definition 3** (Single-Snapshot Security): _A PD scheme \(\Pi\) is single-snapshot (SS)-secure iff for any PPT adversary \(\mathcal{A}\) which chooses \(\ell\leq\mathsf{max}\), passwords \(P_{1},\ldots,P_{\ell-1}\), and access patterns \(O_{0}\) and \(O_{1}\) subject to the constraints outlined in Section 2.3, it holds:_
\[\left|\mathsf{Pr}\left[\mathcal{A}\left(D_{0}\right)\to 1\right]-\mathsf{Pr} \left[\mathcal{A}\left(D_{1}\right)\to 1\right]\right|\leq\mathsf{negl}, \tag{1}\]
_where:_
* \(D_{0}\) _is the disk snapshot resulting from the application of_ \(O_{0}\) _to_ \(\Sigma_{0}\)_, where_ \(\Sigma_{0}\leftarrow\Pi.\mathsf{setup}(P_{1},\ldots,P_{\ell})\) _and_ \(P_{\ell}\xleftarrow{s}\{0,1\}^{\lambda}\)_._
* \(D_{1}\) _is the disk snapshot resulting from the application of_ \(O_{1}\) _to_ \(\Sigma_{1}\)_, where_ \(\Sigma_{1}\leftarrow\Pi.\mathsf{setup}(P_{1},\ldots,P_{\ell-1})\)_._
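For intuition only, the distinguishing experiment behind Definition 3 can be sketched as a small Python harness. The `snapshot()` helper and the adversary interface are assumptions made for illustration, and the constraints on the access patterns are not enforced here.

```
import secrets

def ss_experiment(scheme_cls, adversary, security_param=32):
    """One run of the single-snapshot game; returns 1 iff the adversary guesses b correctly."""
    ell, decoy_passwords, O0, O1 = adversary.choose()   # ell <= max, ell-1 decoy passwords, two patterns
    b = secrets.randbits(1)
    passwords = list(decoy_passwords)
    if b == 0:                                          # the hidden volume V_ell exists only when b = 0
        passwords.append(secrets.token_bytes(security_param))
    scheme = scheme_cls()
    scheme.setup(passwords)
    for op, i, block, data in (O0 if b == 0 else O1):   # execute the pattern aimed at Sigma_b
        scheme.write(i, block, data) if op == "write" else scheme.read(i, block)
    return int(adversary.guess(scheme.snapshot()) == b) # the adversary sees a single disk snapshot
```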
## 3 TrueCrypt and VeraCrypt
Given its relevance as a point of comparison for Shufflecake, we want to discuss here TrueCrypt [46], which was the first disk encryption software (now discontinued) to offer PD capabilities. It was developed around the early 2000s, before BitLocker [30] and LUKS [15] became the default standards for disk encryption on Windows and Linux, respectively. Its development came to a sudden halt in 2014, but a backward-compatible successor exists (VeraCrypt [49]) that has kept most of the design principles, and improved on some minor aspects (like a stronger key derivation). For our purpose in this work, we will focus on TrueCrypt only, as all our considerations similarly apply to VeraCrypt.
TrueCrypt can operate in two main modes: with "standard" (sometimes called "outer") encrypted volumes, or with "hidden" volumes. In the former case, it is functionally similar to other FDE solutions like LUKS (but with random-looking encrypted headers). In the latter case, a hidden volume is embedded in the unused empty space left by the content of the decoy standard volume. Plausible deniability is given by the fact that disk headers and content are indistinguishable from random, which makes it hard to distinguish between the two cases without the correct passwords.
### Design
TrueCrypt (like Shufflecake and many other existing PD schemes) works as a _stacking driver_, that is, a device driver operating on top of another device driver. It exposes a _logical_ (virtual) storage space to the upper layer, which directs _logical_ TcRead and TcWrite requests to it; the stacking driver then executes its algorithm to map these requests to _physical_ block bRead and bWrite requests to the underlying device driver, which manages the _physical_ storage space. Here the distinction between _logical_ and _physical_ is the distinction between before and after the translation operated by the stacking driver, regardless of whether the _physical_ storage space is also a virtual device.
The first initialisation operation performed by TrueCrypt when creating new volumes within a device is to fill the disk with random bytes, which is also the case for regular disk encryption tools including LUKS, as we already discussed. The first part of the disk contains the fixed-size encrypted header of the standard volume, and an equal-size empty slot filled with random bytes (remaining from the initialisation procedure). Then comes the actual encrypted data section of the standard volume, which includes some empty space, also filled with random bytes (coming from the initialisation procedure).
TrueCrypt optionally allows to "embed" a hidden volume in the (contiguous) empty space left by the standard volume: this is the mechanism providing plausible deniability. Its encrypted header then fits in the empty slot left after the header of the standard volume. The standard volume and the hidden volume are encrypted with two different passwords.
Figure 1: TrueCrypt’s disk layout (standard volume).
Figure 2: TrueCrypt’s disk layout (standard/decoy and hidden volumes).
One big limitation of TrueCrypt is that it can only support a hidden volume if the outer volume is formatted as a FAT filesystem. This is because we need the empty space left by the decoy volume to be contiguous. Most modern filesystems (like ext4 and NTFS) "jump back and forth" on the disk as they are written with data, leaving lots of empty blocks, or "holes", in the middle. Instead, the FAT filesystem is special, in that it grows incrementally and, apart from the occasional holes created by deleted files, occupies all the space up until its last utilised block. This way, one can have the hidden volume start at a certain offset, after the end of the decoy volume, and then follow data allocation linearly. TrueCrypt automatically computes a convenient starting position for the hidden volume (leaving some leeway buffer after the end of the standard volume), and places it among the metadata of the hidden volume's header. Whereas the hidden volume is assigned a logical size that follows the physical space actually allocated to it, the standard volume is _not_ resized, and keeps logically mapping onto the whole disk. This is crucial in order not to defeat deniability: if we resized the standard volume, this information (which leaks the existence of a hidden volume) would be written in the metadata of its file system, which is inspected by the adversary.
Another big limitation of this approach is that, from the moment a hidden volume is embedded into a standard volume, the latter is severely limited in how much its content can grow: as the hidden volume lives in the empty space of the standard volume, this seemingly empty space can never shrink (except for the leeway buffer), or the hidden volume will be corrupted.
### Operational Model
Using TrueCrypt comes with restrictions on what the user can do in order to preserve data integrity and plausible deniability.
As already mentioned, once the hidden volume is created, its starting position is final, and it "freezes" the end of the decoy standard volume, limiting the maximum size that it will ever be able to attain. Since the standard volume cannot be resized to accommodate for the hidden volume, and instead keeps mapping onto the whole disk, it is up to the user to not let it grow too much and overwrite the hidden volume. This is achieved both by frequent de-fragmentation of the FAT filesystem within the standard volume, and by actually not writing too much data into it. If one wants to be absolutely safe about data corruption within the hidden volume, the recommendation for the user is to never unlock only the standard volume for daily use (except if under coercion), but to always unlock either 1) the hidden volume only, or 2) both hidden and standard volumes, but keeping the latter in read-only mode.
The standard volume must contain "decoy" data, that will reasonably convince the adversary that it is the only volume existing on the disk. Clearly, the user should only surrender the password of the standard volume to the adversary when under coercion. This, in turn, opens the door for corruption of the hidden volume. Completely eliminating such corruption risk is impossible; in TrueCrypt it can only be mitigated by frequent backups.
### Security
Here, we analyse TrueCrypt's security under different threat models.
#### 3.3.1 Pseudorandomness.
Unlike other FDE solutions such as LUKS and BitLocker, TrueCrypt-formatted devices do not contain any cleartext header. This means that a TrueCrypt-formatted device is indistinguishable, when at rest, from a device completely filled with random noise. This feature is desirable in certain scenarios: for example, it makes it more straightforward and less risk-prone to embed a TrueCrypt container file within another medium using steganography. However, it represents a tradeoff against ease of integration with other parts of the system, which is the approach preferred by all-purpose FDE solutions such as LUKS and BitLocker. Anyway, it must be stressed that this feature is not relevant per se for PD, because in the PD scenario the adversary is always provided with at least _one_ decryption key.
#### 3.3.2 Single-snapshot security.
In TrueCrypt, once the user surrenders the password of the decoy volume and lets the adversary decrypt it, the only part of the disk contents that remains to be "interpreted" by the adversary is the non-decrypted space after the end of the decoy FAT file system. However, whether this space is actually empty or whether it contains a hidden volume, it will be filled with random bytes that are not readable with the decoy password alone. Therefore, even if a hidden volume is present, the user can _plausibly claim_ that the remaining space is empty and filled with random bytes: the adversary has no way to disprove this claim, _or even to question its likelihood_, based on the observed disk content.
#### 3.3.3 Multi-snapshot security.
It is easy to see why TrueCrypt is insecure in the multi-snapshot threat model: what happens if the adversary obtains two snapshots of the disk at two different points in time, and the user has made changes to the hidden volume in the meantime? By comparing the two snapshots, the adversary clearly sees that some of the allegedly "empty" blocks have changed, which immediately reveals that a second volume exists, because TrueCrypt never re-randomises the actually-free space.
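The attack amounts to a simple diff between snapshots, as the following toy Python sketch illustrates; `decoy_used_blocks` is assumed to be computed from the metadata of the decrypted decoy filesystem.

```
def suspicious_blocks(snap_a: bytes, snap_b: bytes, decoy_used_blocks: set, block_size: int = 4096):
    """Blocks that changed between two snapshots but are not used by the decoy volume:
    on TrueCrypt, any such block betrays the existence of hidden data."""
    changed = []
    for idx in range(min(len(snap_a), len(snap_b)) // block_size):
        lo, hi = idx * block_size, (idx + 1) * block_size
        if snap_a[lo:hi] != snap_b[lo:hi] and idx not in decoy_used_blocks:
            changed.append(idx)
    return changed
```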
#### 3.3.4 TrueCrypt's hidden OS.
The latest versions of TrueCrypt offer, through a "hidden OS" feature, a solution to the OS's tendency to leak the existence of hidden partitions. A decoy OS is installed within a standard volume, while a separate OS is installed within the hidden volume of another partition. In order to decide which OS is booted according to the provided password, the computer's bootloader is replaced by the ad-hoc _TrueCrypt bootloader_, which will first try to boot the decoy OS with the user-provided password and then, if unsuccessful, will try to boot the hidden one. Since the decoy OS itself never sees the hidden partition, there is no possibility for it to even be aware of the existence of hidden data. Notice, however, that this feature is only available for Windows.
### Other Limitations
Here we discuss other problematic aspects of TrueCrypt. It is to be noted that some of such considerations apply today, but were less relevant in the early 2000s, when TrueCrypt was first conceived.
First of all, as discussed, the standard volume must be formatted as a FAT filesystem if a hidden volume is desired. However, FAT is now outdated: it used to be very widespread, but today there is little plausible reason to use it anymore. Therefore, the mere fact that we are using FAT raises a red flag to the adversary.
Another problem arises from the fact that the user must avoid or limit the use of the decoy partition in order to not corrupt the hidden one. Yet, decoy volume(s) must "look legitimate": it must be plausible by looking at their content that they are the only ones. In particular, they must be reasonably up to date: if we only ever work on the hidden volume, and completely forget about the decoy, an adversary unlocking the decoy would become very suspicious seeing that the most recent updates are months if not years old.
In general, it is not within the scope of PD schemes to hide _themselves_, i.e., hide the very fact that that scheme is being used. We must assume that this fact is known to the adversary, who might, e.g., by searching the user's laptop, discover a TrueCrypt installation. Since a locked TrueCrypt volume is indistinguishable from random data, when asked for the first password, we could in theory even claim that the disk is not formatted with TrueCrypt at all, but is instead the result of a secure wiping procedure, or even that it is a pool of random data coming from other sources. However, a real-world adversary will arguably be unconvinced given the knowledge that a system like TrueCrypt is in use. So, a recommended course of action is to separate the TrueCrypt-supporting system (e.g., a laptop) and the encrypted media (e.g., a USB stick). This might be cumbersome for most use cases.
Another big limitation is the fact that only _one_ hidden volume within each standard volume is supported. This is a problem, because the adversary might reasonably suspect that TrueCrypt is in use _exactly_ to hide something through its PD feature: if we only meant to encrypt a volume, we would reasonably use a more widespread and supported solution like LUKS or BitLocker. A user could claim that they prefer using independent, niche open-source software to secure their data, and it would be relatively credible, but the safest course of action when designing a PD scheme is assuming that the adversary might not believe this claim, and ask for a second password.
The short answer to this problem is: a robust TrueCrypt-like PD solution should allow _simultaneous access_ to _more_ than two layers of volumes. That way, we can create a series of volumes with increasingly "private" contents (that could well be all decoys), so as to reveal more than one password to the adversary and convince them, based on the resulting decrypted contents, that we have effectively given up what we were hiding, while in fact we are still holding the password to one more top-secret volume whose existence they have no more reason to suspect.
## 4 The Shufflecake Scheme
In this section we present our Shufflecake scheme, explain how it operates, and provide a security analysis.
### Design
By _device_, we mean the underlying disk, which exposes a _physical_ storage space. Instead, _volumes_ are the _logical_ storage units that map onto a device. The name 'Shufflecake' stems from the analogy of mixing up _slices_ of a cake (the device) in order to provide many stacking _layers_ of privacy (the volumes). Conceptually, Shufflecake's operation consists of four functionalities:
1. _Initialize_ a device: this is done only once, when a new device is first prepared for use in Shufflecake. It consists of overwriting the device with random data, asking the user to provide the number \(\ell\) of desired volumes and related passwords, and creating an encrypted header with metadata using this information. In our implementation, this functionality is provided by the init command.
2. _Instantiate_ a device: this is the preliminary stage of preparing a Shufflecake-initialized device for use. It consists of reading the password provided by the user, trying to decrypt the device's header metadata with the derived key and, if successful, recovering information on the volumes the device provides. In our implementation this functionality is invisible to the user (it gets executed together with open), so we don't provide an associated command.
3. _Open_ a volume: using the correctly derived volume key, volume-specific metadata is read from the relevant header section. This metadata is used to create a logical device which is presented to the user, and the user's OS can issue SflcRead and SflcWrite requests to this logical volume. In our implementation, this functionality is provided by the open command.
4. _Close_ a previously instantiated device: ephemeral state changes, if present, are written (encrypted) to disk, and then _all_ the open volumes provided by that device are removed from the user's view. In our implementation, this functionality is provided by the close command.
At its core, Shufflecake is a block indirection layer on top of an encryption layer. Our indirection layer realizes a mechanism which is already a strong improvement over TrueCrypt, since it fixes two of its crucial limitations: it allows multiple volumes, and it is filesystem-independent. Data decryption keys for every volume are derived from a password and other header-derived randomness. Furthermore, the decrypted payload of the header of volume \(\ell>1\) also contains a copy of the header decryption key for volume \(\ell-1\). This makes it possible to recursively open all volumes present on a device using a single password, which in turn improves both security and user experience, as we will see.
**Disk Layout.** The device's physical storage space is statically divided into a _header section_ and a _data section_. The header section is found at the beginning of the disk, and it is composed of a fixed-size _device master block (DMB)_, and \(\mathsf{max}\) equal-sized _volume headers_ (each of them comprised of a _volume master block (VMB)_ and a _position map_), irrespective of how many volumes there effectively are. This mild waste of space is necessary in order to prevent the adversary from trivially deducing the number of volumes by the size of the device header (which might be possible by analysing the data allocation pattern even when not all volumes are opened). Let us analyse all these sections, starting first with the data.
_Data section_. Instead of mandating that the volumes be physically adjacent on-disk, like in TrueCrypt, we randomly interleave them as encrypted (but not authenticated) fixed-size _slices_, where every slice belongs to one volume and contains a certain number of blocks. Metadata in the \(i\)-th volume header allows to reconstruct the logical content of \(V_{i}\) by mapping the corresponding slices as a (virtual) contiguous space.
We distinguish between _logical slices_, of size \(S_{L}\) blocks, and _physical slices_, of size \(S_{P}=S_{L}+\Delta_{S}\), where \(\Delta_{S}\) blocks are used to store the encryption IVs for that slice. Logical slices store the data of their respective volumes, while physical slices are used to reserve space on the disk to allocate the encrypted logical slice. A physical slice can either be unallocated (i.e., unclaimed by any volume), or mapped to a single logical slice belonging to some volume \(V_{i}\). In the latter case, the mapping from the _logical slice index (LSI)_\(\sigma\) of volume \(V_{i}\) to the device's _physical slice index (PSI)_\(\Psi\) is given by a function \(\mathtt{GetSliceMap}:(i,\sigma)\mapsto\Psi\) which is computed by simply looking up a _position map_ (basically, a per-volume array). I/O operations to a _logical block address_\(B\) of volume \(V_{i}\) are performed through the two interfaces SflcRead and SflcWrite. We will describe later these two interfaces, as well as the structure of the position map.
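For concreteness, the logical-to-physical translation just described can be sketched in a few lines of Python; `pos_map` stands for one volume's in-memory position map, and the slice parameters are passed in explicitly.

```
def to_physical(pos_map: dict, B: int, S_L: int, S_P: int, delta_S: int):
    """Translate logical block address B of one volume into a physical block address,
    or return None if the enclosing logical slice has never been mapped."""
    lsi, offset = divmod(B, S_L)         # logical slice index, and offset within the slice
    psi = pos_map.get(lsi)               # the per-volume view of GetSliceMap
    if psi is None:
        return None
    return psi * S_P + delta_S + offset  # skip the delta_S IV blocks at the head of the physical slice
```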
Figure 3: Shufflecake’s disk layout overview.
#### Device master block (DMB).
The DMB encapsulates all password-related data. It begins with one single KDF salt, shared for all volumes: this salt, combined with a volume \(V_{i}\)'s password through a KDF, yields the volume's _key-encryption-key (\(\mathsf{KEK}_{i}\))_. Notice that we derive every \(\mathsf{KEK}_{i}\) by using just a single global salt for all \(\mathsf{max}\) volumes, otherwise we would incur up to \(\mathsf{max}\) expensive different key derivations every time we instantiate a device7. This does not hamper security, as password hashes are never stored on disk, but only used to generate the \(\mathsf{KEK}\) which is in turn used as a decryption key.
Footnote 7: This limitation can actually be avoided by using some key derivation tricks, like ‘_re-salting’_, i.e. using the output of the KDF in combination with a (fast) hash using a per-header salt. We leave this approach for future consideration.
Then come \(\mathsf{max}\) DMB _cells_, each being an authenticated ciphertext (together with the corresponding IV), encrypted with the respective volume's \(\mathsf{KEK}\): the plaintext is itself another cryptographic key, encrypting the volume's VMB (the _volume master key_\(\mathsf{VMK}_{i}\)). This key decoupling allows us to possibly change a volume's password without having to re-encrypt all its content with a different key. For granularity and consistency, the overall size of the DMB is fixed to be exactly one block. The rest of the DMB, and the unused DMB cells, contain random noise.
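The way a password is matched to a DMB cell can be sketched as follows (Python, using the `cryptography` library). Scrypt and AES-GCM are illustrative stand-ins for whatever KDF and authenticated cipher an implementation actually uses, and the cell layout is simplified.

```
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

def unlock_dmb(password: bytes, salt: bytes, dmb_cells):
    """Derive the KEK from the password and the global salt, then try every DMB cell:
    the cell that decrypts (and authenticates) correctly yields the volume's VMK."""
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password)
    for index, (iv, ciphertext) in enumerate(dmb_cells):   # one cell per possible volume (max cells)
        try:
            vmk = AESGCM(kek).decrypt(iv, ciphertext, None)
            return index, vmk                              # which volume this password opens, and its VMK
        except InvalidTag:
            continue                                       # wrong key for this cell (or unused cell)
    return None                                            # the password opens no volume on this device
```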
Figure 4: Shufflecake’s data section layout and slicing.
Figure 5: Shufflecake’s Device Master Block (DMB).
#### Volume master blocks (VMBs).
The \(i\)-th volume header is composed of a Volume Master Block (VMB), followed by the encrypted (but not authenticated) position map of volume \(V_{i}\). We discuss the position map in the next paragraph. The VMB is a single block containing a (non-authenticated) ciphertext, encrypted with the volume master block's key \(\mathsf{VMK}_{i}\), and the associated IV. The underlying plaintext is composed of the following fields:
* The volume's _volume-encryption key (\(\mathsf{VEK}_{i}\))_, used for encrypting the actual data section and the position map.
* The previous volume's VMB key \(\mathsf{VMK}_{i-1}\) (or a random value if \(i=1\)).
* The number of slices \(\mathsf{NumSlices}\) contained in the device.
* The remaining space up to filling the block size is left random, but can be optionally used to embed additional volume-related metadata if needed.
The device-specific value \(\mathsf{NumSlices}\) defines and fixes the size of the position maps, even in the case that the device is resized; it is replicated across all volume headers in order to be decryptable with any provided password. The presence of \(\mathsf{VMK}_{i-1}\) is what allows us to impose a hierarchy on the otherwise-independent volumes (they are all treated equally in the data section). This way, once we open volume \(V_{i}\), we can iteratively walk the backwards-linked list induced by this field, and also open volumes \(V_{i-1}\) through \(V_{1}\). While this approach compromises deniability for a volume \(V_{i}\) "in the middle", once the password to \(V_{j}\), \(j>i\) has been provided, it does not harm deniability as defined by the security game: the volume we want to hide is the last one, \(V_{\ell}\), not the middle ones. The usefulness of this approach is discussed in Section 4.2.
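A sketch of how this backwards-linked list is walked when opening volumes is given below; `read_vmb` and `decrypt_vmb` are assumed helpers that fetch a volume header and return its decrypted fields.

```
def open_chain(start_index: int, start_vmk: bytes, read_vmb, decrypt_vmb):
    """Open V_i and, by following the stored VMK_{i-1} fields, also V_{i-1}, ..., V_1."""
    opened = {}
    index, vmk = start_index, start_vmk
    while index >= 1:
        vek, prev_vmk, num_slices = decrypt_vmb(read_vmb(index), vmk)
        opened[index] = vek                 # volume encryption key, used for data and position map
        index, vmk = index - 1, prev_vmk    # for index == 1 the stored value is random and unused
    return opened
```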
#### Slice maps.
Slice maps are arrays \(\mathsf{SliceMap}_{i}\) of \(\mathsf{NumSlices}\) elements, where every element is a PSI: the index of each element is the LSI mapping to that PSI. Each volume's slice map is decrypted and loaded in memory when the volume is _instantiated_; these maps are kept entirely in RAM while the volumes are "live", and persisted on-disk (encrypted, together with their fresh IVs) when the volumes are closed. Slice maps are stored after the VMBs as equal-size ciphertexts, large enough to address all the possible physical slices. The RAM and disk space requirements are modest: if the underlying device is \(N\) blocks large, the size of a slice map is just \(O\left(\frac{N}{S_{P}}\log\frac{N}{S_{P}}\right)\), because there are at most \(\frac{N}{S_{P}}\) physical slices, each requiring \(O\left(\log\frac{N}{S_{P}}\right)\) bits to be indexed. This is in turn due to the choice of addressing the storage space at the slice granularity, instead of the block granularity, which would have entailed a position map of \(O(N\log N)\) bits.
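As a rough worked example (the concrete figures are our own illustration, not mandated by the scheme): with 4 KiB blocks and physical slices of about 1 MiB (\(S_{P}\approx 256\) blocks), a 1 TiB device contains \(N/S_{P}=2^{20}\) physical slices, and each PSI fits in \(\lceil\log_{2}(N/S_{P})\rceil=20\) bits, so one slice map occupies roughly

\[2^{20}\cdot 20\ \text{bits}\approx 2.5\ \text{MiB}\]

(4 MiB if every entry is padded to 32 bits), which comfortably fits in RAM even with all \(\mathsf{max}\) volumes open.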
Figure 6: Shufflecake’s Volume Master Block (VMB).
**Operation.** We now describe in more detail how Shufflecake operates and explain the rationale behind certain design choices. We start by explaining how we use encryption to secure data on-disk. Then we look at the indirection layer between physical data allocation on disk and data on the logical volumes, i.e., the mechanism which creates the correspondence between physical and logical slices. Finally, we explain how this correspondence is updated when new logical slices are needed as data is written on the logical volumes.
_Cryptographic layer._ As in many disk encryption solutions, we encrypt with the block granularity, meaning that blocks are the unit of both I/O requests and encryption/decryption; in other words, one IV encrypts one block. Many disk encryption schemes generate these IVs pseudo-deterministically from some public context information and the volume's secret key (e.g., this is what happens in the XTS mode of operation) in order to save the space and the I/O overhead needed to store and retrieve them. Having such a deterministic procedure for generating IVs on the fly is enough for the threat model covered by FDE and single-snapshot secure PD solutions like TrueCrypt.
In our case, however, we stick to explicitly random IVs because we want to keep open the future possibility of extending Shufflecake with some degree of multi-snapshot security, requiring us to re-encrypt some blocks with a different IV while leaving their content unchanged. For this reason, we use CTR mode instead, and the IV of a block is refreshed at each write for that block, to avoid IV-reuse attacks. This means that all these IVs are stored on-disk.
This strategy introduces a potential performance issue: a naive implementation would translate each logical SflcWrite to a physical bWrite of the corresponding physical data block, plus an additional bRead-update-bWrite of the _whole_ corresponding IV block. This would be very wasteful in terms of I/O overhead, because we only need to update one IV (e.g., 16 bytes for AES-CTR), but we are forced to load and store whole blocks (typically 4096 bytes).
We avoid this problem by caching IV blocks in RAM, in an LRU cache of predefined depth (e.g., 1024 entries). For the performance reasons just discussed, this cache is not write-through. This way, we coalesce possibly many updates of the same IV block (triggered by many logical SflcWrites to the same data block, or by logical SflcWrites to many blocks within the same slice) into just one physical bWrite, thereby lowering the I/O overhead.
For each physical slice, we pack the IVs for the blocks contained therein into \(\Delta_{S}\) physical blocks at the beginning of the slice. There is a simple static correspondence between a physical block and the on-disk location of its IV: the \(m\)-th data block within a physical slice (i.e., the block at offset \(m+\Delta_{S}\) from the start of the slice) is encrypted by the \(m\)-th IV stored within the initial \(\Delta_{S}\) blocks. Hence, we assume the existence of functions LoadIV and SampleAndStoreIV (with self-explanatory behaviour) which take as input a physical address \(B_{\texttt{phys}}\) and return the corresponding IV. Analogously, we consider the functions Encrypt(ptxt, key; IV) and Decrypt(ctxt, key; IV) as acting on blocks according to the implied mode of operation.
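A minimal Python sketch of this IV bookkeeping is given below: `iv_location` encodes the static block-to-IV correspondence, and `IVCache` is a write-back LRU cache that coalesces IV updates. Block and IV sizes, the cache depth, and the `bread`/`bwrite` callbacks are illustrative assumptions (the real driver is in-kernel C).

```
from collections import OrderedDict

BLOCK = 4096                                    # bytes per block (illustrative)
IV_SIZE = 16                                    # bytes per IV (illustrative)
S_L = 256                                       # data blocks per logical slice (illustrative)
DELTA_S = (S_L * IV_SIZE + BLOCK - 1) // BLOCK  # IV blocks at the start of each physical slice
S_P = S_L + DELTA_S

def iv_location(b_phys: int):
    """Static correspondence: return (iv_block_address, byte_offset) holding the IV of data block b_phys."""
    slice_start = (b_phys // S_P) * S_P
    m = b_phys - slice_start - DELTA_S          # index of the data block within its slice
    return slice_start + (m * IV_SIZE) // BLOCK, (m * IV_SIZE) % BLOCK

class IVCache:
    """Write-back LRU cache of IV blocks: many IV updates coalesce into a single physical write."""
    def __init__(self, bread, bwrite, depth=1024):
        self.bread, self.bwrite, self.depth = bread, bwrite, depth
        self.entries = OrderedDict()            # iv_block_address -> (bytearray, dirty_flag)

    def _load(self, addr):
        if addr not in self.entries:
            if len(self.entries) >= self.depth:             # evict the LRU entry, flushing it if dirty
                old_addr, (old_buf, dirty) = self.entries.popitem(last=False)
                if dirty:
                    self.bwrite(old_addr, bytes(old_buf))
            self.entries[addr] = (bytearray(self.bread(addr)), False)
        self.entries.move_to_end(addr)
        return self.entries[addr][0]

    def store_iv(self, b_phys: int, iv: bytes):
        addr, offset = iv_location(b_phys)
        buf = self._load(addr)
        buf[offset:offset + IV_SIZE] = iv
        self.entries[addr] = (buf, True)        # marked dirty; persisted on eviction or flush

    def flush(self):
        for addr, (buf, dirty) in self.entries.items():
            if dirty:
                self.bwrite(addr, bytes(buf))
                self.entries[addr] = (buf, False)
```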
_Indirection layer._ Consider a read or write operation to a logical block address \(B\) for volume \(V_{i}\). There are three possible cases:
1. The requested operation (read or write) for volume \(i\) happens on a block whose logical address \(B\) was previously allocated. In this case we need to map efficiently \(B\) to the corresponding physical address \(B_{\mathsf{phys}}\).
2. It is a SflcRead operation for a \(B\) which falls within a slice that was not allocated before. We need to specify the behavior in this case.
3. It is a SflcWrite operation for a \(B\) which falls within a slice that was not allocated before. We need to define how to allocate a new physical slice.
Notice that the offset of a block _within_ a slice is left unchanged by the position map. So, we can easily find the LSI \(\sigma\) to which \(B\) belongs as simply \(\sigma:=\lfloor B/S_{L}\rfloor\). Then we need to check whether \(\mathtt{GetSliceMap}(i,\sigma)\) is defined or not. Luckily, as we have seen above, position maps are not too large. So, when decrypting the position maps during device instantiation, we can store in memory a full view of these position maps for each volume as arrays of PSIs indexed by the related LSI, and define a special return symbol \(\bot\) for those LSIs which have not yet been assigned.
Then let's analyse the three cases above one by one. In the first case, we just need to consider the PSI \(\Psi\) just obtained and add a correct offset. This will give us the physical address \(B_{\mathsf{phys}}\) where we will find the data to decrypt. From what we have just discussed, this can be done as \(B_{\mathsf{phys}}:=\Psi\cdot S_{P}+\Delta_{S}+(B\mod S_{L})\).
In the second case, instead, SflcRead should return a default, non-error value, e.g., 0. Not throwing an error in this case is necessary for the semantics of a volume: although it has never been written before, the block logically exists, it is within the logical boundary of the volume, so it would be incorrect to return an error. Notice how we never allocate new slices on SflcRead requests: as we will see in Section 4.3, this is necessary for security, and prevents logical read operations from leaving a trace on the disk.
In the third and last case we need a way to allocate a new slice for volume \(V_{i}\) at a position consistent with \(B\). We do this through a function CreateSliceMap which returns a PSI uniformly at random among those not yet mapped. We will see in the next paragraph how to implement this function.
_Slice allocation._ Slice mappings are created _lazily_, only when the first request for a block belonging to a new, yet-unmapped slice arrives. At this point, we create the mapping for this slice by sampling a physical slice uniformly at random _among the free ones_: this guarantees that no conflicts arise between volumes, and their slices end up randomly interleaved on the disk. To make this lazy sampling possible, we need to implement efficiently a function NewSlice: given as input a volume's identifier \(V_{i}\) and an LSI \(\sigma\), it returns a corresponding PSI \(\Psi\) for the new slice. There are different ways to implement this, here we give a reference description using a permutation of the array representing the slices.
Concretely, we keep an in-memory, per-device array prmslices of PSIs of size \(\lfloor\frac{N}{S_{P}}\rfloor\) (the maximum possible number of PSIs), and a bit-array of the same size
bfld, an _"occupation bitfield"_ telling us which physical slices are occupied (the PSIs are the indexes of the array). Initially, at device instantiation, prmslices is initialized as prmslices\([i]:=i\) and then permuted using the efficient Fisher-Yates algorithm [13]. The bitfield bfld is initialized with all free elements. When a volume is opened, and we discover the physical slices it maps to, we mark them as occupied in bfld. The slice allocation algorithm then simply works by repeatedly taking the next element from the pre-shuffled array of PSIs until the bitfield tells us it is free (at which point we take it and mark it as occupied). A stateful occupation counter octr is kept to facilitate this task, initially set to 1, and then increased up to the first element of prmslices marked as free (i.e., the first octr elements of prmslices are guaranteed to be occupied in bfld). This way, when a new slice is required, it is immediate to advance to the next available one. We also need to update \(V_{i}\)'s position map PosMap\({}_{i}\) before returning (this is the only change that will eventually persist, encrypted, on disk).
The permutation of indexes changes at every device instantiation, so that the mapping is not static. The size of these in-memory supporting data structures (array and bitfield) is \(O\left(\frac{N}{S_{P}}\log\frac{N}{S_{P}}\right)\), as usual. The lazy allocation technique is also what allows us to _overcommit_ the total physical storage space: we can have the sum of the sizes of the logical volumes exceed the total physical storage space, as long as the sum of "actually used" spaces does not. However, it might be useful to intentionally limit this overcommitment (for the opened volumes) to decrease the risk of I/O errors and improve user experience. This can be done using the metadata field in the VMB ciphertexts, as explained in Section 6.5.
```
1: \(\sigma:=\lfloor B/S_{L}\rfloor\)
2: \(\Psi\leftarrow\texttt{GetPosMap}(V_{i},\;\sigma)\)
3: if \(\Psi=\perp\)
4: \(\Psi\leftarrow\texttt{NewSlice}(V_{i},\;\sigma)\)
5: if \(\Psi=\perp\) return Error
6: \(B_{\texttt{phys}}:=\Psi\cdot S_{P}+\Delta_{S}+(B\mod S_{L})\)
7: IV \(\leftarrow\texttt{SampleAndStoreIV}(B_{\texttt{phys}})\)
8: \(c:=\texttt{Encrypt}(d,V_{i}.K;\textsf{IV})\)
9: return bWrite\((B_{\texttt{phys}},\;c)\)
```
**Algorithm 2** SflcWrite\((V_{i},B,d)\)
```
1: while \(\texttt{octr}\leq\lfloor N/S_{P}\rfloor\)
2: \(\Psi:=\texttt{prmslices}[\texttt{octr}]\)
3: \(\texttt{octr}:=\texttt{octr}+1\)
4: if \(\texttt{bfld}[\Psi]=\texttt{free}\)
5: \(\texttt{bfld}[\Psi]:=\texttt{occupied}\)
6: \(\texttt{PosMap}_{i}[\sigma]:=\Psi\)
7: return \(\Psi\)
8: return \(\perp\)
```
**Algorithm 3** NewSlice\((V_{i},\;\sigma)\)
**Volume operations.** In principle, nothing in the scheme inherently prevents us from creating, opening, and closing volumes freely and independently, at any time. However, for real-world operations, we force volumes to be opened in a hierarchical way, by only providing _one_ password (for the most secret volume).
Creating a new volume requires: the index \(i\) of the new volume \(V_{i}\), the chosen password, and the VMB key of volume \(V_{i-1}\) (if \(i>1\)). This way, one can format the header by generating the relevant keys, filling the \(\mathsf{VMK}_{i-1}\) field, and initialising the slice map as empty. No operation is needed on the data section.
To open a volume, only its password is needed in order to decrypt the header, which then allows to load its slice map and to decrypt its slices. Finding the right header for a provided password is done simply by trying every one of them, until the authenticated ciphertext in the related DMB cell decrypts correctly.
Closing a volume mainly modifies the state of the Shufflecake instance in RAM, by removing the relevant volume information (and securely erasing its key). The only required disk operations are the ones needed to persist some possibly-unsynchronised data.
No specific operation is needed to destroy a volume, i.e. to remove it from the disk. It is enough to just forget the password, or to overwrite the header with random bytes: by the PD guarantees, there is no way to then even prove that there was a volume in that slot, let alone to decrypt it.
### Operational Model
In this section, we define the operational model of Shufflecake, to provide a safe _mode of use_ allowing the user to retain both plausible deniability and data integrity. Besides some general constraints, we specify what the user has to do in ordinary working conditions, and how instead they must behave when confronted with the adversary.
#### 4.2.1 Risk of data corruption.
A simple observation shows how a legitimate-looking usage mode of Shufflecake actually entails a high risk of data corruption. If we do not open all \(\ell\) existing volumes, and instead only open the ones we plan to use, we do not load all the slice maps in RAM, which leads to an incorrect reconstruction of the complete device's bitfield bfld of free physical slices. The physical slices belonging to the still-closed volumes will be counted as free, and will therefore possibly be allocated to the open volumes during data write, which would then overwrite their content. This can only be avoided with certainty by always opening _every_ volume, regardless of which ones we are going to use: if the password to a volume is not provided, Shufflecake has no way of detecting its existence. It then follows from the overcommitment of the physical storage space that we risk re-using its physical slices for some other volume.
However, mitigation does not need to be perfect. It could be possible to reduce the risk of corruption when not opening all volumes by using some form of _error-correction_ on the unopened volumes (hence sacrificing some space), and then trying to recover the volume if corruption happens. We discuss this in Section 4.2.
**General constraints.** The first thing to do when initialising a device with Shufflecake is to fill it completely with random bytes. Though long and tedious, this operation is crucial even for single-snapshot security, as we will see in Section 4.3, just like it is for TrueCrypt. The most sensitive data should be placed in a volume of sufficiently high order. We cannot, of course, give precise indications of the form "use at least 3 volumes", or "6 volumes should be safe enough", because, by Kerckhoffs's principle, we assume that the adversary knows about Shufflecake, and in particular reads this document. The volumes of lower order, that will be disclosed to the adversary, should be filled with "mildly incriminating" data, so as to convince the attacker that one had a plausible reason to hide them. We do not specify more precisely what kind of content would be suited to this end, partly for the same reasons as before (the adversary would immediately flag it as decoy content, and ask for more passwords), partly because it heavily depends on the context and on who the adversary concretely is. The decoy volumes must also be otherwise "credible", in particular they must be formatted with realistic file systems, and they must be reasonably up to date. Periodic updates can be delegated to a background daemon or offloaded to the user.
**Home alone.** In normal operating conditions, when not confronted with the adversary, the recommended course of action for the user is to unlock all the volumes present on the device, in order to prevent data corruption as explained before. The design choice of chaining the volumes into a linked list was made precisely to help the user in that regard: this way, the user is able to open all volumes by just providing the password to \(V_{\ell}\). In our implementation, this is actually the mandated semantic of the open operation: the user only provides the password of the _last_ volume they want to open, the previous ones are automatically opened. Other implementations are of course free to ignore the VMK\({}_{i-1}\) field in the volume header, and give more flexibility to the user, if aware of the risks entailed.
**Under interrogation.** When questioned by the adversary and forced to reveal passwords, the user must obviously not surrender more than \(\ell-1\) of them (otherwise there would be nothing left to protect). Although irrelevant for the cryptographic security of the scheme, we stress that, in order for the user's lie to be credible, they must only reveal the decoy passwords under a certain amount of pressure, or after some time has passed. Notice that a responsible and safe use of Shufflecake puts on the user the burden of being able to recall these decoy passwords quickly and reliably, even under distress. This might be hard to get right, given that in daily use the user only ever types the password of the most hidden volume. It is up to the user to define the maximum number of volumes \(\ell\) that makes them comfortable in this task. Shufflecake implementations might include additional features to aid the user in this sense, for example a function to check the password of a decoy volume without actually opening it, or even a puzzle which, with some random probability, prompts the user to also insert the password for a decoy volume when opening a hidden one.
**Safeword.** As we previously discussed, one big operational difference between Shufflecake (but also other solutions like, e.g., HiVE) and TrueCrypt is that with Shufflecake "the adversary does not know when she can stop questioning you", because there is no way to prove that a given password unlocks all the existing volumes on a given device (unless it's the password unlocking the max-th volume). In TrueCrypt, instead, there is either just a regular volume, or a decoy and a hidden volume. This distinction might be important in those scenarios where a user wants to avoid the possibility of looking uncooperative to a certain adversary. In such scenarios, the user might want to have the choice of surrendering all the volumes, and a method to convince the adversary that no other volumes exist, possibly using an additional "full disclosure password" that we will call _safeword_.
One simple way to implement this, even in Shufflecake, is to actually create all max volumes when initializing a device. Clearly, remembering max passwords would be quite cumbersome for the user, so the solution is to only remember \(\ell+1\) passwords instead: those for the \(\ell\) volumes that are actually desired, and the extra one for the last max-th volume, which is going to be the safeword. In fact, the user will never need to open more than \(\ell\) volumes for regular use. As discussed in Section 4.2, this might harm the consistency of the other, unopened volumes, but this is not important: by using the safeword, the user would still be able to convince the adversary that all volumes have been revealed, due to the linkage between them and the ciphertext authentication in the headers. Analogously, in solutions like TrueCrypt, a simple way to implement a safeword is to actually always create a hidden volume, even if only a standard volume is desired.
We stress, however, that using this feature is a dangerous proposition: if such a possibility exists, and users are allowed to use it, then why would they not? The adversary might arguably assume that a user _must_ have a safeword, and pressure for its disclosure. This would put at risk those users who decide not to use this feature, who might then be pushed into adopting it. This, in turn, would ruin plausible deniability for everyone, because now we have a system where everyone has a safeword by default.
We believe there is no simple solution to this dilemma: one has either to accept the risk of looking uncooperative and be subject to further interrogation, or to give up PD at all. We remark that, as far as we know, the issue of a safeword feature (or even just its possibility) for plausibly deniable filesystems has not been addressed in the literature before, as all implementations we are aware of (including WoORAM-based ones) employ some form of architectural hard limit on the number of possible nested levels of secrecy. We believe this to be a serious operational problem for the security of PD solutions. For this reason, not only do we discourage the use of this feature, but we also propose a way to make the implementation of _any_ safeword-like system impossible. This is discussed in Section 6.7, and boils down to the idea of having an unbounded number of possible volumes per device.
### Security
In this section, we prove that Shufflecake achieves single-snapshot security, as defined in Section 2.3.
Theorem 4.3 (Single-snapshot security of Shufflecake): _The Shufflecake scheme as described in Section 4.1 is a single-snapshot (SS) secure PD scheme according to Definition 3._
#### 4.3.1 Assumptions.
In proving Theorem 4.3 we will make some assumptions in order to keep the proof compact and intuitive. We assume w.l.o.g. that all passwords are encoded as bitstrings of length \(\lambda\). Notice how throughout all Section 4.1 we avoided giving concrete security parameters. Although in the real-world instantiation of Shufflecake we are going to have cryptographic primitives with input and output of fixed size (e.g., 128-bit IVs, 256-bit keys, etc), in the context of this proof we can consider them of variable length. This will allow us to produce an asymptotic bound, and to apply it as in Definition 3 in order to prove that the advantage of any (computationally bounded) adversary is indeed negligible in the security parameter. In so doing, we will treat the cryptographic primitives used in the Shufflecake design as _ideal_. More specifically:
* The KDF will be replaced by a _random oracle_\(\mathcal{O}_{K}\), mapping \(\lambda\)-bit passwords and salts to truly-random \(\lambda\)-bit strings.
* The symmetric encryption scheme will be replaced by an _ideal cipher_\(\mathcal{E}\), mapping \(\lambda\)-bit keys and IVs to truly-random permutations over \(\{0,1\}^{\lambda}\).
* The authenticated encryption used in the DMB will be replaced by a pair of oracles: the oracle \(\mathcal{O}_{AE}\), mapping \(\lambda\)-bit keys and IVs to truly-random injections between plaintext and ciphertext spaces, and its inverse which returns a constant \(\bot\) failure symbol if queried outside of the codomain.
All these oracles will be initialised by the game and provided to the adversary \(\mathcal{A}\) to be queried freely (at a unitary time cost).
#### 4.3.2 The proof.
Let us consider Experiment 2 under the constraints explained in Section 2.3, and let \(D\) be the "challenge disk snapshot" provided to the adversary \(\mathcal{A}\) by the game (i.e., either \(D_{0}\) or \(D_{1}\) according to the secret bit \(b\)). For the given \(\mathcal{A}\), we will consider \(\ell\), the decoy passwords \(P_{1},...,P_{\ell-1}\), and the access patterns \(O_{0}\) and \(O_{1}\) as public parameters of the game instance.
Let us first notice that all the oracle queries performed by \(\mathcal{A}\)_before_ receiving the challenge disk snapshot \(D\) cannot change \(\mathcal{A}\)'s advantage, because they are completely uncorrelated with \(D\) (and the secret bit \(b\)). We can, therefore, safely disregard those queries in our analysis.
Then, let us define \(Q\) to be the ordered sequence of all queries \(\{q_{i}\}_{i}\) made by \(\mathcal{A}\) to the oracles _after_ receiving the challenge \(D\), and define \(n:=|Q|\). Also define \(Q_{i}\) to be the sequence of queries \((q_{1},\ldots,q_{i})\) up to query \(q_{i}\). Analogously, let us define \(R\) to be the ordered sequence of all responses \(\{r_{i}\}_{i}\) returned by the
oracles; also define \(R_{i}\) to be the sequence of responses \((r_{1},\ldots,r_{i})\) up to response \(r_{i}\). We consider the execution of \(\mathcal{A}\) as the execution of a sequence of single-query stateful adversaries, where the state is just the 'history' of the previous queries: \(\mathcal{A}_{1}\left(D\right)\to q_{1}\), \(\mathcal{A}_{2}\left(D,Q_{1},R_{1}\right)\to q_{2}\), and so on until \(\mathcal{A}_{n-1}\left(D,Q_{n-1},R_{n-1}\right)\to q_{n}\) and \(\mathcal{A}_{n}\left(D,Q_{n},R_{n}\right)\to b^{\prime}\).
Let \(\mathsf{KEK}_{i}\), \(\mathsf{VMK}_{i}\) and \(\mathsf{VEK}_{i}\) be, respectively, the key-encryption key, the VMB key, and the volume encryption key of volume \(V_{i}\). Rigorously speaking, in the security game, the values \(P_{\ell},\mathsf{KEK}_{\ell},\mathsf{VMK}_{\ell}\), and \(\mathsf{VEK}_{\ell}\) are only sampled if \(b=0\). Let us instead consider them to be sampled anyway, and left unused in the case \(b=1\). Let us denote by \(\mathcal{S}\) the tuple \((P_{\ell},\mathsf{KEK}_{\ell},\mathsf{VMK}_{\ell},\mathsf{VEK}_{\ell})\in\{0, 1\}^{4\lambda}\). Let us define the event \(E_{i}\) as the event that either of \(P_{\ell}\), \(\mathsf{KEK}_{\ell}\), \(\mathsf{VMK}_{\ell}\), or \(\mathsf{VEK}_{\ell}\) appear in query \(q_{i}\) (we say that query \(q_{i}\)_strikes_). Finally, let us define \(E:=E_{1}\cup\ldots\cup E_{n}\) the event that _at least one_ query strikes. We will first prove two lemmata.
Lemma 5: \(\mathsf{Pr}\left[E\right]=\mathsf{Pr}\left[E\,|\,b=0\right]=\mathsf{Pr}\left[E \,|\,b=1\right]=\mathsf{negl}(\lambda)\)_. (The adversary can only guess one of the secrets of \(V_{\ell}\) with negligible probability.)_
Proof: Let us prove that \(\mathcal{S}\) is _statistically independent_ from any query \(q_{i}\in Q\).
This tuple gets sampled uniformly at random from \(\{0,1\}^{4\lambda}\), so its distribution does not depend on the public values. Since the oracles implement _ideal_ cryptographic primitives, it follows that their outputs are statistically independent from their inputs; this does not just hold _marginally_ for single input-output pairs, but _jointly_: any tuple of inputs is statistically independent from the tuple of corresponding outputs (for the ideal cipher \(\mathcal{E}\), this only holds for the _key_ inputs). In particular, \(\mathcal{S}\) is statistically independent from the whole tuple \((D,R)\). Since \(\mathcal{A}_{i}\)'s inputs, namely the tuple \((D,Q_{i-1},R_{i-1})\), are a (randomised) function of \((D,R)\), we deduce that its output, namely \(q_{i}\), must also be independent of \(\mathcal{S}\).
Therefore, since \((P_{\ell},\mathsf{KEK}_{\ell},\mathsf{VMK}_{\ell},\mathsf{VEK}_{\ell})\) are uniform, the probability that \(q_{i}\) contains, say, \(P_{\ell}\), is \(2^{-\lambda}\). Thus, by the union bound, \(\mathsf{Pr}\left[E_{i}\right]\leq 4\cdot 2^{-\lambda}\). Using the union bound again, we get:
\[\mathsf{Pr}\left[E\right]\leq\sum_{i=1}^{n}\mathsf{Pr}\left[E_{i}\right]\leq n \cdot 4\cdot 2^{-\lambda}\]
This expression is clearly \(\mathsf{negl}(\lambda)\), since \(n\) must be at most a polynomial in \(\lambda\). The final claim on the equality of the conditional probabilities follows by simply observing that this reasoning holds irrespective of the value of \(b\).
Lemma 6: \(\mathsf{Pr}\left[\mathcal{A}\left(D\right)\!\rightarrow\!1|b\!=\!0\wedge\bar {E}\right]=\mathsf{Pr}\left[\mathcal{A}\left(D\right)\!\rightarrow\!1|b\!=\!1 \wedge\bar{E}\right]\)_. (Unless she can guess one of \(V_{\ell}\)'s secrets, the adversary has the exact same _view_ in the cases \(b=0\) and \(b=1\).)_
Proof: It is sufficient to prove that \(\mathcal{A}_{n+1}\)'s inputs, namely \(D\), \(Q_{n}\), and \(R_{n}\), follow the same _joint conditional distribution_, conditioned on the event \(\bar{E}\), regardless of whether \(b=0\) or \(b=1\).
We prove this by induction. Using a concise notation, we want to prove that the following quantity does not depend on the bit \(b\):
\[\mathsf{Pr}\left[D,Q_{n},R_{n}\,|\,\bar{E},b\right]= \mathsf{Pr}\left[q_{n},r_{n}\,|\,D,Q_{n-1},R_{n-1},\bar{E},b\right] \cdot\mathsf{Pr}\left[D,Q_{n-1},R_{n-1}\,|\,\bar{E},b\right]\]
Showing that the first factor is independent of \(b\) will prove the inductive step. To this end, let us further rewrite it as:
\[\mathsf{Pr}\left[q_{n}\,|\,D,Q_{n-1},R_{n-1},\bar{E},b\right]\cdot\mathsf{Pr} \left[r_{n}\,|\,q_{n},D,Q_{n-1},R_{n-1},\bar{E},b\right]\]
The first factor is independent of \(b\) because \(q_{n}\) is the output of \(\mathcal{A}_{n}\), which only takes \(D,Q_{n-1},R_{n-1}\) as inputs, all of which are among the conditioning terms already. The second factor is independent of \(b\) because, given that \(\bar{E}\) holds, the oracles behave the same whether \(b=0\) or \(b=1\). This is because striking queries are the only oracle inputs that could trigger responses with unequal distributions (i.e., correlated to \(D\)) in the cases \(b=0\) and \(b=1\). But if, instead, we rule these queries out by conditioning on \(\bar{E}\), then the oracles instantiated when \(b=0\) are perfectly interchangeable with the ones instantiated when \(b=1\).
We are now left to prove the base step for induction, corresponding to \(\mathsf{Pr}\left[D\,|\,\bar{E},b\right]\). Let us rewrite it, using Bayes' theorem, as:
\[\mathsf{Pr}\left[D\,|\,\bar{E},b\right]=\frac{\mathsf{Pr}\left[\bar{E}\,|\,D, b\right]\cdot\mathsf{Pr}\left[D\,|\,b\right]}{\mathsf{Pr}\left[\bar{E}\,|\,b \right]}\]
The term \(\mathsf{Pr}\left[\bar{E}\,|\,b\right]\) is independent of \(b\) by Lemma 5. The same is true of \(\mathsf{Pr}\left[\bar{E}\,|\,D,b\right]\): the same proof applies as for Lemma 5, because the reasoning is unchanged when we condition the probabilities on a particular realisation for \(D\).
We only have to prove that \(\mathsf{Pr}\left[D\,|\,b\right]\) does not depend on the bit \(b\), i.e. that the disk snapshot follows the same _a-priori_ (non-conditioned) distribution, whether \(b=0\) or \(b=1\). To prove this, we will use the actual properties of the Shufflecake scheme. Let us proceed by analysing the disk layout region by region.
The blank spaces (empty DMB cells, unmapped slices, etc.) are filled with equally-distributed uniformly-random noise. The spaces occupied by volume \(V_{\ell}\), when \(b=0\), are filled with oracle responses to queries containing one of \(V_{\ell}\)'s secrets. These responses are freshly sampled random values, which follow the same distribution as the noise filling the same spaces when \(b=1\).
We are only left to prove that the (decrypted) logical contents and metadata of the decoy volumes follow the same distribution when \(b=0\) and when \(b=1\). Indeed, the _logical_ contents of the volumes are fixed and identical in the two cases, determined by \(O_{0}\) and \(O_{1}\) (this is because they have to follow the constraint defined in Section 2.3). Also, the DMB cells and the VMBs contain equally-distributed uniformly-random oracle outputs (keys, etc.).
The last step is to show that the position maps of the decoy volumes follow the same distribution in the two cases \(b=0\) and \(b=1\). By the second constraint on the access pattern, defined in Section 2.3, we get that slice allocation is triggered for the same LSIs of the decoy volumes in both the two cases. Even though some more slice allocations are performed on \(V_{\ell}\)'s LSIs when \(b=0\), this does not impact the resulting observable distribution on the PSIs assigned to decoy LSIs. This is because slice allocation always takes a PSI randomly among the free ones, therefore the order in which the LSIs are mapped can be permuted freely without impacting the distribution. Thus, even in the case \(b=0\), we can equivalently imagine that the decoy LSIs get all mapped before \(V_{\ell}\)'s ones, yielding the same distribution as when \(b=1\). This concludes the proof.
Proof (Proof of Theorem 4.1): We use Lemmata 5 and 6 to prove that the advantage of \(\mathcal{A}\), as defined in Equation 1 of Definition 3, is negligible. By conditioning both terms of Equation 1 on the events \(E\) and \(\bar{E}\), we get:
\[\begin{aligned}&\left|\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!0\right]-\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!1\right]\right|=\\ &=\left|\Pr\left[E\right]\left(\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!0\wedge E\right]-\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!1\wedge E\right]\right)\right.\\ &\quad\left.+\,\Pr\left[\bar{E}\right]\left(\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!0\wedge\bar{E}\right]-\Pr\left[\mathcal{A}\left(D\right)\!\to\!1\,|\,b\!=\!1\wedge\bar{E}\right]\right)\right|\leq\\ &\leq\Pr\left[E\right]\cdot 1+\Pr\left[\bar{E}\right]\cdot 0=\mathsf{negl}(\lambda),\end{aligned}\]
which concludes the proof.
## 5 Implementation and Benchmarks
We implemented the Shufflecake scheme in the C language as an open-source device-mapper-based driver for the Linux kernel. We published our code under the GPLv2+ license. The current release is v0.4.1 [45]. This section describes the programming environment and the structure of our implementation, and presents concrete performance measurements taking other popular disk encryption solutions as a baseline for comparison.
### Structure of the Implementation
Our implementation consists of two components: a dm-sflc kernel module (which does most of the work), and a shufflecake companion userland application (used to correctly manage the volumes). The kernel module is the component that actually implements the scheme, translating logical requests into physical requests, and persisting slice maps into the respective headers.
**Cryptography.** Cryptographic primitives are provided by the Libgcrypt library [19]. We target 128 bits of security. We use Argon2id as a KDF, which was implemented in Libgcrypt recently [18]. We use AES-GCM-256 as an authenticated cipher, and AES-CTR-256 for data encryption, with 128-bit IVs.
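To make the chain of primitives concrete, the following Python sketch shows how a candidate password could be checked against a single DMB cell: an Argon2id-derived KEK decrypts the cell with an authenticated cipher, succeeding only for the matching volume. This is only an illustration using the argon2-cffi and cryptography packages; the cost parameters, nonce size and cell layout are placeholder assumptions, not the values used by the Libgcrypt-based implementation.

```python
from argon2.low_level import hash_secret_raw, Type
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

def derive_kek(password: bytes, salt: bytes) -> bytes:
    # Illustrative Argon2id parameters; the real ones are implementation-defined.
    return hash_secret_raw(secret=password, salt=salt, time_cost=3,
                           memory_cost=64 * 1024, parallelism=4,
                           hash_len=32, type=Type.ID)

def try_open_dmb_cell(password: bytes, salt: bytes, nonce: bytes, cell: bytes):
    """Return the decrypted cell payload (hypothetical layout: VMB key material)
    if the password matches this cell, or None otherwise."""
    kek = derive_kek(password, salt)
    try:
        return AESGCM(kek).decrypt(nonce, cell, None)
    except InvalidTag:
        return None  # authentication failed: wrong password for this cell
```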
**The userland application.** This component is used to manage volume creation, opening, and closing. To this end, it manages the DMB and VMB of each volume header. The \(\mathsf{VEK}_{i}\) is passed to the kernel module to decrypt the slice map and data section of that volume, while \(\mathsf{VMK}_{i-1}\) is used to iteratively open all the less-secret volumes, as described in Section 4.1.
This is offloaded to the userland application because key management is arguably better handled in user space: for example, we need to react to an incorrect password by asking the user to try again, not by emitting a kernel log message. There is also another technical hindrance to delegating everything to the kernel module: state-of-the-art KDFs like Argon2id [5] are not currently implemented in the Linux Kernel Crypto API [31], while they are available in user-space software libraries like Libgcrypt [19].
The other blocks of the volume header, which contain the slice map encrypted with the \(\mathsf{VEK}\), are managed by the kernel module (except at init time, when an empty position map is written by the userland tool).
**Volume operations.** The shufflecake init command takes as input a device path, and then interactively asks the user for a number \(\ell\leq\mathsf{max}\) and \(\ell\) passwords, correctly formats the first \(\ell\) volume headers, and fills the remaining \(\mathsf{max}-\ell\) slots with random bytes; this way, the pre-existing volumes are wiped by erasing their headers (crypto-shredding). Unless a --skip-randfill option is provided (e.g., for testing or debugging purposes), the whole disk is filled with random bytes before formatting the header section. This command only formats the disk: it does not create the Linux virtual devices associated with the volumes.
The shufflecake open command takes a device path as input and asks the user for a single password, then looks up the volume headers, and opens all the volumes starting from the one whose password is provided, backwards up to the first one (walking up the chain using the \(\mathsf{VMK}_{i-1}\) field in the VMB). This is the command that actually creates the Linux virtual devices representing the volumes, under /dev/mapper: the names are generated algorithmically. Notice that these virtual devices are not automatically mounted; it is up to the user to mount them and format them with a filesystem of choice when required.
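The backwards walk over the volume hierarchy can be pictured with the following sketch; the `decrypt_vmb` callable and the field names are hypothetical stand-ins for the on-disk VMB format handled by the real userland tool.

```python
from typing import Callable

def open_chain(start_index: int, start_vmk: bytes, headers: list,
               decrypt_vmb: Callable[[bytes, bytes], dict]) -> dict:
    """Collect the VEK of the volume unlocked by the user's password and of
    every less-secret volume below it, walking the VMK chain backwards.
    `decrypt_vmb` is a hypothetical helper that authenticates and decrypts
    a VMB with the given VMK and returns its fields as a dict."""
    veks: dict = {}
    i, vmk = start_index, start_vmk
    while i >= 0 and vmk is not None:
        vmb = decrypt_vmb(headers[i], vmk)
        veks[i] = vmb["vek"]          # key for volume i's slice map and data section
        vmk = vmb.get("prev_vmk")     # VMK_{i-1}; None once the least-secret volume is reached
        i -= 1
    return veks
```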
Finally, the shufflecake close command takes a device path as input, and closes all the volumes open on that device.
**Additional functionalities.** In addition to standard features such as command-line usage help and printing on screen the current version, our implementation also offers two additional functionalities: a changepwd action, which allows the user to change a volume's password as described in Section 5.1, and a testpwd action, which tests whether a provided password unlocks a certain volume (and which one) without actually opening that volume. This might be helpful for the scrupulous user who wants to regularly recall the passwords to decoy volumes, as suggested in Section 4.2.
### Space Utilisation
A few factors influence the disk and RAM space efficiency of Shufflecake, i.e., what part of the storage contains actual data coming from the upper layer, and what part contains metadata, or is otherwise wasted. Overall, with a sensible choice of the parameters, and with reasonable assumptions about the behaviour of the upper layer, we can attain a very low space overhead.
For our implementation, we fixed the block size to 4096 bytes, so as to better amortise the per-block space overhead determined by the IVs. We chose \(S_{L}=256\), and \(\mathsf{max}=15\). Since we use AES-CTR-256 as the underlying encryption scheme, we need 16-byte IVs. This led to a choice of \(\Delta_{S}=1\): a single 4-KiB IV block (containing 256 IVs) encrypts a 1-MiB slice. To provide a numerical summary of the space utilisation of Shufflecake, we observe that in the case of a 1 TiB disk, the resulting theoretical maximum utilisable space is 1019.91 GiB, equal to more than 99.6% of the physical storage space.
_Headers._ With the above parameters, the total size of a volume header is around \(\frac{N}{S_{P}}\log\frac{N}{S_{P}}\), roughly equal to 4 MiB per volume header, for a 1-TiB disk: about 60 MiB for the total device header size.
_IVs._ As previously discussed, we store IVs on-disk. With the concrete choice of parameters of our implementation, we have 16-byte IVs encrypting 4096-byte blocks (256 times as much); therefore, we only use \(\frac{1}{257}\) (\(<0.4\%\)) of the physical data section to store IVs.
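The overhead figures above can be reproduced, to a good approximation, with a few lines of arithmetic; the sketch below assumes 4-byte position-map entries as a simplification, so the exact result may differ slightly from the 1019.91 GiB quoted earlier depending on the precise header layout.

```python
TiB = 1024 ** 4
BLOCK = 4096          # bytes per block
S_L = 256             # blocks per slice -> 1 MiB logical slices
MAX_VOLS = 15

n_blocks = TiB // BLOCK                 # 2**28 blocks on a 1 TiB device
n_slices = n_blocks // S_L              # 2**20 slices
header_per_vol = n_slices * 4           # assumed ~4 bytes per position-map entry -> ~4 MiB
headers = MAX_VOLS * header_per_vol     # ~60 MiB of volume headers in total
data_section = TiB - headers
iv_overhead = data_section // 257       # one 4 KiB IV block every 256 data blocks
usable = data_section - iv_overhead

print(f"usable ≈ {usable / 1024**3:.2f} GiB ({100 * usable / TiB:.2f}% of the device)")
```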
_Internal fragmentation of slices._ Internal fragmentation is a frequent problem in space allocation, and it is particularly well known and studied in file systems theory. For performance reasons, the block layer only works with the block granularity; the file system, therefore, has to allocate a whole block even if it needs less space to, e.g., host a file. Internal fragmentation is the problem arising from this "over-allocation". On top of this, Shufflecake adds another layer of internal fragmentation through its slice mechanism: when a volume requests a block, we reserve a whole slice of \(S_{L}\) blocks just for that volume. Moreover, we have no means of communicating this over-allocation to the file system layer, which therefore has no way of adapting its behaviour. Thus, we have to hope that a file system does not jump back and forth too wildly, and that it generally tries to fill a group of slices before requesting a new one.
Luckily, some file systems do exhibit this behaviour. For example, the commonly used ext4 file system defines the concept of a _block group_, i.e., a group consisting of 32768 consecutive blocks (which amounts to 128 MiB for 4096-byte blocks). The block allocator of ext4 tries its hardest to keep related files within the same block group; specifically, whenever possible, it stores all _inodes_ of a directory in the same block group as the directory; also, it stores all blocks of a file in the same block group as its inode [27]. This feature plays nicely with the value of \(S_{L}=256\) we chose: a block group encompasses a whole number of slices, which will therefore not be too fragmented in the long run.
_Releasing unused slices._ Our implementation currently does not have a way to reclaim physical slices that were assigned to some volume but are no longer used, when all of the blocks within the corresponding logical slice have been deallocated by the file system. We discuss this in Section 6.6. We note, however, that deallocation of slices can occur more or less frequently depending on the filesystem in use: filesystems with good contiguity will tend to free up consecutive blocks (and hence whole slices) as the data is moved or erased, while filesystems with higher granularity might 'leave blocks around' more often. We cannot release a slice until all the physical blocks therein are freed up. Therefore, the efficiency of any slice-releasing mechanism must be evaluated carefully.
### Benchmarks
We tested our implementation, looking at I/O performance and space efficiency. The test environment was a fresh installation of Ubuntu 23.04 running kernel version 6.2.0 on a laptop equipped with an AMD Ryzen 7 PRO 6850U CPU with Spectre mitigations enabled, 32 GiB 4-channel 6400 MHz DDR5 RAM, and a low-power 1 TiB NVMe Micron MTFDKCD1T0TFK SSD. We tested the amount of slice fragmentation of Shufflecake (v0.4.1), its I/O performance, as well as the I/O performance of two other relevant disk-encryption tools for comparison: dm-crypt/LUKS (v6.2.0-26) and VeraCrypt (v1.25.9). All the tests were performed sequentially, on a physical primary SSD partition of size 8 GB, using the ext4 filesystem (which is the one most relevant for Shufflecake's envisioned final use case). In the case of Shufflecake, we initialised the partition with two volumes (one decoy and one hidden), and performed all tests on the hidden one. Analogously, in the case of VeraCrypt, we formatted the partition as a standard no-FS VeraCrypt volume, and created a 6.5 GB ext4 volume therein. In order to aid reproducibility, we also included in our implementation a suite of benchmark scripts performing the tests described here.
#### 5.3.1 Fragmentation
In order to evaluate the fragmentation caused by Shufflecake's allocation of slices, we filled the ext4 filesystem with incrementally larger amounts of random files and directories until the space was saturated, and at every step we measured the space taken up by the increasing number of slices allocated by dm-sflc for the hidden volume versus the total amount of data written therein. We define _space efficiency_ as the ratio between real data written on disk and slice-allocated space (0 = bad, 1 = good).
The results are shown in Figure 7. As we can see, even when the disk is initially empty, some slices are immediately allocated for the ext4 journal and metadata. However, as data is written on disk, the effect of fragmentation quickly disappears: already at 10% of data capacity the space efficiency is above 90%, and at 25% of data written it reaches 95%. We conclude that the slicing algorithm of Shufflecake behaves very well in our simulated random usage pattern, at least with the ext4 filesystem, and slice fragmentation can be considered negligible.
**I/O and bandwidth.** For testing the I/O performance of Shufflecake against dm-crypt/LUKS and VeraCrypt, we used the fio benchmarking tool, which can flexibly measure various metrics. For each of the three disk encryption tools, we performed both random and sequential read/write operations with large amounts of data on the filesystem. We fixed the same parameters for all tests, such as a queue of 32 operations and a block size of 4 KiB, which are commonly recommended to evaluate real-world performance of disks. Under these conditions, we found no observable difference in metrics between IOPS (I/O operations per second) and bandwidth (expressed in MB/s), hence we report the results looking at the bandwidth only.
The results are shown in Table 1. As we can see, Shufflecake incurs an I/O slowdown of roughly 30% compared to the other tested tools. We believe this overhead to be acceptable in daily use.
**Comparison with WoORAMs.** The comparison with some popular ORAM-based PD solutions more convincingly shows the real-world efficiency advantages offered by Shufflecake. Of course it has to be stressed that these ORAM-based solutions aim at achieving PD in a stricter scenario than the single-snapshot security offered by the current version of Shufflecake. We have just seen how Shufflecake achieves a slowdown of roughly 30% I/O throughput over dm-crypt and uses almost all space available. On the other hand, HiVE [6] has a heavy 200x I/O overhead and wastes 50% of the disk space. DetWoOram [39] has an overhead of 2.5x for reads and 10x-14x for writes, and wastes 75% of space.
| | Shufflecake | dm-crypt/LUKS | VeraCrypt |
|---|---|---|---|
| random write | 26.77 | 38.43 | 39.07 |
| random read | 26.78 | 38.44 | 39.09 |
| sequential write | 176.87 | 247.14 | 247.75 |
| sequential read | 177.10 | 247.43 | 248.04 |

Table 1: I/O performance (in MB/s) of Shufflecake, dm-crypt/LUKS, and VeraCrypt (higher = better).
Figure 7: Shufflecake space efficiency as the ext4 filesystem fills up.
## 6 Conclusions and Future Directions
We have seen how Shufflecake represents a usable PD scheme with many operational advantages over solutions like TrueCrypt. We released it as an open source tool in the hope of building trust and adoption in the community, and possibly encouraging contribution to future work. In fact, many possibilities for further improvement exist. We are going to mention some in this section.
### Crash Consistency
As it stands, the main obstacle to reaching maturity and adoption for daily use is that Shufflecake is not crash-consistent. This means that if the program crashes during operation with one or more open volumes, data corruption is possible, because some volume state changes happen in RAM and are cached for some time before being written to disk. This is a problem for Shufflecake because, if a crash occurs between the write of encrypted data to disk and the write of its associated IV, that encrypted data becomes unrecoverable. The situation is different for solutions like LUKS or TrueCrypt, which use the XTS mode of operation and are therefore immune to this problem, because they do not use explicit random per-block IVs. To fix this while keeping the property of block re-randomisation, we should make the individual logical write requests atomic, i.e. we should mask the fact that they map to several physical requests (which need to be assumed atomic): if a crash happens at any point between two of these physical requests, the old content of the logical block should still be recoverable, and the disk should not be left in a "limbo" state that does not correspond to any of the logical contents written by the upper layer. We discuss here some ideas for future improvements to address these concerns.
Shufflecake incurs crash-inconsistencies when a crash happens in the time window between the update of a data block and the update of the corresponding IV block. As was discussed, Shufflecake adopts a write-on-flush approach for the IV cache, whereas data blocks are immediately written to disk, encrypted with the new IV (which is not immediately persisted on-disk); therefore, the disk is in a "vulnerable" (inconsistency-prone) state whenever the upper layer has written to a file and has not yet synced it: this is, reasonably, a large fraction of the total operating time. To solve this, it would be necessary to make the IV cache write-through; the performance impact of such a choice has not been evaluated but it would probably be heavy. It would not be sufficient, anyway, because it would only reduce the "vulnerability window" between the update of a data block and that of the IV block; it would not eliminate it.
The final solution would be to also duplicate each IV block into a circular log of length 2: the update of the IV block synchronously precedes the update of the data block, and overwrites the older of the two versions; this way, if the crash happens right afterwards, and the data block is not updated, it is still decryptable because the corresponding IV block has not been touched. Disambiguation (i.e., deciding which of the two versions of the IV to use) would be based on an additional MAC on the data block (stored alongside the IV); this would only be
needed when the block is read for the first time since volume opening: afterwards, the state can be kept in RAM (it is just one bit for each slice).
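As an illustration of the disambiguation step, the sketch below picks, out of the two logged IV versions, the one whose MAC verifies against the data block actually found on disk; the use of HMAC-SHA256 and the exact MAC input are assumptions made for the example only.

```python
import hmac, hashlib

def pick_current_iv(data_block: bytes, iv_log: list, macs: list, mac_key: bytes):
    """Return the IV (from the length-2 circular log) that is consistent with
    the data block on disk, or None if neither verifies (sketch only)."""
    for iv, tag in zip(iv_log, macs):
        expected = hmac.new(mac_key, iv + data_block, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            return iv      # this IV was the one used for the block as written
    return None            # no consistent pair: should not happen if writes are ordered
```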
An alternative solution, which wastes more disk space but we believe to be overall better, would be to store the IV alongside the data block itself, so that the two updates can be merged into a single physical request. We believe that, in addition to mitigating the issue of crash inconsistency, this approach would probably lead to better I/O performance, as it would not need separate operations for writing IV blocks during data writes. The minimum addressable unit of disk storage space, at least on Linux systems, is usually the 512-byte sector. Therefore, the least wasteful option is to map a logical 4096-byte block (8 sectors, as was already the case for our implementation) onto 9 consecutive physical sectors: the first one contains the IV, the other ones constitute the data block. This would lead to a waste of disk space (fraction of disk not used for upper-layer data) equal to \(\frac{1}{9}\approx 11.1\%\). Since an IV only occupies 16 bytes, much of the first sector would be left unused; we could use the rest of the free space to also contain additional useful data, for example a MAC to detect IV alterations, and a "reverse map", indicating which logical block \(B\) of which volume \(V_{i}\) that physical block corresponds to (this information would of course be encrypted). Additionally, we could build an even more fault-resilient system by again having a circular log of two IVs in the first sector (each accompanied by the corresponding MAC), plus 16 MACs, one for each IV and for each of the 8 data sectors. This would allow us to disambiguate with the sector granularity, in case the underlying disk can ensure atomicity of the sector writes, but not of the physical requests writing on several adjacent sectors.
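A minimal sketch of the "IV alongside data" layout follows; it only maps logical block numbers to the corresponding physical sectors and makes the 1/9 overhead explicit, with the metadata content of the first sector left abstract.

```python
SECTOR = 512            # bytes per physical sector
SECTORS_PER_BLOCK = 9   # 1 metadata sector (IV, MACs, reverse map) + 8 data sectors

def physical_sectors(logical_block: int):
    """Map a 4096-byte logical block to (metadata_sector, data_sector_range)."""
    base = logical_block * SECTORS_PER_BLOCK
    return base, range(base + 1, base + SECTORS_PER_BLOCK)

# Fraction of the device not available for upper-layer data under this layout:
overhead = 1 / SECTORS_PER_BLOCK   # = 0.111..., i.e. about 11.1 %
```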
### Multi-Snapshot Security
The way it has been presented so far, Shufflecake is completely vulnerable to multi-snapshot attacks in the exact same way as TrueCrypt (and its successor, VeraCrypt): when the adversary sees "empty" slices change across snapshots, the only possible explanation is that there still is a hidden volume whose password has not been provided. We have already argued in Section 1.2 why, in practice, this level of security might be enough in most cases, and also why we believe current WoORAM-based solutions base their promise of stronger security on somewhat hard to justify assumptions. We already know [10] that achieving "complete" multi-snapshot security requires the use of WoORAMs, which have serious performance drawbacks. Regardless, as mentioned in Section 4, we designed Shufflecake with the idea of being able to add features that might help to reach an unproven, "operational" level of multi-snapshot security. In this section, we explore the possibility to achieve this goal through some separate, "orthogonal" pattern obfuscation procedure that operates independently of the main scheme and does not interfere with its single-snapshot security. We present three high-level ideas to accomplish this. They are not currently implemented, nor are they precisely specified from a conceptual point of view; instead, they are left as pointers for future research, since this area is greatly under-studied.
**Security Notion Revisited.** Let us first clarify what we mean by "operational" multi-snapshot security. The rationale is the hope that even though the distinguishing advantage of the adversary in the multi-snapshot game is not negligible (and so the scheme is not cryptographically secure in the strict sense), it is still low enough for an investigation to be inconclusive against an adversary in practice. In other words, we argue that _legal_ or anyway _operational_ security, in this context, is not the same as _cryptographic_ security in the theoretical sense.
Physical slices have known boundaries, so the adversary can compare the disk snapshots she has on a slice basis. This amounts to inspecting the subsequent changes, or _diffs_, each physical slice goes through across snapshots. Recall that a physical slice is essentially an array of \(S_{P}\) blocks; when one of these blocks changes, it can either be because of the re-encryption of a data block (with a different IV, and possibly a different content), or because of the re-randomisation of an empty block: the IND-CPA security of the encryption scheme guarantees the indistinguishability of these two situations. This means that no hint on the nature of a block (i.e., whether it is a data or an empty block) is leaked by its encrypted content or by the history of its encrypted contents. Therefore, when comparing two snapshots of a physical slice, the only information that an adversary gets is _which_ of the \(S_{P}\) blocks have changed: _how_ they have changed is completely inconsequential and uninformative. In other words, the diff between two snapshots of a physical slice boils down to a bitmask of \(S_{P}\) bits, representing which blocks have changed and which remained the same.
The task of the adversary then becomes to distinguish between "data slices" (i.e., belonging to some volume) and "free slices" (not mapped to any volume) based on a sequence of such diffs for each slice. The point is that, after unlocking the first \(\ell-1\) volumes, the rest of the space may or may not contain another volume (there may or may not be some more data slices among the free slices): if we want the adversary to be incapable of distinguishing between the two cases, we need the diffs of data slices and free slices to look the same. Once we frame the problem in this way, we can rephrase the weakness of Shufflecake as follows: the diff bitmask of a free slice is always all-zeros, and as such is clearly distinguishable from that of a data slice, which might have some bits set to 1.
Our task then becomes to "obfuscate" the changes that occurred in the data slices (especially those belonging to \(V_{\ell}\)) by artificially creating a non-zero diff bitmask in the free slices, through a re-randomisation of some selected blocks. This way, a user will hopefully be able to claim that all the changes happened to the "empty" blocks are due to this obfuscation procedure, and not to the existence of \(V_{\ell}\). Nothing prevents us, of course, from also touching the data slices during the obfuscation procedure, if that helps making the diff bitmasks look more alike. It is to be noted, however, that if a block was modified by the upper layer during the normal operational phase, the corresponding bit will be set to 1 in the diff bitmask and there is no way for the obfuscation procedure to "undo" that change: when we touch a data slice, we cannot turn the 1s of the diff bitmask into 0s. Instead, we can turn some 0s into 1s by simply re-encrypting the same content of a block with a different IV.
**Trivial Random Refresh.** A very simple first idea for an obfuscation procedure is to take all physical blocks belonging to free slices, and re-randomise them all, independently at random, each with probability \(p\). This operation could be either performed upon volume close or, for better resilience, spread across the normal operations. It is the easiest way to achieve a non-zero diff bitmask for free slices, but it is definitely too crude to work: nothing guarantees that the diff bitmasks of data slices will "look random" like the ones artificially generated for free slices. Also, it might very well be the case that many data slices do not change across two snapshots, in which case it becomes very easy to tell them apart from free slices, which often have a non-zero diff bitmask.
Such consideration suggests a refinement: besides re-randomising some blocks in the free slices, we could re-encrypt (with a different IV but same plaintext) some blocks in the data slices of existing volumes, again independently at random, each with probability \(q\). This way, we also randomise the diff bitmasks of the data slices, making them more similar to those generated for free slices.
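A toy version of this refinement is sketched below; the probabilities and the two helpers are placeholders, the point being only that free blocks get fresh noise with probability p while data blocks get a fresh IV (same plaintext) with probability q.

```python
import random
from typing import Callable, Iterable

def trivial_refresh(free_blocks: Iterable[int], data_blocks: Iterable[int],
                    rerandomise: Callable[[int], None],
                    reencrypt_in_place: Callable[[int], None],
                    p: float = 0.05, q: float = 0.05) -> None:
    """Illustrative 'trivial random refresh' pass: free blocks receive fresh
    random noise with probability p, data blocks are re-encrypted (same
    plaintext, fresh IV) with probability q. Both callbacks are hypothetical."""
    for blk in free_blocks:
        if random.random() < p:
            rerandomise(blk)
    for blk in data_blocks:
        if random.random() < q:
            reencrypt_in_place(blk)
```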
_Insecurity with many snapshots._ The procedure we just illustrated could well succeed, for a suitable choice of \(p\) and \(q\), in rendering the diff bitmasks of data slices and free slices roughly indistinguishable, but only if the adversary gets just _one_ diff for each slice, i.e., if she only gets two snapshots. This is because we are essentially playing a hopeless game: roughly speaking, we are aiming at making signal+noise (the diffs of a data slice) indistinguishable from noise alone (the diff of a free slice). As discussed, we cannot turn the 1s of the diff bitmasks of data slices into 0s: the "signal" given by which blocks were modified by the upper layer stays there, we can only hope to bury it in enough noise by turning some 0s into 1s through re-encryption. However, with enough snapshots, the signal will eventually emerge. Imagine, for instance, that there is one particular block in a data slice that is very often modified (maybe it contains some sort of file system index): the corresponding bit in the diff bitmask will often be set to 1, which would be hard to justify through the obfuscation procedure, which only hits one given block with probability \(p\) each time.
**Subsampling.** The previous discussion teaches us a valuable lesson: in our setting, we cannot hope to disguise the accesses performed by the upper layer as random noise, as is commonly the case for ORAMs. The only option we have left is to take the opposite approach: let us make the diffs of free slices look like they were _also_ generated by some file system workload. This way, we are still making the diffs for the two kinds of slices similar, but we are not trying to erase the signal from existing data slices, which we cannot. Instead, we "copy this signal" onto the free slices, so that the adversary will _always_ see this signal, even if the last \(\ell\)-th volume has been surrendered. A tedious, convoluted, and yet very imprecise way of doing it would be for us to sit down, study the access patterns resulting from typical file system workloads, model them as a probability distribution, and hardcode that into the obfuscation procedure. Probably, a better idea would be to use an ML approach and let a daemon run in the background to adaptively learn and simulate such a distribution.
Instead, a simpler method that is likely to capture the patterns we want to imitate is to have the scheme itself "learn" them online, by simply subsampling the stream of incoming logical requests. More concretely: for each incoming logical SflcWrite request, we "imitate" it with probability \(p\). Imitating a request means "learning" that the affected logical block \(b\) is likely to be written by file system workloads, and copy this signal onto a free slice: we retain the offset (\(b\mod S_{L}\)) of the block within the slice, we choose a "target" free slice, and we re-randomise the block with the same offset within the target slice. This approach guarantees that, if a particular block is often updated by the upper layer, then we are very likely to catch this signal and correctly carry it over to a free slice. Note, however, that we need to choose the target free slice deterministically from some context information. Only in this way can we replicate the signal consistently across snapshots, always onto the same free slice; also, this allows us to correctly capture and copy a signal in case it consists of not just one, but several blocks in a slice being frequently updated.
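The following sketch captures the subsampling idea under some simplifying assumptions: writes are imitated with a fixed probability, the target free slice is chosen by a keyed hash of the (volume, logical slice) pair so that the same source always hits the same target, and the re-randomisation of the target block is left to a hypothetical callback.

```python
import random, hashlib
from typing import Callable, Sequence

S_L = 256           # blocks per slice, as in the implementation
P_IMITATE = 0.05    # illustrative imitation probability

def maybe_imitate(volume: int, lsi: int, offset: int,
                  free_slices: Sequence[int], secret: bytes,
                  rerandomise_block: Callable[[int, int], None]) -> None:
    """On an incoming SflcWrite to logical slice `lsi` of `volume` at block
    `offset` (0 <= offset < S_L), re-randomise the block at the same offset
    of a deterministically chosen free slice, with probability P_IMITATE."""
    if not free_slices or random.random() >= P_IMITATE:
        return
    ctx = volume.to_bytes(2, "little") + lsi.to_bytes(8, "little")
    h = hashlib.blake2b(ctx, key=secret, digest_size=8).digest()
    target = free_slices[int.from_bytes(h, "little") % len(free_slices)]
    rerandomise_block(target, offset)   # hypothetical: overwrite with fresh noise
```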
_Counting attacks._ If we assume that the obfuscation procedure just described really succeeds in making the empty space look like it is occupied, we are still left with one problem. The special blocks that are often updated by the upper layer leave a very clear trace in the snapshot history, since their bits in the diff bitmasks are almost always set to \(1\). If we have \(\ell\) volumes, the obfuscation procedure will generate \(\ell\) such clear traces in the free space. Therefore, a simple attack would consist in counting these traces, and checking whether there are as many of them as there are disclosed volumes. To thwart this attack, we can rework the obfuscation procedure in such a way that the device's data section always looks like it's hosting max volumes. We can ideally assign the free slices to \(\texttt{max}-\ell\) pairwise-disjoint sets, each one representing a "fake" volume, and have the obfuscation procedure be aware of this partitioning when choosing the target free slice, so as to really simulate \(\texttt{max}-\ell\) volumes with the imitated logical SflcWrite requests.
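One simple way to make the partitioning concrete is to assign every free physical slice to one of the max − ℓ simulated volumes with a keyed hash, as in the sketch below; the keyed-hash choice is an assumption made for illustration, and any deterministic secret-dependent assignment would do.

```python
import hashlib

def fake_volume_of(psi: int, num_fake: int, secret: bytes) -> int:
    """Deterministically assign a free physical slice (by PSI) to one of the
    num_fake simulated volumes used by the obfuscation procedure."""
    h = hashlib.blake2b(psi.to_bytes(8, "little"), key=secret, digest_size=8)
    return int.from_bytes(h.digest(), "little") % num_fake
```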
**Ghost File System.** The obfuscation procedure described above may already offer good protection, although it might be non-trivial to translate the rough idea of "being aware of the partitioning into \(\texttt{max}-\ell\) fake volumes" into a concrete algorithm for choosing a target free slice when we imitate a SflcWrite request. A valid, if "exotic", alternative, would be to actually create \(\texttt{max}-\ell\) additional "ghost" Shufflecake volumes on the device, that behave in the exact same way as regular volumes, except that they do give up their slices when needed. On these volumes, a separate component (a daemon) could mount a file system and perform some typical sequences of accesses. The advantage of this solution is that, by definition, it will always look like there are max volumes on the device. However, when slices are reclaimed from ghost volumes, their file systems might suddenly get corrupted and start complaining quite loudly, thus impairing the practical usability of the system. A daemon operating on these ghost filesystems might therefore need to be aware of the new slice allocation requests from "real" volumes, and move or remap the ghost slices accordingly at runtime.
### Shufflecake "Lite"
As we have seen in Section 6.1, crash inconsistency is a serious problem of the current embodiment of Shufflecake, caused by the use of the CTR mode of encryption which, in turn, is necessary to achieve block re-randomisation. And we have seen in Section 6.2 how this block re-randomisation is an essential ingredient for achieving some form of multi-snapshot security. However, currently no multi-snapshot security measure is implemented in Shufflecake, or even formally defined. We have already seen in Section 5.3 how the current CTR design brings some (minimal) performance hit, and how this performance hit will likely be exacerbated both by the discussed mechanisms for achieving crash consistency and by the proposed ideas for multi-snapshot security. Furthermore, we have argued in Section 1.2 how single-snapshot security might already offer a good enough security margin in many scenarios.
This brings us to the following idea: proposing a _"lite"_ mode of Shufflecake which sacrifices the feature of block re-randomisation (with all its pros and cons) and employs the XTS mode of operation instead of CTR for encryption, just like TrueCrypt. This would give up every hope of achieving any form of security better than single-snapshot, but it would bring the following advantages:
* It would avoid the need of writing IVs on disk, therefore avoiding completely the (minimal) space waste and I/O slowdown measured in Section 5.3.
* It would be _natively_ crash-consistent, so all the countermeasures discussed in Section 6.1 would be unnecessary.
* Compared to TrueCrypt/VeraCrypt, it would still offer huge operational advantages: providing "real" plausible deniability by offering many nested layers of secret volumes, unlocking a whole hierarchy of volumes with a single password, and being filesystem-agnostic.
The idea would be eventually to provide both modes for Shufflecake, "lite" and "full" (the latter including both crash consistency countermeasures and multi-snapshot capabilities), and let the user choose which one is desired during the init operation. In practice, one of the two would be the default choice and the other one could be selected as optional, but deciding which of the two shall be the default and which one optional will require a careful security and usability study. We leave this decision for future work, after proper consultation with representatives of the envisioned final user demographics.
### Corruption Resistance
As discussed in Section 6.4, writing data to decoy volumes while not all the hidden volumes are open entails a high risk of corrupting the hidden volumes. There cannot be perfect mitigation to this problem, save for frequent back-ups. However, it could be possible to reduce this risk of corruption by using some form of _error-correction_ on the unopened volumes (hence sacrificing some space), and then trying to recover the volume if corruption happens. We successfully tested this idea using RAID [33], namely by partitioning a Shufflecake hidden
volume into different equal-sized logical partitions, and using them to assemble a RAID device. Other methods might be more suitable, such as _alpha entanglement codes_ [14], but the support should probably be baked into Shufflecake itself in order to be really robust.
### Use of Disk Metadata
We have seen in Section 4.1 how there is space for embedding volume-specific metadata in each volume's VMB. Here we discuss a couple of useful ideas on how to employ this extra space.
One option could be to embed a string specifying a user-defined name for the volume. Currently, our implementation assigns volume names procedurally in order to avoid name collisions, but a user might prefer to assign these names statically or with mnemonic IDs, to facilitate scripting etc.
Likewise, one could embed a string specifying a desired _mountpoint_ for that volume. Notice in fact that, given the PD requirements, one cannot let these volumes be assigned at static mountpoints in a regular Linux way, e.g. using fstab or crypttab. Rather, the desired mountpoint should be hidden within the context of the volume itself. Then, if the implementation supports it, once the volume is opened, it can also be automatically mounted at a given position.
Another idea could be to embed _virtual quotas_, in order to artificially limit the maximum available size of decoy volumes. As it is now, Shufflecake performs maximum overcommitment on the visible available space of all volumes: each of them will appear as large as the underlying device. This can cause issues if the user (or the OS) mistakenly assumes that that space is actually available, and starts writing too much data on the volumes. In order to mitigate this, metadata could be used to limit the size of the block device seen by the OS. Importantly, this must only hold for decoy volumes, because overcommitment is substantially what allows for PD. Therefore, the correct way to implement this is: _each volume's VMB should specify a virtual quota for the volume below itself in the secrecy hierarchy, but not for itself._ When an \(\ell\)-th volume is opened, the virtual quota for all less secret volumes in the hierarchy will be recursively read this way (the lowest volume, e.g. volume 1, will not have this assigned metadata, or anyway it will be ignored). Then, the \(\ell\)-th volume could be assigned a virtual size equal to the maximum available size on the device, minus the _sum_ of the virtual quotas of all other volumes. This way, it will always be impossible to accidentally write too much data on the volume hierarchy, but an adversary will always see the most secret unlocked volume as _the_ last one present.
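A sketch of the resulting size computation is given below; the quota dictionary stands in for the per-volume metadata read while walking the hierarchy, and the field layout is hypothetical.

```python
def visible_size(device_size: int, quotas: dict, opened: int) -> int:
    """Virtual size to advertise for the most secret opened volume `opened`:
    the whole device minus the virtual quotas recorded for all other volumes
    in the hierarchy (quotas maps volume index -> bytes)."""
    reserved = sum(q for vol, q in quotas.items() if vol != opened)
    return device_size - reserved
```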
It might also be possible, although probably overkill and exceedingly complex to implement, to have different security or redundancy policies assigned _per-volume_ rather than _per-device_, and use the metadata to disambiguate them. For example, it could in theory be possible to have different features in terms of crash consistency (as discussed in Section 6.1), security (Section 6.2) or corruption resistance (Section 6.4) assigned to different volumes within the same hierarchy.
Finally, all this metadata could be embedded in raw text, or a more robust and machine-friendly encoding such as JSON could be used.
### Reclaiming Unused Slices
Currently, our implementation of Shufflecake does not have a mechanism for reclaiming slices that are no longer used: once a slice is allocated for a certain volume, it will always belong to that volume, even if the volume's filesystem is emptied of all data. It would be desirable to implement an operation to reassign empty slices to the pool of free available ones, in order to make space allocation across volumes more efficient and to limit the risk of overcommitment.
Clearly, we need some sort of hint from the upper layer in order to trigger this operation. To that effect, we need to intercept the trim requests emitted by the file system. These commands are effectively a third instruction accepted by hard disks, besides read and write; they serve as a way for the file system to indicate to the disk that some sectors no longer contain user data, and so the internal disk controller can avoid copying them over when reshuffling its own internal indirection layer [17]. These commands are also vital for the efficiency of disk virtualisation systems, such as Shufflecake, that overcommit the total underlying space and thus need to exploit every occasion to optimise the resource allocation.
Once we have this mechanism in place, we can design a way to reassign a freshly freed-up physical slice to the pool of available ones at a random position, so that the function NewSlice (Algorithm 3) will return it with uniform probability when a new slice is required. More concretely: suppose that Shufflecake intercepts an OS signal telling us that a logical slice for volume \(V_{i}\) at LSI \(\sigma\) is now free. We define a function ReclaimSlice (Algorithm 4) which operates on the same structures used by NewSlice, and also on the position map of the volume in question. This function clears the occupation bitfield of the reclaimed slice and the entry in the position map, then moves the PSI to a random position of prmslices (after octr) by doing another Fisher-Yates iteration. We use a subfunction ReverseShuffle which, given as input a PSI, returns the index of prmslices where this PSI is found, or an error if not present. The way to implement this subfunction can vary, e.g. by keeping an in-memory reverse map of prmslices, or by doing a linear search every time.
```
1: \(\Psi\leftarrow\texttt{GetPosMap}(V_{i},\;\sigma)\)
2: \(\texttt{bfld}[\Psi]:=\texttt{free}\)
3: \(\texttt{PosMap}_{i}[\sigma]:=\bot\)
4: \(k\leftarrow\texttt{ReverseShuffle}(\Psi)\)
5: if \(k>\texttt{octr}\) then: return \(\triangleright\) No need to reshuffle in this case.
6: swap(prmslices[\(k\)], prmslices[octr])
7: \(j\xleftarrow{\$}[\texttt{octr},\texttt{MaxSlices}]\)
8: swap(prmslices[\(j\)], prmslices[octr])
9: if bfld[prmslices[octr]] = free then: octr := octr - 1
10: return
```
**Algorithm 4** ReclaimSlice\((V_{i},\sigma)\)
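Regarding the implementation of ReverseShuffle, the in-memory reverse map mentioned above can be kept in sync with prmslices at swap time, making the lookup constant-time; the sketch below is a minimal illustration of that design choice.

```python
class SlicePermutation:
    """Minimal sketch of prmslices plus a reverse map, so that
    ReverseShuffle is an O(1) dictionary lookup instead of a linear scan."""

    def __init__(self, max_slices: int):
        self.prmslices = list(range(max_slices))
        self.position = {psi: k for k, psi in enumerate(self.prmslices)}

    def swap(self, a: int, b: int) -> None:
        pa, pb = self.prmslices[a], self.prmslices[b]
        self.prmslices[a], self.prmslices[b] = pb, pa
        self.position[pa], self.position[pb] = b, a

    def reverse_shuffle(self, psi: int) -> int:
        return self.position[psi]   # index of this PSI inside prmslices
```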
### Unbounded Number of Volumes
Shufflecake assumes a number max of possible volumes that can be provided by any device. Even though this limit can be chosen freely by implementations, it would be desirable to have a way of creating unlimited numbers of volumes (subject to space availability) per device. This would not only remove an artificial limitation on the scheme, but would also strengthen its operational security by making any kind of safeword-like technique (as discussed in Section 4.2) impossible.
Clearly, for this to be achievable, volume headers cannot be adjacent and packed at the beginning of the disk. One idea for further investigation would be to embed every header (except the first, 'less secret' one, which is still going to be at the beginning) at random positions within the device, and having them linked by the previous volume header through an ad-hoc pointer field which is _always_ present, and indistinguishable from random without the correct password. Traversing this list of linked headers, however, presents some challenges. In particular, when the user provides one password on volume instantiation, how do we know whether the password is wrong? And, if not, how do we reach the right header unlocked by that password? It might be possible to devise some complex linking scheme for an arbitrary number of "bogus" headers on the device, but in any case the following limitations would apply.
First, bogus headers cannot "reserve" an area of the disk, otherwise we would waste too much space. They can be placed at any position during device initialization, but when a new slice allocation falls on their position, that space should be released. One can think of different ways to handle this, for example by dynamically moving out bogus headers to another free position if they are about to be overwritten, or simply accepting the risk of breaking the list at some random point during use (as this would not impact consistency for the "real" volumes, and it would still be enough to justify the impossibility of an a-priori generated safeword). In any case, except for this difference in reserving allocation, "bogus" and "real" headers should either be treated equally, or extra precautions should be adopted to maintain PD.
Second, once the user inserts a password to instantiate a device, we might not be able to tell anymore whether the password unlocks something or it's wrong (e.g., a typo). Instead, depending on the chosen solution, the program might continue to traverse the linked list in search of something to decrypt with that password, until either it finds the right header, or the list is broken (e.g., by a bogus header which was overwritten), in which case we can say the password was wrong. We might even envision that the user should expect to manually terminate the program in case a provided password does not succeed after some time, because any hardcoded timeout in the implementation could nullify this feature by inserting a de-facto artificial limit to the number of possible headers.
A possible way to implement this idea could be the following:
1. Coalesce DMB and VMBs into unified, per-volume headers.
2. Each header is one slice large.
3. Every header also contains a field with a random value nxtptr.
4. Except for the first one, headers are found at random disk positions that are a (public) function of the previous header's nxtptr.
5. During init, in case there is a collision during a nxtptr generation over the location of another pre-existing header, the currently generated nxtptr value is discarded and sampled again, until a suitable one is found by brute-forcing (since the total header size is supposed to be negligible in comparison to the device size, this should be very efficient); see the sketch after this list.
6. All the headers are functionally equivalent and contain the same fields.
7. The first header contains a value that is the KDF's salt, while the same field in other headers is either left unused, or used to re-salt the password-derived key for every header using a (fast) hash function.
8. Shufflecake would allocate slices for the volumes in the usual way, just considering the slices at header locations as permanently occupied.
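A sketch of the init-time placement logic is given below; the hash-based position function, the 32-byte nxtptr size and the slice-level granularity are assumptions made purely for illustration.

```python
import hashlib, secrets

def header_slot(nxtptr: bytes, n_slices: int) -> int:
    # Public function mapping a header's nxtptr to the position of the next header.
    return int.from_bytes(hashlib.sha256(nxtptr).digest(), "little") % n_slices

def place_headers(n_volumes: int, n_slices: int):
    """Place header 0 at the beginning, then derive each further header's slot
    from a freshly sampled nxtptr, resampling on collision with earlier headers."""
    slots, nxtptrs = [0], []
    for _ in range(1, n_volumes):
        while True:
            nxtptr = secrets.token_bytes(32)
            slot = header_slot(nxtptr, n_slices)
            if slot not in slots:
                break
        nxtptrs.append(nxtptr)   # would be stored (encrypted) in the previous header
        slots.append(slot)
    return slots, nxtptrs
```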
The above idea might very well work, but it remains to specify an efficient way to also embed the position maps in this manner, as they might be larger than one slice. Many options are open to evaluation here, from linked lists starting at the header, to multiple branching pointers.
There might be other good ways to implement the possibility of having a virtually unlimited number of volumes, we leave this for future exploration.
### Hidden Shufflecake OS
As discussed in Section 1.3, a PD solution that only provides volumes for data storage will never achieve a satisfying level of operational security due to leakage from the OS and other applications installed therein. In order to solve this issue, it is important that the OS itself is run from within a hidden volume, as it was done with TrueCrypt's concept of hidden OS. The natural evolution for Shufflecake would be to be launched at boot time (e.g., as a GRUB module [3]) and boot a whole Linux distribution installed within a volume. Alternatively, an ad-hoc, minimal Shufflecake bootloader could be deployed.
More concretely, Shufflecake could eventually become itself a full PD-focused Linux distribution, where during installation the user is guided through the process of creating volumes and installing other distributions therein. For operational efficiency and security, every OS at layer \(k\) should be aware of the filesystem and OS in the volume at layers \(j<k\) (which is made possible by the hierarchy among Shufflecake volumes). This would also allow a _butler daemon_ to run from the currently running OS and operate in the background on lower-hierarchy OSes, e.g. by performing system updates, downloading emails, etc., so that all these decoy systems are kept up-to-date even if the user neglects to use them regularly. This would in turn help ease the suspicion of an adversary when a decoy password is surrendered.
As an alternative to having a full Linux distribution for every volume, a hypervisor-based solution like Qubes OS [41] might be used instead. However, in order to validate this approach, further analysis is required to ensure that the hypervisor (which is not designed with PD in mind) does not leak the existence of hidden volumes.
# Quantum Spread Complexity in Neutrino Oscillations

Khushboo Dixit, S. Shajidul Haque, Soebur Razzaque

arXiv:2305.17025v3 (2023-05-26), http://arxiv.org/abs/2305.17025v3
###### Abstract
Quantum information theory has recently emerged as a flourishing area of research and quantum complexity, one of its powerful measures, is being applied for investigating complex systems in many areas of physics. Its application to practical physical situations, however, is still few and far between. Neutrino flavor oscillation is a widely studied physical phenomena with far reaching consequences in understanding the standard model of particle physics and to search for physics beyond it. Oscillation arises because of mixing between the flavor and mass eigenstates, and their evolution over time. It is an inherent quantum system for which flavor transitions are traditionally studied with probabilistic measures. We have applied quantum complexity formalism as an alternate measure to study neutrino oscillations. In particular, quantum spread complexity revealed additional information on the violation of charge-parity symmetry in the neutrino sector. Our results indicate that complexity favors the maximum violation of charge-parity, hinted recently by experimental data.
## 1 Introduction
In recent years, quantum complexity, a widely recognized measure in information theory, has found application in various branches of physics, encompassing quantum many-body systems, quantum field theory, and even cosmology. The interest in quantum complexity stemmed from the study of anti-de Sitter/conformal field theory (AdS/CFT) duality, also known as the gauge/gravity duality. Complexity is considered a useful probe [1] to investigate the physics behind the horizon of an eternal AdS black hole, employing proposals such as "complexity = volume" and "complexity = action" [2; 3; 4; 5].
From the standpoint of the dual quantum (field) theory, complexity has emerged as a valuable tool for characterizing quantum chaos [6; 7; 8; 9; 10; 11], detecting quantum phase transitions [12], quantum decoherence [13; 14], and more. For example, recent studies [15; 16; 17] have
delved into the cosmological perturbation model and the evolution of the universe, utilizing Nielsen's approach [18; 19; 20; 21; 22] to complexity. Interestingly, in reference [16], it was discovered that de Sitter space, which offers the most popular model for inflation, exhibits the highest rate of complexity growth among expanding backgrounds that satisfy the null energy condition. It would be intriguing to investigate whether this maximization of complexity occurs in other natural processes of evolution.
In our work, we will use a more recent approach to measuring complexity, known as spread complexity [23; 24], to understand the evolution of neutrino flavor states. Spread complexity offers a clear definition that is valid in arbitrary quantum systems and is relatively straightforward to compute. It has already demonstrated its usefulness in diagnosing quantum chaos [23] and quantum phase transitions [25]. In this paper we will apply this information theoretic tool to gain insight about neutrino oscillations. Specifically, we will investigate if spread complexity can be used as an alternative to the oscillation probabilities for different flavors of neutrinos.
The phenomena of neutrino oscillations are due to mixing of the flavor eigenstates \(\nu_{\alpha}\) (\(\alpha=e,\mu,\tau\) for three generations) in the mass eigenstates \(\nu_{i}\) of masses \(m_{i}\) (\(i=1,2,3\) for three generations). The former are associated with weak interactions -- neutrinos with definite flavor are created in charge current interactions -- while the latter are associated with the propagation of massive neutrinos governed by a Hamiltonian. The flavor states are superpositions of the mass states and vice versa. The proportions of mass states in a neutrino with definite flavor \(\nu_{\alpha}\) change during propagation from the creation point, and the neutrino can be identified as a different flavor \(\nu_{\beta}\) in a detector at a distance. This is the essence of neutrino oscillations. Detection of these phenomena, first by the solar neutrino experiments [26; 27; 28], and subsequently by the atmospheric [29; 30] and reactor neutrino experiments [31; 32; 33], provided the first signal for physics beyond the standard model. Mixing of the flavor states in mass states, and subsequently probabilities for oscillations between flavors, is governed by the well known Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [34; 35], which is a \(3\times 3\) unitary matrix admitting one Charge-Parity (CP) violating Dirac phase (\(\delta\)), and is typically parameterized by three mixing angles \(\theta_{12},\theta_{23}\) and \(\theta_{13}\).
The neutrino oscillations are driven by two independent mass-squared differences \(\Delta m^{2}_{21}\equiv m^{2}_{2}-m^{2}_{1}\) and \(\Delta m^{2}_{31}\equiv m^{2}_{3}-m^{2}_{1}\) for three neutrino masses. The absolute mass scale does not affect oscillations, however, the hierarchy of masses, whether \(m_{3}>m_{2}>m_{1}\) (normal hierarchy) or \(m_{2}>m_{1}>m_{3}\) (inverted hierarchy), is unknown.1 The CP phase \(\delta\) is also unknown, apart from a hint from the T2K experiment at \(\delta\sim-2.14\) radian [36], which however is in tension with the NOvA experiment excluding this value [37]. The angles \(\theta_{12}\) and \(\theta_{13}\) are known with good accuracy, while \(\theta_{23}\) is not. Neutrino experiments measure events, typically from \(\nu_{e}(\bar{\nu}_{e})\) and/or \(\nu_{\mu}(\bar{\nu}_{\mu})\) induced interactions in a detector, given a flux of neutrinos produced by an accelerator or a reactor. The measured events are fitted with simulations based on oscillation probabilities by varying the mixing parameters and mass-squared differences.
Footnote 1: From the solar neutrino experiments, it is known that \(m_{2}>m_{1}\).
Moreover, neutrinos participate in weak interactions only, and hence they have little chance of experiencing effects such as decoherence during their travel to a distant detector. This makes these particles efficient candidates for several tasks related to quantum information and computation. Along these lines, many aspects of quantumness embedded in the neutrino system have been analyzed thoroughly in previous studies [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. For example, an indirect test of Leggett-Garg (LG) inequalities, which can verify temporal quantum correlations, has been performed using the oscillation data from the MINOS and Daya-Bay experiments [43; 44]. Furthermore, several measures of entanglement and of spatial and temporal correlations have been studied for neutrinos, and these measures have also been found to provide important information on the open issues in the neutrino sector discussed above. For example, in refs. [45; 46] it has been discussed that tests of Bell-type and LG inequalities can indicate the specific choice of neutrino mass ordering. LG inequalities have also been shown to discriminate between the Dirac and Majorana nature of neutrinos [47]. Some measures of quantumness have also been seen to be sensitive to new physics effects due to non-standard neutrino-matter interactions [53; 54; 55; 56].
In this paper, we compute the spread complexities of neutrinos of a particular flavor oscillating to other flavors after propagation. We show that the cost function, which is automatically minimized in the Krylov basis used in our computations, gives an alternate description of the neutrino flavor oscillations and is sensitive to the oscillation parameters. In particular, we explore the CP phase value, mass hierarchy and \(\theta_{23}\) value predicted by the spread complexity.
In Section 2 we discuss dynamics of neutrino oscillations and mixing in two- and three-flavor scenarios. In Section 3 we introduce a basic description of spread complexity and cost function, and in Section 4 we apply it to neutrino oscillations. We show our results from numerical calculations and discuss them in Section 5, summarize our findings in Section 6 and conclude our study in Section 7.
## 2 Dynamics of Neutrino Oscillations
Here we discuss the evolution of neutrino flavor states both in the two-flavor approximation and in the complete three-flavor oscillation scenario. In the neutrino system, the flavor states are not the mass eigenstates; rather, they are related to the mass eigenstates by a unitary matrix \(U\) as given below
\[\ket{\nu_{\alpha}}=U^{*}\ket{\nu_{i}}\,, \tag{1}\]
where \(\ket{\nu_{\alpha}}\) and \(\ket{\nu_{i}}\) are column vectors with neutrino flavor and mass eigenstates as their components, respectively. Here, we discuss the time evolution of neutrino flavor states for both two and three flavor oscillation scenarios.
### Two-flavor Neutrino Oscillations
Evolution of the flavor states is represented by Schrodinger equation as 2
Footnote 2: We have used natural units: \(\hbar=c=1\) throughout the paper.
\[i\frac{\partial}{\partial t}\begin{pmatrix}|\nu_{e}(t)\rangle\\ |\nu_{\mu}(t)\rangle\end{pmatrix}=H_{f}\begin{pmatrix}|\nu_{e}(t)\rangle\\ |\nu_{\mu}(t)\rangle\end{pmatrix}\]
where \(H_{f}=UH_{m}U^{-1}\), \(U\) is a \(2\times 2\) mixing matrix and \(H_{m}\) is the Hamiltonian (diagonal) that governs the time evolution of the neutrino mass eigenstates. The forms of \(H_{m}\) and \(U\) are given below
\[H_{m}=\begin{pmatrix}E_{1}&0\\ 0&E_{2}\end{pmatrix},\ \ \ \ \ U=\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}.\]
The Hamiltonian in flavor basis can be expressed as
\[H_{f}=\begin{pmatrix}E_{1}\cos^{2}\theta+E_{2}\sin^{2}\theta&(E_{2}-E_{1}) \sin\theta\cos\theta\\ (E_{2}-E_{1})\sin\theta\cos\theta&E_{1}\sin^{2}\theta+E_{2}\cos^{2}\theta \end{pmatrix}\,. \tag{2}\]
Finally, we have a system of coupled differential equations to solve in the neutrino flavor-basis, _i.e.,_
\[\frac{\partial}{\partial t}\,|\nu_{e}(t)\rangle =-i(E_{1}\cos^{2}\theta+E_{2}\sin^{2}\theta)\,|\nu_{e}(t)\rangle- i(E_{2}-E_{1})\sin\theta\cos\theta\,|\nu_{\mu}(t)\rangle\] \[\frac{\partial}{\partial t}\,|\nu_{\mu}(t)\rangle =-i(E_{2}-E_{1})\sin\theta\cos\theta\,|\nu_{e}(t)\rangle-i(E_{1}\sin^{2}\theta+E_{2}\cos^{2}\theta)\,|\nu_{\mu}(t)\rangle\,\,.\]
Let us consider an \(M\) matrix defined as
\[M=-i\begin{pmatrix}E_{1}\cos^{2}\theta+E_{2}\sin^{2}\theta&(E_{2}-E_{1})\sin\theta\cos\theta\\ (E_{2}-E_{1})\sin\theta\cos\theta&E_{1}\sin^{2}\theta+E_{2}\cos^{2}\theta\end{pmatrix}\]
that has eigenvalues \(\lambda_{1}=-iE_{1}\) and \(\lambda_{2}=-iE_{2}\) with corresponding eigenvectors as \((-\cot\theta,1)^{T}\), and \((\tan\theta,1)^{T}\), respectively. It implies that we can write
\[\begin{pmatrix}|\nu_{e}(t)\rangle\\ |\nu_{\mu}(t)\rangle\end{pmatrix}=\begin{pmatrix}-\cot\theta&\tan\theta\\ 1&1\end{pmatrix}\begin{pmatrix}ce^{-iE_{1}t}\\ de^{-iE_{2}t}\end{pmatrix}.\]
Then, we proceed to get the time evolved neutrino flavor states as
\[|\nu_{e}(t)\rangle = c\ (-\cot\theta)e^{-iE_{1}t}+d\ (\tan\theta)e^{-iE_{2}t}\] \[|\nu_{\mu}(t)\rangle = c\ e^{-iE_{1}t}+d\ e^{-iE_{2}t}. \tag{3}\]
where \(c\) and \(d\) are constants whose values we can obtain by applying the initial conditions (at \(t=0\)) and can be expressed as
\[c=-\sin\theta\cos\theta\,|\nu_{e}(0)\rangle+\sin^{2}\theta\,|\nu _{\mu}(0)\rangle\] \[d=\sin\theta\cos\theta\,|\nu_{e}(0)\rangle+\cos^{2}\theta\,|\nu _{\mu}(0)\rangle\,.\]
Therefore, Eq. (3) takes the form
\[|\nu_{e}(t)\rangle= (\cos^{2}\theta e^{-iE_{1}t}+\sin^{2}\theta e^{-iE_{2}t})\,|\nu_{e}(0 )\rangle+\sin\theta\cos\theta(e^{-iE_{2}t}-e^{-iE_{1}t})\,|\nu_{\mu}(0)\rangle\] \[|\nu_{\mu}(t)\rangle= \sin\theta\cos\theta(e^{-iE_{2}t}-e^{-iE_{1}t})\,|\nu_{e}(0) \rangle+(\sin^{2}\theta e^{-iE_{1}t}+\cos^{2}\theta e^{-iE_{2}t})\,|\nu_{\mu}(0 )\rangle\,. \tag{4}\]
We can see that the time evolved flavor states are now superpositions of the initial flavor states at time \(t=0\), hence, their coefficients can be used to obtain the survival and oscillation probabilities for each flavor, after propagation over a distance \(L\), as
\[P_{\alpha\alpha}=1-\sin^{2}2\theta\sin^{2}\left((E_{2}-E_{1})L/2\right)\,.\]
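For illustration, this two-flavor formula is straightforward to evaluate numerically. The sketch below is a minimal Python implementation (the function names and the sample parameter values are ours, chosen only for illustration), using the standard conversion \(1.267\,\Delta m^{2}[\mathrm{eV}^{2}]\,L[\mathrm{km}]/E[\mathrm{GeV}]\) for the oscillation phase.

```python
import numpy as np

def p_transition_2f(L_km, E_GeV, theta, dm2_eV2):
    """Two-flavor transition probability sin^2(2 theta) sin^2((E2 - E1) L / 2)."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV   # (E2 - E1) * L / 2 in natural units
    return np.sin(2 * theta) ** 2 * np.sin(phase) ** 2

def p_survival_2f(L_km, E_GeV, theta, dm2_eV2):
    """Survival probability P_alphaalpha = 1 - P_transition."""
    return 1.0 - p_transition_2f(L_km, E_GeV, theta, dm2_eV2)

# Illustrative values (theta of the order of theta_23, dm2 of the order of dm2_31):
print(p_transition_2f(L_km=810.0, E_GeV=1.8, theta=np.radians(47.6), dm2_eV2=2.5e-3))
```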
### Three-flavor Neutrino Oscillations
It is straightforward to obtain the time evolution of the flavor states in case of three flavor oscillations. In this case, the Schrodinger equation takes the following form
\[i\frac{\partial}{\partial t}\begin{pmatrix}|\nu_{e}(t)\rangle\\ |\nu_{\mu}(t)\rangle\\ |\nu_{\tau}(t)\rangle\end{pmatrix}=H_{f}\begin{pmatrix}|\nu_{e}(t)\rangle\\ |\nu_{\mu}(t)\rangle\\ |\nu_{\tau}(t)\rangle\end{pmatrix}\,, \tag{5}\]
where \(H_{f}=UH_{m}U^{-1}\) and \(H_{m}=diag(E_{1},E_{2},E_{3})\) is the Hamiltonian of neutrino energies \(E_{i}\), with \(i=1,2,3\). For relativistic neutrinos of momentum \(p\), \(E_{i}=\sqrt{p^{2}+m_{i}^{2}}\simeq p+m_{i}^{2}/2E\), with \(E\simeq p\). Therefore, \(E_{j}-E_{i}\simeq(m_{j}^{2}-m_{i}^{2})/2E=\Delta m_{ji}^{2}/2E\) and the Hamiltonian, after subtracting \(E_{1}\) and removing the identity term, which does not affect oscillations, can be written as
\[H_{m}=\frac{1}{2E}\begin{pmatrix}0&0&0\\ 0&\Delta m_{21}^{2}&0\\ 0&0&\Delta m_{31}^{2}\end{pmatrix}\,.\]
In the three-flavor case, \(U\) is a \(3\times 3\) unitary matrix, called the PMNS mixing matrix [34; 35]. It is parametrized by three angles and a complex phase, and is of the form [57]
\[U=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{pmatrix}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13}\end{pmatrix}\,. \tag{6}\]
Here, \(c_{ij}=\cos\theta_{ij}\), \(s_{ij}=\sin\theta_{ij}\) with mixing angles \(\theta_{ij}\) and \(\delta\) is the \(CP\)-violating Dirac phase. There are, therefore, six parameters in three-flavor oscillations: two mass-square differences (\(\Delta m_{21}^{2}\) and \(\Delta m_{31}^{2}\)), three mixing angles (\(\theta_{12}\), \(\theta_{13}\) and \(\theta_{23}\)) and one CP phase (\(\delta\)). In case the neutrinos are Majorana particles, there are two additional complex phases in the mixing matrix, which however do not affect oscillations.
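For numerical work it is convenient to assemble the PMNS matrix of Eq. (6) and the flavor-basis Hamiltonian \(H_{f}=UH_{m}U^{-1}\) directly. The sketch below is a minimal construction in Python; the helper names (pmns, h_flavor) are ours and the parameter values are to be supplied by the user.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta):
    """PMNS matrix in the standard parametrization of Eq. (6)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13,                        s12 * c13,                        s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,  -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
    ])

def h_flavor(E, dm21, dm31, U):
    """Flavor-basis Hamiltonian H_f = U H_m U^dagger with H_m = diag(0, dm21, dm31)/(2E)."""
    H_m = np.diag([0.0, dm21, dm31]) / (2.0 * E)
    return U @ H_m @ U.conj().T
```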
Hence, for three flavor oscillation scenario, after solving the set of three coupled differential equations, we get the time-evolved flavor states of neutrinos as
\[|\nu_{e}(t)\rangle = A_{ee}(t)\,|\nu_{e}(0)\rangle+A_{e\mu}(t)\,|\nu_{\mu}(0)\rangle+A _{e\tau}(t)\,|\nu_{\tau}(0)\rangle\] \[|\nu_{\mu}(t)\rangle = A_{\mu e}(t)\,|\nu_{e}(0)\rangle+A_{\mu\mu}(t)\,|\nu_{\mu}(0) \rangle+A_{\mu\tau}(t)\,|\nu_{\tau}(0)\rangle\] \[|\nu_{\tau}(t)\rangle = A_{\tau e}(t)\,|\nu_{e}(0)\rangle+A_{\tau\mu}(t)\,|\nu_{\mu}(0) \rangle+A_{\tau\tau}(t)\,|\nu_{\tau}(0)\rangle\,\,. \tag{7}\]
The explicit expressions of the amplitudes \(A_{\alpha\beta}(t)\) with \(\alpha,\beta=e,\mu,\tau\) for standard vacuum oscillations are given in the Appendix. It is straightforward to follow the dynamics of antineutrino oscillations by applying the change \(\delta\to-\delta\) in the amplitudes \(A_{\alpha\beta}\) obtained for neutrinos. Hence, the parameter \(\delta\) can induce \(CP\)-violation in neutrino sector that is measured in terms of \(\Delta CP\) as
\[\Delta CP=P_{\alpha\beta}-P_{\bar{\alpha}\bar{\beta}}.\]
Here, \(\Delta CP\) becomes maximum for \(\delta=\pm 90^{o}\).
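A rough numerical estimate of \(\Delta CP\) can be obtained by evolving a flavor state with the Hamiltonian above and repeating the calculation with \(\delta\to-\delta\) for antineutrinos. The sketch below assumes the pmns and h_flavor helpers from the previous snippet; the baseline, energy and oscillation-parameter values used here are only illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Assumes pmns() and h_flavor() from the sketch above; parameter values are illustrative.
def p_osc(alpha, beta, L, E, delta, dm21=7.53e-5, dm31=2.528e-3):
    """Vacuum oscillation probability P(nu_alpha -> nu_beta); L in eV^-1, E in eV."""
    U = pmns(np.radians(33.64), np.radians(8.53), np.radians(47.63), delta)
    H = h_flavor(E, dm21, dm31, U)
    idx = {"e": 0, "mu": 1, "tau": 2}
    psi_t = expm(-1j * H * L) @ np.eye(3)[idx[alpha]]   # evolve the initial flavor state
    return abs(psi_t[idx[beta]]) ** 2

KM, GEV = 5.068e9, 1.0e9                                # 1 km and 1 GeV in eV^-1 and eV
delta = np.radians(-90.0)
dcp = p_osc("mu", "e", 810 * KM, 1.8 * GEV, delta) \
    - p_osc("mu", "e", 810 * KM, 1.8 * GEV, -delta)     # antineutrinos: delta -> -delta
print(dcp)
```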
## 3 Spread Complexity and Cost Function
We will be interested in the complexity of some general quantum state \(\ket{\psi(t)}\). The evolution of this state can be obtained from the Schrodinger equation as
\[i\frac{\partial}{\partial t}\ket{\psi(t)}=H\ket{\psi(t)}.\]
The solution gives the time evolution of the state \(\ket{\psi}\) as follows
\[\ket{\psi(t)}=e^{-iHt}\ket{\psi(0)}, \tag{3.1}\]
where \(\ket{\psi(0)}\) is the initial state at \(t=0\). The spread complexity can be defined as the spread of \(\ket{\psi(t)}\) in the Hilbert space relative to \(\ket{\psi(0)}\), where the former, often referred to as "target state", and the latter, often referred to as "reference state", are connected by unitary transformations [23, 58].
We expand Eq. (3.1) in a series and write
\[\ket{\psi(t)}=\sum_{n=0}^{\infty}\frac{(-it)^{n}}{n!}H^{n}\ket{\psi(0)}=\sum_{ n=0}^{\infty}\frac{(-it)^{n}}{n!}\ket{\psi_{n}},\]
where, \(\ket{\psi_{n}}=H^{n}\ket{\psi(0)}\). Hence, we can see that the time evolved state \(\ket{\psi(t)}\) is represented as a superposition of infinite \(\ket{\psi_{n}}\) states. However, in this representation, the \(\ket{\psi_{n}}\) states are not necessarily orthonormal. Hence, we use Gram-Schmidt procedure to obtain an ordered orthonormal basis from these \(\ket{\psi_{n}}\) states. We have the following forms of \(\psi_{n}\) states as
\[\ket{\psi_{0}} = H^{0}\ket{\psi(0)}\] \[\ket{\psi_{1}} = H^{1}\ket{\psi(0)}\] \[\ket{\psi_{2}} = H^{2}\ket{\psi(0)}\]
and so on. These states \(\{\ket{\psi_{0}},\ket{\psi_{1}},\ket{\psi_{2}},\dots\}\) are not yet orthonormal. Following the Gram-Schmidt procedure, we subtract from each \(\ket{\psi_{n}}\) its components along all previously constructed basis vectors. Hence, we have
\[\ket{K_{0}} = \ket{\psi_{0}},\] \[\ket{K_{1}} = \ket{\psi_{1}}-\frac{\bra{K_{0}}\psi_{1}}{\bra{K_{0}}K_{0}}\ket{K_ {0}},\] \[\ket{K_{2}} = \ket{\psi_{2}}-\frac{\bra{K_{0}}\psi_{2}}{\bra{K_{0}}K_{0}}\ket{K_ {0}}-\frac{\bra{K_{1}}\psi_{2}}{\bra{K_{1}}K_{1}}\ket{K_{1}},\]
and so on. After normalization, this set of mutually orthogonal vectors forms the Krylov basis [23].
The extent of spread of the evolved state \(|\psi(t)\rangle\) in the Hilbert space depends on how complex the time evolution is. A cost function is defined as a measure of this complexity from a minimum of all possible basis choices [23]. Therefore, this cost function is an immediate candidate for measuring the spread complexity. More explicitly, for a time evolved state \(|\psi(t)\rangle\) and the Krylov basis defined as \(\{|K_{n}\rangle\}\), the cost function can be defined as
\[\chi=\sum_{n=0}^{\infty}n|\langle K_{n}|\psi(t)\rangle|^{2}, \tag{3.2}\]
where \(n=0,1,2,\dots\). The cost function is minimized when it is evaluated in the Krylov basis, and this minimal value defines the spread complexity.
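This construction translates directly into a short numerical routine: generate the states \(H^{n}\ket{\psi(0)}\), orthonormalize them by Gram-Schmidt to obtain the Krylov basis, and evaluate the cost function of Eq. (3.2). The sketch below is a minimal, general-purpose Python implementation for a finite-dimensional Hilbert space; the function names are ours.

```python
import numpy as np
from scipy.linalg import expm

def krylov_basis(H, psi0, tol=1e-12):
    """Orthonormal Krylov basis {|K_n>} built from |psi_n> = H^n |psi0> via Gram-Schmidt."""
    basis = []
    psi_n = np.array(psi0, dtype=complex)
    for _ in range(len(psi0)):
        v = psi_n.copy()
        for k in basis:                      # subtract components along all previous K_m
            v -= np.vdot(k, v) * k
        norm = np.linalg.norm(v)
        if norm < tol:                       # Krylov space exhausted
            break
        basis.append(v / norm)
        psi_n = H @ psi_n                    # next power: |psi_{n+1}> = H |psi_n>
    return basis

def spread_complexity(H, psi0, t):
    """Cost function chi(t) = sum_n n |<K_n|psi(t)>|^2 of Eq. (3.2)."""
    basis = krylov_basis(H, psi0)
    psi_t = expm(-1j * H * t) @ np.array(psi0, dtype=complex)
    return sum(n * abs(np.vdot(K, psi_t)) ** 2 for n, K in enumerate(basis))
```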
## 4 Complexity for Neutrino Oscillations
In this section, we aim to explore the role of spread complexity as an alternative measure to various transition (and survival) probabilities in the context of neutrino oscillations. We will initially focus on the two-flavor oscillation scenario and subsequently extend our analysis to the three-flavor case. As mentioned earlier, in the context of spread complexity, we begin with a specific flavor state and evolve it into a superposition state involving all flavors. Since the weight factor for the reference state is zero according to Eq. (3.2), the reference state does not really contribute to complexities. Hence, in the case of two-flavor oscillations, we can directly compare the spread complexity with the transition probabilities between two flavors.
In the case of three-flavor oscillations, however, evolved states become superposition states comprising all flavors. Consequently, a natural comparison for complexity would be with unity minus the survival probability of a given flavor. For instance, we can compare the spread complexity \(\chi_{e}\) with \(1-P_{ee}\), which is directly applicable to neutrino oscillation experiments. This approach allows us to directly compare the information obtained from complexity with experimental results.
Since the transition probabilities in the three-flavor case involve two distinct flavor states, such as electron (initial) to muon (final), a direct comparison between spread complexity \(\chi_{e}\) (with electron as the initial state) and the final evolved state (a mixed state) is not feasible. Nonetheless, we will separately compare both \(P_{e\mu}\) and \(P_{e\tau}\) with \(\chi_{e}\), and likewise for other flavors. This analysis aims to determine if the information extracted from these transition amplitudes is comparable to the information obtained solely from spread complexity.
### Complexity for Two-flavor Neutrino Oscillations
Spread complexity measures how an initial state is spread in the Hilbert space by a unitary evolution. Here, we will consider the spreading of both the \(|\nu_{e}\rangle\) and \(|\nu_{\mu}\rangle\) initial states. As we will see shortly, there are exactly two non-zero Krylov states, which are the same as the flavor states.
#### 4.1.1 Initial electron-neutrino (\(\nu_{e}\)) state
For the initial state \(\ket{\nu_{e}}\), we will consider \(\ket{\nu_{e}(0)}=(1,0)^{T}\). Then we get the basis \(\ket{\psi_{n}}\), (\(n=0,1,2,\dots\)) as
\[\ket{\psi_{0}} = \ket{\nu_{e}(0)}\] \[\ket{\psi_{1}} = H_{f}\ket{\nu_{e}(0)}\] \[\ket{\psi_{2}} = H_{f}^{2}\ket{\nu_{e}(0)}\] \[\ket{\psi_{3}} = H_{f}^{3}\ket{\nu_{e}(0)}\]
and so on. The Hamiltonian \(H_{f}\) is defined in Eq. (2). It turns out that the Krylov basis for this two-flavor oscillations scenario is \(\{\ket{K_{n}}\}=\{\ket{K_{0}},\ket{K_{1}}\}\) where, \(\ket{K_{0}}=(1,0)^{T}\) and \(\ket{K_{1}}=(0,1)^{T}\), _i.e.,_\(\ket{K_{n}}=\{\ket{\nu_{e}},\ket{\nu_{\mu}}\}\). Hence, for the initial \(\ket{\nu_{e}}\) flavor the complexity takes the form
\[\chi_{e} = 0\times|\langle K_{0}|\nu_{e}(t)\rangle|^{2}+1\times|\langle K_{1}|\nu_{e}(t)\rangle|^{2} \tag{4.1}\] \[= 0\times|\langle\nu_{e}|\nu_{e}(t)\rangle|^{2}+1\times|\langle\nu_{\mu}|\nu_{e}(t)\rangle|^{2}\] \[= |\langle\nu_{\mu}|\nu_{e}(t)\rangle|^{2}\] \[= P_{e\mu},\]
which is the \(\nu_{e}\rightarrow\nu_{\mu}\) transition probability. The time evolved state \(\ket{\nu_{e}(t)}\) is defined in Eq. (4).
#### 4.1.2 Initial muon-neutrino (\(\nu_{\mu}\)) state
Similarly, if the initial state is \(\ket{\nu_{\mu}}\), then we can start by considering \(\ket{K_{0}}=(0,1)^{T}\) and find out that \(\ket{K_{1}}=(1,0)^{T}\)_i.e.,_ the Krylov basis is now \(\{\ket{K_{n}}\}=\{\ket{K_{0}},\ket{K_{1}}\}=\{\ket{\nu_{\mu}},\ket{\nu_{e}}\}\). Then, in this case, the complexity can be calculated as
\[\chi_{\mu} = 0\times|\langle K_{0}|\nu_{\mu}(t)\rangle|^{2}+1\times|\langle K_{1}|\nu_{\mu}(t)\rangle|^{2} \tag{4.2}\] \[= 0\times|\langle\nu_{\mu}|\nu_{\mu}(t)\rangle|^{2}+1\times|\langle\nu_{e}|\nu_{\mu}(t)\rangle|^{2}\] \[= |\langle\nu_{e}|\nu_{\mu}(t)\rangle|^{2}\] \[= P_{\mu e}\]
Again, the time evolved state \(\ket{\nu_{\mu}(t)}\) is defined in Eq. (4).
Hence, we see that in the case of two-flavor oscillations the complexity comes out to be equal to the flavor transition probabilities \(P_{e\mu}\) (in case of initial \(\ket{\nu_{e}}\)) and \(P_{\mu e}\) (in case of initial \(\ket{\nu_{\mu}}\)). This means that the complexity is higher if the probability of transition from one flavor to the other is higher. Also, since \(P_{e\mu}=P_{\mu e}\) in the case of standard vacuum two-flavor neutrino oscillations, the complexity embedded in this system comes out to be the same for both choices of initial flavor, _i.e.,_ in this case the complexity of the system does not depend on the initial flavor of the neutrino.3 In summary, complexity does not reveal additional information compared to probability in the two-flavor neutrino oscillation scenario.
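As a quick numerical consistency check of Eqs. (4.1) and (4.2), the general routine sketched in Section 3 can be applied to the two-flavor Hamiltonian of Eq. (2). The snippet below is a sketch with illustrative parameter values, assuming the spread_complexity helper defined earlier is available; the complexity indeed reproduces the transition probability.

```python
import numpy as np

# Assumes spread_complexity() from the sketch in Section 3 is available.
theta, E1, E2 = 0.6, 1.0, 1.3          # illustrative values, arbitrary units
c, s = np.cos(theta), np.sin(theta)
H_f = np.array([[E1 * c**2 + E2 * s**2, (E2 - E1) * s * c],
                [(E2 - E1) * s * c,     E1 * s**2 + E2 * c**2]])

t = 2.5
chi_e = spread_complexity(H_f, np.array([1.0, 0.0]), t)
P_emu = np.sin(2 * theta)**2 * np.sin((E2 - E1) * t / 2)**2
print(chi_e, P_emu)                     # the two numbers agree
```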
### Complexity for three-flavor neutrino oscillations
In this case, we have three choices of initial states as \(|\nu_{e}\rangle\), \(|\nu_{\mu}\rangle\) and \(|\nu_{\tau}\rangle\). These states can be represented as \(|\nu_{e}\rangle=(1,0,0)^{T}\), \(|\nu_{\mu}\rangle=(0,1,0)^{T}\) and \(|\nu_{\tau}\rangle=(0,0,1)^{T}\). We follow the same procedure as in the two-flavor case in order to construct the Krylov basis. As we will see shortly, there are exactly three non-zero Krylov states, however, the Krylov states are not equivalent to the flavor states of neutrino in the three-flavor oscillations. Below we provide the forms of Krylov basis for each initial state.
#### 4.2.1 Initial electron-neutrino (\(\nu_{e}\)) state
We start by considering
\[|K_{0}\rangle\equiv|\nu_{e}\rangle=(1,0,0)^{T}\]
then, other states spanning the Krylov basis take the form as
\[|K_{1}\rangle=N_{1e}(0,a_{1},a_{2})^{T}\ \ \text{and}\ \ \ |K_{2}\rangle=N_{2e}(0,b_{1},b_{2})^{T},\]
where,
\[a_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{e2}^{*}U_{\mu 2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{e3}^{*}U_{\mu 3}\] \[=\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin\theta_{12}\cos \theta_{12}\cos\theta_{23}+e^{i\delta}\sin\theta_{13}\sin\theta_{23}\left( \left(\frac{\Delta m_{31}^{2}}{2E}\right)-\left(\frac{\Delta m_{21}^{2}}{2E} \right)\sin^{2}\theta_{12}\right),\]
\[a_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{e2}^{*}U_{\tau 2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{e3}^{*}U_{\tau 3}\] \[=-\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin\theta_{12}\cos \theta_{12}\sin\theta_{23}+e^{i\delta}\sin\theta_{13}\cos\theta_{23}\left( \left(\frac{\Delta m_{31}^{2}}{2E}\right)-\left(\frac{\Delta m_{21}^{2}}{2E} \right)\sin^{2}\theta_{12}\right),\]
\[b_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{e}\right)U_{e2}^{*}U_{\mu 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{e}\right)U_{e3}^{*}U_{\mu 3}\] \[=\sin\theta_{13}\cos\theta_{23}\left(\left(\frac{\Delta m_{31}^{ 2}}{2E}\right)-\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin^{2}\theta_{12} \right)-e^{i\delta}\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin\theta_{12} \cos\theta_{12}\sin\theta_{23},\]
\[b_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{e}\right)U_{e2}^{*}U_{\tau 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{e}\right)U_{e3}^{*}U_{\tau 3}\] \[=-\sin\theta_{13}\sin\theta_{23}\left(\left(\frac{\Delta m_{31}^{ 2}}{2E}\right)-\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin^{2}\theta_{12} \right)-e^{i\delta}\left(\frac{\Delta m_{21}^{2}}{2E}\right)\sin\theta_{12} \cos\theta_{12}\cos\theta_{23},\]
The variables \(N_{1\alpha}\), \(N_{2\alpha}\) and \(A_{\alpha}\) (\(\alpha=e,\mu,\tau\)) are expressed at the end of this subsection. Then using Eq. (7) for the time-evolved flavor states and Eq. (3.2) we calculate the complexity as
\[\chi_{e}= P_{e\mu}(t)\left[N_{1e}^{2}|a_{1}|^{2}+2N_{2e}^{2}|b_{1}|^{2}\right]+P_{e\tau}(t)\left[N_{1e}^{2}|a_{2}|^{2}+2N_{2e}^{2}|b_{2}|^{2}\right]\] \[+2\Re\left[N_{1e}^{2}a_{1}^{*}a_{2}A_{e\mu}(t)A_{e\tau}(t)^{*}\right]+4\Re\left[N_{2e}^{2}b_{1}^{*}b_{2}A_{e\mu}(t)A_{e\tau}(t)^{*}\right]. \tag{4.3}\]
Here \(\Re\) refers to the real part of the argument and the probabilities \(P_{\alpha\beta}(t)=|A_{\alpha\beta}(t)|^{2}\). Note that the probability for \(\nu_{e}\) to oscillate to other flavors is \(1-P_{ee}=P_{e\mu}+P_{e\tau}\), and differs from the complexity \(\chi_{e}\), which has additional terms.
#### 4.2.2 Initial muon-neutrino (\(\nu_{\mu}\)) state
Similarly, if we start by considering
\[|K_{0}\rangle\equiv|\nu_{\mu}\rangle=(0,1,0)^{T}\]
then we get
\[|K_{1}\rangle=N_{1\mu}(c_{1},0,c_{2})^{T}\ \ \text{and}\ \ \ |K_{2}\rangle=N_{2\mu}(d_{1},0,d_{2})^{T},\]
where,
\[c_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{\mu 2}^{*}U_{e2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{\mu 3}^{*}U_{e3},\] \[c_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{\mu 2}^{*}U_{\tau 2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{\mu 3}^{*}U_{\tau 3},\] \[d_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{\mu}\right)U_{\mu 2}^{*}U_{e2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\mu}\right)U_{\mu 3}^{*}U_{e3},\] \[d_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{\mu}\right)U_{\mu 2}^{*}U_{\tau 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\mu}\right)U_{\mu 3}^{*}U_{\tau 3}.\]
Then following the same procedure as in the \(\nu_{e}\) case we calculate the complexity for the \(\nu_{\mu}\) case as
\[\chi_{\mu}= \ P_{\mu e}(t)\left[N_{1\mu}^{2}|c_{1}|^{2}+2N_{2\mu}^{2}|d_{1}|^ {2}\right]+P_{\mu\tau}(t)\left[N_{1\mu}^{2}|c_{2}|^{2}+2N_{2\mu}^{2}|d_{2}|^{2}\right]\] \[+2\Re\left[N_{1\mu}^{2}c_{1}^{*}c_{2}A_{\mu e}(t)A_{\mu\tau}(t)^{* }\right]+4\Re\left[N_{2\mu}^{2}d_{1}^{*}d_{2}A_{\mu e}(t)A_{\mu\tau}(t)^{*} \right]. \tag{4.4}\]
#### 4.2.3 Initial tau-neutrino (\(\nu_{\tau}\)) state
Again, we start with the initial flavor state
\[|K_{0}\rangle\equiv|\nu_{\tau}\rangle=(0,0,1)^{T},\]
and obtain the two other Krylov states as
\[|K_{1}\rangle=N_{1\tau}(e_{1},e_{2},0)^{T}\ \ \text{and}\ \ |K_{2}\rangle=N_{2\tau}(f_{1},f_{2},0)^{T},\]
where,
\[e_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{\tau 2}^{*}U_{e2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{\tau 3}^{*}U_{e3},\] \[e_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)U_{\tau 2}^{*}U_{\mu 2}+ \left(\frac{\Delta m_{31}^{2}}{2E}\right)U_{\tau 3}^{*}U_{\mu 3},\] \[f_{1}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{\tau}\right)U_{\tau 2}^{*}U_{e2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\tau}\right)U_{\tau 3}^{*}U_{e3},\] \[f_{2}= \left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}-A_{\tau}\right)U_{\tau 2}^{*}U_{\mu 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\tau}\right)U_{\tau 3}^{*}U_{\mu 3}.\]
The complexity in this case is given by
\[\chi_{\tau}= \ P_{\tau e}(t)\left[N_{1\tau}^{2}|e_{1}|^{2}+2N_{2\tau}^{2}|f_{1 }|^{2}\right]+P_{\tau\mu}(t)\left[N_{1\tau}^{2}|e_{2}|^{2}+2N_{2\tau}^{2}|f_{2 }|^{2}\right]\] \[+2\Re\left[N_{1\tau}^{2}e_{1}^{*}e_{2}A_{\tau e}(t)A_{\tau\mu}(t )^{*}\right]+4\Re\left[N_{2\tau}^{2}f_{1}^{*}f_{2}A_{\tau e}(t)A_{\tau\mu}(t )^{*}\right]. \tag{4.5}\]
Here, we give analytical expressions for constants used in previous discussions for initial neutrino flavor \(\nu_{\alpha}\).
\[A_{\alpha}= \frac{1}{2E}\left[\left(\Delta m_{21}^{2}\right)^{3}|U_{\alpha 2}|^{2 }(1-|U_{\alpha 2}|^{2})+\left(\Delta m_{31}^{2}\right)^{3}|U_{\alpha 3}|^{2}(1-|U_{ \alpha 3}|^{2})-\left(\Delta m_{21}^{2}\right)\left(\Delta m_{31}^{2}\right)\right.\] \[\left.|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\left(\Delta m_{21}^{2}+ \Delta m_{31}^{2}\right)\right]\left[\left(\Delta m_{21}^{2}\right)^{2}|U_{ \alpha 2}|^{2}(1-|U_{\alpha 2}|^{2})+\left(\Delta m_{31}^{2}\right)^{2}|U_{ \alpha 3}|^{2}\right.\] \[\left.(1-|U_{\alpha 3}|^{2})-2\left(\Delta m_{21}^{2}\right)\left( \Delta m_{31}^{2}\right)|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\right]^{-1},\]
and normalization constants
\[N_{1\alpha}= \left(\left(\frac{\Delta m_{21}^{2}}{2E}\right)^{2}|U_{\alpha 2}|^{2}(1-|U_{\alpha 2}|^{2})+\left(\frac{\Delta m_{31}^{2}}{2E}\right)^{2}|U_{\alpha 3}|^{2}(1-|U_{\alpha 3}|^{2})\right.\] \[\left.-2\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{31}^{2}}{2E}\right)|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\right)^{-1/2},\] \[N_{2\alpha}= \left(\left(\frac{\Delta m_{21}^{2}}{2E}\right)^{2}\left(\frac{\Delta m_{21}^{2}}{2E}-A_{\alpha}\right)^{2}|U_{\alpha 2}|^{2}(1-|U_{\alpha 2}|^{2})\right.\] \[\left.+\left(\frac{\Delta m_{31}^{2}}{2E}\right)^{2}\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\alpha}\right)^{2}|U_{\alpha 3}|^{2}(1-|U_{\alpha 3}|^{2})\right.\] \[\left.-2\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{31}^{2}}{2E}\right)\left(\frac{\Delta m_{21}^{2}}{2E}-A_{\alpha}\right)\left(\frac{\Delta m_{31}^{2}}{2E}-A_{\alpha}\right)|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\right)^{-1/2}.\]
Explicit expressions of vacuum oscillation amplitudes \(A_{\alpha\beta}(t)\) are given in the appendix.
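Instead of coding the lengthy closed-form expressions above, the three-flavor complexities can also be obtained numerically by feeding the full flavor Hamiltonian to the general Krylov routine of Section 3, which provides a cross-check of Eqs. (4.3)-(4.5). The sketch below assumes the pmns, h_flavor and spread_complexity helpers from the earlier snippets and works in natural units; the chosen baseline and parameter values are illustrative.

```python
import numpy as np

# Assumes pmns(), h_flavor() and spread_complexity() from the earlier sketches.
KM_TO_INV_EV = 5.068e9          # 1 km in natural units (eV^-1)
GEV = 1.0e9                     # 1 GeV in eV

# Illustrative best-fit-like parameters (angles in radians, dm^2 in eV^2):
U = pmns(np.radians(33.64), np.radians(8.53), np.radians(47.63), delta=0.0)
H = h_flavor(E=1.0 * GEV, dm21=7.53e-5, dm31=2.528e-3, U=U)   # H in eV

L = 1000.0 * KM_TO_INV_EV       # baseline of 1000 km, in eV^-1
flavors = {"e": [1, 0, 0], "mu": [0, 1, 0], "tau": [0, 0, 1]}
for name, ket in flavors.items():
    chi = spread_complexity(H, np.array(ket, dtype=complex), L)
    print(f"chi_{name} = {chi:.4f}")
```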
### Matter effects on the complexity of neutrino system
Neutrinos can also travel through a medium that may induce a matter potential due to coherent forward-scattering of electron neutrinos (\(\nu_{e}\)) with electrons contained inside that matter [59, 60]. In that case, the Hamiltonian in flavor basis has an extra matter potential term. For a constant matter density this extra term \(V=\pm\sqrt{2}G_{f}N_{e}\) is added to the vacuum Hamiltonian as
\[H_{f}=UH_{m}U^{-1}+V\ diag(1,0,0).\]
Here, \(G_{f}\) and \(N_{e}\) are the Fermi constant and electron number density in matter, respectively. The "+" and "-" signs of the potential correspond to neutrinos and antineutrinos, respectively.
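For reference, the size of this potential can be estimated directly from the electron number density. The sketch below converts a constant matter density into \(V=\sqrt{2}G_{f}N_{e}\) in natural units; the electron fraction \(Y_{e}\approx 0.5\) is an assumption on our part, so the result only approximately reproduces the value quoted later for the NOvA baseline.

```python
import numpy as np

G_F = 1.1664e-23          # Fermi constant in eV^-2
HBARC_EV_CM = 1.9733e-5   # hbar*c in eV*cm
N_A = 6.022e23            # Avogadro's number

def matter_potential_eV(rho_g_cm3, Y_e=0.5):
    """V = sqrt(2) G_F N_e for a constant density rho (g/cm^3) and electron fraction Y_e."""
    n_e_cm3 = Y_e * rho_g_cm3 * N_A          # electrons per cm^3 (assumes Y_e electrons per nucleon)
    n_e_natural = n_e_cm3 * HBARC_EV_CM**3   # convert cm^-3 to eV^3
    return np.sqrt(2.0) * G_F * n_e_natural  # potential in eV

print(matter_potential_eV(2.8))              # of order 1e-13 eV for the Earth's crust
```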
In the case of constant matter density, the initial two Krylov states come out to be the same as those in the case of vacuum oscillations, _i.e.,_
\[|K_{0}\rangle_{\alpha}^{matter} =|K_{0}\rangle_{\alpha}^{vacuum} \tag{4.6}\] \[|K_{1}\rangle_{\alpha}^{matter} =|K_{1}\rangle_{\alpha}^{vacuum}\,, \tag{4.7}\]
where \(\alpha\) represents the flavor of neutrino at the time of production. However, \(|K_{2}\rangle\) contains the effects of constant matter density. The expression of the \(|K_{2}\rangle\) state for the initial \(\nu_{e}\) flavor is as follows
\[|K_{2}\rangle_{e}=N_{2e}^{m}(0,b_{1}^{m},b_{2}^{m})^{T}\]
where,
\[b_{1}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21}^ {2}}{2E}+V-B_{e}\right)U_{e2}^{*}U_{\mu 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{e}\right)U_{e3}^{*}U_{\mu 3},\] \[b_{2}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}+V-B_{e}\right)U_{e2}^{*}U_{\tau 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{e}\right)U_{e3}^{*}U_{\tau 3}.\]
The superscript \(m\) here stands for matter effects. Similarly, for the initial \(\nu_{\mu}\) flavor
\[\left|K_{2}\right\rangle_{\mu}=N_{2\mu}^{m}(d_{1}^{m},0,d_{2}^{m})^{T},\]
where,
\[d_{1}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{21 }^{2}}{2E}+V-B_{\mu}\right)U_{e2}U_{\mu 2}^{*}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\mu}\right)U_{e3}U_{\mu 3}^{*}\] \[d_{2}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_{2 1}^{2}}{2E}-B_{\mu}\right)U_{\mu 2}^{*}U_{\tau 2}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-B_{\mu}\right)U_{\mu 3}^{*}U_{\tau 3},\]
and for the initial \(\nu_{\tau}\) flavor
\[\left|K_{2}\right\rangle_{\tau}=N_{2\tau}^{m}(f_{1}^{m},f_{2}^{m},0)^{T}\]
where,
\[f_{1}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_ {21}^{2}}{2E}+V-B_{\tau}\right)U_{e2}U_{\tau 2}^{*}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\tau}\right)U_{e3}U_{\tau 3}^{*},\] \[f_{2}^{m} =\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{\Delta m_ {21}^{2}}{2E}-B_{\tau}\right)U_{\mu 2}U_{\tau 2}^{*}+\left(\frac{\Delta m_{31}^{2}}{2E} \right)\left(\frac{\Delta m_{31}^{2}}{2E}-B_{\tau}\right)U_{\mu 3}U_{\tau 3}^{*}.\]
The constant \(B_{e}\) is represented as
\[B_{e}= \left[\left(\Delta m_{21}^{2}\right)^{2}\left(\Delta m_{21}^{2}+ 2EV\right)|U_{e2}|^{2}(1-|U_{e2}|^{2})+\left(\Delta m_{31}^{2}\right)^{2} \left(\Delta m_{31}^{2}+2EV\right)|U_{e3}|^{2}\right.\] \[\left.(1-|U_{e3}|^{2})-\left(\Delta m_{21}^{2}\right)\left(\Delta m _{31}^{2}\right)|U_{e2}|^{2}|U_{e3}|^{2}\left((\Delta m_{21}^{2}+2EV)+\left( \Delta m_{31}^{2}+2EV\right)\right)\right]\] \[\left[2E\left[\left(\Delta m_{21}^{2}\right)^{2}|U_{e2}|^{2}(1-|U _{e2}|^{2})+\left(\Delta m_{31}^{2}\right)^{2}|U_{e3}|^{2}(1-|U_{e3}|^{2})\right.\right.\] \[\left.\left.\qquad-2\left(\Delta m_{21}^{2}\right)\left(\Delta m _{31}^{2}\right)|U_{e2}|^{2}|U_{e3}|^{2}\right]\right]^{-1}.\]
For initial \(\nu_{\mu}\) and \(\nu_{\tau}\) state the constant \(B_{\alpha}\) is
\[B_{\alpha}= \left[\left(\Delta m_{21}^{2}\right)^{3}|U_{\alpha 2}|^{2}(1-|U_{\alpha 2}|^{2})+\left(\Delta m_{31}^{2}\right)^{3}|U_{\alpha 3}|^{2}(1-|U_{\alpha 3}|^{2})-\left(\Delta m_{21}^{2}\right)\left(\Delta m_{31}^{2}\right)\right.\] \[\left.|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\left(\Delta m_{21}^{2}+\Delta m_{31}^{2}\right)+2EV\left(\left(\Delta m_{21}^{2}\right)^{2}|U_{e2}|^{2}|U_{\alpha 2}|^{2}+\left(\Delta m_{31}^{2}\right)^{2}|U_{e3}|^{2}|U_{\alpha 3}|^{2}\right.\right.\] \[\left.\left.+2\left(\Delta m_{21}^{2}\right)\left(\Delta m_{31}^{2}\right)\Re(U_{e2}^{*}U_{\alpha 2}U_{e3}U_{\alpha 3}^{*})\right)\right]\left[2E\left[\left(\Delta m_{21}^{2}\right)^{2}|U_{\alpha 2}|^{2}(1-|U_{\alpha 2}|^{2})+\left(\Delta m_{31}^{2}\right)^{2}\right.\right.\] \[\left.\left.|U_{\alpha 3}|^{2}(1-|U_{\alpha 3}|^{2})-2\left(\Delta m_{21}^{2}\right)\left(\Delta m_{31}^{2}\right)|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\right]\right]^{-1},\]
where \(\alpha=\mu,\tau\). The normalization factor \(N_{1}\) remains the same in matter as in vacuum but the normalization factor \(N_{2}\) is modified as given below.
\[N_{2e}^{m}= \left(\left(\frac{\Delta m_{21}^{2}}{2E}\right)^{2}|U_{e2}|^{2}(1- |U_{e2}|^{2})\left[\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{e}\right)^{2}\right]\right.\] \[\left.+\left(\frac{\Delta m_{31}^{2}}{2E}\right)^{2}|U_{e3}|^{2}( 1-|U_{e3}|^{2})\left[\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{e}\right)^{2}\right]\right.\] \[\left.-2\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{ \Delta m_{31}^{2}}{2E}\right)\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{\mu} \right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\mu}\right)|U_{e2}|^{2}|U_{e3} |^{2}\right)^{-1/2},\] \[N_{2\mu}^{m}= \left(\left(\frac{\Delta m_{21}^{2}}{2E}\right)^{2}|U_{\mu 2}|^{2} \left[\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{\mu}\right)^{2}|U_{e2}|^{2}+ \left(\frac{\Delta m_{21}^{2}}{2E}-B_{\mu}\right)^{2}|U_{\tau 2}|^{2}\right]\right.\] \[\left.+\left(\frac{\Delta m_{31}^{2}}{2E}\right)^{2}|U_{\mu 3}|^{2} \left[\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\mu}\right)^{2}|U_{e3}|^{2}+ \left(\frac{\Delta m_{31}^{2}}{2E}-B_{\mu}\right)^{2}|U_{\tau 3}|^{2}\right]\right.\] \[\left.+2\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{ \Delta m_{31}^{2}}{2E}\right)\left[\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{\mu }\right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\mu}\right)\Re(U_{\mu 2}^{*}U_{e2}U_{ \mu 3}U_{e3}^{*})\right.\right.\] \[\left.\left.+\left(\frac{\Delta m_{21}^{2}}{2E}-B_{\mu}\right) \left(\frac{\Delta m_{31}^{2}}{2E}-B_{\mu}\right)\Re(U_{\mu 2}^{*}U_{ \tau 2}U_{\mu 3}U_{\tau 3}^{*})\right]\right)^{-1/2},\] \[N_{2\tau}^{m}= \left(\left(\frac{\Delta m_{21}^{2}}{2E}\right)^{2}|U_{\tau 2}|^{2} \left[\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{\tau}\right)^{2}|U_{e2}|^{2}+ \left(\frac{\Delta m_{21}^{2}}{2E}-B_{\tau}\right)^{2}|U_{\mu 2}|^{2}\right]\right.\] \[\left.+\left(\frac{\Delta m_{31}^{2}}{2E}\right)^{2}|U_{\tau 3}|^{2} \left[\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\tau}\right)^{2}|U_{e3}|^{2}+ \left(\frac{\Delta m_{31}^{2}}{2E}-B_{\tau}\right)^{2}|U_{\mu 3}|^{2}\right]\right.\] \[\left.+2\left(\frac{\Delta m_{21}^{2}}{2E}\right)\left(\frac{ \Delta m_{31}^{2}}{2E}\right)\left[\left(\frac{\Delta m_{21}^{2}}{2E}+V-B_{ \tau}\right)\left(\frac{\Delta m_{31}^{2}}{2E}+V-B_{\tau}\right)\Re(U_{\tau 2}^{*}U_{e2 }U_{\tau 3}U_{e3}^{*})\right.\right.\right.\] \[\left.\left.+\left(\frac{\Delta m_{21}^{2}}{2E}-B_{\tau}\right) \left(\frac{\Delta m_{31}^{2}}{2E}-B_{\tau}\right)\Re(U_{\tau 2}^{*}U_{\mu 2}U_{ \tau 3}U_{\mu 3}^{*})\right]\right)^{-1/2}.\]
## 5 Results
In this section, we explore the effects of oscillation parameters on the complexity of the three-flavor neutrino oscillation system using numerical calculations. To obtain all the plots, we have considered the best-fit values of the oscillation parameters from reference [57] as \(\theta_{12}=33.64^{o}\), \(\theta_{13}=8.53^{o}\), \(\theta_{23}=47.63^{o}\) and \(\Delta m_{21}^{2}=7.53\times 10^{-5}\) eV\({}^{2}\). For normal hierarchy, we have used \(\Delta m_{31}^{2}=2.528\times 10^{-3}\) eV\({}^{2}\) and \(\Delta m_{31}^{2}=-2.46\times 10^{-3}\) eV\({}^{2}\) for inverted hierarchy.
In Fig. 1 we have plotted the complexity \(\chi_{\alpha}\) with respect to \(L/E\) ratio, where \(L\) and \(E\) are the distance traveled by neutrinos in vacuum and energy of neutrino, respectively, keeping \(\delta=0^{o}\) in case of initial flavor \(\nu_{e}\) (blue solid line), \(\nu_{\mu}\) (red dashed line) and \(\nu_{\tau}\) (green dot-dashed line). The left panel shows the general case of neutrino evolution whereas the right panel represents the scenario that is experimentally reliable, as the \(L/E\) ratio
corresponds to the current and planned long baseline experimental facilities. The rapid oscillation pattern seen in the left panel (zoomed-in in the right panel) is due to the \(\Delta m^{2}_{31}\) mass-squared difference in the oscillation phase, while the longer oscillation pattern is due to \(\Delta m^{2}_{21}\) in the oscillation phase. The oscillation length is \(\sim 10^{3}\) km at \(E=1\) GeV for \(\Delta m^{2}_{31}\) and \(\sim 3\times 10^{4}\) km at \(E=1\) GeV for \(\Delta m^{2}_{21}\). In the general case (left panel), we can see that the complexity is maximum if the neutrino is produced initially as \(\nu_{e}\); however, this happens only at a very large \(L/E\) value of \(\sim 1.6\times 10^{4}\) km/GeV. In contrast, in current experimental setups (right panel), which cover roughly one oscillation length for \(\Delta m^{2}_{31}\), the initial \(\nu_{e}\) flavor provides the least complexity among all neutrino flavors.

Figure 1: (color online) Complexity plotted with respect to the distance \(L\) over energy \(E\) traveled by neutrinos in vacuum in the case that the initial flavor is \(\nu_{e}\) (blue solid line), \(\nu_{\mu}\) (red dashed line) and \(\nu_{\tau}\) (green dot-dashed line) for \(CP\)-violating phase \(\delta=0^{o}\). All other parameters are set at their best-fit values.

Figure 2: (color online) Complexity for large \(L/E\) range (upper panels), small \(L/E\) range (middle panels) and 1-\(P_{\alpha\alpha}\) (lower panels) with respect to \(L/E\) for neutrinos traveling in vacuum in the case that the initial flavor is \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right) for different values of the \(CP\)-violating phase \(\delta\) depicted by different colors.
Next, in Fig. 2, we have plotted the complexity \(\chi_{\alpha}\) (upper two panels) and the total oscillation probability for a given flavor \(\nu_{\alpha}\) to other flavors, _i.e.,_\(1-P_{\alpha\alpha}\) (bottom panels), with respect to the \(L/E\) ratio for different values of \(\delta\). We can see here that the complexity mimics the features of the total oscillation probability \(1-P_{\alpha\alpha}\). However, it is visible that \(\chi_{\alpha}\) for all three flavors provides more information regarding the \(CP\)-violating phase \(\delta\). For the large \(L/E\) range (top panels) the complexities are maximized at \(\delta=+90^{o}\) or \(-90^{o}\) for \(\chi_{\mu}\) and \(\chi_{\tau}\), and at \(\delta=\pm 90^{o}\) for \(\chi_{e}\). Note that CP is maximally violated at approximately these \(\delta\) values.4 In the limited \(L/E\) range (middle panels) \(\chi_{\mu}\) and \(\chi_{\tau}\) are maximized at \(\delta=-90^{o}\) (red-dashed line) and at \(\delta=+90^{o}\) (red-solid line), respectively, where CP is maximally violated. However, \(\chi_{e}\) is maximized at \(\delta=+135^{o}\) and at \(-45^{o}\). The reason is that the complexity is rather low for \(\chi_{e}\) in the low \(L/E\) range, as discussed before, and cannot probe the \(\delta=\pm 90^{o}\) value for which \(\chi_{e}\) is maximized (upper left panel).

Figure 3: (color online) Complexity (first row), 1-\(P_{\alpha\alpha}\) (second row) and various transition probabilities (third and fourth rows) with respect to the neutrino-energy \(E\) in case of initial flavor \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right) for different values of the \(CP\)-violating phase \(\delta\) depicted by different colors. Here, we have considered \(L=1000\) km. All other parameters are set at their best-fit values.
Footnote 4: For \(\chi_{\mu}\) and \(\chi_{\tau}\) the CP-violation is maximum for \(\delta\approx\pm 95^{o}\) because of the cross-terms in the Krylov states.
Dependence of complexity on \(\delta\) can also be seen in the first and second rows of Fig. 3 where we have shown the variations of \(\chi_{\alpha}\) and their corresponding total oscillation probabilities \(1-P_{\alpha\alpha}\) with energy \(E\) for a fixed baseline of \(L=1000\) km. It is clear from these plots that the effect of \(\delta\) is significantly distinguishable if the initial flavor is either \(\nu_{\mu}\) or \(\nu_{\tau}\). In the case of initial \(\nu_{e}\), this effect of non-zero \(\delta\) is again quite small. The non-zero \(\delta\) value notably enhances the complexity of the system for \(\nu_{\mu}\) and \(\nu_{\tau}\) flavors and these are maximum for \(\delta=-90^{o}\) and \(\delta=90^{o}\), respectively. As mentioned earlier, these are also the values for which CP is maximally violated.
In Fig. 3, we have also compared the complexities with corresponding (individual) oscillation probabilities \(P_{\alpha\beta}\). For example, \(\chi_{e}\) can be compared with \(P_{e\mu}\) and \(P_{e\tau}\), \(\chi_{\mu}\) can be compared with \(P_{\mu e}\) and \(P_{\mu\tau}\), and so on. It can be seen that the oscillation probabilities \(P_{\alpha\beta}\) with \(\alpha\neq\beta\) become maximum at specific values of the \(\delta\)-phase. Specifically, \(P_{e\mu}\), \(P_{\tau e}\) and \(P_{\mu\tau}\) are maximum for \(\delta=90^{o}\) whereas \(P_{\mu e}\), \(P_{e\tau}\) and \(P_{\tau\mu}\) are maximum for \(\delta=-90^{o}\). On the other hand, \(\chi_{\mu}\), which is a combination of \(P_{\mu e}\) and \(P_{\mu\tau}\), is maximum at \(\delta=-90^{o}\), showing more inclination towards \(P_{\mu e}\). Similarly, \(\chi_{\tau}\), which is a combination of \(P_{\tau e}\) and \(P_{\tau\mu}\), approaches its maximum value at \(\delta=90^{o}\). The variation of \(\chi_{e}\) with respect to \(\delta\) is different than \(P_{e\mu}\) and \(P_{e\tau}\), as \(\chi_{e}\) achieves its maximum value at both \(\delta=135^{o}\) and \(-45^{o}\) for the adopted \(L\) in these plots. However, this variation of \(\chi_{e}\) with \(\delta\) is very small. The oscillation maxima and minima for \(P_{e\mu}\), \(P_{e\tau}\), \(P_{\mu e}\) and \(P_{\tau e}\) also vary with \(\delta\). This is because the CP phase \(\delta\) enters the expressions for the oscillation phase. Therefore, depending on the sensitivity of an experiment to a certain energy range, measurements involving \(\nu_{e}\) can result in higher probability for a certain value of \(\delta\) other than \(\pm 90^{o}\) where \(\chi_{e}\) has the global maximum (see Fig. 2).

Figure 4: (color online) Complexity (upper panel) and 1-\(P_{\alpha\alpha}\) (lower panel) with respect to the \(L/E\) ratio in case of initial flavor \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right) where the effects of higher octant (\(\theta_{23}=51.295^{o}\)) and lower octant (\(\theta_{23}=44.026^{o}\)) of \(\theta_{23}\) are represented by blue and red curves, respectively.
We have also analyzed the effects of the octant of \(\theta_{23}\) on complexity. In Fig. 4 we plot \(\chi_{\alpha}\) (upper panels) and their corresponding \(1-P_{\alpha\alpha}\) (lower panels) with respect to the \(L/E\) ratio. In this figure, blue and red curves represent the case of upper (\(\theta_{23}=51.295^{o}\)) and lower (\(\theta_{23}=44.026^{o}\)) octants of \(\theta_{23}\), respectively. The \(\theta_{23}\)-values we considered here are the extreme points of the \(3\sigma\) allowed range. It can be seen that \(\chi_{e}\) has no sensitivity to the \(\theta_{23}\) octant; however, the complexities associated with the \(\nu_{\mu}\) and \(\nu_{\tau}\) flavors can distinguish between the blue and red curves, \(i.e.\), \(\chi_{\mu}\) and \(\chi_{\tau}\) show some sensitivity to the octant of \(\theta_{23}\). However, this feature of the complexities is very similar to that of \(1-P_{\alpha\alpha}\). Therefore, complexity does not provide additional information on the parameter \(\theta_{23}\).
### Complexity estimates for specific experiments
The two currently operating long baseline neutrino oscillation experiments, T2K in Japan [61] and NOvA in the USA [62], are poised to measure oscillation parameters such as \(\delta\), \(\theta_{23}\) and the mass hierarchy, _i.e.,_ the sign of the mass-squared difference \(\Delta m^{2}_{31}\). T2K has a baseline of \(L=295\) km while that of NOvA is \(L=810\) km. Muon neutrinos are produced in these experiments through charged pion decays. The flux of these neutrinos peaks at approximately \(0.6\) GeV and \(1.8\) GeV, respectively, for T2K and NOvA. The latest results from T2K hint at a measurement of the CP-violating phase \(\delta=-2.14^{+0.90}_{-0.69}\) radians and a preference for normal hierarchy [36]. The NOvA experiment in its latest analysis [37], however, rejects the T2K best-fit value of \(\delta\) at more than \(2\sigma\) confidence and prefers instead \(\delta=0.82^{+0.27}_{-0.87}\)\(\pi\), again with a preference for normal hierarchy. See, e.g., reference [63] for a review of this tension between the T2K and NOvA results and plausible solutions.
In this subsection, we explore complexity in the context of the T2K and NOvA experiments, and the sensitivity of complexity to the oscillation parameters, especially the CP phase \(\delta\). Note that the matter effect discussed in Sec. 4.3 is important for the NOvA experiment, where neutrinos propagate through the crust of the Earth over a distance of 810 km from their production point to the detector. Matter effects can be considered negligible for T2K due to its shorter baseline and lower energy range of neutrinos. In Fig. 5 we plot the complexities \(\chi_{e}\) (left panel), \(\chi_{\mu}\) (middle panel) and \(\chi_{\tau}\) (right panel) calculated without (solid lines) and with (dashed lines) the matter effect with respect to the neutrino-energy \(E\) for the NOvA baseline. The matter potential, in this case, is \(V=1.01\times 10^{-13}\) eV for an average density of 2.8 g/cm\({}^{3}\). It is clear that the matter effect increases the complexity of the system for all initial flavors of the neutrino, but most significantly for \(\nu_{e}\) as expected.

Figure 5: (color online) Complexity \(\chi_{e}\) (left), \(\chi_{\mu}\) (middle) and \(\chi_{\tau}\) (right) w. r. t. neutrino-energy \(E\) is shown. Here, \(L=810\) km, \(\delta=-90^{o}\) and matter potential \(V=1.01\times 10^{-13}\) eV have been considered. Solid and dashed curves represent the case of vacuum and matter oscillations, respectively.
In Figs. 6 and 7 we show contour plots of \(\chi_{\alpha}\) as functions of the CP-phase \(\delta\) and neutrino energy \(E\), respectively for the T2K and NOvA experiments. We have also compared complexities with the total oscillation probability \(1-P_{\alpha\alpha}\) and individual oscillation probabilities \(P_{\alpha\beta}\). One can see that \(\chi_{e}\) shows less variations with respect to \(\delta\) while this sensitivity is largely enhanced in the case of \(\chi_{\mu}\) and \(\chi_{\tau}\) at the relevant flux energies of \(E\approx 0.6\) GeV and \(E\approx 1.8\) GeV, respectively, for T2K and NOvA. For both the experiments, the maxima of \(\chi_{\mu}\) and \(\chi_{\tau}\) are found at \(\delta\approx-1.5\) radian and \(\delta=1.5\) radian, respectively. This means that the matter effect just enhances the magnitude of complexities (as shown in Fig. 5), however, the characteristics of \(\chi_{\alpha}\) with respect to \(\delta\) are almost similar for both T2K and NOvA experiments. We have also compared the complexities with corresponding flavor transition probabilities to specific flavors, for example, \(\chi_{e}\) is compared with \(P_{e\mu}\) and \(P_{e\tau}\). Note that \(1-P_{\alpha\alpha}\) are essentially featureless and do not provide much information on \(\delta\), the reason being a cancellation of features in individual probabilities \(P_{\alpha\beta}\) during the summation.
Let us compare results from the complexities with experimental results and probabilities. In the T2K and NOvA experimental setups, where only \(\nu_{\mu}\) beams are produced, the only relevant complexity is \(\chi_{\mu}\). For both T2K and NOvA, \(\chi_{\mu}\) is maximized at \(\delta\approx-1.5\) radian at the relevant experimental energies. The T2K best-fit value of \(\delta=-2.14^{+0.90}_{-0.69}\) radian is consistent with this expectation. The NOvA best-fit, however, is at \(\delta\approx 2.58\) radian, which is far away from the maximum \(\chi_{\mu}\) in the lower-half plane of \(\delta\) but is still within a region of high \(\chi_{\mu}\) value in the upper-half plane of \(\delta\). Now, if we look at \(P_{\mu e}\), which is the only oscillation probability accessible to the T2K and NOvA setups, it becomes maximum at \(\delta\approx-1.5\) radian. This is compatible with the T2K best-fit but is at odds with the NOvA best-fit. In fact, \(P_{\mu e}\) is significantly lower at the NOvA best-fit point. It is interesting to see that complexity, which is an information-theoretic measure, provides a consistent prediction for \(\delta\) in these experimental setups. We would also like to mention here that Fig. 7 is obtained for the case of normal mass hierarchy; however, we have also noticed that \(\chi_{\mu}\) exhibits the same characteristic in the case of inverted mass hierarchy.
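The dependence of \(\chi_{\mu}\) on \(\delta\) at a fixed baseline and energy can be scanned numerically with the tools sketched earlier. The snippet below uses the NOvA-like values \(L=810\) km and \(E=1.8\) GeV from the text and adds the constant-density matter term to the Hamiltonian; the helper names and the remaining parameter values are ours and only illustrative.

```python
import numpy as np

# Assumes pmns(), h_flavor(), matter_potential_eV() and spread_complexity()
# from the earlier sketches.
KM, GEV = 5.068e9, 1.0e9                       # km and GeV in natural units (eV^-1, eV)
L, E = 810.0 * KM, 1.8 * GEV
V = matter_potential_eV(2.8)                   # constant-density matter potential in eV

def chi_mu(delta):
    U = pmns(np.radians(33.64), np.radians(8.53), np.radians(47.63), delta)
    H = h_flavor(E, 7.53e-5, 2.528e-3, U) + V * np.diag([1.0, 0.0, 0.0])
    return spread_complexity(H, np.array([0.0, 1.0, 0.0], dtype=complex), L)

for deg in (-90, -45, 0, 45, 90):
    print(deg, chi_mu(np.radians(deg)))        # per the text, chi_mu should peak near -90 deg
```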
Figure 6: T2K: Complexity (first row), 1-\(P_{\alpha\alpha}\) (second row) and oscillation probabilities \(P_{\alpha\beta}\) (\(\alpha\neq\beta\)) (third and fourth row) are manifested in the plane of \(E-\delta\) in case of initial flavor \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right). Here, we have considered \(L=295\) km corresponding to the T2K experimental setup.
Figure 7: NOvA: Complexity (first row), 1-\(P_{\alpha\alpha}\) (second row) and oscillation probabilities \(P_{\alpha\beta}\) (\(\alpha\neq\beta\)) (third and fourth row) are manifested in the plane of \(E-\delta\) in case of initial flavor \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right). Here, we have considered \(L=810\) km corresponding to the NOvA experimental setup.
Further, we have also analyzed the effects of the neutrino mass hierarchy. In Fig. 8 we plot \(\chi_{e}\), \(\chi_{\mu}\) and \(\chi_{\tau}\) with respect to neutrino energy \(E\) in the context of NOvA. Solid and dashed curves represent the normal hierarchy (NH) and the inverted hierarchy (IH) of the neutrino mass eigenstates, respectively. The upper panel shows the vacuum oscillation framework whereas the lower panel depicts the case of matter oscillations. Here we can see that the complexity can distinguish between the effects of NH and IH in the presence of a non-zero matter potential.
Finally, we also compare the effects of mass hierarchy in neutrino and antineutrino oscillation scenarios. In Fig. 9, \(\chi_{e}/\chi_{\bar{e}}\) (left panel), \(\chi_{\mu}/\chi_{\bar{\mu}}\) (middle panel) and \(\chi_{\tau}/\chi_{\bar{\tau}}\) (right panel) are plotted with respect to \(E\). The red and blue curves represent the cases of neutrino and antineutrino, respectively, with NH (solid line) and IH (dashed line). It can be seen that for either neutrinos or antineutrinos, the effects of NH and IH are significantly distinguishable for all three flavors. Apart from this, in the case of \(\chi_{e}\), the red-solid line (neutrinos with NH) and the blue-dashed line (antineutrinos with IH) exhibit more complexity. In fact, we can see a complete swap between the NH (IH) hierarchy and \(\nu\) (\(\bar{\nu}\)). This is a unique character of \(\chi_{e}\) and is different from the probability \(P_{\mu e}\), also shown in Fig. 9. On the other hand, for \(\chi_{\mu}\) and \(\chi_{\tau}\) the maximum is achieved in the case of neutrinos with NH and antineutrinos with IH, respectively. Note that the complexity for antineutrinos is obtained by replacing the matter potential \(V\to-V\) and the CP phase \(\delta\to-\delta\). Therefore, \(\chi_{e}\) for neutrinos in NH coincides with that for antineutrinos in IH. There is an (almost) overlap between neutrino and antineutrino curves for IH in the case of \(\chi_{\mu}\) and with NH in the case of \(\chi_{\tau}\).
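The neutrino/antineutrino and hierarchy comparisons described here can be explored with the same machinery by flipping the signs of \(V\) and \(\delta\) for antineutrinos and the sign of \(\Delta m^{2}_{31}\) for the inverted hierarchy. The sketch below assumes the helpers from the previous snippets, with an illustrative energy, and shows the pattern for \(\chi_{e}\).

```python
import numpy as np

# Assumes pmns(), h_flavor(), matter_potential_eV() and spread_complexity() as before.
# Antineutrinos: V -> -V and delta -> -delta, as stated in the text.
KM, GEV = 5.068e9, 1.0e9
L, E = 810.0 * KM, 2.0 * GEV
V = matter_potential_eV(2.8)
delta = np.radians(-90.0)

def chi_e(dm31, sign):
    """sign = +1 for neutrinos, -1 for antineutrinos."""
    U = pmns(np.radians(33.64), np.radians(8.53), np.radians(47.63), sign * delta)
    H = h_flavor(E, 7.53e-5, dm31, U) + sign * V * np.diag([1.0, 0.0, 0.0])
    return spread_complexity(H, np.array([1.0, 0.0, 0.0], dtype=complex), L)

for label, dm31 in (("NH", 2.528e-3), ("IH", -2.46e-3)):
    print(label, "nu:", chi_e(dm31, +1), " nubar:", chi_e(dm31, -1))
```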
## 6 Summary
In this section, we summarize the results of our analysis of spread complexity in the context of neutrino oscillations.
* We have inspected the spread complexity for two-flavor neutrino oscillations. We find that in this case, the Krylov basis is equivalent to the basis spanned by the flavor states of neutrino. Hence, the complexity for both cases of the initial flavor of neutrino comes out to be equal to the oscillation probability, _i.e.,_\(\chi_{e}=P_{e\mu}\) and \(\chi_{\mu}=P_{\mu e}\) as can be seen in Eqs. (4.1) and (4.2). It means that complexity and oscillation probabilities contain the same information. Also, since \(P_{e\mu}=P_{\mu e}\) for both vacuum and standard matter oscillations, it implies that \(\chi_{e}=\chi_{\mu}\).
* In the three-flavor neutrino oscillation framework, we find that the Krylov basis is not equal to the flavor state basis. Forms of the Krylov states for all three cases of initial states (\(\nu_{e}\), \(\nu_{\mu}\), \(\nu_{\tau}\)) are given in Eqs. (4.3), (4.4) and (4.5) of Sec. 4.2. We find that the spread complexities have extra cross terms apart from the transition probabilities of the initial neutrino flavor.
* Complexities show oscillatory patterns (see Fig. 1) driven by the two mass-squared differences (\(\Delta m^{2}_{21}\) and \(\Delta m^{2}_{31}\)), similar to the probabilities. The relevant probability
to compare with \(\chi_{\alpha}\), however, is \(1-P_{\alpha\alpha}\). In vacuum, the complexities are maximized over a large \(L/E\approx(10-22)\times 10^{3}\) km/GeV for \(\chi_{\mu}\) and \(\chi_{\tau}\), depending on the CP-phase \(\delta\), and at \(L/E\approx 16\times 10^{3}\) km/GeV for \(\chi_{e}\) (see Fig. 2). Notably, \(\chi_{e}\) has the highest complexity in this large range of \(L/E\), but note that the maximum \(L/E\) value accessible in current long-baseline oscillation experiments is about 1000 km/GeV. Hence, in current experimental conditions, the complexity represented by \(\chi_{e}\) is much lower than the complexities \(\chi_{\mu}\) and \(\chi_{\tau}\).

Figure 8: NOvA: Complexity with respect to neutrino-energy \(E\) in case of initial flavor \(\nu_{e}\) (left), \(\nu_{\mu}\) (middle) and \(\nu_{\tau}\) (right) with \(L=810\) km and \(\delta=-90^{o}\). The upper and lower panel represent the case of vacuum and matter oscillations, respectively. Solid curves are associated with normal mass ordering (NO) and dashed curves depict the inverted ordering (IO).

Figure 9: NOvA: Complexities and \(P_{\mu e}\) with respect to neutrino-energy \(E\) where red and blue curves represent neutrino and antineutrino case, respectively, with solid (normal ordering) and dashed (inverted ordering) lines. Here \(L=810\) km and \(\delta=-90^{o}\) are considered.
* We have scrutinized the effects of different oscillation parameters on the complexities. In vacuum, \(\chi_{\mu}\) and \(\chi_{\tau}\) are maximized for the CP-violating phase \(\delta\approx\pm 90^{o}\), while \(\chi_{e}\) is maximized at \(\delta=90^{o}\) (see Figs. 2 and 3). This maximization happens at very large \(L/E\), as mentioned above. For \(L/E\sim 1000\) km/GeV, the local maxima for \(\chi_{\mu}\) and \(\chi_{\tau}\) are still at \(\delta\approx\pm 90^{o}\), but can be different for \(\chi_{e}\) depending on the exact \(L/E\). We found that the sensitivity of the complexities to the octant of \(\theta_{23}\) is small (see Fig. 4). \(\chi_{e}\) has essentially no sensitivity, whereas \(\chi_{\mu}\) and \(\chi_{\tau}\) show a small but non-zero variation when \(\theta_{23}\) is varied over its \(3\sigma\) allowed range.
* We have investigated \(\chi_{\alpha}\) particularly for the setups of the T2K and NOvA experiments, two currently operating long-baseline neutrino oscillation experiments. For the 810 km baseline of NOvA, the matter effect is important and enhances the complexity embedded in the evolution of all three neutrino flavors (see Fig. 5). A detailed examination of \(\chi_{\alpha}\) in the \(E-\delta\) plane (see Figs. 6 and 7) shows that \(\chi_{e}\) is less affected by the variation of \(\delta\), whereas \(\chi_{\mu}\) and \(\chi_{\tau}\) show a stronger variation, with maxima found around \(\delta=-90^{o}\) and \(+90^{o}\), respectively, at the energies relevant for these experiments. The value \(\delta=-90^{o}\) for maximum \(\chi_{\mu}\) is consistent with the results from T2K but contradicts the results from NOvA. Even though the T2K result is obtained with \(1\sigma\) confidence only, it is encouraging, and it appears that quantum information theory may provide a theoretical justification for this preference. The enhancements of \(\chi_{e}\) for T2K at around \(\delta=135^{o}\) and \(-45^{o}\), and at \(E\sim 0.2\) GeV, lie outside the current experimental setup, so we cannot check their validity. These, however, correspond to local maxima for the particular \(L/E\), as mentioned above, and the global maximum at \(\delta=\pm 90^{o}\) is currently inaccessible (see Fig. 2). It will be interesting to probe these features with a \(\nu_{e}\) beam in a future experiment.
* The neutrino mass hierarchy, whether normal or inverted, affects the complexity, and the matter effect is essential to distinguish between the two (see Fig. 8). Similarly, the differences between neutrino and antineutrino oscillations under each mass hierarchy are also embedded in the complexity \(\chi_{\alpha}\) (see Fig. 9).
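The two-flavor identity quoted in the first bullet above (\(\chi_{e}=P_{e\mu}\)) can be illustrated numerically. The following minimal sketch evaluates the standard two-flavor vacuum formula \(P_{e\mu}=\sin^{2}2\theta\,\sin^{2}(1.267\,\Delta m^{2}[\mathrm{eV}^{2}]\,L[\mathrm{km}]/E[\mathrm{GeV}])\); the mixing parameters, baseline and energy used here are illustrative assumptions, not the values analyzed in the figures.

```python
import numpy as np

def p_emu(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=2.5e-3):
    """Two-flavor vacuum transition probability P(nu_e -> nu_mu)."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV  # converts eV^2 km / GeV to radians
    return sin2_2theta * np.sin(phase) ** 2

# In the two-flavor case the spread complexity equals this probability,
# chi_e = P_emu (Eq. (4.1)); the numbers below are purely illustrative.
L, E = 810.0, 2.0  # toy baseline (km) and energy (GeV)
print(f"chi_e = P_emu = {p_emu(L, E):.4f}")
```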
## 7 Conclusions
This study examines the spread complexity of neutrino states in two- and three-flavor oscillation scenarios. In the two-flavor scenario, complexity and transition probabilities yield equivalent information. However, in the case of three-flavor oscillations, a different pattern emerges. An initial flavor state evolves into two mixed final states, and the complexity, when compared to the total oscillation probability, contains additional information. In particular, we examined the sensitivity of the complexity to the yet unknown value of the CP-violating phase angle. Remarkably, when we explored the complexity across various phase angles, we found that it is maximized for a value of the phase angle at which CP is also maximally violated. Notably, the T2K experimental data, obtained from studying the flavor transition, also favor this phase angle. This agreement is quite fascinating both from the perspective of neutrino physics and for understanding quantum complexity of natural evolution.
Another intriguing aspect of complexity is its ability to differentiate between the oscillation patterns of muon and tau neutrinos, unlike the total oscillation probabilities, which remain indistinguishable for a given CP-violating phase angle. If we were to set the phase angle at its maximally CP-violating values, the complexities of muon and tau neutrinos would exhibit slight disparities. Consequently, complexity offers a distinguishing factor between these two scenarios that the total probability fails to provide. Although this and many other features of the complexities we have explored are not accessible to current experimental setups, our study may motivate future investigations. A similar analysis can be carried out for mixing in the quark sector through the CKM matrix, which will be a future direction of our project.
In conclusion, quantum spread complexity emerges as a potent and novel quantity for investigating neutrino oscillations. Not only does it successfully reproduce existing results, but it also demonstrates the potential to serve as a theoretical tool for predicting new outcomes in future experiments. Its application holds promise in advancing our understanding of neutrino physics and astrophysics in general.
## Appendix A Vacuum oscillation amplitudes
\[A_{e\mu}=-\frac{1}{2}\cos\theta_{13}e^{-\frac{1}{2}i\left(2 \delta+\frac{t(\Delta m_{11}^{2}+\Delta m_{31}^{2})}{E}\right)}\left(\sin 2 \theta_{12}\cos\theta_{23}\left(-1+e^{\frac{i\Delta m_{21}^{2}t}{2E}}\right)e^ {\frac{1}{2}i\left(2\delta+\frac{\Delta m_{31}^{2}t}{E}\right)}\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+2e^{ \frac{i\Delta m_{11}^{2}t}{2E}}-e^{\frac{i\Delta m_{11}^{2}t}{2E}}\right)\right)\]
\[A_{e\tau}=\frac{1}{2}\cos\theta_{13}e^{-\frac{1}{2}i\left(2\delta+\frac{t(\Delta m_{21}^{2}+\Delta m_{31}^{2})}{E}\right)}\left(\sin 2\theta_{12}\sin\theta_{23}\left(-1+e^{\frac{i\Delta m_{21}^{2}t}{2E}}\right)e^{\frac{1}{2}i\left(2\delta+\frac{\Delta m_{31}^{2}t}{E}\right)}\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad-e^{\frac{it(\Delta m_{21}^{2}+\Delta m_{31}^{2})}{2E}}+2e^{\frac{i\Delta m_{21}^{2}t}{2E}}-e^{\frac{i\Delta m_{31}^{2}t}{2E}}\right)\]
\[A_{\mu e}=-\frac{1}{2}\cos\theta_{13}e^{-\frac{it(\left(\Delta m_{2 1}^{2}+\Delta m_{31}^{2}\right)}{2E}}\left(-e^{i\delta}\sin\theta_{13}\sin\theta _{23}\left(\sin^{2}\theta_{12}\left(-1+e^{\frac{i\Delta m_{21}^{2}t}{2E}} \right)e^{\frac{i\Delta m_{31}^{2}t}{2E}}\right.\right.\] \[\qquad\qquad\left.\left.-e^{\frac{it(\Delta m_{21}^{2}+\Delta m_{ 31}^{2})}{2E}}+2e^{\frac{i\Delta m_{31}^{2}t}{2E}}-e^{\frac{i\Delta m_{31}^{2} t}{2E}}\right)+\cos^{2}\theta_{12}\sin\theta_{13}\sin\theta_{23}\left(-1+e^{ \frac{i\Delta m_{21}^{2}t}{2E}}\right)\right.\] \[\qquad\qquad\left.e^{\frac{1}{2}i\left(2\delta+\frac{\Delta m_{ 31}^{2}t}{E}\right)}+\sin 2\theta_{12}\cos\theta_{23}e^{\frac{it(\Delta m_{21}^{2}+ \Delta m_{31}^{2})}{2E}}-2\sin\theta_{12}\cos\theta_{12}\cos\theta_{23}e^{ \frac{i\Delta m_{31}^{2}t}{2E}}\right)\]
\[A_{\mu r}=\frac{1}{32}e^{-\frac{1}{2}i\left(2\delta+\frac{it( \Delta m_{21}^{2}+\Delta m_{31}^{2})}{E}\right)}\left(8\sin 2\theta_{12}\sin \theta_{13}\left(\left(1+e^{2i\delta}\right)\cos 2\theta_{23}-e^{2i\delta}+1\right) \left(-1+e^{\frac{i\Delta m_{21}^{2}t}{2E}}\right)\right.\] \[\qquad\qquad\left.e^{\frac{i\Delta m_{31}^{2}t}{2E}t}+2e^{i\delta }\sin 2\theta_{23}\left(\cos(2(\theta_{12}-\theta_{13}))\left(e^{\frac{i\Delta m _{31}^{2}t}{2E}}-e^{\frac{it(\Delta m_{21}^{2}+\Delta m_{31}^{2})}{2E}}\right)\right.\right.\] \[\qquad\qquad\left.-\cos(2(\theta_{12}+\theta_{13}))e^{\frac{it( \Delta m_{21}^{2}+\Delta m_{31}^{2})}{2E}}-6\cos 2\theta_{12}\left(e^{\frac{i \Delta m_{31}^{2}t}{2E}}-e^{\frac{it(\Delta m_{31}^{2}+\Delta m_{31}^{2})}{2E} }\right)\right.\] \[\qquad\qquad\left.-2\cos 2\theta_{13}e^{\frac{it(\Delta m_{21}^{2}+ \Delta m_{31}^{2})}{2E}}-2e^{\frac{it(\Delta m_{31}^{2}+\Delta m_{31}^{2})}{2E }}+4\cos 2\theta_{12}e^{\frac{i\Delta m_{31}^{2}t}{2E}}+4e^{\frac{i\Delta m_{31 }^{2}t}{2E}}\right.\] \[\qquad\qquad\left.\left.+e^{\frac{i\Delta m_{31}^{2}t}{2E}}\cos( 2(\theta_{12}+\theta_{13}))-2\cos 2\theta_{13}e^{\frac{i\Delta m_{31}^{2}t}{2E}}-2e^{ \frac{i\Delta m_{31}^{2}t}{2E}}\right)\right)\]
\[A_{\tau e}=\frac{1}{2}\cos\theta_{13}e^{-\frac{it(\Delta m_{2 1}^{2}+\Delta m_{31}^{2})}{2E}}\left(e^{i\delta}\sin\theta_{13}\cos\theta_{23} \left(\cos 2\theta_{12}\left(e^{\frac{i\Delta m_{31}^{2}t}{2E}}-e^{\frac{it(\Delta m _{21}^{2}+\Delta m_{31}^{2})}{2E}}\right)\right.\right.\] \[\qquad\qquad\left.\left(-1+e^{\frac{i\Delta m_{21}^{2}t}{2E}} \right)e^{\frac{i\Delta m_{31}^{2}t}{2E}}+2e^{i\delta}\sin 2\theta_{23}\left(\cos(2( \theta_{12}-\theta_{13}))\left(e^{\frac{i\Delta m_{31}^{2}t}{2E}}-e^{\frac{it( \Delta m_{31}^{2}+\Delta m_{31}^{2})}{2E}}\right)\right.\right.\] \[\qquad\qquad\left.\left.-\cos(2(\theta_{12}+\theta_{13}))e^{ \frac{it(\Delta m_{21}^{2}+\Delta m_{31}^{2})}{2E}}-6\cos 2\theta_{12}\left(e^{\frac{i \Delta m_{31}^{2}t}{2E}}-e^{\frac{it(\Delta m_{31}^{2}+\Delta m_{31}^{2})}{2E} }\right)\right.\] \[\qquad\qquad\left.-2\cos 2\theta_{13}e^{\frac{it(\Delta m_{21}^{2}+ \Delta m_{31}^{2})}{2E}}-2e^{\frac{it(\Delta m_{31}^{2}+\Delta m_{31}^{2})}{2E }}+4\cos 2\theta_{13}e^{\frac{i\Delta m_{31}^{2}t}{2E}}+4e^{\frac{i\Delta m_{3 1}^{2}t}{2E}}\right.\] \[\qquad\qquad\left.\left.+e^{\frac{i\Delta m_{31}^{2}t}{2E}}\cos( 2(\theta_{12}+\theta_{13}))-2\cos 2\theta_{13}e^{\frac{i\Delta m_{31}^{2}t}{2E}}-2e^{ \frac{i\Delta m_{31}^{2}t}{2E}}\right)\right)\]
###### Acknowledgments.
We thank Pratik Nandi, Arpan Bhattacharyya, Jaco van Zyl, Ushak Rahaman and Alexei Smirnov for their helpful discussions. This work was partially supported by grants from the National Institute of Theoretical and Computational Sciences (NITheCS) and from the University of Johannesburg Research Council.
|
2302.04798 | Equivariant MuZero | Deep reinforcement learning repeatedly succeeds in closed, well-defined
domains such as games (Chess, Go, StarCraft). The next frontier is real-world
scenarios, where setups are numerous and varied. For this, agents need to learn
the underlying rules governing the environment, so as to robustly generalise to
conditions that differ from those they were trained on. Model-based
reinforcement learning algorithms, such as the highly successful MuZero, aim to
accomplish this by learning a world model. However, leveraging a world model
has not consistently shown greater generalisation capabilities compared to
model-free alternatives. In this work, we propose improving the data efficiency
and generalisation capabilities of MuZero by explicitly incorporating the
symmetries of the environment in its world-model architecture. We prove that,
so long as the neural networks used by MuZero are equivariant to a particular
symmetry group acting on the environment, the entirety of MuZero's
action-selection algorithm will also be equivariant to that group. We evaluate
Equivariant MuZero on procedurally-generated MiniPacman and on Chaser from the
ProcGen suite: training on a set of mazes, and then testing on unseen rotated
versions, demonstrating the benefits of equivariance. Further, we verify that
our performance improvements hold even when only some of the components of
Equivariant MuZero obey strict equivariance, which highlights the robustness of
our construction. | Andreea Deac, Théophane Weber, George Papamakarios | 2023-02-09T17:46:29Z | http://arxiv.org/abs/2302.04798v1 | # Equivariant MuZero
###### Abstract
Deep reinforcement learning repeatedly succeeds in closed, well-defined domains such as games (Chess, Go, StarCraft). The next frontier is real-world scenarios, where setups are numerous and varied. For this, agents need to learn the underlying rules governing the environment, so as to robustly generalise to conditions that differ from those they were trained on. Model-based reinforcement learning algorithms, such as the highly successful MuZero, aim to accomplish this by learning a world model. However, leveraging a world model has not consistently shown greater generalisation capabilities compared to model-free alternatives. In this work, we propose improving the data efficiency and generalisation capabilities of MuZero by explicitly incorporating the _symmetries_ of the environment in its world-model architecture. We prove that, so long as the neural networks used by MuZero are equivariant to a particular symmetry group acting on the environment, the entirety of MuZero's action-selection algorithm will also be equivariant to that group. We evaluate Equivariant MuZero on procedurally-generated MiniPacman and on Chaser from the ProcGen suite: training on a set of mazes, and then testing on unseen rotated versions, demonstrating the benefits of equivariance. Further, we verify that our performance improvements hold even when only some of the components of Equivariant MuZero obey strict equivariance, which highlights the robustness of our construction.
## 1 Introduction
Reinforcement learning (RL) is a potent paradigm for solving sequential decision making problems in a dynamically changing environment. Successful examples of its uses include game playing (Vinyals et al., 2019), drug design (Segler et al., 2018), robotics (Ibarz et al., 2021) and theoretical computer science (Fawzi et al., 2022). However, the generality of RL often leads to data inefficiency, poor generalisation to situations that differ from those encountered in training, and lack of safety guarantees. This is an issue especially in domains where data is scarce or difficult to obtain, such as medicine or human-in-the-loop scenarios.
Most RL approaches do not directly attempt to capture the regularities present in the environment. As an example, consider a grid-world: moving down in a maze is equivalent to moving left in the \(90^{\circ}\) clock-wise rotation of the same maze. Such equivalences can be formalised via Markov Decision Process homomorphisms (Ravindran, 2004; Ravindran & Barto, 2004), and while some works incorporate them (e.g. van der Pol et al., 2020; Rezaei-Shoshtari et al., 2022), most deep reinforcement learning agents would act differently in such equivalent states if they do not observe enough data. This becomes even more problematic when the number of equivalent states is large. One common example is 3D regularities, such as changing camera angles in robotic tasks.
In recent years, there has been significant progress in building deep neural networks that explicitly obey such regularities, often termed geometric deep learning (Bronstein et al., 2021). In this context,
the regularities are formalised using symmetry groups and architectures are built by composing transformations that are equivariant to these symmetry groups (e.g. convolutional neural networks for the translation group, graph neural networks and transformers for the permutation group).
As we are looking to capture the symmetries present in an environment, a fitting place is within the framework of model-based RL (MBRL). MBRL leverages explicit world-models to forecast the effect of action sequences, either in the form of next-state or immediate reward predictions. These imagined trajectories are used to construct plans that optimise the forecasted returns. In the context of state-of-the-art MBRL agent MuZero (Schrittwieser et al., 2020), a Monte-Carlo tree search is executed over these world-models in order to perform action selection.
In this paper, we demonstrate that equivariance and MBRL can be effectively combined by proposing Equivariant MuZero (EqMuZero, shown in Figure 2), a variant of MuZero where equivariance constraints are enforced by design in its constituent neural networks. As MuZero does not use these networks directly to act, but rather executes a search algorithm on top of their predictions, it is not immediately obvious that the actions taken by the EqMuZero agent would obey the same constraints--is it guaranteed to produce a rotated action when given a rotated maze? One of our key contributions is a proof that guarantees this: as long as all neural networks are equivariant to a symmetry group, all actions taken will also be equivariant to that same symmetry group. Consequently, EqMuZero can be more data-efficient than standard MuZero, as it knows by construction how to act in states it has never seen before. We empirically verify the generalisation capabilities of EqMuZero in two grid-worlds: procedurally-generated MiniPacman and the Chaser game in the ProcGen suite.
## 2 Background
_Reinforcement Learning._ The reinforcement learning problem is typically formalised as a Markov Decision Process \((S,A,P,R,\gamma)\) formed from a set of states \(S\), a set of actions \(A\), a discount factor \(\gamma\in[0,1]\), and two functions that model the outcome of taking action \(a\) in state \(s\): the transition distribution \(P(s^{\prime}|s,a)\)--specifying the next state probabilities--and the reward function \(R(s,a)\)--specifying the expected reward. The aim is to learn a _policy_, \(\pi(a|s)\), a function specifying (probabilities of) actions to take in state \(s\), such that the agent maximises the (expected) cumulative reward \(G(\tau)=\sum_{t=0}^{T}\gamma^{t}R(s_{t},a_{t})\), where \(\tau=(s_{0},a_{0},s_{1},a_{1},\ldots,s_{T},a_{T})\) is the trajectory taken by the agent starting in the initial state \(s_{0}\) and following the policy to decide \(a_{t}\) based on \(s_{t}\).
_MuZero._ Reinforcement learning agents broadly fall into two categories: _model-free_ and _model-based_. The specific agent we extend here, MuZero (Schrittwieser et al., 2020), is a model-based agent for deterministic environments (where \(P(s^{\prime}|s,a)=1\) for exactly one \(s^{\prime}\) for all \(s\in S\) and \(a\in A\)). MuZero relies on several neural-network components that are composed to create a _world model_. These components are: the _encoder_, \(E:S\to Z\), which embeds states into a latent space \(Z\) (e.g. \(Z=\mathbb{R}^{k}\)), the _transition model_, \(T:Z\times A\to Z\), which predicts embeddings of next states, the _reward model_, \(R:Z\times A\to\mathbb{R}\), which predicts the immediate expected reward after taking an action in a particular state, the _value model_, \(V:Z\to\mathbb{R}\), which predicts the value (expected cumulative reward) from this state, and the _policy model_, \(P:Z\to[0,1]^{|A|}\), which predicts the probability of taking each action from the current state. To plan its next action, MuZero executes a Monte Carlo tree search (MCTS) over many simulated trajectories, generated using the above models.
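As a rough illustration of how these components fit together during planning, the sketch below wires up the five functions with randomly initialised linear maps standing in for the trained networks and unrolls a short imagined trajectory entirely in latent space. It only illustrates the interfaces; the latent size, observation size and greedy action choice are assumptions made for this example, not MuZero's actual architecture or search.

```python
import numpy as np

rng = np.random.default_rng(0)
K, NUM_ACTIONS, OBS_DIM = 16, 4, 8   # illustrative sizes

# Placeholder "networks": random linear maps in place of trained models.
W_enc = rng.normal(size=(K, OBS_DIM))
W_trans = rng.normal(size=(K, K + NUM_ACTIONS))
w_rew = rng.normal(size=K + NUM_ACTIONS)
w_val = rng.normal(size=K)
W_pol = rng.normal(size=(NUM_ACTIONS, K))

one_hot = lambda a: np.eye(NUM_ACTIONS)[a]
E = lambda s: np.tanh(W_enc @ s)                                      # encoder    E: S -> Z
T = lambda z, a: np.tanh(W_trans @ np.concatenate([z, one_hot(a)]))   # transition T: Z x A -> Z
R = lambda z, a: float(w_rew @ np.concatenate([z, one_hot(a)]))       # reward     R: Z x A -> R
V = lambda z: float(w_val @ z)                                        # value      V: Z -> R
def P(z):                                                             # policy     P: Z -> [0,1]^|A|
    e = np.exp(W_pol @ z - (W_pol @ z).max())
    return e / e.sum()

# Imagined rollout: encode an observation once, then plan purely in latent space.
z = E(rng.normal(size=OBS_DIM))
for step in range(3):
    a = int(np.argmax(P(z)))   # greedy stand-in for the MCTS-based action selection
    print(step, a, round(R(z, a), 3), round(V(z), 3))
    z = T(z, a)
```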
MuZero has demonstrated state-of-the-art capabilities over a variety of deterministic or near-deterministic environments, such as Go, Chess, Shogi and Atari, and has been successfully applied to real-world domains such as video compression (Mandhane et al., 2022). Although here we focus on MuZero for deterministic environments, we note that extensions to stochastic environments also exist (Antonoglou et al., 2021) and are an interesting target for future work.
_Groups and Representations._ A _group_ \((\mathfrak{G},\circ)\) is a set \(\mathfrak{G}\) equipped with a _composition_ operation \(\circ:\mathfrak{G}\times\mathfrak{G}\to\mathfrak{G}\) (written concisely as \(\mathfrak{g}\circ\mathfrak{h}=\mathfrak{g}\mathfrak{h}\)), satisfying the following axioms: _(associativity)_ \((\mathfrak{g}\mathfrak{h})\mathfrak{l}=\mathfrak{g}(\mathfrak{h}\mathfrak{l})\) for all \(\mathfrak{g},\mathfrak{h},\mathfrak{l}\in\mathfrak{G}\); _(identity)_ there exists a unique \(\mathfrak{e}\in\mathfrak{G}\) satisfying \(\mathfrak{e}\mathfrak{g}=\mathfrak{g}\mathfrak{e}=\mathfrak{g}\) for all \(\mathfrak{g}\in\mathfrak{G}\); _(inverse)_ for every \(\mathfrak{g}\in\mathfrak{G}\) there exists a unique \(\mathfrak{g}^{-1}\in\mathfrak{G}\) such that \(\mathfrak{g}\mathfrak{g}^{-1}=\mathfrak{g}^{-1}\mathfrak{g}=\mathfrak{e}\).
Groups are a natural way to describe _symmetries_: object transformations that leave them unchanged. They can be reasoned about in the context of linear algebra by using their _real representations_:
functions \(\rho_{\mathcal{V}}:\mathfrak{G}\rightarrow\mathbb{R}^{N\times N}\) that give, for every group element \(\mathfrak{g}\in\mathfrak{G}\), a real matrix demonstrating how this element _acts_ on a vector space \(\mathcal{V}\). For example, for the rotation group \(\mathfrak{G}=\mathrm{SO}(n)\), the representation \(\rho_{\mathcal{V}}\) would provide an appropriate \(n\times n\) rotation matrix for each rotation \(\mathfrak{g}\).
_Equivariance and Invariance._ As symmetries are assumed to not change the essence of the data they act on, we would like to construct neural networks that adequately represent such symmetry-transformed inputs. Assume we have a neural network \(f:\mathcal{X}\rightarrow\mathcal{Y}\), mapping between vector spaces \(\mathcal{X}\) and \(\mathcal{Y}\), and that we would like this network to respect the symmetries within a group \(\mathfrak{G}\). Then we can impose the following condition, for all group elements \(\mathfrak{g}\in\mathfrak{G}\) and inputs \(\mathbf{x}\in\mathcal{X}\):
\[f(\rho_{\mathcal{X}}(\mathfrak{g})\mathbf{x})=\rho_{\mathcal{Y}}(\mathfrak{ g})f(\mathbf{x}). \tag{1}\]
This condition is known as _\(\mathfrak{G}\)-equivariance_--for any group element, it does not matter whether we act with it on the input or on the output of the function \(f\)--the end result is the same. A special case of this, _\(\mathfrak{G}\)-invariance_, is when the output representation is trivial (\(\rho_{\mathcal{Y}}(\mathfrak{g})=\mathbf{I}\)):
\[f(\rho_{\mathcal{X}}(\mathfrak{g})\mathbf{x})=f(\mathbf{x}). \tag{2}\]
In geometric deep learning, equivariance to reflections, rotations, translations and permutations has been of particular interest (Bronstein et al., 2021).
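As a quick numerical illustration of Eqs. (1) and (2), the snippet below checks both conditions for \(90^{\circ}\) rotations acting on 2D grids; the two test functions are hypothetical toy choices, picked only because one acts pixel-wise (and hence commutes with rotation) and the other is a rotation-invariant aggregate.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 6))

# Equivariance (Eq. 1): rotating the input rotates the output identically.
f_equivariant = lambda x: x ** 2              # pixel-wise, so it commutes with rot90
assert np.allclose(f_equivariant(np.rot90(X)), np.rot90(f_equivariant(X)))

# Invariance (Eq. 2): rotating the input leaves the scalar output unchanged.
f_invariant = lambda x: x.sum()
assert np.isclose(f_invariant(np.rot90(X)), f_invariant(X))
print("rotation equivariance and invariance checks passed")
```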
Generally speaking, there are three ways to obtain an equivariant model: a) data augmentation, b) data canonicalisation and c) specialised architectures. Data augmentation creates additional training data by applying group elements \(\mathfrak{g}\) to input/output pairs \((\mathbf{x},\mathbf{y})\)--equivariance is encouraged by training on the transformed data and/or minimising auxiliary losses such as \(\|\rho_{\mathcal{Y}}(\mathfrak{g})f(\mathbf{x})-f(\rho_{\mathcal{X}}(\mathfrak{ g})\mathbf{x})\|\). Data augmentation can be simple to apply, but it results in only approximate equivariance. Data canonicalisation requires a method to standardise the input, such as breaking the translation symmetry for molecular representation by centering the atoms around the origin (Musil et al., 2021)--however, in many cases, such as the relatively simple MiniPacman environment we use in our experiments, such a canonical transformation may not exist. Specialised architectures have the downside of being harder to build, but they can guarantee exact equivariance--as such, they reduce the search space of functions, potentially reducing the number of parameters and increasing training efficiency.
_Equivariance in RL._ There has been previous work at the intersection of reinforcement learning and equivariance. While leveraging multi-agent symmetries was repeatedly shown to hold promise (van der Pol et al., 2021; Muglich et al., 2022), of particular interest to us are the symmetries emerging from the environment, in a single-agent scenario. Related work in this space can be summarised by the commutative diagram in Figure 1. When considering only the cube at the bottom, we recover Park et al. (2022)--a supervised learning task where a latent transition model \(T\) learns to predict the next state embedding. They show that if \(T\) is equivariant, the encoder can pick up the symmetries of the environment even if it is not fully equivariant by design. Mondal et al. (2022) build a model-free agent by combining an equivariant-by-design encoder and enforcing the remaining equivariances via regularisation losses. They also consider the invariance of the reward, captured in Figure 1 by taking the decoder to be the reward model and \(l=1\). The work of van der Pol et al. (2020) can be described by having the value model as the decoder, while the work of Wang et al. (2022) has the decoder as the policy model and \(l=|A|\).
## 3 Experiments and results
_Environments._ We consider two 2D grid-world environments, MiniPacman (Guez et al., 2019) and Chaser (Cobbe et al., 2020), that feature an agent navigating in a 2D maze. In both environments, the state is the grid-world map \(\mathbf{X}\) and an action is a direction to move. Both of these grid-worlds are symmetric with respect to \(90^{\circ}\) rotations, in the sense that moving down in some map is the same as moving left in the \(90^{\circ}\) clock-wise rotated version of the same map. Hence, we take our symmetry group to be \(\mathfrak{G}=C_{4}=\{\mathbf{I},\mathbf{R}_{90^{\circ}},\mathbf{R}_{180^{\circ}},\mathbf{R}_{270^{\circ}}\}\), the 4-element cyclic group, which in our case represents rotating the map by all four possible multiples of \(90^{\circ}\).

Figure 1: Commutative diagram of symmetries in RL. State transitions due to an action \(a\) are back-to-front, transformations due to a symmetry \(\mathfrak{g}\) are left-to-right, state encoding and decoding by the model is bottom-to-top.
_Equivariant MuZero._ In what follows, we describe how the various components of EqMuZero (Figure 2) are designed to obey \(C_{4}\)-equivariance. For simplicity, we assume there are only four directional movement actions in the environment (\(A=\{\rightarrow,\downarrow,\leftarrow,\uparrow\}\)). Any additional non-movement actions (such as the "do nothing" action) can be included without difficulty.
To enforce \(C_{4}\)-equivariance in the encoder, we first need to specify the effect of rotations on the latent state \(\mathbf{z}\). In our implementation, the latent state consists of 4 equally shaped arrays, \(\mathbf{z}=(\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_{4})\), and we prescribe that a \(90^{\circ}\) clock-wise rotation manifests as a cyclical permutation: \(\mathbf{R}_{90^{\circ}}\mathbf{z}=(\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_ {4},\mathbf{z}_{1})\). Then, our equivariant encoder embeds state \(\mathbf{X}\) and action \(a\) as follows:
\[E(\mathbf{X},a)=(h(\mathbf{X})+g(a),h(\mathbf{R}_{90^{\circ}}\mathbf{X})+g(\mathbf{R}_{90^{\circ}}a),h(\mathbf{R}_{180^{\circ}}\mathbf{X})+g(\mathbf{R}_{180^{\circ}}a),h(\mathbf{R}_{270^{\circ}}\mathbf{X})+g(\mathbf{R}_{270^{\circ}}a)) \tag{3}\]
where \(h\) is a CNN and \(g\) is an MLP. For the summation, the output of \(g\) is accordingly broadcasted across all pixels of \(h\)'s output. It is easy to verify that this equation satisfies \(C_{4}\)-equivariance, that is, \(E(\mathbf{R}_{90^{\circ}}\mathbf{X},\mathbf{R}_{90^{\circ}}a)=\mathbf{R}_{90^ {\circ}}E(\mathbf{X},a)\).
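To make Eq. (3) concrete, the following sketch instantiates the encoder with trivial stand-ins for \(h\) and \(g\) (the actual models are a CNN and an MLP) and numerically checks \(E(\mathbf{R}_{90^{\circ}}\mathbf{X},\mathbf{R}_{90^{\circ}}a)=\mathbf{R}_{90^{\circ}}E(\mathbf{X},a)\), with the latent rotation realised as a cyclic shift of the four slots. The action ordering and the stand-in networks are assumptions made purely for illustration.

```python
import numpy as np

ACTIONS = ["right", "down", "left", "up"]  # ordered so that +1 index = 90 degrees clockwise

def rot_action(a, k):
    """Rotate a movement action by k * 90 degrees clockwise."""
    return ACTIONS[(ACTIONS.index(a) + k) % 4]

rng = np.random.default_rng(2)
W_g = rng.normal(size=4)
h = lambda X: X.astype(float)               # trivial stand-in for the CNN h
g = lambda a: float(W_g[ACTIONS.index(a)])  # scalar action embedding, broadcast over the map

def encode(X, a):
    """C4-equivariant encoder of Eq. (3): slot k holds h(R_k X) + g(R_k a)."""
    return tuple(h(np.rot90(X, -k)) + g(rot_action(a, k)) for k in range(4))

X = rng.integers(0, 2, size=(5, 5))
z = encode(X, "down")
z_rot = encode(np.rot90(X, -1), rot_action("down", 1))  # rotate map and action together
shifted = z[1:] + z[:1]                                  # R90 on the latent: cyclic slot shift
assert all(np.allclose(u, v) for u, v in zip(z_rot, shifted))
print("encoder equivariance check passed")
```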
We can build a \(C_{4}\)-equivariant transition model by maintaining the structure in the latent space:
\[T(\mathbf{z})=(\tau(\mathbf{z}_{1}),\tau(\mathbf{z}_{2}),\tau(\mathbf{z}_{3} ),\tau(\mathbf{z}_{4})). \tag{4}\]
It is also possible to have a less constrained transition model that allows components of \(\mathbf{z}\) to _interact_, while still retaining \(C_{4}\)-equivariance, as follows:
\[T(\mathbf{z})=(\tau(\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_ {4}),\tau(\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_{4},\mathbf{z}_{1}),\tau( \mathbf{z}_{3},\mathbf{z}_{4},\mathbf{z}_{1},\mathbf{z}_{2}),\tau(\mathbf{z}_ {4},\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3})). \tag{5}\]
In our experiments, we use the more constrained variant for MiniPacman, and the less constrained variant for Chaser, as more data is available for the latter. In either case, we take \(\tau\) to be a ResNet.
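A corresponding sketch of the two transition variants of Eqs. (4) and (5) is given below; \(\tau\) is replaced by a simple placeholder map (the paper uses a ResNet), and the assertions confirm that both variants commute with the cyclic slot permutation that represents a \(90^{\circ}\) rotation in latent space.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 8))
tau = lambda z_i: np.tanh(W @ z_i)                                      # placeholder for Eq. (4)
tau4 = lambda a, b, c, d: np.tanh(W @ (a + 0.5*b + 0.25*c + 0.125*d))   # placeholder for Eq. (5)

def T_constrained(z):                       # Eq. (4): slots evolve independently
    return tuple(tau(z_i) for z_i in z)

def T_interacting(z):                       # Eq. (5): slots interact, cyclic order preserved
    z1, z2, z3, z4 = z
    return (tau4(z1, z2, z3, z4), tau4(z2, z3, z4, z1),
            tau4(z3, z4, z1, z2), tau4(z4, z1, z2, z3))

rot = lambda z: z[1:] + z[:1]               # latent C4 action: cyclic slot shift
z = tuple(rng.normal(size=8) for _ in range(4))
for T in (T_constrained, T_interacting):
    assert all(np.allclose(u, v) for u, v in zip(T(rot(z)), rot(T(z))))
print("transition equivariance checks passed")
```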
The policy model is made \(C_{4}\)-equivariant by appropriately combining state and action embeddings from all four latents in \(\mathbf{z}\):
\[P(a\,|\,\mathbf{z})=\frac{\pi(a\,|\,\mathbf{z}_{1})+\pi(\mathbf{R}_{90^{ \circ}}a\,|\,\mathbf{z}_{2})+\pi(\mathbf{R}_{180^{\circ}}a\,|\,\mathbf{z}_{3} )+\pi(\mathbf{R}_{270^{\circ}}a\,|\,\mathbf{z}_{4})}{4} \tag{6}\]
where \(\pi(\cdot\,|\,\mathbf{z}_{i})\) is an MLP followed by a softmax, which produces a probability distribution over actions given the map encoded by \(\mathbf{z}_{i}\). It is easy to show that \(\sum_{a\in A}P(a\,|\,\mathbf{z})=1\), i.e. \(P(\cdot\,|\,\mathbf{z})\) is properly normalised, and that \(P(\mathbf{R}_{90^{\circ}}a\,|\,\mathbf{R}_{90^{\circ}}\mathbf{z})=P(a\,|\, \mathbf{z})\), i.e. it satisfies \(C_{4}\)-equivariance.
Figure 2: Architecture of Equivariant MuZero, where \(h\), \(g\) are encoders, \(\tau\) is the transition model, \(\rho\) is the reward model, \(v\) is the value model and \(\pi\) is the policy predictor. Each colour represents an element of the \(C_{4}\) group \(\{\mathbf{I},\mathbf{R}_{90^{\circ}},\mathbf{R}_{180^{\circ}},\mathbf{R}_{270^{ \circ}}\}\) applied to the input (observation and action).
Lastly, the reward and value networks (\(R\), \(V\)), modeled by MLPs \(\rho\) and \(v\) respectively, should be \(C_{4}\)-invariant. We can satisfy this constraint by _aggregating_ the latent space with any \(C_{4}\)-invariant function, such as sum, average or max. Here we use summation:
\[R(\mathbf{z})=\rho(\mathbf{z}_{1}+\mathbf{z}_{2}+\mathbf{z}_{3}+\mathbf{z}_{4}),\qquad V(\mathbf{z})=v(\mathbf{z}_{1}+\mathbf{z}_{2}+\mathbf{z}_{3}+\mathbf{z }_{4}). \tag{7}\]
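The remaining heads can be sketched in the same spirit. The code below follows Eqs. (6) and (7) with placeholder weights standing in for \(\pi\), \(\rho\) and \(v\), and verifies that the resulting policy is properly normalised and \(C_{4}\)-equivariant while the reward and value outputs are invariant; the weight values and latent size are arbitrary illustrative choices.

```python
import numpy as np

ACTIONS = ["right", "down", "left", "up"]
rot_action = lambda a, k: ACTIONS[(ACTIONS.index(a) + k) % 4]

rng = np.random.default_rng(4)
W_pi = rng.normal(size=(4, 8))   # stand-in for the per-slot policy MLP pi
w_rho = rng.normal(size=8)       # stand-in for the reward MLP rho
w_v = rng.normal(size=8)         # stand-in for the value MLP v

def pi(z_i):
    e = np.exp(W_pi @ z_i - (W_pi @ z_i).max())
    return dict(zip(ACTIONS, e / e.sum()))

def P(a, z):
    """Eq. (6): average the per-slot policies evaluated on correspondingly rotated actions."""
    return sum(pi(z[k])[rot_action(a, k)] for k in range(4)) / 4.0

R = lambda z: float(w_rho @ sum(z))   # Eq. (7): invariant sum-aggregation, then reward head
V = lambda z: float(w_v @ sum(z))     # Eq. (7): invariant sum-aggregation, then value head

z = tuple(rng.normal(size=8) for _ in range(4))
rot = lambda zz: zz[1:] + zz[:1]
assert np.isclose(sum(P(a, z) for a in ACTIONS), 1.0)                 # normalisation
assert np.isclose(P(rot_action("down", 1), rot(z)), P("down", z))     # C4-equivariance
assert np.isclose(R(rot(z)), R(z)) and np.isclose(V(rot(z)), V(z))    # C4-invariance
print("policy, reward and value symmetry checks passed")
```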
Composing the equivariant components described above (Equations 3-7), we construct the end-to-end equivariant EqMuZero agent, displayed in Figure 2. In Appendix A, we prove that, assuming that all the relevant neural networks used by MuZero are \(\mathfrak{G}\)-equivariant, the proposed EqMuZero agent will select actions in a \(\mathfrak{G}\)-equivariant manner.
_Results._ We compare EqMuZero with a standard MuZero that uses non-equivariant components: ResNet-style networks for the encoder and transition models, and MLP-based policy, value and reward models, following Hamrick et al. (2020). Moreover, as the encoder and the policy of EqMuZero are the only two components which require knowledge of how the symmetry group acts on the environment, we include the following ablations in order to evaluate the trade-off between end-to-end equivariance and general applicability: Standard MuZero with an equivariant encoder, Equivariant MuZero with a standard encoder, and Equivariant MuZero with a standard policy model.
We train each agent on a set of maps, \(\mathbf{X}\). To test for generalisation, we measure the agent's performance on three, progressively harder, settings. Namely, we evaluate the agent on \(\mathbf{X}\), with randomised initial agent position (denoted by _same_ in our results), on the set of rotated maps \(\mathbf{R}\mathbf{X}\), where \(\mathbf{R}\in\{\mathbf{R}_{90^{\circ}},\mathbf{R}_{180^{\circ}},\mathbf{R}_{270 ^{\circ}}\}\) (denoted by _rotated_) and, lastly, on a set of maps \(\mathbf{Y}\), such that \(\mathbf{Y}\cap\mathbf{X}=\varnothing\) and \(\mathbf{Y}\cap\mathbf{R}\mathbf{X}=\varnothing\) (denoted by _different_).
Figure 3 (top) presents the results of the agents on MiniPacman. First, we empirically confirm that the average reward on layouts \(\mathbf{X}\), seen during training, matches the average reward gathered on the rotations of the same mazes, \(\mathbf{R}\mathbf{X}\), for EqMuZero. Second, we notice that replacing the equivariant policy with a non-equivariant one does not significantly impact performance. However, the same swap in the encoder brings the performance of the agent down to that of Standard MuZero--this suggests that the structure in the latent space of the transition model, when not combined with some explicit method of imposing equivariance in the encoder, does not provide noticeable benefits. Third, we notice that Equivariant MuZero is generally robust to layout variations, as the learnt high-reward behaviours also transfer to \(\mathbf{Y}\). At the same time, Standard MuZero significantly drops in performance for both \(\mathbf{Y}\) and \(\mathbf{RX}\). We note that experiments on MiniPacman were done in a low-data scenario, using 5 maps of size \(14\times 14\) for training; we observed that the differences between agents diminished when all agents were trained with at least 20 times more maps.

Figure 3: Results on procedurally-generated MiniPacman (top) and Chaser from ProcGen (bottom).
Figure 3 (bottom) compares the performance of the agents on the ProcGen game, Chaser, which has similar dynamics to MiniPacman, but larger mazes of size \(64\times 64\) and a more complex action space. Due to the complexity of the action space, we only use EqMuZero with a standard policy, rather than a fully equivariant version. We use 500 maze instances for training. Our results demonstrate that, even when the problem complexity is increased in such a way, Equivariant MuZero still consistently outperforms the other agents, leading to more robust plans being discovered.
|
2306.15362 | Planning Landmark Based Goal Recognition Revisited: Does Using Initial
State Landmarks Make Sense? | Goal recognition is an important problem in many application domains (e.g.,
pervasive computing, intrusion detection, computer games, etc.). In many
application scenarios, it is important that goal recognition algorithms can
recognize goals of an observed agent as fast as possible. However, many early
approaches in the area of Plan Recognition As Planning, require quite large
amounts of computation time to calculate a solution. Mainly to address this
issue, recently, Pereira et al. developed an approach that is based on planning
landmarks and is much more computationally efficient than previous approaches.
However, the approach, as proposed by Pereira et al., also uses trivial
landmarks (i.e., facts that are part of the initial state and goal description
are landmarks by definition). In this paper, we show that it does not provide
any benefit to use landmarks that are part of the initial state in a planning
landmark based goal recognition approach. The empirical results show that
omitting initial state landmarks for goal recognition improves goal recognition
performance. | Nils Wilken, Lea Cohausz, Christian Bartelt, Heiner Stuckenschmidt | 2023-06-27T10:20:28Z | http://arxiv.org/abs/2306.15362v2 | # Planning Landmark Based Goal Recognition Revisited: Does Using Initial State Landmarks Make Sense?
###### Abstract
Goal recognition is an important problem in many application domains (e.g., pervasive computing, intrusion detection, computer games, etc.). In many application scenarios, it is important that goal recognition algorithms can recognize goals of an observed agent as fast as possible. However, many early approaches in the area of Plan Recognition As Planning, require quite large amounts of computation time to calculate a solution. Mainly to address this issue, recently, Pereira et al. [11] developed an approach that is based on planning landmarks and is much more computationally efficient than previous approaches. However, the approach, as proposed by Pereira et al., also uses trivial landmarks (i.e., facts that are part of the initial state and goal description are landmarks by definition). In this paper, we show that it does not provide any benefit to use landmarks that are part of the initial state in a planning landmark based goal recognition approach. The empirical results show that omitting initial state landmarks for goal recognition improves goal recognition performance.
Keywords:Online Goal Recognition Classical Planning Planning Landmarks.
## 1 Introduction
Goal recognition is the task of recognizing the goal(s) of an observed agent from a possibly incomplete sequence of actions executed by that agent. This task is relevant in many real-world application domains like crime detection [5], pervasive computing [19], [4], or traffic monitoring [12]. State-of-the-art goal recognition systems often rely on the principle of Plan Recognition As Planning (PRAP) and hence utilize classical planning systems to solve the goal recognition problem [13], [14], [17], [1]. However, many of these approaches require quite large amounts of computation time to calculate a solution. Mainly to address this issue, recently, Pereira et al. [11] developed an approach that is based on planning landmarks (PLR) and is much more computationally efficient than
previous approaches. The approach, as proposed by Pereira et al., also uses trivial landmarks (i.e., facts that are part of the initial state and goal description are landmarks by definition). However, in this paper, we formally analyze and discuss why using landmarks that are part of the initial state provides no benefit for goal recognition. On the contrary, we show that ignoring initial state landmarks is superior in terms of goal recognition performance. In addition, we provide three new evaluation datasets and analyze how the structure of a goal recognition problem affects the results of a planning landmark based goal recognition approach when initial state landmarks are used or ignored. More explicitly, the contributions of this paper are:
1. We formally discuss why it does not provide a benefit to use initial state landmarks for goal recognition and propose and adjusted planning landmark based approach.
2. We provide three new benchmark datasets that are based on a publicly available dataset, which is commonly used in the literature [10]. These datasets have a modified goal structure, such that not all possible goals include the same number of facts, which affects evaluation performance.
3. We empirically show that ignoring initial state landmarks is superior regarding goal recognition performance of the PLR approach.
## 2 Background
In the context of classical planning systems, planning landmarks are usually utilized to guide the heuristic search through the search space that is induced by a planning problem [8]. However, recently, Pereira et al. [11] proposed an approach that utilizes them to solve the goal recognition problem. The basic idea of PLR is to use the structural information that can be derived from planning landmarks, which can be - informally - seen as way-points that have to be passed by every path to a possible goal. Hence, when such a way-point is observed to have been passed recently by the observed agent, this indicates that the agent is currently following a path to the goal(s) for which that way-point is a landmark. In this work, we propose an adapted version of PLR [11]. Although PLR was originally developed for the goal recognition problem, it can also be applied to the online goal recognition problem, which we consider in the empirical evaluation. Before we formally define the goal recognition problem and the online goal recognition problem, we start by defining a planning problem.
### Classical Planning
Classical planning is usually based on a model of the planning domain that defines possible actions, their preconditions, and effects on the domain. More formally, in this work, we define a (STRIPS) planning problem as follows:
Definition 1 ((Strips) Planning Problem): A Planning Problem is a tuple \(P=\langle F,s_{0},A,g\rangle\) where \(F\) is a set of facts, \(s_{0}\subseteq F\) and \(g\subseteq F\) are the initial
state and a goal, and \(A\) is a set of actions with preconditions \(Pre(a)\subseteq F\) and lists of facts \(Add(a)\subseteq F\) and \(Del(a)\subseteq F\) that describe the effect of action \(a\) in terms of facts that are added and deleted from the current state. Actions have a non-negative cost \(c(a)\). A state is a subset of \(F\). A goal state is a state \(s\) with \(s\supseteq g\). An action \(a\) is applicable in a state \(s\) if and only if \(Pre(a)\subseteq s\). Applying an action \(a\) in a state \(s\) leads to a new state \(s^{\prime}=(s\cup Add(a)\setminus Del(a))\). A solution for a planning problem (i.e., a plan) is a sequence of applicable actions \(\pi=a_{1},\cdots a_{n}\) that transforms the initial state into a goal state. The cost of a plan is defined as \(c(\pi)=\sum\limits_{i}c(a_{i})\)._
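A compact illustration of Definition 1 in code may help fix notation: the sketch below encodes states as sets of facts, applies an action via its add and delete lists, and checks goal satisfaction on a toy one-action problem. The fact and action names are made up for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset     # Pre(a)
    add: frozenset     # Add(a)
    delete: frozenset  # Del(a)
    cost: float = 1.0

applicable = lambda a, s: a.pre <= s                 # Pre(a) is contained in s
apply_action = lambda a, s: (s | a.add) - a.delete   # s' = (s u Add(a)) \ Del(a)
plan_cost = lambda plan: sum(a.cost for a in plan)

move = Action("move-a-b", frozenset({"at-a"}), frozenset({"at-b"}), frozenset({"at-a"}))
s0, goal = frozenset({"at-a"}), frozenset({"at-b"})

state = s0
for a in [move]:                 # a one-step plan
    assert applicable(a, state)
    state = apply_action(a, state)
assert goal <= state             # a goal state s satisfies s containing g
print("plan reaches the goal with cost", plan_cost([move]))
```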
### Goal Recognition
Definition 2 (Goal Recognition): Goal recognition is the problem of inferring a nonempty subset \(\mathbf{G}\) of a set of intended goals \(G\) of an observed agent, given a possibly incomplete sequence of observed actions \(\mathbf{O}\) and a domain model \(D\) that describes the environment in which the observed agent acts. Further, the observed agent acts according to a hidden policy \(\delta\). More formally, a goal recognition problem is a tuple \(R=\langle D,\mathbf{O},G\rangle\). A solution to a goal recognition problem \(R\) is a nonempty subset \(\mathbf{G}\subseteq G\) such that all \(g\in\mathbf{G}\) are considered to be equally most likely to be the true hidden goal \(g_{*}\) that the observed agent currently tries to achieve.
The most favorable solution to a goal recognition problem \(R\) is a subset \(\mathbf{G}\) which only contains the true hidden goal \(g_{*}\). In this work, \(D=\langle F,I,A\rangle\) is a planning domain with a set of facts \(F\), the initial state \(I\), and a set of actions \(A\). The online goal recognition problem is an extension to the previously defined goal recognition problem that additionally introduces the concept of time and we define it as follows:
Definition 3 (Online Goal Recognition): Online goal recognition is a special variant of the goal recognition problem (Definition 2), where we assume that the observation sequence \(\mathbf{O}\) is revealed incrementally. More explicitly, let \(t\in[0,T]\) be a time index, where \(T=|\mathbf{O}|\) and hence \(\mathbf{O}\) can be written as \(\mathbf{O}=(\mathbf{O}_{t})_{t\in[0,T]}\). For every value of \(t\), a goal recognition problem \(R(t)\) can be induced as \(R(t)=\langle D,G,\mathbf{O_{t}}\rangle\), where \(\mathbf{O_{t}}=(\mathbf{O}_{t^{\prime}})_{t^{\prime}\in[0,t]}\). A solution to the online goal recognition problem is a nonempty subset \(\mathbf{G}_{t}\subseteq G\) for every \(t\in[0,T]\).
### Planning Landmarks
Planning landmarks are typically defined as facts that must be true (i.e., part of the current planning state) or actions that must be executed at some point during the execution of a valid plan starting at \(s_{0}\) that achieves the goal \(g\)[8]. PLR only focuses on _fact landmarks_. More precisely, following [8], we define fact landmarks as follows:
Definition 4 (Fact Landmark): Given a planning problem \(P=\langle F,s_{0},A,g\rangle\), a fact \(f\in F\) is a fact landmark if for all plans \(\pi=\langle a_{1},\ldots,a_{n}\rangle\) that reach \(g\): \(\exists s_{i}:f\in s_{i};0\leq i\leq n\), where \(s_{i}\) is the planning state that is reached by applying action \(a_{i}\) to state \(s_{i-1}\).
[8] further divide this set of fact landmarks into _trivial_ and _non-trivial_ landmarks. They consider all landmarks that are either contained in the initial state (i.e., \(f\in s_{0}\)) or are part of the goal (i.e., \(f\in g\)) as trivial landmarks because they are trivially given by the planning problem definition. All other landmarks are considered to be non-trivial.
As an example, consider the smart home scenario depicted in Figure 1. For this example, we assume, that the corresponding planning domain uses a predicate _(is-at?x)_ to describe the current position of the agent (e.g., in the depicted state the grounded fact _(is-at k2)_ is true). For this example, one potential goal of the agent is defined as \(g=\{\)_(is-at ba3)_\(\}\). When we assume that the agent can carry out movement actions from one cell to any adjacent cell, then the facts _(is-at h3)_ and _(is-at ba1)_ would be _non-trivial_ fact landmarks because these cells have to be visited by every valid path from the initial position k2 to the goal position ba3 but are not part of the initial state or the goal. Moreover, _(is-at k2)_ and _(is-at ba3)_ would be _trivial_ landmarks because they also have to be true on every valid path but they are given in the initial state and the goal definition respectively.
### Extraction of Planning Landmarks
To extract landmarks, we use two landmark extraction algorithms, which were also used by Gusmao et al. [6]. Both algorithms will be described shortly in the following.
Figure 1: Exemplary Smart Home Layout.
_Exhaustive._ This algorithm computes an exhaustive set of fact landmarks on the basis of a Relaxed Planning Graph (RPG). An RPG is a relaxed representation of a planning graph that ignores all delete effects. As shown by Hoffmann et al. [8], RPGs can be used to check whether a fact is a landmark. The exhaustive algorithm checks for every fact \(f\in F\), whether it is a landmark.
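One common way to realise this RPG-based test is to remove every action that adds a candidate fact and check whether the goal is still reachable in the delete relaxation; if it is not, the fact must hold on every plan. The sketch below implements this sufficient check on a toy corridor inspired by the smart-home example of Figure 1; the fact names are illustrative, and the snippet is not the exhaustive extractor itself.

```python
def relaxed_reachable(facts0, actions, goal):
    """Fixed-point reachability when all delete effects are ignored."""
    reached = set(facts0)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if pre <= reached and not add <= reached:
                reached |= add
                changed = True
    return goal <= reached

def is_fact_landmark(f, s0, actions, goal):
    """f is a landmark if the goal becomes relaxed-unreachable once every achiever of f is dropped."""
    if f in s0:
        return True  # trivial landmark by definition
    pruned = [(pre, add) for pre, add in actions if f not in add]
    return not relaxed_reachable(s0, pruned, goal)

# Toy corridor k2 -> h3 -> ba1 -> ba3 with movement allowed in both directions.
cells = ["k2", "h3", "ba1", "ba3"]
actions = [({f"at-{a}"}, {f"at-{b}"}) for a, b in zip(cells, cells[1:])]
actions += [({f"at-{b}"}, {f"at-{a}"}) for a, b in zip(cells, cells[1:])]
s0, goal = {"at-k2"}, {"at-ba3"}
for fact in [f"at-{c}" for c in cells]:
    print(fact, is_fact_landmark(fact, s0, actions, goal))
```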
_Richter [16]._ The algorithm proposed by Richter in principle works similarly to the algorithm developed by Hoffmann et al. [8], which was originally used by Pereira et al. [11]. The two main differences are that the algorithm by Richter considers the \(SAS^{+}\) encoding of planning domains and allows disjunctive landmarks. The algorithm of Hoffmann et al. only considers facts as potential landmarks that are part of the preconditions of all first achievers of a potential landmark \(l\). In contrast, the algorithm proposed by Richter allows for disjunctive landmarks, where each disjunctive landmark contains one fact from one precondition of one of the possible achievers of \(l\). This allows this method to find more landmarks than the algorithm from Hoffmann et al.
## 3 Ignoring Initial State Landmarks in Planning Landmark Based Goal Recognition
In this paper, we propose to adjust PLR such that initial state landmarks are ignored. The following subsections first introduce the adjusted approach before Subsection 3.3 formally analyzes and discusses, why we think that considering initial state landmarks provides no additional benefit to solve the goal recognition problem.
### Computing Achieved Grounded Landmarks
The two heuristics, which were proposed by Pereira et al. [11] to estimate \(P(G|O)\), both reason over the set of landmarks that were already achieved by a given observation sequence \(\boldsymbol{o}\) for each goal \(g\in G\), which is referred to as \(AL_{g}\). To determine the set of achieved landmarks for each goal, we use Algorithm 1. This algorithm is inspired by the original algorithm proposed by Pereira et al. [11]. Compared to the original, Algorithm 1 differs substantially in two points. First, it is not able to consider the predecessor landmarks for each landmark that was detected to be achieved by the given observations. The reason for this is that ordering information between landmarks would be necessary to do this. However, such information is not generated by all landmark extraction methods that are evaluated in this paper. As a consequence, the adjusted algorithm will probably have more difficulties dealing with missing observations compared to the original algorithm. Second, in contrast to the original algorithm, Algorithm 1 does not consider initial state landmarks to be actually _achieved_ by the given observation sequence \(\boldsymbol{o}\). Instead, these landmarks are simply ignored during the goal recognition process.
```
0:\(I\) initial state, \(G\) set of candidate goals, \(\boldsymbol{o}\) observations, and a set of extracted landmarks \(L_{g}\) for each goal \(g\in G\).
0:A mapping \(M_{G}\) between each goal \(g\in G\) and the respective set of achieved landmarks \(AL_{g}\).
1:functionCompute Achieved Landmarks(\(I\), \(G\), \(\boldsymbol{o}\), \(L_{G}\))
2:\(M_{G}\leftarrow\langle\rangle\)
3:for all\(g\in G\)do
4:\(L_{g}\leftarrow\) all fact landmarks from \(L_{g}\) s.t.
5:\(\forall l\in L_{g}:l\notin I\)
6:\(L\leftarrow\emptyset\)
7:\(AL_{g}\leftarrow\emptyset\)
8:for all\(o\in\boldsymbol{o}\)do
9:\(L\leftarrow\{l\in L_{g}|l\in Pre(o)\cup Add(o)\wedge l\notin L\}\)
10:\(AL_{g}\gets AL_{g}\cup L\)
11:endfor
12:\(M_{G}(g)\gets AL_{g}\)
13:endfor
14:return\(M_{G}\)
15:endfunction
```
**Algorithm 1** Compute achieved landmarks for each goal.
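For readers who prefer an executable form, a compact Python rendering of Algorithm 1 might look as follows; observations are represented as pairs of precondition and add sets, and the facts in the small example call are made up.

```python
def compute_achieved_landmarks(initial_state, goals, observations, landmarks):
    """Sketch of Algorithm 1: landmarks[g] is L_g, each observation is (Pre(o), Add(o))."""
    achieved = {}
    for g in goals:
        # Drop landmarks that are part of the initial state (lines 4-5 of Algorithm 1).
        l_g = {l for l in landmarks[g] if l not in initial_state}
        al_g = set()
        for pre, add in observations:
            al_g |= l_g & (pre | add)   # landmarks touched by this observation
        achieved[g] = al_g
    return achieved

landmarks = {"ba3": {"at-k2", "at-h3", "at-ba1", "at-ba3"}}
obs = [({"at-k2"}, {"at-h3"})]  # one observed move from k2 to h3
print(compute_achieved_landmarks({"at-k2"}, ["ba3"], obs, landmarks))
```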
### Estimating Goal Probabilities
To estimate the goal probabilities from the sets of all extracted landmarks (i.e., \(L_{g}\)) and landmarks already achieved by \(\boldsymbol{o}\) (i.e., \(AL_{g}\)) for each \(g\in G\), we use slightly adjusted versions of the heuristics introduced by [11].
_Goal Completion Heuristic_. The original version of this heuristic estimates the completion of an entire goal as the average of the completion percentages of the sub-goals of that goal. More precisely, the original heuristic is computed as follows [11]:
\[h_{gc}(g,AL_{g},L_{g})=\left(\frac{\sum_{sg\in g}\frac{|AL_{sg}|}{|L_{sg}|}}{|g|}\right) \tag{1}\]
However, to which of the sub-goals each of the identified achieved landmarks contributes can again only be determined if ordering information between the landmarks is available. Hence, not all landmark extraction methods that are used in this work do generate such information, the completion was slightly adjusted to be computed as:
\[h_{gc}(g,AL_{g},L_{g})=\left(\frac{|AL_{g}|}{|L_{g}|}\right) \tag{2}\]
This adjustment, in some cases, has a significant impact on the resulting heuristic scores. For example, consider the case that \(g=\{sg_{0},sg_{1},sg_{2},sg_{3},sg_{4}\}\), \(|L_{sg_{i}}|=1\) and \(|AL_{sg_{i}}|=1\), \(\forall sg_{i}\in g;0\leq i\leq 3\), \(|AL_{sg_{4}}|=0\), and \(|L_{sg_{4}}|=30\). In this case, the result of Equation 1 would be \(4/5\), whereas the result of Equation 2 would be \(4/34\). Thus, the more unevenly the landmarks are distributed over the sub-goals, the larger the difference between the original heuristic calculation and the
adjusted calculation becomes. Nevertheless, it is not fully clear which of the two options achieves better goal recognition performance.
_Landmark Uniqueness Heuristic._ The second heuristic, which was proposed by Pereira et al. [11], does not only consider the percentage of completion of a goal in terms of achieved landmarks but also considers the uniqueness of the landmarks. The intuition behind this heuristic is that it is quite common that several goals share a common set of fact landmarks. Hence, landmarks that are only landmarks of a small set of potential goals (i.e., landmarks that are more unique) provide us with more information regarding the most probable goal than landmarks that are landmarks for a larger set of goals. For this heuristic, _landmark uniqueness_ is defined as the inverse frequency of a landmark among the found sets of landmarks for all potential goals. More formally the landmark uniqueness is computed as follows [11]:
\[L_{uniq}(l,L_{G})=\left(\frac{1}{\sum_{L_{g}\in L_{G}}|\{l|l\in L_{g}\}|}\right) \tag{3}\]
Following this, the uniqueness heuristic score is computed as:
\[h_{uniq}(g,AL_{g},L_{g},L_{G})=\left(\frac{\sum_{al\in AL_{g}}L_{uniq}(al,L_{G} )}{\sum_{l\in L_{g}}L_{uniq}(l,L_{G})}\right) \tag{4}\]
To determine the set of most probable goals, for both heuristics, the heuristic values are calculated for all potential goals. Based on these scores, the goals that are assigned the highest heuristic score are considered the most probable goals.
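Both heuristics are straightforward to compute from the landmark sets. The sketch below implements Eqs. (2)-(4) and evaluates them on two hypothetical goals with made-up landmark sets; it is meant only to illustrate how the scores are obtained.

```python
def h_gc(al_g, l_g):
    """Adjusted goal-completion heuristic of Eq. (2)."""
    return len(al_g) / len(l_g) if l_g else 0.0

def h_uniq(al_g, l_g, landmarks_by_goal):
    """Landmark-uniqueness heuristic of Eqs. (3) and (4)."""
    uniqueness = lambda l: 1.0 / sum(1 for lg in landmarks_by_goal.values() if l in lg)
    denom = sum(uniqueness(l) for l in l_g)
    return sum(uniqueness(l) for l in al_g) / denom if denom else 0.0

# Hypothetical landmark sets for two candidate goals and their achieved landmarks.
L_G = {"g1": {"a", "b", "c"}, "g2": {"b", "c", "d", "e"}}
AL = {"g1": {"b"}, "g2": {"b", "d"}}
for g in L_G:
    print(g, round(h_gc(AL[g], L_G[g]), 3), round(h_uniq(AL[g], L_G[g], L_G), 3))
```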
### Why Using Initial State Landmarks Does Bias Goal Recognition Performance
We propose to adjust the original PLR approach to ignore initial state landmarks because we think that landmarks that are part of the initial state do not provide any valuable information for goal recognition but might potentially even have a misleading effect. This is because using initial state landmarks for goal recognition in fact means that information which is not derived from the observed agent behaviour is used for recognition. Moreover, due to how the two recognition heuristics and the utilized planning domain are defined, using initial state landmarks introduces a bias towards considering goals with smaller numbers of non initial state landmarks as more probable. As a consequence, the goal(s) that have the largest fraction of their landmarks in the initial state are considered to be most probable when no action has been observed so far. However, this is only due to how the domain and goal descriptions are defined and not by actually observed agent behaviour.
In the following, this issue is analyzed more formally based on the completion heuristic. As the uniqueness heuristic is very similar to the completion heuristic, just that it weights more unique landmarks stronger, the theoretical analysis
would basically follow the same lines. Nevertheless, as initial state landmarks have the lowest uniqueness score, the uniqueness heuristic already is closer to ignoring initial state landmarks than the completion heuristic.
\[h_{gc}(g,al_{g},l_{g},s_{0})=\frac{|al_{g}|+|s_{0}|}{|l_{g}|+|s_{0}|} \tag{5}\]
The completion heuristic (c.f., Equation 2) can be reformulated as in Equation 5. Here, we split the two sets \(AL_{g}\) and \(L_{g}\) into the sets \(al_{g}\) and \(s_{0}\), and \(l_{g}\) and \(s_{0}\) respectively. Consequently, we have \(al_{g}=\{f|f\in AL_{g}\setminus s_{0}\}\), \(l_{g}=\{f|f\in L_{g}\setminus s_{0}\}\).
Let us now first consider what happens to the heuristic value of the completion heuristic, when we set \(|s_{0}|\) to extreme values (i.e., 0 or \(\infty\)) (c.f., equations 6 and 7).
\[\lim_{|s_{0}|\to 0}\frac{|al_{g}|+|s_{0}|}{|l_{g}|+|s_{0}|}=\frac{|al_{g}|}{|l_{g}|} \tag{6}\]
When we consider \(|s_{0}|\to 0\), the completion heuristic converges to the value of the fraction \(\frac{|al_{g}|}{|l_{g}|}\). This case is similar to ignoring initial state landmarks.
\[\lim_{|s_{0}|\rightarrow\infty}\frac{|al_{g}|+|s_{0}|}{|l_{g}|+|s_{0}|}=1 \tag{7}\]
When we consider \(|s_{0}|\rightarrow\infty\), the completion heuristic converges to the value of 1. Hence, in theory, if we had infinitely many initial state landmarks, the heuristic value for all goals would be 1, independently of which landmarks were already achieved by the observed actions. In contrast, when we completely ignore initial state landmarks (i.e., \(|s_{0}|=0\)), the heuristic value for all goals _only_ depends on which non initial state landmarks exist for each goal and how many of those were already achieved by the observed actions. Consequently, in this case, the decision is _only_ based on information that is gained from the observation sequence. In summary, the more initial state landmarks there are compared to the number of non initial state landmarks, the less the decision on which goal(s) are the most probable ones depends on information that can be gained from the observation sequence. How strongly the heuristic value for a goal is biased by considering initial state landmarks depends on how many non initial state landmarks exist for this goal. If all goals had similar numbers of non initial state landmarks, considering initial state landmarks would actually not affect the ranking of goals based on the completion heuristic. Nevertheless, in practice this assumption almost never holds, and due to this, we analyze the impact of the size of \(l_{g}\) on the heuristic score in the following.
How the value of \(|l_{g}|\) affects the completion heuristic, again for the two extreme cases \(|l_{g}|\to 0\) and \(|l_{g}|\rightarrow\infty\), is formalized by equations 8 and 9. Moreover, in this formalization the case of \(|al_{g}|=0\) for all goals \(g\in G\) is considered. Hence, we analyze how the completion heuristic behaves when no landmarks were achieved yet by the observation sequence.
\[\lim_{|l_{g}|\to 0}\frac{|s_{0}|}{|l_{g}|+|s_{0}|}=1 \tag{8}\]
When we consider \(|l_{g}|\to 0\), i.e., there exist no non initial state landmarks for goal \(g\), the completion heuristic converges to the value of \(1\). This actually means that, although we have not observed any evidence that goal \(g\) is the actual goal of the agent, we would consider goal \(g\) to be the actual goal of the agent. Of course, in practice, it is not very likely that \(|l_{g}|=0\). Nevertheless, this shows that the smaller the size of \(l_{g}\) is, the closer the initial heuristic value is to \(1\).
\[\lim_{|l_{g}|\rightarrow\infty}\frac{|s_{0}|}{|l_{g}|+|s_{0}|}=0 \tag{9}\]
In contrast, when we consider \(|l_{g}|\rightarrow\infty\), the initial heuristic value of the completion heuristic converges to \(0\). In practice, this case will also not happen. However, this means that the larger \(|l_{g}|\) is compared to \(|s_{0}|\), the closer the initial heuristic value gets to \(0\). In summary, this analysis shows very well that by _not_ ignoring initial state landmarks, the completion heuristic heavily favors goals for which \(|l_{g}|\) is small compared to \(|s_{0}|\) when no landmarks have been observed yet. In addition, the slope of the increase in the heuristic value depends on the size of \(|l_{g}|\): the smaller \(|l_{g}|\) is, the steeper the increase of the heuristic value. Hence, by _not_ ignoring initial state landmarks, goals with small \(|l_{g}|\) are not only heavily favored initially, but they also show a faster increase of their heuristic values when non initial state landmarks are observed.
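The bias discussed above is easy to reproduce numerically. In the hypothetical scenario below, with ten initial state landmarks, two goals with 2 and 20 non initial state landmarks, and no observations yet, the completion heuristic of Eq. (5) already strongly prefers the first goal, whereas ignoring initial state landmarks leaves both goals tied at zero.

```python
def completion(num_achieved, num_goal_landmarks, num_init_landmarks, use_init):
    """Eq. (5) with and without counting initial-state landmarks."""
    bonus = num_init_landmarks if use_init else 0
    return (num_achieved + bonus) / (num_goal_landmarks + bonus)

s0_size, l_A, l_B = 10, 2, 20   # hypothetical sizes of s0, l_g for goal A, l_g for goal B
for use_init in (True, False):
    h_A = completion(0, l_A, s0_size, use_init)
    h_B = completion(0, l_B, s0_size, use_init)
    label = "counted" if use_init else "ignored"
    print(f"initial-state landmarks {label}: h(A)={h_A:.2f}, h(B)={h_B:.2f}")
```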
## 4 Evaluation
To evaluate the performance and efficiency of the adjusted methods discussed in the previous sections, we conducted several empirical experiments on three new benchmark datasets, which are based on a commonly used publicly available dataset [10]. More precisely, the goals of the evaluation are:
* Show that ignoring initial state landmarks during the goal recognition process improves the recognition performance.
* Investigate how the structure of the benchmark problems affects goal recognition performance.
### Experimental Design
To assess the goal recognition performance of the different methods, we used the mean goal recognition precision. We calculate the precision similarly to existing works (e.g., [2]). Furthermore, as we consider online goal recognition problems in this evaluation, we calculated the mean precision for different fractions \(\lambda\) of the total observations that were used for goal recognition. Here, we used relative numbers because the lengths of the involved observation sequences differ substantially. Hence, the mean precision \(Precision\) for a fraction \(\lambda\in[0,1]\) is calculated as follows:
\[Precision(\lambda,\mathcal{D})=\frac{1}{|\mathcal{D}|}\sum_{R\in\mathcal{D}}\frac{[g_{*R}\in R(\lfloor T_{R}\lambda\rfloor)]}{|\mathbf{G}_{R(\lfloor T_{R}\lambda\rfloor)}|} \tag{10}\]
Here, \(\mathcal{D}\) is a set of online goal recognition problems \(R\), \(g_{*R}\) denotes the correct goal of goal recognition problem \(R\), \(T_{R}\) is the maximum value of \(t\) for online goal recognition problem \(R\) (i.e., length of observation sequence that is associated with \(R\)), and \([g_{*R}\in R(t)]\) equals 1 if \(g_{*R}\in\mathbf{G}_{R(t)}\) and 0 otherwise, where \(\mathbf{G}_{R(t)}\) is the set of recognized goals for \(R(t)\). In other words, the precision quantifies the probability of picking the correct goal from a predicted set of goals \(\mathbf{G}\) by chance.
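As an illustration of equation (10), the snippet below is a minimal Python sketch of this metric; the record fields and the toy recognizer are hypothetical and only serve to show how the quantities enter the computation.

```
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class OnlineGRProblem:
    true_goal: str                        # g_{*R}
    n_obs: int                            # T_R, length of the observation sequence
    recognize: Callable[[int], Set[str]]  # t -> recognized goal set G_{R(t)}

def mean_precision(problems: List[OnlineGRProblem], lam: float) -> float:
    """Mean precision for a fraction lam of the observations, cf. equation (10)."""
    total = 0.0
    for problem in problems:
        t = int(problem.n_obs * lam)        # number of revealed observations
        recognized = problem.recognize(t)   # G_{R(t)}
        if recognized and problem.true_goal in recognized:
            total += 1.0 / len(recognized)  # chance of picking g_{*R} from G_{R(t)}
    return total / len(problems)

# Hypothetical example: a recognizer that narrows its goal set over time.
toy = OnlineGRProblem("g1", 10, lambda t: {"g1", "g2"} if t < 5 else {"g1"})
print(mean_precision([toy], 0.3), mean_precision([toy], 1.0))  # 0.5 1.0
```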
_Datasets._ As the basis for our evaluation datasets, we use a dataset that is commonly used in the literature [10]. However, we recognized that this dataset, which contains goal recognition problems from 13 different planning domains, almost exclusively contains goal recognition problems in which all possible goals have a similar size (i.e., the same number of facts in all possible goals). This is not only an unrealistic scenario, as in practice one should expect that the different possible goals do not all have the same size in terms of the facts they include; it also biases the recognition performance of the original PLR approach, as in this case the \(l_{g}\) sets are more likely to have similar sizes. To address this issue, we have created three new datasets that are based on the original dataset. First, we modified the sets of possible goals in the existing dataset so that they have varying sizes. During this process, we ensured that none of the possible goals is a true subgoal of any of the other possible goals in the same goal recognition problem. Based on these modified goals, we created one dataset that has a random choice of true goals (\(D_{R}\)), one dataset in which the longest possible goals are the actual agent goals (\(D_{L}\)), and one dataset in which the shortest possible goals are the actual agent goals (\(D_{S}\)). As we have discussed earlier, the original PLR approach heavily favors goals with small \(|l_{g}|\), which is more likely for goals that are smaller in general. Hence, the original PLR approach should have an advantage in the third dataset. To generate the observation sequences for the modified goals, we used the current version of the Fast Downward planner [7].
### Results
Figure 2 shows the average precision for \(D_{R}\), \(D_{L}\), and \(D_{S}\) (depicted in this order from top to bottom) over all 13 domains in each benchmark dataset. On the left-hand side, the average precision for the completion heuristic is reported, and on the right-hand side, the average precision for the uniqueness heuristic is depicted. Further, the subfigure for each combination of heuristic and dataset shows the average precision for each of the evaluated approaches (Exhaustive (EX), Exhaustive with initial state landmarks (EX-init), Richter (RHW), and Richter with initial state landmarks (RHW-init)). The results show that ignoring initial state landmarks for goal recognition, as we propose in this paper, leads to superior recognition performance for each evaluated combination of approach, dataset, and heuristic, compared to when initial state landmarks _are used_ for recognition. It is also interesting to note that the performance difference is larger for the EX extraction algorithm than for the RHW extraction algorithm. One reason
for this might be that the RHW algorithm allows for disjunctive landmarks, whereas the EX algorithm only considers single fact landmarks.
As expected, the completion heuristic achieves the best results on the \(D_{S}\) dataset, in which the shortest goals are the true goals in the goal recognition setups. This is, as analyzed previously, due to the way in which the completion
Figure 2: Average precision of the Exhaustive (EX), Exhaustive with initial state landmarks (EX-init), Richter (RHW), and Richter with initial state landmarks (RHW-init) approaches on the three benchmark datasets \(D_{R}\), \(D_{L}\), and \(D_{S}\) (depicted in this order from top to bottom). The three subfigures on the left-hand side report the results for the completion heuristic, while the subfigures on the right-hand side report the results for the uniqueness heuristic.
heuristic favors goals with smaller sets of landmarks. For the same reason, the results show that the performance difference between ignoring initial state landmarks and not ignoring them is largest for the \(D_{L}\) dataset. In contrast, interestingly, the uniqueness heuristic seems to favor goals with larger sets of landmarks. Most probably, this is due to the weighting by uniqueness scores that this heuristic uses. It is very likely that goals with larger landmark sets also have more facts in their goal description than goals with smaller landmark sets. As the agent always starts from the same initial state, the shorter the plan (whose length in many domains correlates with goal description size), the more likely it becomes that the goal of this plan shares landmarks with other goals and hence has fewer unique landmarks. Conversely, the longer a plan becomes, the more likely it is that its goal includes more unique landmarks in its set of landmarks. Consequently, the uniqueness heuristic favors longer goals.
## 5 Related Work
Since the idea of Plan Recognition as Planning was introduced by [13], many approaches have adopted this paradigm [14], [15], [20], [18], [17], [9], [11], [3]. It was recognized relatively soon that the initial PRAP approaches are computationally demanding, as they require computing entire plans. Since then, this problem has been addressed by many studies with the approach by [11] being a recent example. This method also belongs to a recent type of PRAP methods (to which VS belongs as well), which do not derive probability distributions over the set of possible goals by analyzing cost differences but rank the possible goals by calculating heuristic values. Another approach from this area is a variant that was suggested as an approximation for their main approach by [13].
## 6 Conclusion
In conclusion, in this paper we have formally analyzed and discussed why using initial state landmarks for goal recognition biases the recognition performance. Moreover, we provided three new benchmark datasets, which are based on a dataset that is commonly used in the literature [10]. These three benchmark datasets were used to empirically show that ignoring initial state landmarks for goal recognition is indeed superior in terms of goal recognition performance. In addition, we empirically evaluated the effect of different goal recognition problem structures on the goal recognition performance of planning landmark based goal recognition approaches. An interesting avenue for future work would be to evaluate how well the slightly adjusted algorithms proposed in this paper handle missing and/or noisy observations.
2310.07080 | An Extended B' Formulation for Ablating-Surface Boundary Conditions | The B' formulation can be understood as a mass and energy conservation
formalism at a reacting singular surface. In hypersonics applications, it is
typically used to compute the chemical equilibrium properties of gaseous
mixtures at ablating surfaces, and to estimate the recession velocity of the
interface. In the first half of the paper, we derive the B' formulation to
emphasize first principles. In particular, while we eventually specialize to
the commonly considered case of chemical equilibrium boundary layers that
satisfy the heat and mass transfer analogy, we first derive a general interface
jump condition that lets us highlight all the underlying assumptions of the
well-known B' equations. This procedure helps elucidate the nature of the B'
formalism and it also allows us to straightforwardly extend the original
formulation. Specifically, when applied at the interface between a porous
material and a boundary layer (as in thermal protection systems applications),
the original formulation assumes unidirectional advective transport of gaseous
species from the porous material to the boundary layer (i.e., blowing).
However, under conditions that may appear in hypersonic flight or in
ground-based wind tunnels, boundary layer gases can enter the porous material
due to a favorable pressure gradient. We show that this scenario can be easily
handled via a straightforward modification to the B' formalism, and we
demonstrate via examples that accounting for gas entering the material can
impact the predicted recession velocity of ablating surfaces. In order to
facilitate the implementation of the extended B' formulation in existing
material response codes, we present a short algorithm in section 5 and we also
refer readers to a GitHub repository where the scripts used to generate the
modified B' tables are publicly available. | Alberto Padovan, Blaine Vollmer, Francesco Panerai, Marco Panesi, Kelly A. Stephani, Daniel J. Bodony | 2023-06-20T17:09:42Z | http://arxiv.org/abs/2310.07080v2 | # An Extended \(B^{\prime}\) Formulation for Ablating-Surface Boundary Conditions
###### Abstract
The \(B^{\prime}\) formulation can be understood as a mass and energy conservation formalism at a reacting singular surface. In hypersonics applications, it is typically used to compute the chemical equilibrium properties of gaseous mixtures at ablating surfaces, and to estimate the recession velocity of the interface. In the first half of the paper, we derive the \(B^{\prime}\) formulation to emphasize first principles. In particular, while we eventually specialize to the commonly considered case of chemical equilibrium boundary layers that satisfy the heat and mass transfer analogy, we first derive a general interface jump condition that lets us highlight all the underlying assumptions of the well-known \(B^{\prime}\) equations. This procedure helps elucidate the nature of the \(B^{\prime}\) formalism and it also allows us to straightforwardly extend the original formulation. Specifically, when applied at the interface between a porous material and a boundary layer (as in thermal protection systems applications), the original formulation assumes unidirectional advective transport of gaseous species from the porous material to the boundary layer (i.e., blowing). However, under conditions that may appear in hypersonic flight or in ground-based wind tunnels, boundary layer gases can enter the porous material due to a favorable pressure gradient. We show that this scenario can be easily handled via a straightforward modification to the \(B^{\prime}\) formalism, and we demonstrate via examples that accounting for gas entering the material can impact the predicted recession velocity of ablating surfaces. In order
to facilitate the implementation of the extended \(B^{\prime}\) formulation in existing material response codes, we present a short algorithm in section 5 and we also refer readers to a GitHub repository where the scripts used to generate the modified \(B^{\prime}\) tables are publicly available.
keywords: \(B^{\prime}\) Table, Ablation, Thermal Protection System, Interface Jump Conditions +
Footnote †: journal: Journal of Heat Transfer
## 1 Introduction
Understanding the fluid-structure interaction between a high-speed boundary layer and a reacting porous material is important for various applications, including the design of thermal protection systems (TPS) for atmospheric reentry. High-fidelity simulations that aim to study the coupled physics between the fluid and the solid necessarily require access to computational fluid dynamics (CFD) codes that simulate the physics of the boundary layer, and to material response codes that simulate the dynamic response of the material. However, if we are primarily interested in studying the response of the material, or if we seek a low-resolution estimate of the fluid-material interaction, fully resolving the boundary layer dynamics is a computational burden. In order to circumvent the need to perform a fully resolved CFD calculation, researchers have developed first-principles formulations that model the mass, momentum and energy transfer at the interface between a reacting solid and a boundary layer. The \(B^{\prime}\) formalism discussed in this paper is one such formulation, and it allows one to (i) run the material response code independently of a fluid solver when we are uninterested in the fluid mechanics, or (ii) provide a low-resolution interface boundary condition when we seek a low-resolution estimate of the coupled system.
The \(B^{\prime}\) formalism can be considered as a mass and energy flux-balance condition, arising from a control volume analysis at the interface between two different media. In TPS and ablation applications (Moyer and Wool, 1970a,b), where the interface separates a high-speed boundary layer from a chemically-reacting porous material, this formalism is needed to estimate the surface recession velocity under the assumption of chemical equilibrium at the interface. The convenience of the formulation lies in its computational simplicity, and in the fact that, under several assumptions discussed in sections 3 and 4, the solution of the \(B^{\prime}\) equation can be tabulated as a function of surface temperature, surface pressure and normalized gas mass flux (hence the common name \(B^{\prime}\)_tables_).
Although the original \(B^{\prime}\) formulation is known and implemented in ablation codes (e.g., PATO (Lachaud and Mansour, 2014) and KATS (Weng and Martin, 2014)),
to the best of the authors' knowledge, a derivation from first principles is not readily available in the literature. Specifically, the original formulation is typically presented starting from an infinitesimally thin control volume containing the interface (Moyer and Rindal, 1968; Moyer and Wool, 1970b; Anderson and Kendall, 1970; de Muelenaere et al., 2012; Lachaud and Mansour, 2014; Bellas-Chatzigeorgis, 2018). In sections 2, 3 and 4 we offer an alternative derivation of the \(B^{\prime}\) mass and energy balance equations starting from a _jump condition_ that is derived using the divergence and the generalized transport theorems (Keller, 1954), without explicitly requiring an infinitesimally thin control volume. While in sections 3 and 4 we specialize to boundary layers with unity Lewis numbers (as commonly done in the literature), the jump condition presented in section 2 is general enough that it can be applied to any boundary layer model. This way, we elucidate the nature of the \(B^{\prime}\) formalism and identify the underlying assumptions that are built into it.
In section 5 we use the derivation presented in the first half of the manuscript to extend the \(B^{\prime}\) formulation to include bidirectional mass flux across the interface. In its original form, the \(B^{\prime}\) formulation assumes unidirectional advective transport of gaseous mass from the porous material to the boundary layer (i.e., blowing). (This corresponds to \(B^{\prime}_{g}>0\) in the notation of section 3.) This is because the formulation is often used to simulate the response of pyrolyzing porous materials (Moyer and Wool, 1970a; Lachaud and Mansour, 2014; Chiodi et al., 2022) that exhibit internal pressures that are often higher than the pressure inside the boundary layer (thereby leading to blowing). However, there can be cases where, even in the presence of pyrolysis, the pressure differential is such that there is a net inflow of gases into the porous material (\(B^{\prime}_{g}<0\)). In computational codes that treat the porous material's gases as a time-varying (equilibrium/non-equilibrium) mixture, the inflow of boundary layer gases into the material can be easily accounted for via a species Dirichlet boundary condition at the surface (Lachaud et al., 2015). Conversely, when the gas composition is taken to be constant (i.e., when there is no species tracking), existing codes (e.g., PATO (Lachaud and Mansour, 2014), CHyPS (Chiodi et al., 2022)) typically choose to neglect the effect of the inflow of gases on the surface thermodynamics by setting \(B^{\prime}_{g}=0\). We shall see, however, that enforcing \(B^{\prime}_{g}=0\) can have a non-negligible effect on the surface thermodynamics and on the surface recession velocity. Section 5 presents an extension to the \(B^{\prime}\) formalism that allows for \(B^{\prime}_{g}<0\) even when the gases in the porous material are treated as a constant mixture. First and foremost, this extension has the same computational cost as the original formulation and, just like the latter, it allows for the \(B^{\prime}\) equations to be tabulated a priori. Second, it is constructed such that the normalized recession rate (\(B^{\prime}_{c}\) in the notation of section 3) is a continuous function
of the blowing/aspiration rate \(B^{\prime}_{fl}\). Finally, we identify blowing/aspiration regimes where the recession rate is either independent of the blowing/aspiration rate, or a linear function of the latter. This analysis shows that if the mass flux of gases into the porous material is sufficiently high, its impact on the surface recession velocity is non-negligible. This is demonstrated via examples in section 6, where we show that the modified formulation predicts recession velocities that are always equal to or greater than the recession velocities predicted by the classical \(B^{\prime}\) formulation.
The steps required to implement this formulation in existing material response codes are compactly outlined in Algorithm 1 in section 5. Moreover, the interested reader may generate the modified \(B^{\prime}\) tables using the scripts that are publicly available in the repository [https://github.com/albertopadovan/Modified_Bprime](https://github.com/albertopadovan/Modified_Bprime). The \(B^{\prime}\) tables generated by these scripts should be compatible with the material response code CHyPS (Chiodi et al., 2022) without modification, and with PATO (Lachaud and Mansour, 2014) with little to no modification.
## 2 Jump Condition of a Conserved Quantity
In this section we follow the approach of Keller (1954) to derive the jump condition of a conserved quantity \(\varphi\) across a singular surface where \(\varphi\) is discontinuous. Throughout, we use the general control volume \(\mathcal{V}=\mathcal{V}^{(p)}\cup\mathcal{V}^{(f)}\subseteq\mathbb{R}^{3}\) depicted in figure 1, where the superscripts \((p)\) and \((f)\) denote the two sides of the control volume that are separated by the singular interface \(\mathcal{I}\). In applications of interest, the \((p)\)-side will contain a volume of porous material, while the \((f)\)-side will contain a volume of fluid. The external surfaces of the control volume are denoted by \(\mathcal{S}\), and \(\mathbf{n}\in\mathbb{R}^{3}\) denotes the unit-norm outward-pointing vector normal to the surface.
In multi-physics problems, the spatio-temporal dynamics of conserved quantities are typically governed by partial differential equations defined on either side of the interface \(\mathcal{I}\). For a general quantity \(\varphi(\mathbf{x},t)\), the conservation equations in differential conservative form (and Einstein notation) may read
\[\frac{\partial\varphi^{(f)}}{\partial t}+\frac{\partial}{ \partial x_{i}}\left(\varphi^{(f)}u_{i}^{(f)}\right) =\frac{\partial}{\partial x_{i}}\xi_{i}^{(f)}+\psi^{(f)},\quad \mathbf{x}\in\mathcal{V}^{(f)}, \tag{1}\] \[\frac{\partial\varphi^{(p)}}{\partial t}+\frac{\partial}{ \partial x_{i}}\left(\varphi^{(p)}u_{i}^{(p)}\right) =\frac{\partial}{\partial x_{i}}\xi_{i}^{(p)}+\psi^{(p)},\quad \mathbf{x}\in\mathcal{V}^{(p)}. \tag{2}\]
Here, \(u_{i}\) denotes the \(i\)th component of the transport velocity vector, \(\psi\) denotes a volumetric source term and \(\xi_{i}\) denotes the \(i\)th component of additional terms (e.g., the viscous stress tensor in the momentum equation, or viscous dissipation in the energy equation). Equations (1) and (2) are well-posed on \(\mathcal{V}^{(f)}\) and \(\mathcal{V}^{(p)}\), respectively,
where \(\varphi\) is differentiable with respect to \(\mathbf{x}\), but they do not hold _on_ the interface, where \(\varphi\) typically exhibits a discontinuity. Understanding this discontinuity, and deriving the corresponding jump condition, is at the heart of imposing the correct boundary conditions in computational codes that run multi-physics simulations.
In order to derive the jump condition, we turn to the integral form of the conservation equations. In particular, the conservation equation over \(\mathcal{V}^{(f)}\) is given by
\[\begin{split}&\frac{d}{dt}\int_{\mathcal{V}^{(f)}}\varphi^{(f)}d \mathcal{V}^{(f)}+\int_{\mathcal{S}^{(f)}}\left[\varphi^{(f)}\left(u_{i}^{(f) }-v_{i}^{\mathcal{S}^{(f)}}\right)-\xi_{i}^{(f)}\right]n_{i}^{\mathcal{S}^{(f) }}d\mathcal{S}^{(f)}\\ &+\int_{\mathcal{I}}\left[\varphi^{(f)}\left(u_{i}^{(f)}-v_{i}^{ \mathcal{I}}\right)-\xi_{i}^{(f)}\right]n_{i}^{\mathcal{I}^{(f)}}d\mathcal{I} =\int_{\mathcal{V}^{(f)}}\psi^{(f)}d\mathcal{V}^{(f)},\end{split} \tag{3}\]
where \(v_{i}\) is the \(i\)th component of the surface velocity vector, and \(n_{i}\) is the \(i\)th component of the outward-pointing normal vector. Using the generalized transport theorem on the time-rate-of-change term in (3), and making use of the divergence theorem, it can be checked that equations (3) and (1) are indeed equivalent. The conservation equation over \(\mathcal{V}^{(p)}\) is analogous to (3), with superscripts \((p)\).
We proceed by considering the integral form of the conservation equation for \(\varphi\)
Figure 1: Schematic of a general control volume \(\mathcal{V}=\mathcal{V}^{(f)}\cup\mathcal{V}^{(p)}\) containing the interface \(\mathcal{I}\) between a porous material and a fluid. Superscripts \((f)\) and \((p)\) denote the fluid and porous material’s sides, respectively. A description of the variables is given at the beginning of section 2.
over the whole control volume \(\mathcal{V}\),
\[\begin{split}&\frac{d}{dt}\bigg{\{}\int_{\mathcal{V}(f)}\varphi^{(f)}d \mathcal{V}^{(f)}+\int_{\mathcal{V}^{(p)}}\varphi^{(p)}d\mathcal{V}^{(p)} \bigg{\}}+\int_{\mathcal{S}^{(f)}}\left[\varphi^{(f)}\left(u_{i}^{(f)}-v_{i}^{ \mathcal{S}^{(f)}}\right)-\xi_{i}^{(f)}\right]n_{i}^{\mathcal{S}^{(f)}}d \mathcal{S}^{(f)}\\ &+\int_{\mathcal{S}^{(p)}}\left[\varphi^{(p)}\left(u_{i}^{(p)}-v_ {i}^{\mathcal{S}^{(p)}}\right)-\xi_{i}^{(p)}\right]n_{i}^{\mathcal{S}^{(p)}}d \mathcal{S}^{(p)}=\int_{\mathcal{V}^{(f)}}\psi^{(f)}d\mathcal{V}^{(f)}\\ &+\int_{\mathcal{V}^{(p)}}\psi^{(p)}d\mathcal{V}^{(p)}+\int_{ \mathcal{I}}\psi^{\mathcal{I}}d\mathcal{I}.\end{split} \tag{4}\]
In writing equation (4), we make two assumptions. First, we do not allow for any accumulation of quantity \(\varphi\) on the interface \(\mathcal{I}\). (This would appear as the time-rate of change of the surface integral of \(\varphi\) along \(\mathcal{I}\).) Second, we treat \(\mathcal{I}\) as a reactive interface, which is allowed to create/destroy some amount of \(\varphi\) via the surface source term \(\psi^{\mathcal{I}}\). These are modelling assumptions that can, in principle, be relaxed. For instance, an example of a more involved interface model can be found in Whitaker (1992), where the author considers a finite-thickness interface that is allowed to accumulate mass. Subtracting formula (3) and its analog over \(\mathcal{V}^{(p)}\) from (4), and imposing point-wise equality, the desired jump condition reads
\[\left[\varphi^{(f)}\left(u_{i}^{(f)}-v_{i}\right)-\xi_{i}^{(f)}\right]n_{i}-\left[\varphi^{(p)}\left(u_{i}^{(p)}-v_{i}\right)-\xi_{i}^{(p)}\right]n_{i}=\psi^{\mathcal{I}}, \tag{5}\]
where we have used \(n_{i}\coloneqq n_{i}^{\mathcal{I}^{(p)}}=-n_{i}^{\mathcal{I}^{(f)}}\), and we have dropped the superscript \(\mathcal{I}\) on \(v_{i}\) for notational simplicity. In the next sections, we will use (5) to derive the mass and energy jump conditions at an ablating surface.
## 3 \(B^{\prime}\) Formulation from First Principles: Conservation of Mass
We use the results from the previous section to derive the well-known \(B^{\prime}\) mass balance equation. In doing so, we elucidate the nature of the \(B^{\prime}\) formulation and we identify all its underlying assumptions.
### Conservation of mass at an ablating surface
Moving forward, we specialize to the case of an ablating surface at the interface between a porous material and a fluid. We let the porous material occupy the \(\mathcal{V}^{(p)}\) region of the control volume in figure 1, while the fluid occupies the \(\mathcal{V}^{(f)}\) side. If the fluid is a reacting mixture of \(N_{s}\) species, the differential form of the continuity equation for species \(k\) is given by
\[\frac{\partial\rho_{k}^{(f)}}{\partial t}+\frac{\partial}{\partial x_{i}} \left(\rho_{k}^{(f)}u_{k,i}^{(f)}\right)=\psi_{k}^{(f)},\quad k\in\{1,2,\ldots,N_{s}\}, \tag{6}\]
where \(\rho_{k}\) and \(u_{k,i}\) are the density and \(i\)th component of the velocity associated with species \(k\), and \(\psi_{k}\) is a volumetric source term due to the reacting nature of the mixture. For future reference, we also define the mixture density \(\rho^{(f)}\) and the mixture bulk velocity \(u_{i}^{(f)}\) by (Eckert, 1969)
\[\rho^{(f)}=\sum_{k=1}^{N_{s}}\rho_{k}^{(f)},\quad u_{i}^{(f)}=\frac{1}{\rho^{( f)}}\sum_{k=1}^{N_{s}}\rho_{k}^{(f)}u_{k,i}^{(f)}. \tag{7}\]
The governing equations for the porous material will be treated in a volume-averaged sense. Let the porous material be made of a solid phase and a gaseous mixture with \(N_{s}\) species. In applications of interest, the porous material is typically made up of several solid phases, but for the current discussion it suffices to consider one. Additional solid phases can be considered with minimal change. Conservation of mass of species \(k\) within the porous material requires that equation (6) be satisfied (with all superscripts \((f)\) converted to \((p)\)), where the source term \(\psi_{k}^{(p)}\) may now account for both homogeneous and heterogeneous reactions. The averaging theorem of Whitaker (1967) and the modified averaging theorem of Gray (1975) allow us to volume-average equation (6) over a representative elemental volume \(V\) to obtain
\[\frac{\partial}{\partial t}\left(\varepsilon_{g}\langle\rho_{k}\rangle^{(g)} \right)+\frac{\partial}{\partial x_{i}}\left(\varepsilon_{g}\langle\rho_{k} \rangle^{(g)}\langle u_{k,i}\rangle^{(g)}\right)=\langle\psi_{k}\rangle^{(g)}. \tag{8}\]
Here, \(\varepsilon_{g}\) is the volume fraction occupied by the mixture within the representative volume \(V\), and \(\langle\rho_{k}\rangle^{(g)}\) is the _intrinsic_ volume average of \(\rho_{k}\) (Gray and O'Neill, 1976). In the interest of clarity, we stress that the representative elemental volume \(V\) is not related to \(\mathcal{V}\) in figure 1. A schematic of \(V\) can be found, for instance, in Gray and O'Neill (1976). It is also important to remark that (8) is not exact. In fact, the averaging procedure leads to unclosed terms that are typically neglected, either due to physically-justifiable reasons, or to the impossibility of properly closing them (see equation (24) in Gray and O'Neill (1976)). Once again, for future reference, we let \(\langle\rho\rangle^{(g)}\) and \(\langle u_{i}\rangle^{(g)}\) be the volume-averaged mixture density and mixture bulk velocity, defined analogously to (7). Finally, the volume-averaged conservation of solid mass reads
\[\frac{\partial}{\partial t}\left(\varepsilon_{s}\langle\rho\rangle^{(s)} \right)=\langle\psi_{s}\rangle^{(s)}, \tag{9}\]
where \(\varepsilon_{s}=1-\varepsilon_{g}\) is the volume fraction occupied by the solid. In order to guarantee that, within \(\mathcal{V}^{(p)}\), the sum of mixture mass and solid mass is conserved in the absence of mass fluxes through the boundaries, the source terms are usually taken to satisfy
\[\langle\psi_{s}\rangle^{(s)}+\sum_{k=1}^{N_{s}}\langle\psi_{k}\rangle^{(g)}=0. \tag{10}\]
#### 3.1.1 Conservation of mass of gaseous species \(k\)
We now return to our control volume \(\mathcal{V}\) in figure 1. Per our previous discussion, conservation of mass of species \(k\) in the \(\mathcal{V}^{(f)}\) region of the control volume is governed by (6), while conservation of mass of species \(k\) in the \(\mathcal{V}^{(p)}\) region is governed in a volume-averaged sense by (8). The jump condition in (5) can be used directly, and it reads
\[\rho_{k}^{(f)}\left(u_{k,i}^{(f)}-v_{i}\right)n_{i}-\varepsilon_{g}\langle \rho_{k}\rangle^{(g)}\left(\langle u_{k,i}\rangle^{(g)}-v_{i}\right)n_{i}= \psi_{k}^{\mathcal{I}}, \tag{11}\]
where \(\psi_{k}^{\mathcal{I}}\) is the rate of production (per unit area) of species \(k\) due to reactions at the interface. In ablation applications, this production term models the heterogeneous reactions through which the solid phase of the porous material is converted into gaseous mass (thereby causing surface recession). This will become clear in the next section 3.1.2.
#### 3.1.2 Conservation of solid mass
As in the previous section 3.1.1, we can apply the interface balance equation (5) directly. Since there is no solid phase in the \(\mathcal{V}^{(f)}\) region of the control volume \(\mathcal{V}\), and (9) governs the volume-averaged continuity of solid mass in the \(\mathcal{V}^{(p)}\) region, equation (5) reduces to
\[\varepsilon_{s}\langle\rho\rangle^{(s)}v_{i}n_{i}=\psi_{s}^{\mathcal{I}}. \tag{12}\]
This equation states that the surface velocity \(v_{i}n_{i}\) of the interface \(\mathcal{I}\) is proportional to \(\psi_{s}^{\mathcal{I}}\), where, in ablation applications, \(\psi_{s}^{\mathcal{I}}\) can be understood as the time-rate of change per unit area of solid mass lost to gaseous mass via heterogeneous reactions. As a sanity check, if solid mass is being lost to gaseous mass (e.g., during ablation), then \(\psi_{s}^{\mathcal{I}}<0\), so \(v_{i}n_{i}<0\). Since by convention \(n_{i}=n_{i}^{\mathcal{I}^{(p)}}\), this means that the surface is receding (see figure 1), as expected.
### The \(B^{\prime}\) mass balance
The \(B^{\prime}\) equation for mass conservation is derived from (11) after a number of assumptions that we will outline shortly. Before proceeding we remark that the assumptions outlined herein may or may not be physically justified. We are merely making them in order to obtain the \(B^{\prime}\) mass balance equation from (11).
By adding and subtracting \(\rho_{k}^{(f)}u_{i}^{(f)}\) and \(\varepsilon_{g}\langle\rho_{k}\rangle^{(g)}\langle u_{i}\rangle^{(g)}\) to (11), and using the fact that \(\rho_{k}=z_{k}\rho\), where \(z_{k}\) is the mass fraction of species \(k\), equation (11) can be written as
\[J_{k,i}^{(f)}n_{i}+z_{k}^{(f)}\rho^{(f)}\left(u_{i}^{(f)}-v_{i}\right)n_{i}=J_{k,i}^{(g)}n_{i}+\varepsilon_{g}z_{k}^{(g)}\langle\rho\rangle^{(g)}\left(\langle u_{i}\rangle^{(g)}-v_{i}\right)n_{i}+\psi_{k}^{\mathcal{I}}, \tag{13}\]
where \(J_{k,i}\) are mass diffusion terms defined as
\[J_{k,i}^{(f)}=\rho_{k}^{(f)}\left(u_{k,i}^{(f)}-u_{i}^{(f)}\right),\quad J_{k,i}^{ (g)}=\varepsilon_{g}\langle\rho_{k}\rangle^{(g)}\left(\langle u_{k,i}\rangle^{ (g)}-\langle u_{i}\rangle^{(g)}\right). \tag{14}\]
In order to arrive at the well-known \(B^{\prime}\) equation, the following assumptions need to be made. First, mass diffusion on the porous material's side of the interface (i.e., \(J_{k,i}^{(g)}n_{i}\)) is neglected. The mass diffusion term on the fluid's side of the interface is modelled via correlation (or transfer potential) as \(J_{k,i}^{(f)}=\rho_{e}u_{e,i}St_{M}\left(z_{k}^{(f)}-z_{k}^{(e)}\right)\), where \(St_{M}\) is the mass-transfer Stanton number, and the subscript/superscript "e" denotes boundary layer edge quantities (Eckert, 1969). While more detailed mass diffusion models can be considered (Kendall, 1968; Lachaud et al., 2017), the transfer potential model considered here is the simplest, and it relies on the assumption that all species share the same mass diffusion coefficient (see also Appendix B). Putting this all together, equation (13) becomes
\[\begin{split}&\rho_{e}u_{e,i}St_{M}\left(z_{k}^{(f)}-z_{k}^{(e)} \right)n_{i}+z_{k}^{(f)}\underbrace{\rho^{(f)}\left(u_{i}^{(f)}-v_{i}\right)n _{i}}_{\coloneqq m^{(f)}}\\ &=z_{k}^{(g)}\underbrace{\varepsilon_{g}\langle\rho\rangle^{(g) }\left(\langle u_{i}\rangle^{(g)}-v_{i}\right)n_{i}}_{\coloneqq m^{(g)}}+ \psi_{k}^{\mathcal{I}}.\end{split} \tag{15}\]
In order to obtain the \(B^{\prime}\) equation that is commonly presented in the literature (and implemented in computational codes), we first need to convert (15) to its analog in terms of _elements_ rather than species. Under the assumption of equal diffusion coefficients (so that the definition of \(St_{M}\) remains unchanged), it is straightforward to see that equation (15) can be transformed into
\[\rho_{e}u_{e,i}St_{M}\left(y_{k}^{(f)}-y_{k}^{(e)}\right)n_{i}+y_{k}^{(f)} \dot{m}^{(f)}=y_{k}^{(g)}\dot{m}^{(g)}+\chi_{k}^{\mathcal{I}},\quad k\in\{1,2,\ldots,N_{es}\}, \tag{16}\]
where \(y_{k}\) is the mass fraction of element \(k\) in the mixture, \(N_{es}\) is the number of elements, and \(\chi_{k}^{\mathcal{I}}\) is the surface source term analogous to \(\psi_{k}^{\mathcal{I}}\). At this point we are ready to make the final assumption that ultimately leads to the \(B^{\prime}\) equation. Specifically, we write the source term \(\chi_{k}^{\mathcal{I}}\) as \(\chi_{k}^{\mathcal{I}}=\chi_{k_{C}}^{\mathcal{I}}\delta_{k,k_{C}}\), where \(\delta_{k,k_{C}}\) is the Kronecker delta and \(k_{C}\in\{1,2,\ldots,N_{es}\}\) is the index pointing to monatomic carbon gas. Physically, this means that the only non-trivial reaction promoted by the interface \(\mathcal{I}\) is the heterogeneous conversion of solid phase into carbon gas.
Dividing through by \(\rho_{e}u_{e,i}St_{M}\,n_{i}\), the formula above yields the desired \(B^{\prime}\) mass-balance equation
\[y_{k}^{(f)}-y_{k}^{(e)}+y_{k}^{(f)}B_{fl}^{\prime}=y_{k}^{(g)}B_{g}^{\prime}+B _{c}^{\prime}\delta_{k,k_{C}},\quad k\in\{1,2\ldots,N_{es}\}, \tag{17}\]
where \(B^{\prime}_{g}=\dot{m}^{(g)}/\left(\rho_{e}u_{e,i}St_{M}\,n_{i}\right)\), \(B^{\prime}_{c}=\chi^{\mathcal{I}}_{k_{C}}/\left(\rho_{e}u_{e,i}St_{M}\,n_{i}\right)\) and \(B^{\prime}_{fl}\) is defined analogously with \(\dot{m}^{(f)}\) on the numerator1. Due to the assumption that the solid phase is converted exclusively into carbon gas, we observe that \(\chi^{\mathcal{I}}_{k_{C}}=-\psi^{\mathcal{I}}_{s}\), so that, using (12), \(B^{\prime}_{c}\) may be expressed as
Footnote 1: An anonymous reviewer has kindly pointed out that \(B^{\prime}_{f}\) is typically used to identify the rate of material removal due to mechanical failure/erosion. We therefore use \(B^{\prime}_{fl}\) throughout the paper to refer to the blowing/aspiration rate.
\[B^{\prime}_{c}=-\frac{\varepsilon_{s}\langle\rho\rangle^{(s)}v_{i}n_{i}}{\rho _{e}u_{e,i}n_{i}St_{M}}. \tag{18}\]
(In ablation applications, we have \(B^{\prime}_{c}\geq 0\), since \(v_{i}n_{i}\leq 0\) as discussed in section 3.1.2.) For future reference, we also observe that by summing (17) over all \(k\) and using the fact that mass fractions sum to 1, we have \(B^{\prime}_{fl}=B^{\prime}_{g}+B^{\prime}_{c}\).
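As a rough numerical illustration of these normalized quantities, the following minimal Python sketch (with made-up input values) evaluates \(B^{\prime}_{c}\) from equation (18), forms \(B^{\prime}_{g}\) from a prescribed gas mass flux, and recovers \(B^{\prime}_{fl}=B^{\prime}_{g}+B^{\prime}_{c}\); none of the numbers below come from the paper.

```
def normalized_rates(eps_s, rho_s, v_n, mdot_g, rho_e_ue_St):
    """Normalized rates of the B' mass balance, cf. equations (17)-(18).

    eps_s       -- solid volume fraction
    rho_s       -- intrinsic solid density <rho>^(s) [kg/m^3]
    v_n         -- surface velocity v_i n_i (negative when the surface recedes) [m/s]
    mdot_g      -- gas mass flux eps_g <rho>^(g) (<u_i>^(g) - v_i) n_i [kg/(m^2 s)]
    rho_e_ue_St -- product rho_e u_{e,i} St_M n_i [kg/(m^2 s)]
    """
    Bc = -eps_s * rho_s * v_n / rho_e_ue_St  # equation (18)
    Bg = mdot_g / rho_e_ue_St
    return Bg, Bc, Bg + Bc                   # B'_g, B'_c, B'_fl

# Hypothetical numbers: ~1 micron/s recession of a 10% dense carbon preform.
print(normalized_rates(eps_s=0.1, rho_s=1800.0, v_n=-1.0e-6,
                       mdot_g=5.0e-4, rho_e_ue_St=0.05))
```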
## 4 \(B^{\prime}\) Formulation from First Principles: Conservation of Energy
Here, we follow the same reasoning as in the previous section, and we derive the \(B^{\prime}\) energy-balance equation at a reacting interface. For this purpose, we consider, once more, the control volume depicted in figure 1.
### Conservation of energy at an ablating surface
We begin by stating the partial differential equation that governs the conservation of energy on the fluid's side of the interface \(\mathcal{I}\) (see figure 1). As in the previous sections, we consider an ideal gas mixture of \(N_{s}\) species. Letting \(E=e+(1/2)u_{i}u_{i}\) denote the total (mixture) energy, with \(e\) the internal energy, the energy equation on the \((f)\)-side of the control volume can be written as
\[\frac{\partial}{\partial t}\left(\rho^{(f)}E^{(f)}\right)+\frac{\partial}{ \partial x_{i}}\left(\rho^{(f)}E^{(f)}u_{i}^{(f)}\right)=\frac{\partial}{ \partial x_{i}}\left(u_{j}^{(f)}\tau_{i,j}^{(f)}-p^{(f)}u_{i}^{(f)}+\kappa^{(f )}\frac{\partial T^{(f)}}{\partial x_{i}}+\mathcal{D}_{i}^{(f)}\right), \tag{19}\]
where
\[\mathcal{D}_{i}^{(f)}=-u_{j}^{(f)}\sum_{k=1}^{N_{s}}\rho_{k}^{(f)}w_{k,i}^{(f )}w_{k,j}^{(f)}-\sum_{k=1}^{N_{s}}h_{k}^{(f)}J_{k,i}^{(f)}-\sum_{k=1}^{N_{s}} \frac{1}{2}w_{k,j}^{(f)}w_{k,j}^{(f)}J_{k,i}^{(f)}+\sum_{k=1}^{N_{s}}w_{k,j}^{ (f)}\tau_{k,i,j}^{(f)}. \tag{20}\]
Here, \(T\) is the temperature, \(\kappa\) is the heat conduction coefficient, \(w_{k,i}\coloneqq u_{k,i}-u_{i}\) is the velocity of species \(k\) relative to the mixture velocity, \(J_{k,i}^{(f)}\) is defined in (14), and
\(\tau_{i,j}=\sum_{k=1}^{N_{s}}\tau_{k,i,j}\) is the shear stress tensor. We refer the reader to Ramshaw (2002) for a formal derivation of (19) for an inviscid ideal gas mixture with zero thermal conductivity.
On the \((p)\)-side of the control volume, occupied by the porous material, the energy equation is often approximated as (Chiodi et al., 2022)
\[\frac{\partial}{\partial t}\left(\varepsilon_{g}\langle\rho\rangle^{(g)} \langle e\rangle^{(g)}+\varepsilon_{s}\langle\rho\rangle^{(s)}\langle h \rangle^{(s)}\right)+\frac{\partial}{\partial x_{i}}\left(\varepsilon_{g} \langle\rho\rangle^{(g)}\langle h\rangle^{(g)}\langle u_{i}\rangle^{(g)} \right)=\frac{\partial}{\partial x_{i}}\left(\kappa^{(p)}\frac{\partial \langle T\rangle}{\partial x_{i}}\right). \tag{21}\]
Here, \(h\) denotes the enthalpy and, as in the previous section, we recall that \(\langle\cdot\rangle\) denotes the intrinsic volume average. The quantity \(\langle T\rangle\) is the volume-averaged temperature of the porous material under the assumption of thermal equilibrium between the gaseous phase and the solid phase, and \(\kappa^{(p)}\) is the corresponding heat conduction coefficient. Equation (21) can be obtained from first principles by volume-averaging the energy equations for the gaseous and solid phases of the porous material. It should be observed that unclosed terms and several other terms are neglected during the volume-averaging process, but it is beyond the scope of this paper to provide details on the formal derivation of (21). We refer the reader to, e.g., Whitaker (1967) and Gray and O'Neill (1976) for details. A noteworthy observation is that (21) omits the contribution of the volume-averaged kinetic energy of the gaseous phase (superscript \((g)\)) to the total volume-averaged energy of the porous material. This has been found to be negligible if the gas exhibits velocities below \(100\,m/s\) (Martin and Boyd, 2008).
### The \(B^{\prime}\) energy balance
We now derive the \(B^{\prime}\) equation for energy conservation across the interface \(\mathcal{I}\). As in section 3.2, we stress the fact that the assumptions outlined herein may or may not be physically justified. These are made merely to obtain the \(B^{\prime}\) energy equation that is used in existing material response codes.
Invoking (5) alongside equations (19) and (21), the energy jump condition across the surface \(\mathcal{I}\) reads,
\[\begin{split}&\left(\rho^{(f)}E^{(f)}\left(u_{i}^{(f)}-v_{i} \right)+p^{(f)}u_{i}^{(f)}-u_{j}^{(f)}\tau_{i,j}^{(f)}-\kappa^{(f)}\frac{ \partial T^{(f)}}{\partial x_{i}}-\mathcal{D}_{i}^{(f)}\right)n_{i}\\ &-\left(\varepsilon_{g}\langle\rho\rangle^{(g)}\langle h\rangle ^{(g)}\left(\langle u_{i}\rangle^{(g)}-v_{i}\right)-\kappa^{(p)}\frac{ \partial\langle T\rangle}{\partial x_{i}}-\varepsilon_{s}\langle\rho\rangle ^{(s)}\langle h\rangle^{(s)}v_{i}\right)n_{i}=\Delta q_{\rm rad},\end{split} \tag{22}\]
where we recall that \(v_{i}\) denotes the interface velocity and \(n_{i}=n_{i}^{\mathcal{I}^{(p)}}\) (see figure 1). The term \(\Delta q_{\rm rad}\) denotes the radiative heat transfer at the interface \(\mathcal{I}\). This is modelled as an interfacial source term that is analogous in spirit to the term \(\psi^{\mathcal{I}}\) in (5).
In order to obtain the \(B^{\prime}\) energy balance, we proceed as follows. Using the fact that \(E^{(f)}=h^{(f)}-p^{(f)}/\rho^{(f)}+(1/2)u_{i}^{(f)}u_{i}^{(f)}\) and neglecting the kinetic energy contribution and the term \(p^{(f)}v_{i}n_{i}\), the first and second terms in the first row of (22) combine into \(\rho^{(f)}h^{(f)}\left(u_{i}^{(f)}-v_{i}\right)n_{i}\). We then neglect \(u_{j}^{(f)}\tau_{i,j}^{(f)}\) and all terms in \(\mathcal{D}_{i}^{(f)}\) (see equation (20)) except for the second term (i.e., the enthalpy diffusion flux). This can be justified using the boundary layer approximation discussed in Eckert (1969). Letting \(St_{H}\) denote the heat-transfer Stanton number, and taking \(St\coloneqq St_{M}=St_{H}\) (i.e., assuming unity Lewis number), we may write
\[-\kappa^{(f)}\frac{\partial T^{(f)}}{\partial x_{i}}+\sum_{k=1}^{N_{s}}h_{k}^{ (f)}J_{k,i}^{(f)}=\rho_{e}u_{e,i}St\left(h^{(f)}-h^{(e)}\right), \tag{23}\]
where we recall that superscript/subscript "e" denotes boundary layer edge quantities. While the relationship between unity Lewis number and equal Stanton numbers is well-known and discussed in the literature (see, e.g., Incropera et al. (2007); Cooper et al. (2022)), we present a short derivation in B to make the manuscript more self-contained. Equation (23) may be understood as a transfer potential model for heat transfer by convection and diffusion, similar in spirit to the model used to approximate \(J_{k,i}\) in (14). Putting this all together, we obtain
\[\begin{split}&\rho^{(f)}h^{(f)}\left(u_{i}^{(f)}-v_{i}\right)n_{i}+ \underbrace{\rho_{e}u_{e,i}St\left(h^{(f)}-h^{(e)}\right)n_{i}}_{\coloneqq q_{ \mathrm{conv}}}\\ &=\varepsilon_{g}\langle\rho\rangle^{(g)}\langle h\rangle^{(g)} \left(\langle u_{i}\rangle^{(g)}-v_{i}\right)n_{i}-\underbrace{\kappa^{(p)} \frac{\partial\langle T\rangle}{\partial x_{i}}n_{i}}_{\coloneqq q_{ \mathrm{cond}}}-\varepsilon_{s}\langle\rho\rangle^{(s)}v_{i}\langle h\rangle^ {(s)}n_{i}+\Delta q_{\mathrm{rad}}.\end{split} \tag{24}\]
This is precisely the energy balance equation displayed, e.g., in Lachaud and Mansour (2014). Dividing through by \(\rho_{e}u_{e,i}St\,n_{i}\), and recalling the definitions of \(B^{\prime}_{g}\), \(B^{\prime}_{fl}\) and \(B^{\prime}_{c}\) in the previous section, the equation above yields the desired \(B^{\prime}\) energy balance
\[h^{(f)}-h^{(e)}+B^{\prime}_{fl}h^{(f)}=B^{\prime}_{g}\langle h\rangle^{(g)}- \frac{q_{\mathrm{cond}}}{\rho_{e}u_{e,i}St\,n_{i}}+B^{\prime}_{c}\langle h \rangle^{(s)}+\frac{\Delta q_{\mathrm{rad}}}{\rho_{e}u_{e,i}St\,n_{i}}. \tag{25}\]
When this equation is solved in practice, the only unknown is \(q_{\mathrm{cond}}\), which is then used to specify a Neumann boundary condition on the temperature field \(\langle T\rangle\).
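Since \(q_{\rm cond}\) is the only unknown, solving (25) amounts to a simple rearrangement. The following is a minimal Python sketch of that rearrangement; all enthalpies and transfer coefficients are assumed to be supplied by the caller, and the argument names are illustrative only.

```
def conduction_flux(h_f, h_e, h_g, h_s, Bg, Bc, rho_e_ue_St, dq_rad=0.0):
    """Solve the B' energy balance (25) for q_cond.

    h_f, h_e    -- wall and boundary-layer-edge mixture enthalpies [J/kg]
    h_g, h_s    -- porous-material gas and solid enthalpies [J/kg]
    Bg, Bc      -- normalized gas and ablation rates
    rho_e_ue_St -- product rho_e u_{e,i} St n_i [kg/(m^2 s)]
    dq_rad      -- radiative heat flux at the interface [W/m^2]
    """
    Bfl = Bg + Bc
    return rho_e_ue_St * (Bg * h_g + Bc * h_s - (h_f - h_e + Bfl * h_f)) + dq_rad
```

The returned value sets the Neumann boundary condition on \(\langle T\rangle\) mentioned above.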
## 5 Extension of the \(B^{\prime}\) Formulation
Despite all the assumptions made in the previous section, the resulting \(B^{\prime}\) formulation should hold for any (positive or negative) values of \(B^{\prime}_{g}\) and \(B^{\prime}_{fl}\). Nonetheless,
material response codes and thermodynamics/chemical libraries (Lachaud and Mansour, 2014; Scoggins et al., 2020) only consider the case \(B^{\prime}_{g}\geq 0\). Using the control volume in figure 1, we can see that this corresponds to the case where porous material gases are advected towards the interface \(\mathcal{I}\) and, by mass conservation, when boundary layer gases are advected away from the interface. This scenario is commonly referred to as _blowing_. However, it is certainly possible that the opposite scenario occurs, where boundary layer gases are advected towards the interface (i.e., _aspiration_) and porous material gases are advected away from the interface. In this section, we propose a unified \(B^{\prime}\) formulation capable of addressing all these scenarios. Moving forward, mass fractions \(y_{k}\) are to be understood as elemental mass fractions.
We begin by modifying the transfer potential models used in the original formulation. In particular, we write
\[J_{k,i}^{(f)}=\begin{cases}\rho_{e}u_{e,i}St\left(y_{k}^{(f)}-y_{k}^{(e)} \right)&\text{if }B^{\prime}_{fl}\geq 0\\ \rho_{e}u_{e,i}St\left(y_{k}^{(g)}-y_{k}^{(e)}\right)&\text{if }B^{\prime}_{fl}<0, \end{cases} \tag{26}\]
and
\[-\kappa^{(f)}\frac{\partial T^{(f)}}{\partial x_{i}}+\sum_{k=1}^{N_{s}}h_{k}^ {(f)}J_{k,i}^{(f)}=\begin{cases}\rho_{e}u_{e,i}St\left(h^{(f)}-h^{(e)}\right)& \text{if }B^{\prime}_{fl}\geq 0\\ \rho_{e}u_{e,i}St\left(h^{(g)}-h^{(e)}\right)&\text{if }B^{\prime}_{fl}<0. \end{cases} \tag{27}\]
Here, we observe that the need to distinguish between \(B^{\prime}_{fl}\geq 0\) and \(B^{\prime}_{fl}<0\) in (26) and (27) is merely due to notation. Specifically, we shall see momentarily that when \(B^{\prime}_{fl}\geq 0\), \(y^{(f)}_{k}\) are the unknown mass fractions that can be computed via Gibbs free energy minimization under the assumption of chemical equilibrium at the wall. Conversely, when \(B^{\prime}_{fl}<0\), the equilibrium mass fractions are \(y^{(g)}_{k}\). Thus, (26) can be understood as a transfer potential model expressed in terms of the equilibrium mass fractions at the wall. This interpretation makes (26) fully consistent with the transfer potential model presented in Eckert (1969). The same argument holds for the model in (27).
Given the models (26) and (27), the corresponding \(B^{\prime}\) mass and energy balance equations read
\[\xi_{k}-y_{k}^{(e)}+y_{k}^{(f)}B^{\prime}_{fl}=y_{k}^{(g)}B^{\prime}_{g}+B^{\prime}_{c}\delta_{k,k_{C}},\quad k\in\{1,2,\ldots,N_{es}\}, \tag{28}\]
\[\eta-h^{(e)}+B^{\prime}_{fl}h^{(f)}=B^{\prime}_{g}\langle h\rangle^{(g)}-\frac{q_{\text{cond}}}{\rho_{e}u_{e,i}St\,n_{i}}+B^{\prime}_{c}\langle h\rangle^{(s)}+\frac{\Delta q_{\text{rad}}}{\rho_{e}u_{e,i}St\,n_{i}}, \tag{29}\]
where
\[\xi_{k}=\begin{cases}y_{k}^{(f)}&\text{if }B^{\prime}_{fl}\geq 0\\ y_{k}^{(g)}&\text{if }B^{\prime}_{fl}<0,\end{cases},\quad\eta=\begin{cases}h^{(f)}& \text{if }B^{\prime}_{fl}\geq 0\\ h^{(g)}&\text{if }B^{\prime}_{fl}<0.\end{cases} \tag{30}\]
In particular, we see that when \(B^{\prime}_{fl}\geq 0\), equations (28) and (29) agree with (17) and (25). Moreover, we will see that the form of (28) and (29) (inherited from the transfer potential models in (26) and (27)) is such that the unknown equilibrium mass fractions and normalized surface recession rate \(B^{\prime}_{c}\) are continuous functions of \(B^{\prime}_{fl}\). This property provides a well-behaved computational model. In the upcoming subsections we discuss the two cases \(B^{\prime}_{fl}\geq 0\) and \(B^{\prime}_{fl}<0\) in detail.
### \(B^{\prime}_{fl}\geq 0\) case
This scenario corresponds to boundary layer gases being advected away from the interface \(\mathcal{I}\) in figure 1. Recalling that \(B^{\prime}_{fl}=B^{\prime}_{g}+B^{\prime}_{c}\), we distinguish between two different subcases: \(B^{\prime}_{fl}>B^{\prime}_{c}\) and \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\).
#### 5.1.1 \(B^{\prime}_{fl}>B^{\prime}_{c}\)
In this case, \(B^{\prime}_{g}>0\), meaning that porous material gases are advected towards the interface \(\mathcal{I}\) in figure 1. This is the one and only case considered in the classical \(B^{\prime}\) formulation. Here, the unknowns in (28) are the mass fractions \(y^{(f)}_{k}\) and \(B^{\prime}_{c}\). In general, the unknown mass fractions are those associated with the mixture (superscripted either with \((f)\) or \((g)\)) that is being advected _away_ from the interface. Since we have fewer equations than unknowns, solvability is achieved by assuming chemical equilibrium of the species at the interface. Under this assumption, the mass fractions \(y^{(f)}_{k}\) at equilibrium can be computed straightforwardly as a function of pressure, temperature and \(B^{\prime}_{g}\) via Gibbs free energy minimization (Pope, 2004; Scoggins et al., 2020). The temperature and pressure are readily available from the boundary conditions, or they can be computed internally by the material response code. Likewise, \(B^{\prime}_{g}\) can be computed internally from \(\langle u_{i}\rangle^{(g)}n_{i}\) at the surface. While it is clear from thermodynamics that the equilibrium composition of a mixture is a function of pressure and temperature, it is helpful to clarify the role of \(B^{\prime}_{g}\) in this specific application.
The composition of the equilibrium mixture depends on the initial composition of the reactants. In ablation applications, the reactants mixture is assumed to be made up of the "edge" elemental mixture alongside the porous material gas elemental mixture. The elemental composition of the "edge " mixture can always be expressed in terms of the mass fractions \(y^{(e)}_{j}\) with \(j\in\{1,2,\ldots,N_{es}\}\) (e.g., \(O\), \(H\), \(N\) and \(C\)). Similarly, the elemental composition of the mixture on the porous material side can be expressed as \(y^{(g)}_{j}\) with \(j\in\{1,2,\ldots,N_{es}\}\). Thus, the mass fraction of elemental species \(j\) in the elemental mixture of reactants is
\[y^{\rm reactants}_{j}=\frac{y^{(e)}_{j}+B^{\prime}_{g}y^{(g)}_{j}}{\sum_{k=1}^{ N_{es}}\left(y^{(e)}_{k}+B^{\prime}_{g}y^{(g)}_{k}\right)}. \tag{31}\]
Clearly, different values of \(B^{\prime}_{g}\) lead to different reactants mixtures, and it is therefore clear that the resulting equilibrium mixture will also be a function of \(B^{\prime}_{g}\).
Once the equilibrium mass fractions \(y_{k}^{(f)}\) are obtained, \(B^{\prime}_{c}\) can be obtained directly from (17) using \(B^{\prime}_{fl}=B^{\prime}_{g}+B^{\prime}_{c}\). In particular, fixing \(k=k_{C}\), (where we recall that \(k_{C}\) is the index pointing to monatomic carbon \(C\)), we have
\[B^{\prime}_{c}=\frac{y_{k}^{(g)}-y_{k}^{(f)}}{y_{k}^{(f)}-1}B^{\prime}_{g}+ \frac{y_{k}^{(e)}-y_{k}^{(f)}}{y_{k}^{(f)}-1}. \tag{32}\]
The process just described is usually tabulated (i.e., precomputed) as a function of pressure, temperature and \(B^{\prime}_{g}\). Hence the name \(B^{\prime}\) table. In the energy equation (29), the only unknown is \(q_{\rm cond}\), which sets a Neumann boundary condition for the temperature field; \(h^{(f)}\) is taken to be the enthalpy of the wall equilibrium mixture (given by the \(B^{\prime}\) table), and \(\langle h\rangle^{(g)}\) is taken to be the enthalpy associated with the elemental composition on the porous material side of the interface.
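To illustrate equation (31), the snippet below is a minimal Python sketch that assembles the elemental reactants mixture that would then be handed to a Gibbs free energy minimizer; the element ordering and the numerical compositions are hypothetical.

```
import numpy as np

def reactant_elemental_fractions(y_edge, y_gas, Bg):
    """Elemental mass fractions of the reactants mixture, cf. equation (31)."""
    y_edge, y_gas = np.asarray(y_edge, float), np.asarray(y_gas, float)
    mix = y_edge + Bg * y_gas
    return mix / mix.sum()

# Hypothetical elemental compositions ordered as [C, H, O, N].
y_edge = [0.00, 0.00, 0.23, 0.77]  # boundary layer edge (air)
y_gas  = [0.30, 0.10, 0.50, 0.10]  # pyrolysis gas elemental composition
print(reactant_elemental_fractions(y_edge, y_gas, Bg=0.5))
```

Different values of \(B^{\prime}_{g}\) shift this reactants composition and therefore the equilibrium wall mixture, which is why the tabulation carries \(B^{\prime}_{g}\) as an independent variable.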
#### 5.1.2 \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\)
In this case, \(B^{\prime}_{g}\leq 0\), meaning that porous material gases are advected away from the interface \({\cal I}\). The unknowns in (28) are \(B^{\prime}_{c}\) as well as \(y_{k}^{(f)}\) and \(y_{k}^{(g)}\), since both boundary layer gases and porous material gases are being advected away from the interface. Since chemical equilibrium calculations yield one equilibrium mixture, it is clear that \(y_{k}^{(f)}=y_{k}^{(g)}\). It is worth remarking that while in the previous case the equilibrium mixture was a function of pressure, temperature and \(B^{\prime}_{g}\), here the mixture is only a function of pressure and temperature. In fact, since porous material gases are advected away from the interface, the mass fractions of the elemental mixture of reactants are given by the elemental "edge" composition alone,
\[y_{j}^{\rm reactants}=\frac{y_{j}^{(e)}}{\sum_{k=1}^{N_{es}}y_{k}^{(e)}}. \tag{33}\]
Since the reactants mixture does not depend on \(B^{\prime}_{g}\), the equilibrium mixture will also be independent of \(B^{\prime}_{g}\). Once the equilibrium mass fractions \(y_{k}^{(f)}=y_{k}^{(g)}\) are computed, (28) gives us (with \(k=k_{C}\))
\[B^{\prime}_{c}=\frac{y_{k}^{(e)}-y_{k}^{(f)}}{y_{k}^{(f)}-1}=\frac{y_{k}^{(e) }-y_{k}^{(g)}}{y_{k}^{(g)}-1}. \tag{34}\]
This equation is quite interesting, as it states that in this regime \(B^{\prime}_{c}\) is independent of \(B^{\prime}_{g}\). We can also readily check that if we evaluate (32) at \(B^{\prime}_{g}=0\), this agrees
with (34), meaning that \(B^{\prime}_{c}\) is continuous at \(B^{\prime}_{fl}=B^{\prime}_{c}\). In the energy equation (29), \(h^{(f)}=\langle h\rangle^{(g)}\) and they are taken to be equal to the enthalpy of the wall equilibrium mixture computed using the \(B^{\prime}\) table.
### \(B^{\prime}_{fl}<0\) case
We now consider the case \(B^{\prime}_{fl}<0\), which corresponds to boundary layer gases being advected towards the interface \(\mathcal{I}\) in figure 1. Thus, the unknowns in (28) are \(y^{(g)}_{k}\) and \(B^{\prime}_{c}\). The boundary layer mass fractions \(y^{(f)}_{k}\), on the other hand, are set equal to the edge mass fractions \(y^{(e)}_{k}\). This is equivalent to assuming a frozen boundary layer, where the "edge" elemental composition is equal to the elemental composition in close proximity of the wall. By setting \(y^{(f)}_{k}=y^{(e)}_{k}\), the equilibrium mixture becomes independent of \(B^{\prime}_{g}\), and thus only a function of pressure and temperature. This can be seen immediately once we observe that the reactants mixture is defined by equation (31) with \(y^{(g)}_{k}\) replaced by \(y^{(f)}_{k}\) and \(B^{\prime}_{g}\) replaced by \(B^{\prime}_{fl}\). Given the equilibrium mass fractions \(y^{(g)}_{k}\), formula (28) can be solved for \(B^{\prime}_{c}\) with \(k=k_{C}\),
\[B^{\prime}_{c}=\frac{y_{k}^{(g)}-y_{k}^{(f)}}{y_{k}^{(f)}-1}B^{\prime}_{g}+\frac{y_{k}^{(e)}-y_{k}^{(g)}}{y_{k}^{(f)}-1}. \tag{35}\]
First, we observe that since the equilibrium mass fractions are independent of \(B^{\prime}_{g}\), then \(B^{\prime}_{c}\) is a linear function of \(B^{\prime}_{g}\). Second, if we evaluate (35) at \(B^{\prime}_{fl}=0\) (i.e., \(B^{\prime}_{g}=-B^{\prime}_{c}\)) we can see after some manipulation that this agrees with (34). Thus, \(B^{\prime}_{c}\) is continuous at \(B^{\prime}_{fl}=0\), as desired. In the energy equation (29), \(h^{(f)}=c_{p}T\), where \(T\) is the wall temperature and \(c_{p}\) is the specific heat capacity based on "edge" quantities (due to the fact that we take \(y^{(f)}_{k}=y^{(e)}_{k}\)), and \(\langle h\rangle^{(g)}\) is taken to be the enthalpy of the wall equilibrium mixture delivered by the \(B^{\prime}\) table.
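Putting the three regimes together, the following minimal Python sketch evaluates \(B^{\prime}_{c}\) as a piecewise-continuous function of \(B^{\prime}_{g}\) using equations (32), (34) and (35); the equilibrium carbon elemental mass fractions are assumed to be provided by an external equilibrium solver, represented here by the placeholder arguments y_eq_edge and y_eq_blowing.

```
def normalized_ablation_rate(Bg, y_e, y_g, y_eq_edge, y_eq_blowing):
    """Piecewise-continuous B'_c(B'_g), cf. equations (32), (34) and (35).

    Bg           -- normalized gas rate B'_g (may be negative)
    y_e, y_g     -- elemental carbon mass fractions at the edge and in the
                    porous-material gas
    y_eq_edge    -- equilibrium wall carbon fraction for edge-only reactants,
                    assumed precomputed at the given p and T
    y_eq_blowing -- callable Bg -> equilibrium wall carbon fraction for Bg > 0,
                    assumed to wrap a Gibbs free energy minimizer
    """
    Bc0 = (y_e - y_eq_edge) / (y_eq_edge - 1.0)                # equation (34)
    if Bg > 0.0:                                               # B'_fl > B'_c (blowing)
        y_f = y_eq_blowing(Bg)
        return ((y_g - y_f) * Bg + (y_e - y_f)) / (y_f - 1.0)  # equation (32)
    if Bg >= -Bc0:                                             # 0 <= B'_fl <= B'_c
        return Bc0                                             # independent of B'_g
    # B'_fl < 0 (aspiration): y_f = y_e and the wall equilibrium is edge-only
    return ((y_eq_edge - y_e) * Bg + (y_e - y_eq_edge)) / (y_e - 1.0)  # equation (35)
```

At the regime boundary \(B^{\prime}_{fl}=0\), the last two branches return the same value, consistent with the continuity argument above.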
We conclude this section by pointing the reader's attention to figure 2, which shows a schematic of the three \(B^{\prime}_{fl}\) regimes discussed thus far. This shows that if we account for the inflow of gases into the porous material (i.e., \(B^{\prime}_{g}<0\)), \(B^{\prime}_{c}\) will always be greater than or equal to \(B^{\prime}_{c,0}\), which is the value at \(B^{\prime}_{g}=0\). In particular, if \(B^{\prime}_{g}<0\) is small enough that \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\), then by equation (34) we see that \(B^{\prime}_{c}=B^{\prime}_{c,0}\). Since \(B^{\prime}_{c}\) is directly proportional to the recession velocity of the interface \(\mathcal{I}\), this implies that the classical \(B^{\prime}\) formulation will predict a recession velocity that is exactly equal to the recession velocity predicted by the new \(B^{\prime}\) formulation. However, if \(B^{\prime}_{g}<0\) is large enough that \(B^{\prime}_{fl}<0\), then by (35) \(B^{\prime}_{c}>B^{\prime}_{c,0}\), and the classical framework will predict a recession velocity that is lower than that predicted by the new formulation.
Finally, in order to facilitate the implementation of the new \(B^{\prime}\) formulation in existing material response codes, we provide some representative pseudocode in Algorithm 1. Given a modified \(B^{\prime}\) table (which can be easily generated following the guidelines in Appendix A or using the scripts in [https://github.com/albertopadovan/Modified_Bprime](https://github.com/albertopadovan/Modified_Bprime)), the algorithm shows that existing material response codes that are already equipped to use the classical \(B^{\prime}\) formulation should require very little additional logic to handle the extended \(B^{\prime}\) formulation.
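As a rough Python counterpart of the regime selection in Algorithm 1 below, the sketch that follows looks up \(B^{\prime}_{c}\) and the wall mixture enthalpy from a precomputed table and picks the enthalpies that enter the energy balance (29); the table interface and all argument names are assumptions made purely for illustration.

```
def energy_balance_inputs(p, T_w, Bg, h_e, h_g_elemental, bprime_table):
    """Mirror of the regime selection in Algorithm 1 (illustrative names only).

    bprime_table(p, T_w, Bg) is assumed to return (Bc, h_w): the normalized
    ablation rate and the wall equilibrium mixture enthalpy interpolated from
    a precomputed modified B' table; h_g_elemental is the enthalpy of the
    porous-material gas elemental composition, and h_e is the edge enthalpy.
    """
    Bc, h_w = bprime_table(p, T_w, Bg)
    Bfl = Bg + Bc
    if Bfl > Bc:        # blowing: porous-material gases advected toward the wall
        h_f, h_g = h_w, h_g_elemental
    elif Bfl >= 0.0:    # 0 <= B'_fl <= B'_c
        h_f, h_g = h_w, h_w
    else:               # aspiration: boundary layer gases enter the material
        h_f, h_g = h_e, h_w
    # h_f and h_g can then be passed, e.g., to a q_cond rearrangement of (29).
    return Bc, Bfl, h_f, h_g
```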
### A note on the blowing/suction correction
When we are interested in computing the material response of a porous material to an external flow, but we are not resolving (or computing) the response of the fluid to the material dynamics, the Stanton number \(St\) is usually corrected to account for the effect of a non-zero velocity (i.e., suction/blowing) at the interface. In particular, given the Stanton number \(St_{0}\) associated with no suction or blowing, the corrected Stanton number \(St\) is given by
\[\frac{St}{St_{0}}=\frac{\log\left(1+2\lambda B^{\prime}_{fl}\right)}{2\lambda B ^{\prime}_{fl}}, \tag{36}\]
where \(\lambda>0\). This correction was initially derived from the incompressible (laminar) velocity boundary layer equations to correct the skin friction coefficient in the presence of suction or blowing (Kays and Crawford, 1993). Given that the thermal and
```
Require: Modified \(B^{\prime}\) table generated following A and/or the scripts in the repository [https://github.com/albertopadovan/Modified_Bprime](https://github.com/albertopadovan/Modified_Bprime), pressure \(p\) and temperature \(\langle T\rangle\) at the ablating surface, and gas velocity \(\langle u_{i}\rangle^{(g)}n_{i}\) normal to the ablating surface.
Ensure: Normalized ablation rate \(B^{\prime}_{c}\), enthalpy \(h_{w}\) of the equilibrium mixture at the surface, and solution to equations (28) and (29).
1: Compute \(B^{\prime}_{g}\) (see definition in section 3.2) using \(\langle u_{i}\rangle^{(g)}n_{i}\).
2: Using \(B^{\prime}_{g}\), \(p\) and \(\langle T\rangle\), compute \(B^{\prime}_{c}\) and \(h_{w}\) using the modified \(B^{\prime}\) table (which is constructed so that (28) is automatically satisfied with the appropriate \(\xi_{k}\)).
3: Compute \(B^{\prime}_{fl}=B^{\prime}_{g}+B^{\prime}_{c}\).
4: if \(B^{\prime}_{fl}>B^{\prime}_{c}\) then
5:   Solve (29) with \(B^{\prime}_{fl}\geq 0\), \(h^{(f)}=h_{w}\) and \(\langle h\rangle^{(g)}\) taken to be the formation enthalpy of the elemental composition on the porous material side of the interface (see section 5.1.1).
6: else if \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\) then
7:   Solve (29) with \(B^{\prime}_{fl}\geq 0\) and \(h^{(f)}=\langle h\rangle^{(g)}=h_{w}\) (see section 5.1.2).
8: else
9:   Solve (29) with \(B^{\prime}_{fl}<0\), \(h^{(f)}=h^{(e)}\) and \(\langle h\rangle^{(g)}=h_{w}\) (see section 5.2).
10: end if
```
**Algorithm 1** Algorithmic outline of the new \(B^{\prime}\) formulation
concentration boundary layer equations with unity Prandtl and Lewis numbers are analogous to the velocity boundary layer equations (Eckert, 1969; Incropera et al., 2007), it follows immediately that, under the same assumptions, the same correction can be used to correct the Stanton number. The derivation in Kays and Crawford (1993) for laminar incompressible boundary layers led to \(\lambda=0.5\). According to Moyer and Rindal (1968), \(\lambda=0.4\) has been reported to be better suited for turbulent flows.
Since the derivation in Kays and Crawford (1993) holds for any positive and negative non-zero velocities at the surface (i.e., positive and negative \(B^{\prime}_{fl}\), in our case), the correction in (36) may be used for both positive and negative values of \(B^{\prime}_{fl}\). The only caveat is that (36) requires \(2\lambda B^{\prime}_{fl}>-1\), otherwise the logarithm is not defined. This simply means that as \(2\lambda B^{\prime}_{fl}\) approaches \(-1\) from the right, the assumptions that originally led to (36) no longer hold. We remark that (36) is well-posed for \(B^{\prime}_{fl}=0\), since
\[\lim_{2\lambda B^{\prime}_{fl}\to 0}\ \frac{St}{St_{0}}=1. \tag{37}\]
In practical applications, it is possible for \(2\lambda B^{\prime}_{fl}\) to be less than or equal to \(-1\), in which case use of (36) would lead to computational issues. We resolve the issue by artificially lower bounding \(2\lambda B^{\prime}_{fl}\) to \(-0.9\). We close this section by observing that while the blowing/suction correction is the most popular approach to account for suction and blowing in a boundary layer, a few authors (de Muelenaere et al., 2012; Cooper and Martin, 2023) have proposed formulations that bypass the need to correct the Stanton number using (36).
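A small helper implementing the correction (36), together with the limit (37) and the artificial lower bound on \(2\lambda B^{\prime}_{fl}\), could read as follows (a sketch only; \(\lambda=0.5\) for laminar and \(\lambda=0.4\) for turbulent flows, as discussed above).

```
import math

def blowing_correction(B_fl, lam=0.5, floor=-0.9):
    """Stanton-number ratio St/St0 from equation (36); 2*lam*B_fl is
    artificially bounded below by -0.9 as described in the text."""
    x = max(2.0 * lam * B_fl, floor)
    if abs(x) < 1.0e-12:
        return 1.0                       # equation (37): the limit as B'_fl -> 0
    return math.log1p(x) / x             # log(1 + 2*lam*B'_fl) / (2*lam*B'_fl)
```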
## 6 Application to a TACOT Wedge
In this section we compare the new \(B^{\prime}\) formulation with the classical \(B^{\prime}\) formulation on a two-dimensional pyrolyzing and ablating TACOT (Lachaud et al., 2018) wedge, whose geometry is shown in figure 3a.
### Description of the computational setup
The "Theoretical Ablative Composite for Open Testing" (TACOT) is a porous material consisting of two solid phases (non-reacting fibers and a reacting matrix), with a virgin (i.e., non-pyrolyzed) solid volume fraction of \(0.20\) and a charred (i.e., pyrolyzed) solid volume fraction of \(0.15\). The response of the material to a prescribed boundary condition (described below) is simulated using the in-house material response solver CHyPS, whose governing equations and computational discretization are described in section III of Chiodi et al. (2022). In particular, all conservation laws
are obtained via volume averaging, with the conservation of gaseous mass and solid mass taking the form of equations (8) and (9), respectively. The volumetric source terms \(\langle\psi_{k}\rangle^{(g)}\) and \(\langle\psi_{s}\rangle^{(s)}\) enter the formulation due to the heterogeneous conversion of solid mass to gaseous mass promoted by pyrolysis. Pyrolysis itself is modelled via three chemical reactions with Arrhenius coefficients specified in table 1 of Chiodi et al. (2022). Conservation of momentum within the porous material is reduced to Darcy's law, while conservation of energy (which takes the form of (21)) is posed under the assumption of thermal equilibrium. Finally, the mesh movement induced by ablation is handled with the Arbitrary Lagrangian Eulerian (ALE) formulation.
The treatment of the gas and solid properties inside the TACOT wedge is discussed in detail in sections III.4 and IV of Chiodi et al. (2022). In particular, gas properties are assumed to be functions of pressure and temperature only, while solid properties are assumed to be functions of temperature and of the pyrolysis progress variable (denoted \(\tau\) in the notation of Chiodi et al. (2022), with \(\tau=0\) indicating the virgin state and \(\tau=1\) the charred state). Gas and solid properties, as well as bulk properties (e.g., thermal conductivity and permeability) are determined via the TACOT lookup tables available in (Lachaud et al., 2018). Moreover, TACOT is treated as an isotropic porous material and the volumetric gas composition is held constant at \(y_{O}=0.115\), \(y_{C}=0.206\) and \(y_{H}=0.679\) according to the TACOT model (Lachaud et al., 2018). It is worth observing that a more advanced volumetric gas chemistry model could be used, and it could include species tracking and equilibrium/non-equilibrium chemistry. In that case, a chemistry boundary condition can be provided by the new \(B^{\prime}\) formulation when gas is advecting into the material. The new \(B^{\prime}\) formulation is implemented according to algorithm 1. The classical \(B^{\prime}\) formulation is implemented analogously, except that \(B^{\prime}_{g}\) is artificially set to 0 in step 1 of algorithm 1 when boundary layer gases enter the porous material. Finally, the radiation term \(\Delta q_{\rm rad}\) in equation (29) is modelled following the Stefan-Boltzmann law for a grey body.
The dynamics of the material are fully specified by the pressure and normalized heat flux profiles on the surface of the wedge. Nominal normalized pressure \(p/p_{\infty}\) and normalized heat flux \(\rho_{e}u_{e,i}St\,n_{i}\) profiles, shown in figures 3b and 3c, are obtained from the steady-state solution of a Mach-2 flow around the wedge. In particular, we used the in-house solver PlasCom2 to solve the compressible Navier-Stokes equations at freestream conditions \(M_{\infty}=2\), \(T_{\infty}=1000\)K, \(p_{\infty}=10\)kPa, and Reynolds number \(Re_{\infty}=1.1\times 10^{6}\) based on a freestream characteristic length \(L=1\). The fluid was modeled as a single-species ideal gas with \(\gamma=1.4\) and \(R=287\,{\rm J}/\left({\rm kg-K}\right)\). Viscosity and thermal conductivity were modeled with a viscous power law of \(\mu=\mu_{298}\left(T/T_{298}\right)^{0.666}\) and \(Pr=0.72\). The material interface
boundary condition was enforced with a no-slip, impermeable, isothermal wall at \(1000\,K\), with non-zero pressure gradient. The \(\rho_{e}u_{e,i}St\,n_{i}\) profiles were computed using equation (23) with \(h^{(e)}=h_{\infty}\left(1+\left(\sqrt{Pr}\left(\gamma-1\right)/2\right)M_{ \infty}^{2}\right)\). The \(B^{\prime}\) tables were generated using Mutation++ (Scoggins et al., 2020) with the NASA-9 thermodynamics database, and assuming an "edge" elemental composition \(y_{N}^{(e)}=0.790\), \(y_{O}^{(e)}=0.210\), and a pyrolysis gas elemental composition \(y_{O}^{(\mathrm{pyro})}=0.115\), \(y_{C}^{(\mathrm{pyro})}=0.206\) and \(y_{H}^{(\mathrm{pyro})}=0.679\). Details are described in A. It is also important to remark that, throughout, ablation is treated exclusively as a surface phenomenon and volume ablation is neglected.
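As a quick sanity check of the edge enthalpy entering the \(\rho_{e}u_{e,i}St\,n_{i}\) profiles, the expression above can be evaluated directly at the stated freestream conditions. The snippet below assumes a calorically perfect gas, \(h_{\infty}=c_{p}T_{\infty}\); this is our assumption for illustration, not a statement taken from the solver setup.

```
gamma, R, Pr, M_inf, T_inf = 1.4, 287.0, 0.72, 2.0, 1000.0
c_p = gamma * R / (gamma - 1.0)                      # ~1004.5 J/(kg K)
h_inf = c_p * T_inf                                  # calorically perfect gas assumption
h_edge = h_inf * (1.0 + (Pr ** 0.5) * (gamma - 1.0) / 2.0 * M_inf ** 2)
print(h_edge)                                        # roughly 1.7e6 J/kg
```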
In order to study how the two \(B^{\prime}\) formulations behave under different heating and external pressure conditions, we run four different simulations. In particular, we specify the normalized heat flux boundary condition as
\[\alpha\rho_{e}u_{e,i}St\,n_{i}, \tag{38}\]
| | \(\alpha\) | \(p_{\infty}\) |
| --- | --- | --- |
| Case 1 | 1 | \(10\,kPa\) |
| Case 2 | 1 | \(1\,kPa\) |
| Case 3 | \(1/4\) | \(1\,kPa\) |
| Case 4 | \(1/4\) | \(10\,kPa\) |

Table 1: Scaling factors \(\alpha\) for the normalized heat flux boundary condition, and external reference pressure \(p_{\infty}\).
Figure 3: _(a)_ Steady-state normalized streamwise velocity field around the wedge, _(b)_\(\rho_{e}u_{e,i}St\,n_{i}\) profile and _(c)_ normalized pressure profile at the wedge surface. Here, \(p_{\infty}=10\,kPa\). The wedge surface at \(t=0\) is parameterized according to the equation \(x=c_{1}y^{3}+c_{2}y^{2}+c_{3}\), where \(c_{1}=-7966.80539304\), \(c_{2}=336.99725483\) and \(c_{3}=-0.09014195\).
where \(\alpha\) is a scaling factor, and we vary the external reference pressure \(p_{\infty}\). The values of \(\alpha\) and \(p_{\infty}\) for the four different cases are listed in table 1. The material response code is initialized with zero heat flux and uniform pressure \(p_{\infty}\) on the wedge surface at \(t=0\), and it is brought (via linear interpolation) to the desired surface boundary condition over a ramping period of \(0.01\,s\). After that, we observe the response of the wedge for a total of \(0.5\,s\). For all cases considered herein, we will see that the length of the temporal interval \(t\in[0,0.5]\,s\) is sufficient for initial transients to decay and to observe post-transient dynamics. Throughout, we use \(\lambda=0.5\) in the blowing correction (36).
### Discussion of the results
Figures 4 and 5 show the degree of surface recession at times \(t=0.20\,s\) and \(t=0.50\,s\), respectively, for the four different cases considered in table 1. The top half of all panels (\(y\geq 0\)) shows the wedge geometry as predicted by the new \(B^{\prime}\) formulation, while the bottom half shows the geometry as given by the classical \(B^{\prime}\) formulation. The geometry is color-coded by the local instantaneous recession velocity (in meters per second) normal to the surface. From the figures, we see that in the high pressure cases (cases 1 and 4), the new \(B^{\prime}\) formulation predicts a higher recession velocity and a larger shape deformation. By contrast, in the low pressure cases (cases 2 and 3) the two formulations give (almost) identical predictions. These observations can be explained by looking at the time history of \(B^{\prime}_{g}\) at the leading edge of the wedge in figure 6. Here, we see that for cases 2 and 3 (panels _(b)_ and _(c)_), \(B^{\prime}_{g}\geq 0\) for (almost) all times, meaning that porous material gases are blown into the boundary layer. In this case, the two formulations are mathematically identical and it should therefore be expected that they predict the same surface recession velocities. On the other hand, we see that for cases 1 and 4 (panels _(a)_ and _(d)_), \(B^{\prime}_{g}\) in the new formulation (solid lines) remains negative for all times, while \(B^{\prime}_{g}=0\) in the classical formulation (dashed lines). By equation (35), a negative \(B^{\prime}_{g}\) leads to a larger \(B^{\prime}_{c}\), which, in turn, gives higher recession velocities. Before moving forward, it is important to remark that in both \(B^{\prime}\) formulations, boundary layer gases are allowed to flow into the porous material (this can be seen clearly in figures 7 and 9). However, in the classical \(B^{\prime}\) formulation the effect of inflowing gases on the surface chemistry is neglected and \(B^{\prime}_{g}\) is not allowed to attain negative values.
We now further investigate cases 1 and 4. While both cases exhibit sustained negative \(B^{\prime}_{g}\) values, we seek an explanation for the observation that, in case 4, there is a much more pronounced difference between the new and the classical \(B^{\prime}\) formulations. This difference is evident from figure 5d, where we see that the top surface (given by the new \(B^{\prime}\) formulation) has receded almost twice as much as the bottom
Figure 4: Wedge geometry at time \(t=0.20\,s\), color-coded by the instantaneous surface recession velocity (in meters per second) normal to the surface. The top half (\(y\geq 0\)) is the prediction using the new \(B^{\prime}\) formulation, the bottom half is the prediction using the classical \(B^{\prime}\) formulation. The gray line shows the geometry at \(t=0\).
Figure 5: Analog of figure 4 at time \(t=0.50\,s\).
surface (given by the classical \(B^{\prime}\) formulation). Ultimately, as discussed throughout the manuscript, the reason behind the discrepancy between the two formulations is driven by
\[\Delta B^{\prime}_{g}=B^{\prime}_{g,\text{classical}}-B^{\prime}_{g,\text{new}}, \tag{39}\]
which is significantly larger in case 4 (figure 6d) than in case 1 (figure 6a).
In order to understand the difference between \(\Delta B^{\prime}_{g}\) in cases 1 and 4, we first recall that \(B^{\prime}_{g}\) can be understood as a normalized mass flux and, as such, it scales linearly with the local gas density and the local gas velocity. Interestingly, we see from figures 7a and 7d that the gas velocity at the stagnation point is approximately equal for both cases 1 and 4. (This is likely due to the fact that both cases are exposed to the same pressure boundary condition (see table 1).) It follows that the difference in \(\Delta B^{\prime}_{g}\) must be due to a proportional difference in the gas density, with a higher gas density in case 4 (thus, higher mass flux and larger \(\Delta B^{\prime}_{g}\)) and a lower gas density in case 1. The reason why case 4 exhibits a higher gas density can be easily understood by recalling that case 4 is exposed to a normalized heat flux that is four times lower than that imposed in case 1 (see, once again, table 1). Consequently, the temperature at the wedge leading edge in case 4 (figure 8d) is lower than its counterpart in case 1 (figure 8a), thereby leading to higher and lower densities, respectively.
In light of this discussion, we conclude that aspiration (\(B^{\prime}_{g}<0\)) has a larger effect on the recession velocity at lower temperature and higher pressures. From an intuitive standpoint, the high pressure is necessary to cause aspiration (i.e., \(B^{\prime}_{g}<0\)), and this is required to observe any sort of difference between the two formulations. Clearly, the higher the pressure the higher the difference. However, as discussed, we also observe that the surface temperature has a non-negligible effect on the surface recession, with higher temperatures leading to higher recession velocities (case 1),
Figure 6: Time history of \(B^{\prime}_{g}\) at the wedge leading edge for cases 1-4 (panels \((a)\)-\((d)\)). The solid line corresponds to the new \(B^{\prime}\) formulation, while the dashed line to the classical \(B^{\prime}\) formulation.
but lower temperatures causing a larger spread \(\Delta B^{\prime}_{g}\) between the two formulations.
In closing the results section, it is also interesting to study the inflow/outflow of gases into and out of the porous material as a function of time. To do so, we focus on cases 3 and 4, and we plot contours of the gas velocity normal to the surface as a function of time and streamwise location along the wedge surface (figure 9). In both cases, we do not observe noteworthy qualitative differences between the flow of gases computed using the new \(B^{\prime}\) formulation (top panels) and the classical \(B^{\prime}\) formulation (bottom panels). This suggests that accounting for the inflow of gases into the porous material has an effect primarily in the surface recession rate and in the surface thermodynamics (as discussed in the preceding paragraphs). Despite this, figure 9 is still interesting, and it can be used to better understand the physics at hand. Interestingly, in case 3 we observe a "flow reversal" whereby gases that are initially flowing into the material at early times and near the wedge leading edge, are eventually expelled along the whole surface at later times. (This is likely to be attributed to a rise in pressure inside the material due to pyrolysis, as discussed in Lachaud et al. (2015).) Except for early times, \(B^{\prime}_{g}\geq 0\) along the whole surface, so the new \(B^{\prime}\) formulation is mostly in agreement with the classical \(B^{\prime}\) formulation, and the integrated difference in terms of surface recession is qualitatively negligible (see figures 4c and 5c). Case 4, on the other hand, exhibits much larger space-time regions of gas inflow, so it is to be expected that accounting for the effect of aspiration in the \(B^{\prime}\) formulation will lead to significant differences in the predicted surface recession (see figures 4d and 5d). Interestingly, case 4 does not exhibit the same flow reversal as case 3, except for a narrow region on the wedge shoulder (approximately between \(x=-0.089\) and \(x=0.087\) and after time \(t\approx 0.25\)).
Figure 7: Analog of figure 6, for the time-history of the gas velocity normal to the surface at the wedge leading edge. (Negative values indicate that gases are entering the porous material.)
Figure 8: Analog of figure 6, for the time-history of the surface temperature at the leading edge of the wedge.
Figure 9: Contour plot of the gas velocity (in meters per second) normal to the surface. The black contour lines emphasize the 0-contour, i.e., the transition from negative gas velocities (aspiration) to positive gas velocities (blowing). The top panels are given by the new \(B^{\prime}\) formulation, while the bottom panels by the classical \(B^{\prime}\) formulation.
## 7 Conclusion
We derive the \(B^{\prime}\) formulation for ablating-surface boundary conditions from first principles, starting from a jump condition that we obtained following the approach of Keller (1954). This allows us to clearly identify all the underlying assumptions of the \(B^{\prime}\) formulation, especially when applied at a reacting interface between a boundary layer and a porous material. We then extend the \(B^{\prime}\) formalism to account for the advective transport of boundary layer gases into the porous material. Although this is a common occurrence in hypersonics applications and in thermal protection systems, the classical \(B^{\prime}\) formulation neglects its effect on the dynamics of the material. We demonstrate, both theoretically and via examples, that accounting for the advective transport of gases into the porous material can have a significant effect on the recession velocity of ablating interfaces.
## Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 2139536, issued to the University of Illinois at Urbana-Champaign by the Texas Advanced Computing Center under subaward UTAUS-SUB00000545 with Dr. Daniel Stanzione as the PI. The computations were performed on TACC's Frontera under LRAC grant CTS20006.
## Appendix A Generating the \(B^{\prime}\) Tables
Here, we describe how the \(B^{\prime}\) tables (for the new framework) can be generated using Mutation++ (Scoggins et al., 2020). We seek a table whose independent variables are the wall pressure \(p\), the wall temperature \(\langle T\rangle\) and the normalized blowing rate \(B^{\prime}_{g}\) on the porous material's side of the interface. Given \(p\), \(\langle T\rangle\) and \(B^{\prime}_{g}\) as inputs, the tables will output (after interpolation, if necessary), the normalized recession rate \(B^{\prime}_{c}\) and the enthalpy \(h_{w}\) of the equilibrium mixture.
When generating the tables, some care is required. In particular, Mutation++ generates tables as a function of \(p\), \(\langle T\rangle\) and the normalized mass flux of species that are advected _towards_ the interface. Depending on the specific case (see subsections below), this normalized mass flux is either \(B^{\prime}_{g}\) or \(B^{\prime}_{fl}\). As mentioned, however, during computation we would like to perform table look-ups based on \(p\), \(\langle T\rangle\) and \(B^{\prime}_{g}\), since \(B^{\prime}_{g}\) is a quantity that is always readily computed by the material response solver (recall the definition of \(B^{\prime}_{g}\) from equation (17)). In order to be able to perform look-ups based on \(p\), \(\langle T\rangle\) and \(B^{\prime}_{g}\), the tables generated by Mutation++ require some post-processing.
### Table for \(B^{\prime}_{g}\geq 0\)
From section 5, this case corresponds to \(B^{\prime}_{fl}\geq B^{\prime}_{c}\). This table can be generated using Mutation++ directly, without any further post-processing, since the normalized mass flux of species that are advected towards the interface is precisely \(B^{\prime}_{g}\). The composition of the reactants used for the equilibrium calculations is specified in section 5.1.1. Henceforth, we refer to this table as Table 1.
### Table for \(B^{\prime}_{fl}<0\)
From section 5 this is one of the two cases corresponding to \(B^{\prime}_{g}<0\). (The other case is \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\), discussed shortly.) This table can also be generated using Mutation++, with the reactants composition specified in section 5.2. However, the normalized mass flux used by Mutation++ corresponds to \(B^{\prime}_{fl}\) (and not \(B^{\prime}_{g}\), as desired). Fortunately, by mass conservation, we know that \(B^{\prime}_{g}=B^{\prime}_{fl}-B^{\prime}_{c}\). The table generated by Mutation++ can then be easily rearranged such that the look-up can be performed based on \(B^{\prime}_{g}\). We henceforth refer to this table as Table 2.
### Table for \(0\leq B^{\prime}_{fl}\leq B^{\prime}_{c}\)
This is the other case corresponding to \(B^{\prime}_{g}<0\). However, we recall from section 5.1.2, that in this specific case \(B^{\prime}_{c}\) and \(h_{w}\) (i.e., the outputs of the \(B^{\prime}\) tables) are independent of \(B^{\prime}_{fl}\) or \(B^{\prime}_{g}\). Then, for a given \(p\) and \(\langle T\rangle\), the outputs \(B^{\prime}_{c}\) and \(h_{w}\) can be calculated from Table 1 with \(B^{\prime}_{g}=0\). We henceforth refer to this table as Table 3. Finally, a unified \(B^{\prime}\) table can be obtained by "stacking" together tables 2, 3 and 1 (in increasing \(B^{\prime}_{g}\) order, from negative to positive).
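A possible way to assemble the unified table is sketched below. The structured-array layout (columns `p`, `T`, `Bprime_flux`, `Bprime_c`, `h_w`) is purely illustrative and not the format produced by Mutation++; the point is only the re-indexing of Table 2 from \(B^{\prime}_{fl}\) to \(B^{\prime}_{g}\) and the flat plateau represented by Table 3.

```
import numpy as np

def build_unified_table(table1, table2):
    """Stack Tables 2, 3 and 1 into a single table indexed by B'_g."""
    # Table 2: re-index from B'_fl to B'_g = B'_fl - B'_c (mass conservation).
    t2 = table2.copy()
    t2["Bprime_flux"] = t2["Bprime_flux"] - t2["Bprime_c"]

    # Table 3: the plateau 0 <= B'_fl <= B'_c, where the outputs equal the
    # B'_g = 0 entries of Table 1.  Re-labelling those entries at B'_g = -B'_c
    # (i.e. B'_fl = 0) keeps interpolation constant across the plateau.
    t3 = table1[np.isclose(table1["Bprime_flux"], 0.0)].copy()
    t3["Bprime_flux"] = -t3["Bprime_c"]

    # Stack in increasing B'_g order: Table 2, Table 3, Table 1.
    unified = np.concatenate([t2, t3, table1])
    return np.sort(unified, order=["p", "T", "Bprime_flux"])
```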
## Appendix B Mass- and Heat-Transfer Boundary Layer Analogy
While this topic is addressed in Eckert (1969) and Incropera et al. (2007), and touched upon in Cooper et al. (2022) and in Appendix A in Meurisse et al. (2018), we reproduce the derivation of the mass- and heat-transfer boundary layer analogy. This will clarify the definition of mass- and heat-transfer Stanton numbers, as well as the interpretation of the mass- and heat-transfer potential models used in the \(B^{\prime}\) mass and energy balances.
Following Eckert (1969), we begin with the steady, zero-pressure-gradient boundary layer equations
\[\frac{\partial\rho u}{\partial x}+\frac{\partial\rho v}{\partial y} =0\] (B.1) \[\rho u\frac{\partial u}{\partial x}+\rho v\frac{\partial u}{ \partial y} =\frac{\partial}{\partial y}\left(\mu\frac{\partial u}{\partial y}\right)\] (B.2) \[\rho u\frac{\partial H}{\partial x}+\rho v\frac{\partial H}{ \partial y} =-\frac{\partial\varepsilon}{\partial y}+\frac{\partial}{ \partial y}\left(\mu u\frac{\partial u}{\partial y}\right)\] (B.3) \[\rho u\frac{\partial w_{i}}{\partial x}+\rho v\frac{\partial w_{ i}}{\partial y} =-\frac{\partial j_{i}}{\partial y}.\] (B.4)
Here, \(w_{i}\) are the mass fractions in a mixture with \(N_{s}\) species, \(H=h+(1/2)(u^{2}+v^{2})\) is the total enthalpy and the fluxes \(\varepsilon\) and \(j_{i}\) are defined as
\[\varepsilon=-\kappa\frac{\partial T}{\partial y}+\sum_{i=1}^{N_{s}}h_{i}j_{i},\quad j_{i}=-\rho D_{i}\frac{\partial w_{i}}{\partial y}.\] (B.5)
The definition of \(j_{i}\) is known as Fick's law, with diffusion coefficient \(D_{i}\) associated with species \(i\). The definition of \(\varepsilon\), on the other hand, is the sum of Fourier's law for heat conduction, and the transport of enthalpy due to diffusion (see, e.g., Ramshaw (2002)).
Using the definition of the fluxes in (B.5), equations (B.4) and (B.3) can be cast in conservative form using (B.1) and (B.2),
\[\frac{\partial}{\partial x}\left(\rho uh\right)+\frac{\partial}{\partial y}\left(\rho vh+\varepsilon\right) =0,\] (B.6) \[\frac{\partial}{\partial x}\left(\rho uw_{i}\right)+\frac{\partial}{\partial y}\left(\rho vw_{i}-\rho D_{i}\frac{\partial w_{i}}{\partial y}\right) =0\] (B.7)
In obtaining (B.6) we have used the definition of \(H\), neglected the term \(\rho u\partial v/\partial x+\rho v\partial v/\partial y\) (consistently with the scaling arguments that led to the velocity boundary layer equation (B.2)), and neglected the viscous dissipation term \(\mu\left(\partial u/\partial y\right)^{2}\) (see page 366 in Incropera et al. (2007)).
For boundary layer analogy between the thermal boundary layer (B.6) and the species boundary layer (B.7), we require
\[\varepsilon=-\rho D_{i}\frac{\partial h}{\partial y}.\] (B.8)
As a first step, we observe that the enthalpy of the mixture can be expressed as
\[h(T,w)=\sum_{i=1}^{N_{s}}h_{i}(T)w_{i},\] (B.9)
so that, using the chain rule and defining \(c_{p}=\partial h/\partial T\), we have
\[dT=\frac{1}{c_{p}}dh-\frac{1}{c_{p}}\sum_{i=1}^{N_{s}}h_{i}(T)dw_{i}.\] (B.10)
Using the definition of \(\varepsilon\) in (B.5) and the equation above, we can write
\[\varepsilon=-\frac{\kappa}{c_{p}}\frac{\partial h}{\partial y}+\frac{\kappa}{ c_{p}}\sum_{i=1}^{N_{s}}h_{i}\frac{\partial w_{i}}{\partial y}-\rho\sum_{i=1}^{N_ {s}}h_{i}D_{i}\frac{\partial w_{i}}{\partial y}.\] (B.11)
Defining the Prandtl and Schmidt numbers
\[Pr=\frac{\mu c_{p}}{\kappa},\quad Sc_{i}=\frac{\mu}{\rho D_{i}},\] (B.12)
we can write (B.11) as
\[\varepsilon=-\frac{\mu}{Pr}\frac{\partial h}{\partial y}+\frac{\mu}{Pr}\sum_ {i=1}^{N_{s}}\left(1-\frac{Pr}{Sc_{i}}\right)h_{i}\frac{\partial w_{i}}{ \partial y}.\] (B.13)
From this equation, it is immediate that (B.8) is satisfied so long as \(Pr=Sc_{i}\) (i.e., if the species Lewis number \(Le_{i}=Pr/Sc_{i}\) is equal to 1). Thus, given the set of assumptions made throughout this derivation, mass- and heat-transfer boundary layer analogy is achieved for species Lewis numbers \(Le_{i}=1\). We note in passing that to achieve analogy with the velocity boundary layer in (B.2), one would also require \(Pr=1\). Before moving forward, we wish to point out that the derivation of the boundary layer analogy presented herein is slightly different than the one in Eckert (1969), where the author worked directly with total enthalpy. This led to a different set of assumptions and to the additional requirement of \(Pr=1\) for thermal/species boundary layer analogy.
Using the derivation above, we can now straightforwardly define the mass-transfer and heat-transfer Stanton numbers. Assuming equal diffusion coefficients \(D=D_{i}\) for all \(i\), the mass-transfer Stanton number \(St_{M}\) is defined as
\[j_{i}=-\rho D\frac{\partial w_{i}}{\partial y}\coloneqq\rho_{e}u_{e}St_{M} \left(w_{i,s}-w_{i,e}\right),\] (B.14)
where the subscript "e" denotes an edge quantity and the subscript "s" denotes a surface quantity. The heat-transfer Stanton number \(St_{H}\) is defined similarly,
\[\varepsilon=\rho_{e}u_{e}St_{H}\left(h_{s}-h_{e}\right).\] (B.15)
By the aforementioned boundary layer analogy, it follows immediately that
\[St\coloneqq St_{M}=St_{H}.\] (B.16)
As a final note, it is interesting to express the contribution of \(\kappa\partial T/\partial y\) to \(\varepsilon\) in terms of the Stanton number. Starting from the definition of \(\varepsilon\) in (B.5), using (B.14) and (B.15) alongside the boundary layer analogy and equal diffusion coefficients, we have
\[\varepsilon=\rho_{e}u_{e}St\left(h_{s}-h_{e}\right)=-\kappa\frac{\partial T}{ \partial y}+\rho_{e}u_{e}St\underbrace{\sum_{i=1}^{N_{s}}h_{i}\left(w_{i,s}-w_ {i,e}\right)}_{\coloneqq(h_{s}-h_{s,e})},\] (B.17)
which implies
\[-\kappa\frac{\partial T}{\partial y}=\rho_{e}u_{e}St\left(h_{s,e}-h_{e} \right),\] (B.18)
where \(h_{s,e}\) is the enthalpy at the surface with edge composition.
|
2303.11902 | Hidden Steering Nonlocality in Quantum Networks | By combining two objects with no quantum effect one can get an object with
quantum effect. Such a phenomenon, often referred to as activation has been
analyzed for the notion of steering nonlocality. Activation of steering
nonlocality is observed for different classes of mixed entangled states in
linear network scenarios. Characterization of arbitrary two qubit states, in
ambit of steering activation in network scenarios has been provided in this
context. Using the notion of reduced steering, instances of steerability
activation are also observed in nonlinear network. Present analysis involves
three measurement settings scenario(for both trusted and untrusted parties)
where steering nonlocality is distinguishable from Bell nonlocality. | Kaushiki Mukherjee, Biswajit Paul, Soma Mandal | 2023-03-21T14:48:47Z | http://arxiv.org/abs/2303.11902v1 | # Hidden Steering Nonlocality in Quantum Networks
###### Abstract
By combining two objects with no quantum effect one can get an object with quantum effect. Such a phenomenon, often referred to as _activation_ has been analyzed for the notion of steering nonlocality. Activation of steering nonlocality is observed for different classes of mixed entangled states in linear network scenarios. Characterization of arbitrary two qubit states, in ambit of steering activation in network scenarios has been provided in this context. Using the notion of reduced steering, instances of steerability activation are also observed in nonlinear network. Present analysis involves three measurement settings scenario(for both trusted and untrusted parties) where steering nonlocality is distinguishable from Bell nonlocality.
## I Introduction
Quantum nonlocality is an inherent feature of quantum theory[1; 2]. It forms the basis of various information theoretic tasks[3; 4; 5; 6; 7; 8; 9; 10]. Presence of entanglement is a necessary condition for generation of nonlocal correlations, though it is not sufficient due to existence of local models of some mixed entangled states[11; 12; 13]. Such entangled states are often referred to as _local entangled states[14]_. Procedures involving exploitation of nonlocal correlations from local entangled states are often referred to as _activation scenarios_[15]. Till date, such activation scenarios are classified into three categories: _activation via local filtering[16; 17; 18]_, _activation by tensoring[19; 20; 21; 22; 23]_ and _activation in quantum networks_. Any possible combination of mechanisms involved in these three types is also considered as a valid activation procedure.
In activation by quantum network scenarios, nonlocality is activated by suitable arrangement of states(different or identical copies) in a quantum network[29; 30; 31; 32; 33]. Speaking of the role of quantum networks in activation, entanglement swapping networks have emerged as a useful tool for activating nonlocality of states in the standard Bell scenario. In the present discussion, the utility of these networks will be explored for activating nonlocality beyond the Bell scenario.
In an entanglement swapping network, entanglement is created between two distant parties sharing no direct common past[34; 35; 36]. Apart from its fundamental importance, it is applicable in various quantum applications. This procedure is also a specific example of quantum teleportation[37].
The key point of quantum nonlocality activation(Bell-CHSH sense) in the entanglement swapping scenario is that starting from entangled states(shared between interacting parties) satisfying the Bell-CHSH inequality, a Bell-nonlocal state is generated between non interacting parties at the end of the protocol. In [29; 32; 33] the swapping procedure has been framed as a novel example of nonlocality activation in quantum mechanics. Existing research works have exploited bipartite[29; 32; 33] and tripartite hidden nonlocality[38] in the standard Bell scenario using a swapping network. The present work explores the utility(if any) of the entanglement swapping network for activation of quantum steering nonlocality. Owing to involvement of sequential measurements in the network scenario, we will refer to activation of steering nonlocality as _revealing hidden steering nonlocality_ in the spirit of Popescu[16].
Motivated by famous EPR argument[1] claiming incompleteness of quantum theory, Schrodinger first gave the concept of _steering[39; 40]_. A complete mathematical formalism of such a manifestation of steering was provided in [41] where they characterized _steering correlations_. Several criteria have emerged for detecting steerability of correlations generated from a given quantum state[42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. The correlation based criterion given in [44], often referred to as CJWR inequality, is used here for analyzing activation of steerability. Up to two measurement settings scenario, notions of Bell-CHSH nonlocality and any steering nonlocality are indistinguishable. So, here we consider CJWR inequality for three measurement settings. Violation of this symmetric inequality guarantees steerability of the bipartite correlations generated in the corresponding measurement scenario. Such form of steerability is often referred to as _F\({}_{3}\) steerability_. Using such a symmetric inequality as a detection criterion allows interchange of the roles of the trusted and the untrusted parties in the operational interpretation of steering.
Now consider a scenario involving two entangled states(\(\rho_{AB},\rho_{BC}\),say) such that none of them violates CJWR inequality for three settings[44]. Let \(\rho_{AB}\) and
\(\rho_{BC}\) be shared between three distant parties Alice, Bob and Charlie(say) where Alice and Charlie share no direct common past. Let \(\rho_{AB}\) be shared between Alice and Bob(say) whereas \(\rho_{BC}\) be shared between Bob and Charlie. Let classical communication be allowed between two parties sharing a state. Hence, Alice and Charlie do not interact. In such a scenario, when the parties perform local operations, will it be possible to generate a steerable state between the two non interacting parties? An affirmative result is obtained when one considers an entanglement swapping network. To be precise, for some outputs of Bob, the conditional state shared between the two non interacting parties(Alice and Charlie) turns out to be \(F_{3}\) steerable.
After observing hidden steerability for some families of two qubit states in a standard entanglement swapping network(Fig.1), a characterization of arbitrary two qubit states is given in this context. As already mentioned before, the CJWR inequality (for three settings) given in [44] is used as a detection criterion. An instance of genuine activation of steering is also observed in the sense that a steerable state is obtained while using unsteerable states in the swapping protocol. Arbitrary two qubit states have also been characterized in the perspective of genuine activation. At this juncture it should be pointed out that the steerable conditional states resulting at the end of the protocol are Bell-local in the corresponding measurement scenario[55].
Exploring hidden steerability in three party entanglement swapping scheme, number of parties is then increased. Results of activation are observed in a star network configuration of entanglement swapping involving non-linear arrangement of four parties under some suitable measurement contexts.
The rest of our work is organized as follows. In Sec.II, we provide the motivation underlying the present discussion. In Sec.III, we provide some mathematical preliminaries. Activation of steerability in the three party network scenario is analyzed in Sec.IV. Revelation of hidden steerability when the number of parties is increased in a non-linear fashion is then discussed in Sec.V. The phenomenon of genuine activation of steering nonlocality is discussed in Sec.VI followed by concluding remarks in Sec.VII.
## II Motivation
Steerable correlations are used in various quantum information processing tasks such as cryptography[56; 57; 58; 59; 60; 61], randomness certification[62; 63; 64; 65; 66], channel discrimination[67; 68] and many others. So any steerable quantum state is considered a useful resource. Though pure entangled states are the best candidates in this context, they are hardly available in practice. Consequently, mixed entangled states, not all of which are steerable, are used in practical situations. From practical perspectives, exploiting steerability from unsteerable entangled states thus warrants attention. In this context, revelation of hidden steerability from unsteerable quantum states motivates the present discussion. Choosing a network scenario based on entanglement swapping for the activation purpose is further motivated by the fact that steerable correlations can be generated between two non interacting parties once the states involved are subjected to suitable LOCC[53]. Such nonclassical correlations in turn may be used as a resource in network-based quantum information and communication protocols[69; 70; 71].
## III Preliminaries
### Bloch Vector Representation
Let \(\varrho\) denote a two qubit state shared between two parties.
\[\varrho=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u}.\vec{\sigma}\otimes\mathbb{I}_ {2}+\mathbb{I}_{2}\otimes\vec{v}.\vec{\sigma}+\sum_{j_{1},j_{2}=1}^{3}w_{j_{1} j_{2}}\sigma_{j_{1}}\otimes\sigma_{j_{2}}), \tag{1}\]
with \(\vec{\sigma}{=}(\sigma_{1},\sigma_{2},\sigma_{3})\), \(\sigma_{j_{k}}\) denoting Pauli operators along three mutually perpendicular directions(\(j_{k}{=}1,2,3\)). \(\vec{u}{=}(l_{1},l_{2},l_{3})\) and \(\vec{v}{=}(r_{1},r_{2},r_{3})\) stand for the local bloch vectors(\(\vec{u},\vec{v}{\in}\mathbb{R}^{3}\)) of party \(\mathcal{A}\) and \(\mathcal{B}\) respectively with \(|\vec{u}|,|\vec{v}|{\leq}1\) and \((w_{j_{1}j_{2}})_{3\times 3}\) denotes the correlation tensor \(\mathcal{W}\)(a real matrix). The components \(w_{j_{1}j_{2}}\) are given by \(w_{j_{1}j_{2}}{=}\text{Tr}[\varrho\,\sigma_{j_{1}}\otimes\sigma_{j_{2}}]\).
On applying suitable local unitary operations, the correlation tensor becomes diagonalized:
\[\varrho^{{}^{\prime}}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{a}.\vec{\sigma} \otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{b}.\vec{\sigma}+\sum_{j=1}^{3} t_{jj}\sigma_{j}\otimes\sigma_{j}), \tag{2}\]
Here the correlation tensor is \(T{=}\text{diag}(t_{11},t_{22},t_{33})\). Under local unitary operations both the entanglement content and the steerability of a quantum state remain invariant. Hence, \(\varrho\) and \(\varrho^{{}^{\prime}}\) have the same steerability.
### Steering Inequality
A linear steering inequality was derived in [44]. Under the assumption that both the parties sharing a bipartite state(\(\rho_{AB}\)) perform \(n\) dichotomic quantum measurements(on their respective particles), Cavalcanti, Jones, Wiseman, and Reid(CJWR) formulated a series of correlator-based inequalities[44] for checking steerability of \(\rho_{AB}\):
\[\mathcal{F}_{n}(\rho_{AB},\nu)=\frac{1}{\sqrt{n}}\left|\sum_{l=1}^{n}\langle A _{l}\otimes B_{l}\rangle\right|\leq 1 \tag{3}\]
Notations used in the above inequality are detailed below:
* \(\langle A_{l}\otimes B_{l}\rangle=\text{Tr}(\rho_{AB}(A_{l}\otimes B_{l}))\)
* \(\rho_{AB}\in\text{H}_{\mathbf{A}}\otimes\text{H}_{\mathbf{B}}\) is any bipartite quantum state[52].
* \(A_{l}\)=\(\hat{a}_{l}\cdot\overrightarrow{\sigma}\), \(B_{l}\)=\(\hat{b}_{l}\cdot\overrightarrow{\sigma}\), \(\hat{a}_{l}\), \(\hat{b}_{l}\in\mathbb{R}^{3}\) denote real orthonormal vectors. \(A_{l}B_{l}\) thus denote inputs of Alice and Bob.
* \(\nu=\{\hat{a}_{1},\hat{a}_{2},...\hat{a}_{n},\hat{b}_{1},\hat{b}_{2},...,\hat{b }_{n}\}\) stands for the collection of measurement directions.
In case, dimension of each of local Hilbert spaces \(\text{H}_{\mathbf{A}},\text{H}_{\mathbf{B}}\) is \(2\), \(\rho_{AB}\) is given by Eq.(1). Violation of Eq.(3) guarantees both way steerability of \(\rho_{AB}\) in the sense that it is steerable from A to B and vice versa.
Steering phenomenon remaining invariant under local unitary transformations, the analytical expressions of the steering inequality remain unaltered if the simplified form(Eq.(2)) of two qubit state \(\rho_{AB}\) is considered. The analytical expression of the upper bound of corresponding inequality for 3 settings is given by[52]:
\[\text{Max}_{\nu}\mathcal{F}_{3}(\rho_{AB},\nu) =\sqrt{t_{11}^{2}+t_{22}^{2}+t_{33}^{2}},\] \[=\sqrt{\text{Tr}(T^{\dagger}T)}\] \[=\sqrt{\text{Tr}(W^{\dagger}W)} \tag{4}\]
where \(W\) and \(T\) denote the correlation tensors corresponding to the density matrix representations of \(\rho_{AB}\) given by Eq.(1) and Eq.(2) respectively. The last equality in Eq.(4) holds since \(\text{Tr}(W^{\dagger}W)\) is invariant under local unitary operations. Hence, by the linear inequality(Eq.(3)) (for \(n\)=3), any two qubit state \(\rho_{AB}\)(shared between \(A\) and \(B\)) is both-way \(F_{3}\) steerable if:
\[\mathcal{S}_{AB}=\text{Tr}[T_{AB}^{T}T_{AB}]>1. \tag{5}\]
Eq.(5) gives only a sufficient condition for detecting steerability. So if a state does not satisfy Eq.(5), the state may still be steerable, but its steerability remains undetected by the CJWR inequality(Eq.(3) for \(n\)=3). Any such state may be referred to as an \(F_{3}\) unsteerable state in the sense that the state is unsteerable up to the CJWR inequality for three settings.
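As an illustration of how the detection criterion in Eq.(5) can be evaluated in practice, the short sketch below builds the correlation tensor of a two qubit state from Pauli expectation values and checks whether \(\text{Tr}[T^{T}T]\) exceeds 1. It is a numerical aid only and is not tied to any particular library; the example at the end uses the Bell state \(|\phi^{+}\rangle\), for which the criterion is saturated maximally.

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def f3_steerability(rho):
    """Return (S, detected) with S = Tr(W^T W) built from Pauli expectations."""
    W = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                   for sj in paulis] for si in paulis])
    S = float(np.trace(W.T @ W))
    return S, S > 1.0          # Eq.(5): F_3 steerable (both ways) if S > 1

# Example: the Bell state |phi+> has correlation tensor diag(1, -1, 1), so S = 3.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())
print(f3_steerability(rho))
```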
### Bell Nonlocality in Three Settings Measurement Scenario
Consider a bipartite measurement scenario involving three dichotomic measurement settings(on each side). Such a scenario is often referred to as the \((3,3,2,2)\) measurement scenario. CHSH is not the only possible facet inequality in the \((3,3,2,2)\) scenario[72; 73]. A complete list of facet inequalities of the Bell polytope(for this measurement scenario) was computed in [73]. There exists only one Bell inequality inequivalent to the CHSH inequality. That inequivalent facet inequality is referred to as the \(I_{3322}\) inequality[55]. Denoting local measurements of Alice and Bob as \(A_{1},A_{2},A_{3}\) and \(B_{1},B_{2},B_{3}\) respectively and the outcomes of each of these measurements as \(\pm 1\), the \(I_{3322}\) inequality takes the form[55]:
\[-2P_{B_{1}}-P_{B_{2}}-P_{A_{1}}+P_{A_{1}B_{1}}+P_{A_{1}B_{2}}+P_{A _{1}B_{3}}+P_{A_{2}B_{1}}+\] \[P_{A_{2}B_{2}}-P_{A_{2}B_{3}}+P_{A_{3}B_{1}}-P_{A_{3}B_{2}}\leq 0, \tag{6}\]
where \(\forall\,i,j\)=\(1,2,3\), \(P_{B_{i}}\)=\(p(1|B_{i})\), \(P_{A_{i}}\)=\(p(1|A_{i})\) denote marginal probabilities and \(P_{A_{i}B_{j}}\)=\(p(11|A_{i}B_{j})\) stands for the joint probability terms. In terms of these probability terms, CHSH inequality[3] takes the form:
\[-(P_{A_{1}}+P_{B_{1}}+P_{A_{2}B_{2}})+P_{A_{1}B_{1}}+P_{A_{1}B_{2}}+P_{A_{2}B_{ 1}}\leq 0 \tag{7}\]
There exist quantum states which violate the above inequality(Eq.(6)) but satisfy the CHSH inequality(Eq.(7)) and vice-versa[55]. Violation of any one of the CHSH(Eq.(7)) or \(I_{3322}\) inequality(Eq.(6)) guarantees nonlocality of the corresponding correlations in the \((3,3,2,2)\) scenario. Conversely, as these two are the only inequivalent facet inequalities of the Bell-local polytope, any correlation satisfying both Eqs.(6,7) is Bell-local in the \((3,3,2,2)\) scenario.
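For completeness, the two facet inequalities can be evaluated directly from the marginal and joint probabilities appearing in Eqs.(6) and (7); a small sketch is given below (a positive value of either expression signals Bell nonlocality in the \((3,3,2,2)\) scenario; illustrative code only).

```
def I_3322(PA, PB, PAB):
    """Left-hand side of Eq.(6); PA, PB are length-3 lists of p(1|A_i), p(1|B_i)
    and PAB[i][j] = p(11|A_{i+1} B_{j+1})."""
    return (-2 * PB[0] - PB[1] - PA[0]
            + PAB[0][0] + PAB[0][1] + PAB[0][2]
            + PAB[1][0] + PAB[1][1] - PAB[1][2]
            + PAB[2][0] - PAB[2][1])

def chsh_prob_form(PA, PB, PAB):
    """Left-hand side of Eq.(7) (CHSH in probability form)."""
    return (-(PA[0] + PB[0] + PAB[1][1])
            + PAB[0][0] + PAB[0][1] + PAB[1][0])
```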
### Reduced Steering
The notion of reduced steering has emerged in the context of manifesting multipartite steering with the help of bipartite steering[74]. Consider an \(n\)-partite quantum state \(\varrho_{1,2,...,n}\) shared between \(n\) parties \(A_{1},A_{2},...,A_{n}\). If any one of these parties \(A_{i}\)(say) can steer the particle of another party say \(A_{j}(i\neq j)\) without the aid of any of the remaining parties \(A_{k}(k\neq i,j)\), then the \(n\)-partite original state \(\varrho_{1,2,...,n}\) is said to exhibit reduced steering. So reduced steering is one notion of steerability of \(\varrho_{1,2,...,n}\). Technically speaking, \(\varrho_{1,2,...,n}\) exhibits reduced steering if at least one of the bipartite reduced states \(\varrho_{i,j}\) is steerable.
## IV Hidden steerability in linear network
As already mentioned before, we focus on steering activation in a quantum network scenario involving qubits such that steerable correlations are generated between two distant parties who do not share any direct common past. We start with an entanglement swapping network involving three parties.
### Linear Three Party Network Scenario
Consider a network of three parties Alice, Bob and Charlie arranged in a linear chain(see Fig.1). Let \(\rho_{AB}\) denote the entangled state shared between Alice and Bob whereas the entangled state \(\rho_{BC}\) is shared between Bob and Charlie. So initially Alice and Charlie do not share any physical state. Let one-way classical communication be allowed between parties sharing a state. To be more specific, Bob can communicate to each of Alice and Charlie. Alice and Charlie are thus the two non interacting parties.
First, Bob performs a joint measurement on his two qubits in the Bell basis:
\[|\phi^{\pm}\rangle=\frac{|00\rangle\pm|11\rangle}{\sqrt{2}},\ |\psi^{\pm}\rangle=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}. \tag{8}\]
Let \(\vec{b}{=}(b_{1}b_{2})\) denote the outcome of Bob: \((0,0),(0,1),(1,0),(1,1)\) correspond to \(|\phi^{+}\rangle\), \(|\phi^{-}\rangle\), \(|\psi^{+}\rangle\) and \(|\psi^{-}\rangle\) respectively. Bob then communicates the results to Alice and Charlie. Let \(\rho^{(b_{1}b_{2})}_{AC}\) be the conditional state shared between Alice and Charlie when Bob obtains the outcome \(\vec{b}{=}(b_{1}b_{2})\). Each of Alice and Charlie now performs one of three arbitrary projective measurements on their respective qubits. Let \(x_{i}\) and \(z_{i}(i{=}1,2,3)\) denote the measurement settings of Alice and Charlie with \(a_{ij}\) and \(c_{ij}(j{=}0,1)\) denoting the binary outputs corresponding to \(x_{i}\) and \(z_{j}\) respectively.
\[\frac{1}{\sqrt{3}}|\langle A_{1}\otimes C_{1}\rangle+\langle A_{2}\otimes C_{ 2}\rangle+\langle A_{3}\otimes C_{3}\rangle|\leq 1 \tag{9}\]
Such a testing of the conditional states is required to check activation of steerability in the network. Idea of steerability activation detection is detailed below.
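To make the swapping step concrete, a minimal numpy sketch for computing the conditional state \(\rho^{(b_{1}b_{2})}_{AC}\) (and the probability of Bob's outcome) from arbitrary \(4\times 4\) density matrices \(\rho_{AB}\) and \(\rho_{BC}\) is given below. The qubit ordering is assumed to be \(A,B_{1},B_{2},C\), and the snippet is an illustration rather than part of any particular toolkit.

```
import numpy as np

def bell_state(b1, b2):
    v = np.zeros(4, dtype=complex)
    if (b1, b2) == (0, 0): v[[0, 3]] = [1, 1]       # |phi+>
    if (b1, b2) == (0, 1): v[[0, 3]] = [1, -1]      # |phi->
    if (b1, b2) == (1, 0): v[[1, 2]] = [1, 1]       # |psi+>
    if (b1, b2) == (1, 1): v[[1, 2]] = [1, -1]      # |psi->
    return v / np.sqrt(2)

def conditional_state(rho_AB, rho_BC, b1, b2):
    """Alice-Charlie state conditioned on Bob's Bell-basis outcome (b1, b2)."""
    rho = np.kron(rho_AB, rho_BC)                   # full state on A, B1, B2, C
    b = bell_state(b1, b2)
    proj = np.kron(np.kron(np.eye(2), np.outer(b, b.conj())), np.eye(2))
    rho_p = (proj @ rho @ proj).reshape([2] * 8)
    # Trace out B1 and B2 (axes 1, 2 of both row and column indices).
    rho_AC = np.einsum('aijcbijd->acbd', rho_p).reshape(4, 4)
    prob = np.real(np.trace(rho_AC))                # probability of outcome (b1, b2)
    return rho_AC / prob, prob
```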
### Steering Activation in Network
The phenomenon of steering activation is observed if both the initial states \(\rho_{AB}\) and \(\rho_{BC}\) are \(F_{3}\) unsteerable whereas at least one of the four conditional states \(\rho^{(00)}_{AC},\rho^{(01)}_{AC},\rho^{(10)}_{AC},\rho^{(11)}_{AC}\) is \(F_{3}\) steerable. Precisely speaking, activation occurs if neither \(\rho_{AB}\) nor \(\rho_{BC}\) satisfies Eq.(5) whereas \(\rho^{b_{1}b_{2}}_{AC}\) satisfies it for at least one possible pair \((b_{1},b_{2})\). Any pure entangled state being \(F_{3}\) steerable, no activation is possible if one or both of the initial states \(\rho_{AB}\) and \(\rho_{BC}\) are pure entangled states. So the analysis of steerability activation involves only mixed entangled states. We next provide an instance of activation observed in the network.
### An Instance of Activation
Let us now consider the following families of two qubit states:
\[\gamma_{1}=(1-p)|\varphi\rangle\langle\varphi|+p|00\rangle\langle 00| \tag{10}\]
\[\gamma_{2}=(1-p)|\varphi\rangle\langle\varphi|+p|11\rangle\langle 11| \tag{11}\]
where \(|\varphi\rangle=\sin\alpha|01\rangle+\cos\alpha|10\rangle\), \(0\leq\alpha\leq\frac{\pi}{4}\) and \(0\leq p\leq 1\). These classes of states were used for the purpose of increasing the maximally entangled fraction in an entanglement swapping network[54]. Each of these families fails to satisfy Eq.(5) (i.e., is \(F_{3}\) unsteerable) if:
\[2((1-p)\sin 2\alpha)^{2}+(2p-1)^{2}\leq 1 \tag{12}\]
Now let \(\rho_{AB}\) and \(\rho_{BC}\) be any member of the family given by \(\gamma_{1}\) and \(\gamma_{2}\)(Eqs.(10,11)) respectively such that the state parameters satisfy Eq.(12). When Bob's particles get projected along \(|\phi^{\pm}\rangle\), each of the conditional states \(\rho^{00}_{AC},\rho^{01}_{AC}\) is steerable(satisfying Eq.(5)) if:
\[\frac{1}{N_{1}}(9-26p+25p^{2}+4(3-8p+5p^{2})\cos(2\alpha)\] \[+3(-1+p)^{2}\cos(4\alpha))>1 \tag{13}\]
where \(N_{1}{=}2(-1-p+(-1+p)\cos(2\alpha))^{2}\). Similarly if Bob's output is \(|\psi^{\pm}\rangle\), steerability of each of \(\rho^{10}_{AC},\rho^{11}_{AC}\) is guaranteed if
\[\frac{1}{N_{2}}(8(-1+p)^{4}\sin(2\alpha)^{4}+N_{3})>1, \tag{14}\]
where \(N_{2}{=}(3-2p+3p^{2}-4(-1+p)p\cos(2\alpha)+(-1+p)^{2}\cos(4\alpha))^{2}\) and \(N_{3}{=}(3-10p+11p^{2}+4(-1+p)^{2}\cos(4\alpha))^{2}\).
Figure 1: _A network of three parties Alice, Bob and Charlie. Alice and Bob share an entangled state \(\rho_{AB}\) and that the state shared between Bob and Charlie is \(\rho_{BC}\). Bob performs Bell basis measurement(BSM) on his two particles and communicates the results to Alice and Charlie who then perform projective measurements on their conditional state._
There exist state parameters \((p,\alpha)\) which satisfy Eqs.(12,13). This in turn indicates that there exist states from the two families(Eqs.(10,11)) for which steerability is activated for Bob obtaining 00 or 01 output(see Fig.2). For example, activation is observed for all members from these two families characterized by \(\alpha\)=0.1, and \(p\)\(\in\)\((0.001,0.331)\). However, in case the conditional state \(\rho_{AC}^{(10)}\) or \(\rho_{AC}^{(11)}\) is obtained, activation of steering is not observed.
To this end one may note that a conditional state satisfying any one of Eq.(13) or Eq.(14) is Bell-local in the \((3,3,2,2)\) scenario, i.e., it violates neither the \(I_{3322}\) inequality(Eq.(6)) nor the CHSH inequality(Eq.(7)).
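The activation window quoted above can be checked directly by transcribing Eqs.(12) and (13). The sketch below evaluates both conditions at one illustrative point inside the quoted range (\(\alpha=0.1\), \(p=0.1\)); scanning \(p\) in the same way reproduces the region shown in Fig.2.

```
import numpy as np

def initial_unsteerable(p, a):
    """Eq.(12): both gamma_1 and gamma_2 fail the F_3 criterion of Eq.(5)."""
    return 2 * ((1 - p) * np.sin(2 * a)) ** 2 + (2 * p - 1) ** 2 <= 1

def conditional_steerable(p, a):
    """Eq.(13): rho_AC^{00} and rho_AC^{01} satisfy the F_3 criterion."""
    N1 = 2 * (-1 - p + (-1 + p) * np.cos(2 * a)) ** 2
    lhs = (9 - 26 * p + 25 * p ** 2
           + 4 * (3 - 8 * p + 5 * p ** 2) * np.cos(2 * a)
           + 3 * (-1 + p) ** 2 * np.cos(4 * a)) / N1
    return lhs > 1

p, alpha = 0.1, 0.1       # an illustrative point inside the quoted window
print(initial_unsteerable(p, alpha), conditional_steerable(p, alpha))
```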
### Measurement Settings Detecting Steerability
As already mentioned before, for the purpose of investigating activation, the criterion(Eq.(5)) used as a sufficient criterion for detecting steerability of conditional states is a closed form of the upper bound of violation of the CJWR inequality for three settings(Eq.(9)). It may be pointed out that, the two parties sharing the conditional state in the network being Alice and Charlie, observables \(C_{i}\) appear in Eq.(9) in place of the \(B_{i}\)(used in Eq.(3)). Now, as the closed form involves only state parameters[52], in case any state satisfies the criterion given by Eq.(5), the state is steerable. But no information about the measurement settings involved in detecting steerability of the state can be obtained. However, from a practical viewpoint, it is interesting to know suitable measurement settings which help in steering the states. For that, given a two qubit state, suitable measurement settings are those projective measurements(for each of the two parties) for which the state considered violates Eq.(9). \(A_{i}\)=\(\vec{a}_{i}\).\(\vec{\sigma}\) and \(C_{i}\)=\(\vec{c}_{i}\).\(\vec{\sigma}\)(\(i\)=\(1,2,3\)) denote projection measurements of Alice and Charlie respectively. As mentioned in section(III), for violation of the CJWR inequality(Eq.(9)), each of Alice and Charlie performs projective measurements along orthogonal directions: \(\vec{a}_{i}\).\(\vec{a}_{j}\)=\(0\)=\(\vec{c}_{i}\).\(\vec{c}_{j}\), \(\forall\)\(i\neq j\). The CJWR inequality being symmetric[44], violation of Eq.(9) implies that the corresponding state is steerable from Alice to Charlie and also from Charlie to Alice. Now, for obvious reasons the choice of appropriate settings is state specific. For providing some specific examples of suitable measurement settings, we next consider the instance of activation provided in subsection(IV.3).
Consider a particular member from each of the two families(Eqs.(10,11)) characterized by \((p,\alpha)\)=\((0.214,0.267)\). Neither of these two states is steerable(up to Eq.(5)). So neither of them violates Eq.(9). Let these two states be used in the linear network. In case Bob gets output \((0,0)\) or \((0,1)\), the conditional state \(\rho_{AC}^{00}\) or \(\rho_{AC}^{01}\), shared between Alice and Charlie, violates Eq.(9) when Alice projects her particle along the following three orthogonal directions:
\((0,0,1)\), \((0,-1,0)\), \((-1,0,0)\) and Charlie's projective measurement directions are given by:
\((0,0,1)\), \((0,1,0)\), and \((-1,0,0)\). It may be noted here that these are not the only possible directions for which violation of Eq.(9) is observed. Alternate measurement directions may also exist. However, there exist no measurement settings of Alice and Charlie for which the conditional states \(\rho_{AC}^{10}\) or \(\rho_{AC}^{11}\) violate Eq.(9). So steering activation is possible(up to Eq.(9)) in case Bob obtains output 00 or 01 only.
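The violation of Eq.(9) for a given conditional state and a given set of directions can be checked numerically; the helper below evaluates the left-hand side of Eq.(9) for arbitrary direction triples and can be applied, for instance, to the state \(\rho^{00}_{AC}\) obtained with the swapping sketch given earlier, together with the directions listed above. This is illustrative code only, not tied to any library.

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(n):
    """Spin observable n . sigma for a unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

def cjwr_f3(rho_AC, alice_dirs, charlie_dirs):
    """Left-hand side of Eq.(9); a value above 1 certifies F_3 steerability."""
    total = sum(np.real(np.trace(rho_AC @ np.kron(spin(a), spin(c))))
                for a, c in zip(alice_dirs, charlie_dirs))
    return abs(total) / np.sqrt(3)

alice_dirs = [(0, 0, 1), (0, -1, 0), (-1, 0, 0)]
charlie_dirs = [(0, 0, 1), (0, 1, 0), (-1, 0, 0)]
# cjwr_f3(rho_AC_00, alice_dirs, charlie_dirs) > 1 signals violation of Eq.(9).
```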
Having obtained instances of steering activation in the network, an obvious question arises next: can hidden steerability be observed for arbitrary two qubit states? This, however, turns out not to be possible in the three measurement settings projective measurement scenario(for the non interacting parties) when one uses Eq.(5) as the steerability detection criterion[52]. We now analyze arbitrary two qubit states in this context.
### Characterization of Arbitrary Two Qubit States
Let two arbitrary states be initially considered in the swapping protocol. In density matrix formalism the
Figure 2: _Shaded region is a subspace in the parameter space \((p,\alpha)\) of the family of states given by Eqs.(10,11). It indicates region of steering activation(as detected by Eq.(5)) obtained in the entanglement swapping protocol(Fig.1) when Bob obtains either \(|\phi^{+}\rangle\) or \(|\phi^{-}\rangle\). It should be noted here that none of the conditional states \(\rho_{AC}^{(00)}\), \(\rho_{AC}^{(01)}\) is Bell nonlocal in three binary measurement settings scenario[55]._
states are represented as:
\[\rho_{AB}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u_{1}}.\vec{\sigma}\otimes \mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{v_{1}}.\vec{\sigma}+\sum_{j_{1},j_{2}=1 }^{3}w_{j_{1}j_{2}}\sigma_{j_{1}}\otimes\sigma_{j_{2}}), \tag{15}\]
\[\rho_{BC}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u_{2}}.\vec{\sigma}\otimes \mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{v_{2}}.\vec{\sigma}+\sum_{j_{1},j_{2 }=1}^{3}w^{\prime}_{j_{1}j_{2}}\sigma_{j_{1}}\otimes\sigma_{j_{2}}), \tag{16}\]
Steerability of states remains unhindered under local unitary operations. Let suitable local unitary operations be applied to the initial states for diagonalizing the correlation tensors:
\[\rho^{{}^{\prime}}_{AB}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{a_{1}}.\vec{ \sigma}\otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{b_{1}}.\vec{\sigma}+ \sum_{j=1}^{3}t_{1jj}\sigma_{j}\otimes\sigma_{j}), \tag{17}\]
\[\rho^{{}^{\prime}}_{BC}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{a_{2}}.\vec{ \sigma}\otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{b_{2}}.\vec{\sigma}+ \sum_{j=1}^{3}t_{2jj}\sigma_{j}\otimes\sigma_{j}), \tag{18}\]
Let both \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) be \(F_{3}\) unsteerable, i.e., let neither of them satisfy Eq.(5). Hence \(\sum_{j=1}^{3}t_{1jj}^{2}{\leq}1\) and \(\sum_{j=1}^{3}t_{2jj}^{2}{\leq}1\). We next characterize \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) by analyzing the nature of the conditional states \(\rho^{b_{1}b_{2}}_{AC}\). In this context, we provide three results each of which can be considered as a condition for no steering activation in the network. To be precise, if the bloch parameters of the initial two qubit states satisfy the assumptions(see Table 1 for more details) of any of these three results then there will be no activation of \(F_{3}\) steerability. Of these three results, two are proved analytically whereas the last one is a numerical observation only. First we give the two analytic results in the form of two theorems.
**Theorem 1.**: _If one or both of the initial states (Eqs.(17,18)) do not have any non-null local Bloch vector (see Table 1), then none of the conditional states \(\rho^{b_{1}b_{2}}_{AC}\) satisfies Eq.(5)._
_Proof:_ See Appendix A. Up to the steering criterion given by Eq.(5), the above result implies the impossibility of steering activation in a swapping network involving two-qubit states whose local Bloch vectors (corresponding to both parties) vanish under suitable local unitary operations. The maximally mixed marginals class of two-qubit states has no local Bloch vector, so activation is not possible in a network involving any member of this class.
So hidden steerability cannot be exploited in the absence of local Bloch vectors corresponding to both parties of a bipartite quantum state. But can it be generated if both \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) have only one non-null local Bloch vector? The following theorem provides a negative observation.
**Theorem 2.**: _If both the initial states \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) have only one non-null local Bloch vector, i.e., \(\vec{a_{1}}{=}\vec{a_{2}}{=}\Theta\) or \(\vec{b_{1}}{=}\vec{b_{2}}{=}\Theta\) (\(\Theta\) denotes the null vector), then none of the conditional states \(\rho^{b_{1}b_{2}}_{AC}\) satisfies Eq.(5)._
_Proof of Theorem 2:_ This proof is exactly the same as that for Theorem 1, owing to the fact that here also \(\sqrt{\mathrm{Tr}(\mathcal{V}^{T}_{b_{1}b_{2}}\mathcal{V}_{b_{1}b_{2}})}{=}\sqrt{\sum_{k=1}^{3}(t_{1kk}t_{2kk})^{2}}\), where \(\mathcal{V}_{b_{1}b_{2}}\) denotes the correlation tensor of the resulting conditional state \(\rho^{(b_{1}b_{2})}_{AC}\). Note that in Theorem 2, \(\vec{a_{1}}{=}\vec{a_{2}}{=}\Theta\) or \(\vec{b_{1}}{=}\vec{b_{2}}{=}\Theta\) is considered. But what if \(\vec{a_{1}}{=}\vec{b_{2}}{=}\Theta\) or \(\vec{b_{1}}{=}\vec{a_{2}}{=}\Theta\)? Does activation occur in such a case? Numerical evidence suggests a negative response to this query:
**Numerical Observation:**_If \(\vec{a_{1}}{=}\vec{b_{2}}{=}\Theta\) or \(\vec{b_{1}}{=}\vec{a_{2}}{=}\Theta\) then none of the conditional states \(\rho^{b_{1}b_{2}}_{AC}\) satisfies Eq.(5)._
Justification of this observation is based on the fact that numerical maximization of the steerability expression (Eq.(5)) corresponding to each possible conditional state \(\rho^{b_{1}b_{2}}_{AC}\) gives \(1\) under the constraints imposed by the violation of Eq.(5) by both initial quantum states (\(\rho^{{}^{\prime}}_{AB}\), \(\rho^{{}^{\prime}}_{BC}\)). Consequently, none of the conditional states satisfies Eq.(5) if none of \(\rho^{{}^{\prime}}_{AB}\), \(\rho^{{}^{\prime}}_{BC}\) satisfies Eq.(5).
The above analysis points out that, for revealing hidden steerability, each of \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) should have non-null local Bloch vectors corresponding to both parties. However, that condition alone is not sufficient for activation. In case the correlation tensor of any one of them is a null matrix, the state is separable. When such a state is considered as an initial state in the network, none of the conditional states is entangled and thereby activation of steerability becomes impossible. So, when steerability is activated in the network, the following are the necessary requirements:
* All of the local Bloch vectors must be non-null: \(\vec{a_{i}}{\neq}\Theta\), \(\vec{b_{i}}{\neq}\Theta\)\(\forall\)\(i\), and
* Both the initial states should have non null correlation tensors.
However, the above conditions are only necessary for activation and are not sufficient for the same. We next provide an illustration with specific examples in support of our claim.
#### iii.1.1 Illustration
Let us now analyze the classes of states given by Eqs.(10,11) in the perspective of the above characterization. Both families of initial states (Eqs.(10,11)) have local Bloch vectors \(\vec{a_{1}}{=}(0,0,p-\cos(2\alpha)(1-p))\), \(\vec{b_{1}}{=}(0,0,p+\cos(2\alpha)(1-p))\), \(\vec{a_{2}}{=}(0,0,p-\cos(2\alpha)(1-p))\), \(\vec{b_{2}}{=}(0,0,p+\cos(2\alpha)(1-p))\). The local Bloch vectors are non-null for \(\cos(2\alpha){\neq}\pm\frac{p}{1-p}\). Correlation tensors of the states from both families are given by
\(\mathrm{diag}((1-p)\sin(2\alpha),(1-p)\sin(2\alpha),2p-1)\). Clearly, activation is not observed for all family members having non-null local Bloch vectors as well as non-null correlation tensors. For instance, consider \((p,\alpha)\)\(=\)\((0.6,0.6)\). The Bloch parameters of the corresponding states are given by:
* \(\vec{a_{1}}\)\(=\)\((0,0,0.455057))\),\(\vec{b_{1}}\)\(=\)\((0,0,0.744943)\),
* \(\vec{a_{2}}\)\(=\)\((0,0\),-\(0.455057))\), \(\vec{b_{2}}\)\(=\)\((0,0\),-\(0.744943)\),
* \(\mathrm{diag}(t_{i11},t_{i22},t_{i33})\)\(=\)\(\mathrm{diag}(0.372816,0.372816,0.2)\), \(\forall\,i\)
No steering activation is observed when these two states are used in the network. This in turn implies that the criteria given in IV.5 are only necessary but not sufficient to ensure activation in the network.
Now, as already discussed in subsection IV.3, there exist members from these families (see Fig. 2) for which steering activation is observed when they are used in the swapping network.
The network scenario considered so far involved two states shared between three parties. However, will increasing the length of the chain, and hence the number of initial states, be useful for the purpose of revealing hidden steerability? Though the general response to this query is non-trivial, we consider a star network configuration of four parties to give instances of activation of reduced steering.
## V Non-linear swapping network involving \(n\)\(\geq\)\(3\) states
Consider \(n+1\) (\(n\)\(\geq\)\(3\)) parties \(A_{1},A_{2},...,A_{n}\) and \(B\). Let \(n\) bipartite states \(\varrho_{i}(i\)\(=\)\(1,2,...,n)\) be shared between the parties such that \(\varrho_{i}\) is shared between parties \(B\) and \(A_{i}(i\)\(=\)\(1,2,...,n)\) (see Fig.3). \(B\) performs a joint measurement on his share of qubits from each \(\varrho_{i}\) and communicates the outputs to the other parties \(A_{i}(i\)\(=\)\(1,2,...,n)\). Reduced steering of each of the conditional \(n\)-partite states is checked. To be precise, it is checked whether at least one possible bipartite reduced state of at least one of the conditional states satisfies Eq.(5). In case at least one of the conditional states has reduced steering when none of \(\varrho_{i}(i\)\(=\)\(1,2,...,n)\) satisfies Eq.(5), activation of steerability is obtained. Activation is thus observed when one of the \(n\) parties sharing the \(n\)-partite conditional state can steer the particles of another party without any assistance from the remaining \(n-2\) parties sharing the same.
Consider a specific instance of \(n\)\(=\)\(3\). Let each of \(\varrho_{1},\varrho_{2},\varrho_{3}\) be a member of the family of states given by Eq.(10) with \(p\)\(=\)\(p_{1},p_{2},p_{3}\) for \(\varrho_{1},\varrho_{2},\varrho_{3}\) respectively. Let \(B\) perform a joint measurement in the orthonormal basis given by Eq.(19).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Result & Assumptions & Steerability \\ & & Activation \\ \hline & \((\vec{a_{i}},\vec{b_{i}})\)\(=\)\((\Theta,\Theta)\)\(\forall\,i\) & \\ & or & \\ & \((\vec{a_{i}},\vec{b_{i}})\)\(=\)\((\Theta,\Theta)\) for \(i\)\(=\)\(1\) & \\ Theorem.1 & or & No \\ & \((\vec{a_{i}},\vec{b_{i}})\)\(=\)\((\Theta,\Theta)\) for \(i\)\(=\)\(2\) & \\ \hline & \(\vec{a_{1}}\)\(=\)\(\vec{a_{2}}\)\(=\)\(\Theta\) & \\ Theorem.2 & or & No \\ & \(\vec{b_{1}}\)\(=\)\(\vec{b_{2}}\)\(=\)\(\Theta\) & \\ \hline & \(\vec{a_{1}}\)\(=\)\(\vec{b_{2}}\)\(=\)\(\Theta\) & \\ Numerical & or & \\ Observation & \(\vec{b_{1}}\)\(=\)\(\vec{a_{2}}\)\(=\)\(\Theta\) & No \\ \hline \end{tabular}
\end{table}
Table 1: Assumptions of three results(analyzed above) are enlisted here. The correlation tensor of each of the two initial states \(\rho^{{}^{\prime}}_{AB}\) and \(\rho^{{}^{\prime}}_{BC}\) remain arbitrary. Restrictions are imposed over the local bloch parameters only.
Figure 3: _Schematic Diagram of a star network. For \(i\)\(=\)\(1,2,...,n\), bipartite state \(\varrho_{i}\) is shared between parties \(B\) and \(A_{i}\). Party \(B\) performs joint measurement on state of his \(n\) particles and communicates his output to each of \(A_{1},A_{2},...,A_{n}\). Reduced steering of corresponding conditional state shared between \(A_{1},A_{2},...,A_{n}\) is checked._
Explicitly, the basis states are:
\[|\delta_{1}\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|100\rangle+|010\rangle)\] \[|\delta_{2}\rangle=\frac{1}{\sqrt{3}}(|010\rangle-|100\rangle+|000\rangle)\] \[|\delta_{3}\rangle=\frac{1}{\sqrt{3}}(-|010\rangle+|001\rangle+|000\rangle)\] \[|\delta_{4}\rangle=\frac{1}{\sqrt{3}}(|100\rangle+|000\rangle-|001\rangle)\] \[|\delta_{5}\rangle=\frac{1}{\sqrt{3}}(|101\rangle+|110\rangle+|011\rangle)\] \[|\delta_{6}\rangle=\frac{1}{\sqrt{3}}(|110\rangle-|101\rangle+|111\rangle)\] \[|\delta_{7}\rangle=\frac{1}{\sqrt{3}}(-|110\rangle+|111\rangle+|011\rangle)\] \[|\delta_{8}\rangle=\frac{1}{\sqrt{3}}(|111\rangle+|101\rangle-|011\rangle) \tag{19}\]
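As a quick sanity check of Eq.(19), the eight vectors can be constructed numerically and their Gram matrix verified to be the identity; the sketch below is our own illustration and the helper name `ket` is ours.

```python
import numpy as np

def ket(bits):
    """Computational basis ket |b1 b2 b3> as a length-8 vector."""
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

s = 1 / np.sqrt(3)
delta = [
    s * (ket("001") + ket("100") + ket("010")),
    s * (ket("010") - ket("100") + ket("000")),
    s * (-ket("010") + ket("001") + ket("000")),
    s * (ket("100") + ket("000") - ket("001")),
    s * (ket("101") + ket("110") + ket("011")),
    s * (ket("110") - ket("101") + ket("111")),
    s * (-ket("110") + ket("111") + ket("011")),
    s * (ket("111") + ket("101") - ket("011")),
]

# Gram matrix should be the 8x8 identity, confirming Eq.(19) is an orthonormal basis
G = np.array([[np.dot(di, dj) for dj in delta] for di in delta])
assert np.allclose(G, np.eye(8))
```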
When \(B\)'s particles get projected along \(\delta_{j}\), let \(\rho^{(j)}(j{=}1,...,8)\) denote the conditional state shared between \(A_{1}\), \(A_{2}\), \(A_{3}\). Reduced steering of each of the conditional states is checked in terms of the steering inequality given by Eq.(5). Now, let all three initial states \(\varrho_{1},\varrho_{2},\varrho_{3}\) violate Eq.(5). When \(B\)'s state gets projected along any one of \(\delta_{1},\delta_{6},\delta_{7},\delta_{8}\) (Eq.(19)), for some state parameters \((p,\alpha)\), each of the corresponding conditional states has reduced steering. A region of activation is thus observed (see Fig.4). Some particular instances of activation are listed in Table 2. At this point it should be pointed out that none of the reduced states corresponding to the conditional states violates either the \(I_{3322}\) inequality (Eq.(6)) or the CHSH inequality (Eq.(7)), and hence all of them are Bell local (in the \((3,3,2,2)\) scenario).
## VI Genuine activation of steerability
Most of the research works in the field of activation scenarios analyze activation of nonclassicality of quantum states with respect to a specific detection criterion of the nonclassical feature considered. To be precise, let \(\mathcal{C}\) denote a detection criterion for a specific notion of quantum nonclassicality. Activation is said to be observed in a protocol if, using one or more quantum states (or identical copies of the same state), none of which satisfies \(\mathcal{C}\), another quantum state is generated (at the end of the protocol) that satisfies \(\mathcal{C}\). Using the detection criterion of \(\mathcal{F}_{3}\) steerability [44; 52], so far we have obtained various cases of steering activation in both linear and nonlinear quantum networks. But quite obviously such a trend of activation analysis is criterion specific and in general can be referred to as _activation of \(\mathcal{F}_{3}\) steerability_.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline State & \(p_{1}\) & \(p_{2}\) & Range \\ & & & of \(p_{3}\) \\ \hline \(\rho^{(1)}\) & 0.08 & 0.075 & \((0.2,1]\) \\ \hline \(\rho^{(6)}\) & 0.08 & 0.075 & \((0.071,0.467]\) \\ \hline \(\rho^{(7)}\) & 0.08 & 0.075 & \((0.071,0.465]\) \\ \hline \(\rho^{(8)}\) & 0.08 & 0.075 & \((0.2,1)\) \\ \hline \end{tabular}
\end{table}
Table 2: Some specific values of state parameters are enlisted here for which stochastic steering activation(in terms of reduced steering) is observed in nonlinear network(Fig.3). To be more precise, for \(\alpha{=}0.2\), other parameters \(p_{1}\), \(p_{2}\), \(p_{3}\) are specified for the three non identical states from the class given by Eq.(10). First column in the table gives the conditional state corresponding to which activation is observed.
Figure 4: _Shaded region in each of four sub figures in the grid gives the steering activation region obtained stochastically depending on the different possible outputs of party B’s measurement in orthonormal basis(Eq.(19)). Here the star network scenario(Fig.3) involves three non identical states from the class given by Eq.(10) for \(\alpha{=}0.2\). Starting from the top row and moving from left to right, shaded regions indicates reduced steering activation when B’s particles get projected along \(\delta_{1},\delta_{6},\delta_{7}\) and \(\delta_{8}\) respectively._
Here, however, we explore activation beyond the periphery of criterion specification. We refer to such activation as _genuine activation of steerability_.
Let us consider the linear chain of three parties (Fig.1). For genuine activation we use states which satisfy some criterion of unsteerability and then explore the \(\mathcal{F}_{3}\) steerability of the conditional states resulting from the Bell basis measurement (BSM) performed by the intermediate party (Bob) in the protocol. Genuine activation of steerability occurs in case at least one of \(\rho_{AC}^{b_{1}b_{2}}\) satisfies Eq.(5). In [75], the authors proposed an asymmetric sufficient criterion of bipartite unsteerability.
Let \(\rho_{AB}\) be any two qubit state shared between Alice and Bob(say). In density matrix formalism \(\rho_{AB}\) is then provided by Eq.(1). Consider a positive, invertible linear map \(\Lambda\), whose action on \(\rho_{AB}\) is given by[75]:
\[\mathbb{I}_{2}\otimes\Lambda(\rho_{AB})=\mathbb{I}_{2}\otimes\rho_{B}^{-1} \rho_{AB}\mathbb{I}_{2}\otimes\rho_{B}^{-1}, \tag{20}\]
where \(\mathbb{I}_{2}\) is the \(2\times 2\) identity matrix in the Hilbert space associated with the \(1^{st}\) party and \(\rho_{B}\)=Tr\({}_{A}(\rho_{AB})\). Let \(\rho_{AB}^{(1)}\) denote the state density matrix obtained after applying the above map to \(\rho_{AB}\). The local Bloch vector corresponding to the \(2^{nd}\) party (Bob) of \(\rho_{AB}^{(1)}\) becomes a null vector [75]:
\[\rho_{AB}^{(1)}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u}\cdot\vec{\sigma} \otimes\mathbb{I}_{2}+\sum_{j_{1},j_{2}=1}^{3}w^{{}^{\prime}}_{j_{1}j_{2}} \sigma_{j_{1}}\otimes\sigma_{j_{2}}), \tag{21}\]
On further application of local unitary operations to diagonalize the correlation tensor, \(\rho_{AB}^{(1)}\) ultimately becomes:
\[\rho_{AB}^{(2)}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u}\cdot\vec{\sigma}\otimes\mathbb{I}_{2}+\sum_{j=1}^{3}w^{{}^{\prime\prime}}_{jj}\sigma_{j}\otimes\sigma_{j}), \tag{22}\]
\(\rho_{AB}^{(2)}\)(Eq.(22)) is referred to as the _canonical form_ of \(\rho_{AB}\) in [75] where the authors argued that \(\rho_{AB}\) will be unsteerable if and only if \(\rho_{AB}^{(2)}\) is unsteerable. They showed that \(\rho_{AB}\) is unsteerable from Alice to Bob if[75]:
\[\text{Max}_{\hat{x}}((\vec{u}.\hat{x})^{2}+2||\mathcal{W}^{{}^{\prime\prime}}\hat{x}||)\leq 1 \tag{23}\]
where \(\hat{x}\) is any unit vector indicating a measurement direction, \(\mathcal{W}^{{}^{\prime\prime}}\) denotes the correlation tensor of \(\rho_{AB}^{(2)}\) and \(||.||\) denotes the Euclidean norm.
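The maximization in Eq.(23) has no simple closed form in general; a simple (approximate) way to evaluate its left-hand side is to sample measurement directions densely on the unit sphere, as in the sketch below. The sampling approach and the example values of \(\vec{u}\) and \(\mathcal{W}^{\prime\prime}\) are our own illustrative assumptions.

```python
import numpy as np

def lhs_eq23(u, W, n_samples=200_000, seed=0):
    """Approximate max over unit vectors x of (u.x)^2 + 2*||W x||
    (the left-hand side of Eq.(23)) by random sampling of directions."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    vals = (x @ u) ** 2 + 2 * np.linalg.norm(x @ W.T, axis=1)
    return vals.max()

# Example with a Bloch vector along z and a diagonal correlation tensor;
# the result stays below 1, so the sufficient unsteerability condition holds here.
u = np.array([0.0, 0.0, 0.3])
W = np.diag([0.3, 0.3, 0.2])
print("approx LHS of Eq.(23):", lhs_eq23(u, W))
```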
For our purpose we consider the unsteerability criterion given by Eq.(23). Below we characterize arbitrary two qubit states in ambit of genuine activation of steerability.
### Characterizing Two Qubit States
Let \(\rho_{AB}\) and \(\rho_{BC}\) (Eqs.(15,16)) denote two arbitrary two-qubit states used in the network. It turns out that the local Bloch vectors corresponding to the first party of the initial states play a significant role in determining the possibility of genuine activation of steering in the network. Next we give two results. While one of them is provided with an analytical proof, the analysis of the other relies on numerical optimization. We first state the analytical result.
**Theorem 3**: _If the canonical forms of both the initial states \(\rho_{AB}\) and \(\rho_{BC}\) (Eqs.(15,16)) satisfy the unsteerability criterion (Eq.(23)), then genuine activation of steerability is impossible if both of them have a null local Bloch vector corresponding to the first party, i.e., \(\vec{u_{1}},\vec{u_{2}}\)=\(\Theta\)._
_Proof:_ See Appendix B.
Genuine activation being impossible in case both \(\vec{u_{1}},\vec{u_{2}}\) are null vectors, an obvious question arises whether it is possible in case at least one of \(\vec{u_{1}},\vec{u_{2}}\)\(\neq\)\(\Theta\). We provide the next result in this context. As a numerical procedure is involved in the corresponding calculations (see Appendix C), our next result will be considered as a numerical observation only.
**Numerical Observation:** _If the canonical forms of both \(\rho_{AB}\) and \(\rho_{BC}\) (Eqs.(15,16)) satisfy the unsteerability criterion (Eq.(23)), then genuine activation of steerability is impossible if any one of \(\rho_{AB}\) or \(\rho_{BC}\) has a null local Bloch vector corresponding to the first party, i.e., at least one of \(\vec{u_{1}},\vec{u_{2}}\)=\(\Theta\)._
Justification of this observation is given in Appendix C. Clearly, the above two results, combined together, provide a necessary criterion for genuine activation of steerability: _When the canonical forms of both the initial states satisfy Eq.(23), if steering is genuinely activated in the network then both the initial states must have non-null local Bloch vectors corresponding to the first party, i.e., \(\vec{u_{1}}\)\(\neq\)\(\Theta\), \(\vec{u_{2}}\)\(\neq\)\(\Theta\)._
We next provide with examples in this context.
### Examples
Consider a family of states[75]:
\[\Omega=s|\chi\rangle\langle\chi|+(1-s)\Omega^{1}\otimes\frac{\mathbb{I}_{2}}{2}, \tag{24}\]
where \(|\chi\rangle\)=cos\((\beta)|00\rangle+\sin(\beta)|11\rangle\), 0\(\leq\)\(s\)\(\leq\)1, \(\mathbb{I}_{2}\) is 2x2 identity matrix in Hilbert space associated with \(2^{nd}\) party and \(\Omega^{1}\) is the reduced state of first party obtained by tracing out second party from \(|\chi\rangle\langle\chi|\), i.e., \(\Omega^{1}\)=cos\({}^{2}(\beta)|0\rangle\langle 0|+\sin^{2}(\beta)|1\rangle\langle 1|\).
For \(\beta\)\(\neq\)\(\frac{\pi}{4}\), any member from this class has non null local bloch vector corresponding to first party:\((0,0,\cos(2\beta))\). Canonical form(Eq.(22)) of any member of this class satisfies Eq.(23) if[75]:
\[\cos^{2}(2\beta)\geq\frac{2s-1}{(2-s)s^{3}}. \tag{25}\]
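A short sketch constructing a member of the family in Eq.(24) and testing the sufficient unsteerability condition of Eq.(25) is given below; the function names are ours, and the example parameters \((\beta,s)=(0.1,0.7)\) are those used for \(\Omega_{3}\) later in this section.

```python
import numpy as np

def omega_state(s, beta):
    """Member of the family in Eq.(24): s|chi><chi| + (1-s) Omega^1 ⊗ I/2,
    with |chi> = cos(beta)|00> + sin(beta)|11>."""
    chi = np.zeros(4)
    chi[0], chi[3] = np.cos(beta), np.sin(beta)
    omega1 = np.diag([np.cos(beta) ** 2, np.sin(beta) ** 2])   # Tr_2 |chi><chi|
    return s * np.outer(chi, chi) + (1 - s) * np.kron(omega1, np.eye(2) / 2)

def satisfies_eq25(s, beta):
    """Sufficient unsteerability condition quoted in Eq.(25)."""
    return np.cos(2 * beta) ** 2 >= (2 * s - 1) / ((2 - s) * s ** 3)

rho = omega_state(0.7, 0.1)
assert np.isclose(np.trace(rho), 1.0)
print(satisfies_eq25(0.7, 0.1))   # True for (beta, s) = (0.1, 0.7)
```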
Let two non identical members \(\Omega_{1}\) and \(\Omega_{2}\) from this class(Eq.(24)) be used in the entanglement swapping protocol(Fig.1). Let \((\beta_{1},s_{1})\) and \((\beta_{2},s_{2})\) be state parameters of \(\Omega_{1}\) and \(\Omega_{2}\) respectively. Let both \(\Omega_{1}\) and
\(\Omega_{2}\) be unsteerable. Now, for some values of the state parameters, the conditional states generated in the protocol turn out to be steerable(see Fig.5) as they satisfy Eq.(5). Range of parameter \(s_{2}\)(for some fixed value of other three parameters \((\beta_{1},\beta_{2},s_{1})\)) for which genuine activation occurs, is provided in Table.3.
Now, as discussed above, the criterion of both the initial unsteerable states having non null local bloch vector(corresponding to first party) is necessary for genuine activation. The criterion however turns out to be insufficient for the same. We next provide an example in support of our claim.
Consider two distinct members \(\Omega_{3},\Omega_{4}\) from the family of states given by Eq.(24) corresponding to the parameters: \((\beta_{3},s_{3})\)\(=\)\((0.1,0.7)\) and \((\beta_{4},s_{4})\)\(=\)\((0.3,0.59)\). Local bloch vectors(corresponding to first party) of \(\Omega_{3}\) and \(\Omega_{4}\) are \((0,0,0.980067)\) and \((0,0,0.825336)\) respectively. Both of \(\Omega_{3}\) and \(\Omega_{4}\) satisfy the unsteerability criterion given by Eq.(25). These two states(in their canonical forms) are now used in the tripartite linear network. Bloch matrix representation of each of the conditional states are enlisted in Table.6(see Appendix D). Unsteerability criterion(Eq.(23)) is then tested for each of these conditional states. The optimal value(obtained numerically) in the maximization problem involved in Eq.(23) turns out to be less than unity for each of the conditional states(Table.6). Hence, all the conditional states are unsteerable. Consequently no genuine activation of steering is observed in the network using \(\Omega_{3}\) and \(\Omega_{4}\).
It may be noted that genuine activation occurs for any possible output of Bob when two identical copies of same state from this class are used in the network(see Fig.6). For instance, when two identical copies of \(\Omega_{1}\) for \(\beta_{1}\)\(=\)\(0.7\) are considered as initial states, steerability is activated genuinely for \(s_{1}\)\(\in\)\((0.77,1]\).
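For completeness, the entanglement-swapping step itself (Fig.1) is straightforward to reproduce numerically: given \(\rho_{AB}\), \(\rho_{BC}\) and one of Bob's Bell vectors, the conditional state of Alice and Charlie follows by projecting Bob's two qubits and normalizing. The sketch below is our own illustration; the sanity check uses two \(|\phi^{+}\rangle\) pairs, for which each Bell outcome occurs with probability \(1/4\).

```python
import numpy as np

bell = {
    "phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

def swap_conditional(rho_ab, rho_bc, bell_vec):
    """Conditional state of Alice and Charlie (and its probability) after Bob
    projects his two qubits onto the given Bell vector."""
    # joint density matrix on qubits ordered (A, B1, B2, C);
    # after reshape the axes are ket (a,b1,b2,c) followed by bra (a',b1',b2',c')
    joint = np.kron(rho_ab, rho_bc).reshape([2] * 8)
    phi = bell_vec.reshape(2, 2)
    sigma = np.einsum("xy,axycuvwd,vw->acud", phi.conj(), joint, phi).reshape(4, 4)
    prob = np.real(np.trace(sigma))
    return sigma / prob, prob

# sanity check: swapping two |phi+> pairs gives each Bell outcome with probability 1/4
phi_plus = np.outer(bell["phi+"], bell["phi+"])
rho_ac, p = swap_conditional(phi_plus, phi_plus, bell["phi+"])
assert np.isclose(p, 0.25)
```

The resulting conditional states can then be fed to whichever steering or unsteerability criterion is being tested.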
## VII Discussions
In different information processing tasks involving steerable correlations, the improved efficiency of the related protocols (compared to their classical counterparts)
Figure 5: _Shaded regions in the sub figures give region of genuine activation of steerability for different ranges of state parameter \(s_{2}\). None of the steerable conditional states obtained in the protocol is Bell nonlocal in \((3,3,2,2)\) measurement scenario._
Figure 6: _Genuine activation region obtained for any possible conditional state when two identical copies of a state from \(\Omega\) class are used in the network._
basically relies upon quantum entanglement. Though pure entanglement is the most suitable candidate, owing to environmental effects mixed entanglement is used in practical scenarios. In this context, any steerable mixed entangled state is considered to be useful. In case it fails to generate steerable correlations, it is interesting to exploit its steerability (if any) by subjecting it to a suitable sequence of measurements. The entanglement swapping protocol turns out to be a useful tool in this perspective. Let us consider the two families of states given by Eqs.(10,11). Both of them are noisy versions of the pure entangled states \(\varphi\); to be more specific, these families are obtained via amplitude damping of \(\varphi\) [76]. As already discussed above, steering activation is obtained via the entanglement swapping protocol for some members from these two families. This in turn points out that the entanglement swapping protocol is useful in exploiting steerability from unsteerable (up to the steering criterion given by Eq.(5)) members of these two families. All such discussions point out the utility of steerability activation in network scenarios from a practical viewpoint. Characterization of arbitrary two-qubit states will thus be helpful in exploiting the utility of any given two-qubit state in the ambit of steering activation (up to Eq.(5)). That the steerability of depolarized noisy versions of pure entangled states cannot be activated (in the approach considered here) is a direct consequence of such characterization, owing to the fact that this class of noisy states has no local Bloch vector. Apart from revealing hidden steerability, it will be interesting to explore whether the activation protocols can be implemented in any information processing task involving a network scenario so as to render better results.
In [77], the authors have shown that if a two-qubit state is \(\mathcal{F}_{3}\) steerable, i.e., satisfies Eq.(5), then it is useful for teleportation. This in turn points out the utility of the activation networks discussed here from the perspective of information theoretic tasks. To be more precise, consider, for example, the tripartite linear network (Fig.1). Both the initial states \(\rho_{AB},\rho_{BC}\) used in the network violate Eq.(5). So \(\rho_{AB}\) (\(\rho_{BC}\)) cannot be used to teleport a qubit from Alice to Bob (from Bob to Charlie). Now, if steerability is activated in the network stochastically, the resulting conditional state can be used for the purpose of teleportation. In case activation occurs for all possible outputs of Bob, any of the four conditional states turns out to be useful in the teleportation protocol.
Our analysis of activation in network scenarios is criterion specific, and we have provided only a partial characterization of the two-qubit state space in the context of genuine activation of steerability. The unsteerability criterion [75] involves maximization over arbitrary measurement directions (Eq.(23)). By deriving a closed form of this criterion, genuine activation of steerability can be analyzed further. In the star network scenario, the choice of the specific orthonormal basis (Eq.(19)) for the joint measurement by the central party (\(B\)) served our purpose of showing that increasing the number of states non-linearly can yield better results compared to the standard three-party network scenario (Fig.1). Also, such an activation scenario is significant as hidden steerability is revealed when at least one of the \(n\) parties sharing the \(n\)-partite conditional state can steer the particles of another party without co-operation from the remaining \(n-2\) parties. Further analysis of such a form of steerability activation (via the notion of reduced steering) using more general measurement settings of the central party \(B\) will be a potential direction of future research. It will also be interesting to analyze a scheme of \(m\) copies of bipartite states arranged in a linear chain where activation occurs only after projection on any of \(n{<}m\) copies.
In [78], the authors introduced the notions of network steering and network local hidden state (NLHS) models in networks involving independent sources. They provided no-go results for network steering in a large class of network scenarios by explicitly constructing NLHS models. In the course of their analysis they gave an instance of both-way steering activation using the family of Doubly-Erased Werner (DEW) states [78]. The activation phenomenon considered there did not rely on testing any detection criterion in the form of a steering inequality. So, from that perspective, the activation example of [78] is comparable with that of genuine steering activation in our work. The characterization of the two-qubit state space based on genuine activation of steering discussed in subsec.VI.1 thus encompasses a broader class of steering activation results compared to a specific example of activation [78]. To this end one may note that, for the analysis made there, the authors considered not only unsteerability but also separability of the states distributed by the sources. Following that approach, incorporating the entanglement content of initial unsteerable states to explore genuine activation of steering will be an interesting direction of future research.
## VIII Data availability statement
The manuscript has no data associated with it.
## IX Acknowledgement
This preprint has not undergone peer review or any post-submission improvements or corrections. The
Version of Record of this article is published in The European Physical Journal D, and is available online at https://doi.org/[insert DOI].
## X Appendix A
Proof:: Both \(\rho^{{}^{\prime}}_{AB}\) (Eq.(17)) and \(\rho^{{}^{\prime}}_{BC}\) (Eq.(18)) violate Eq.(5). Hence, \(\sum_{j=1}^{3}\mathrm{t}_{1jj}^{2}\leq 1\) and \(\sum_{j=1}^{3}\mathrm{t}_{2jj}^{2}\leq 1\), which imply that \(|\mathrm{t}_{kjj}|\leq 1,\forall k=1,2\) and \(j=1,2,3\).
Let \(\mathcal{V}_{b_{1}b_{2}}\) denote the correlation tensor of the conditional state \(\rho^{(b_{1}b_{2})}_{AC}\). Now, two cases are considered: either one or both of the initial states have no non-null local Bloch vectors. In both cases, \(\text{Tr}(\mathcal{V}^{T}_{b_{1}b_{2}}\mathcal{V}_{b_{1}b_{2}})\)=\(\sum_{k=1}^{3}(t_{1kk}t_{2kk})^{2}\), \(\forall b_{1},b_{2}=0,1\). Hence, for each of \(\mathcal{V}_{b_{1}b_{2}}\), \(\sqrt{\text{Tr}(\mathcal{V}^{T}_{b_{1}b_{2}}\mathcal{V}_{b_{1}b_{2}})}\) takes the form:
\[\sqrt{\text{Tr}(\mathcal{V}^{T}_{b_{1}b_{2}}\mathcal{V}_{b_{1}b_{2}})}=\sqrt{\sum_{k=1}^{3}(t_{1kk}t_{2kk})^{2}}\] \[\leq\sqrt{\sqrt{\sum_{k=1}^{3}t_{1kk}^{4}}\sqrt{\sum_{k=1}^{3}t_{2kk}^{4}}}\] \[\leq\sqrt{\sqrt{\sum_{k=1}^{3}t_{1kk}^{2}}\sqrt{\sum_{k=1}^{3}t_{2kk}^{2}}}\] \[\leq 1. \tag{26}\]
The first inequality follows from the Cauchy–Schwarz inequality, the second holds as \(|\mathrm{t}_{kjj}|\leq 1,\forall k=1,2\) and \(j=1,2,3\), and the last is due to the fact that none of the initial states satisfies Eq.(5).
## XI Appendix B
Proof: Here \(\vec{u_{1}}\)=\(\vec{u_{2}}\)=\(\Theta\). \(\rho_{AB}\) and \(\rho_{BC}\) thus have the form:
\[\rho_{AB} =\frac{1}{4}(\mathbb{I}_{2\times 2}+\mathbb{I}_{2}\otimes\vec{v_{1}} \vec{\sigma}+\sum_{j_{1}j_{2}=1}^{3}w_{1j_{1}j_{2}}\sigma_{j_{1}}\otimes \sigma_{j_{2}}),\] \[\rho_{BC} =\frac{1}{4}(\mathbb{I}_{2\times 2}+\mathbb{I}_{2}\otimes\vec{v_{2 }}\vec{\sigma}+\sum_{j_{1}j_{2}=1}^{3}w_{2j_{1}j_{2}}\sigma_{j_{1}}\otimes \sigma_{j_{2}}),\]
Let \(\Lambda\)(Eq.(20)) be applied on both \(\rho_{AB}\) and \(\rho_{BC}\) followed by local unitary operations(to diagonalize the correlation tensors). Let \(\rho^{(2)}_{AB}\) and \(\rho^{(2)}_{BC}\) denote the respective canonical forms(Eq.(22)) of \(\rho_{AB}\) and \(\rho_{BC}\)[75]:
\[\rho^{(2)}_{AB} =\frac{1}{4}(\mathbb{I}_{2\times 2}+\sum_{j=1}^{3}w^{{}^{\prime \prime}}_{1jj}\sigma_{j}\otimes\sigma_{j}), \tag{27}\] \[\rho^{(2)}_{BC} =\frac{1}{4}(\mathbb{I}_{2\times 2}+\sum_{j=1}^{3}w^{{}^{\prime \prime}}_{2jj}\sigma_{j}\otimes\sigma_{j}), \tag{28}\]
Now \(\rho^{(2)}_{AB}\) and \(\rho^{(2)}_{BC}\) both satisfy unsteerability criterion given by Eq.(23). This in turn gives:
\[\text{Max}_{x_{1},x_{2},x_{3}}\sqrt{\sum_{j=1}^{3}(x_{j}w^{{}^{\prime\prime}}_ {kjj})^{2}}\leq\frac{1}{2},\ \ k=1,2 \tag{29}\]
where \(\hat{x}=(x_{1},x_{2},x_{3})\) denotes a unit vector. We next perform maximization over \(\hat{x}\) so as to obtain a closed form of the unsteerability criterion in terms of elements of correlation tensors of the initial states \(\rho^{(2)}_{AB}\) and \(\rho^{(2)}_{BC}\).
_Maximization over unit vector \(\hat{x}\)_:
Taking \(\hat{x}=(\sin(\theta)\cos(\phi),\sin(\theta)\sin(\phi),\cos(\theta))\), the maximization problem in the L.H.S. of Eq.(29) can be posed as:
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)} \tag{30}\]
where,
\[A(\theta,\phi)=\sin^{2}(\theta)(\cos^{2}(\phi)(w^{{}^{\prime \prime}}_{k11})^{2}+\] \[\sin^{2}(\phi)(w^{{}^{\prime\prime}}_{k22})^{2})+\cos^{2}(\theta) (w^{{}^{\prime\prime}}_{k33})^{2} \tag{31}\]
Now for any \(g_{1},g_{2}\geq 0\), \(\text{Max}_{x}(g_{1}\cos^{2}(x)+g_{2}\sin^{2}(x))\) is \(g_{1}\) if \(g_{1}\)\(>\)\(g_{2}\) and \(g_{2}\) when \(g_{2}\)\(>\)\(g_{1}\). This relation is used for maximizing \(A(\theta,\phi)\). In order to consider all possible values of \((w^{{}^{\prime\prime}}_{k11})^{2}\), \((w^{{}^{\prime\prime}}_{k22})^{2}\) and \((w^{{}^{\prime\prime}}_{k33})^{2}\), we consider the following cases:
\(\textit{Case}:(w^{{}^{\prime\prime}}_{k11})^{2}\textgreater(w^{{}^{\prime\prime}}_ {k22})^{2}\): Then \(\text{Max}_{\phi}A(\theta,\phi)\) gives:
\[B(\theta)=\sin^{2}(\theta)(w^{{}^{\prime\prime}}_{k11})^{2}+\cos^{2}(\theta)(w^{{ }^{\prime\prime}}_{k33})^{2} \tag{32}\]
\(\textit{Subcase}:(w^{{}^{\prime\prime}}_{k11})^{2}\textgreater(w^{{}^{\prime\prime}}_{k33})^{2}\), i.e., \((w^{{}^{\prime\prime}}_{k11})^{2}=\text{Max}_{j=1,2,3}(w^{{}^{\prime\prime}}_{kjj})^{2}:\)
Then \(\text{Max}_{\theta}B(\theta)\)=\((w^{{}^{\prime\prime}}_{k11})^{2}\). Hence,
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)}=|w^{{}^{\prime\prime}}_{k11}|. \tag{33}\]
\(\textit{Subcase}:(w^{{}^{\prime\prime}}_{k11})^{2}\textless(w^{{}^{\prime \prime}}_{k33})^{2}\), i.e., \((w^{{}^{\prime\prime}}_{k22})^{2}\textless(w^{{}^{\prime\prime}}_{k11})^{2}\textless(w^{{}^{ \prime\prime}}_{k33})^{2}:\)
Then \(\text{Max}_{\theta}B(\theta)\)=\((w^{{}^{\prime\prime}}_{k33})^{2}\). Hence,
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)}=|w^{{}^{\prime\prime}}_{k33}|. \tag{34}\]
\(\textit{Case}:(w^{{}^{\prime\prime}}_{k11})^{2}\textless(w^{{}^{\prime\prime}}_{k22})^{2}\): Then \(\text{Max}_{\phi}A(\theta,\phi)\) gives:
\[B(\theta)=\sin^{2}(\theta)(w^{{}^{\prime\prime}}_{k22})^{2}+\cos^{2}(\theta)(w^{{}^{ \prime\prime}}_{k33})^{2} \tag{35}\]
\(\textit{Subcase}:(w^{{}^{\prime\prime}}_{k22})^{2}\textgreater(w^{{}^{\prime\prime}}_{k33})^{2}\), i.e., \((w^{{}^{\prime\prime}}_{k22})^{2}=\text{Max}_{j=1,2,3}(w^{{}^{\prime\prime}}_{kjj})^{2}:\)
Then \(\text{Max}_{\theta}B(\theta)\)=\((w^{{}^{\prime\prime}}_{k22})^{2}\). Hence,
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)}=|w^{{}^{\prime\prime}}_{k22}|. \tag{36}\]
\(\textit{Subcase}:(w^{{}^{\prime\prime}}_{k22})^{2}\textless(w^{{}^{\prime\prime}}_{k33})^{2}\), i.e., \((w^{{}^{\prime\prime}}_{k11})^{2}\textless(w^{{}^{\prime\prime}}_{k22})^{2}\textless(w^{{}^{\prime\prime}}_{k33})^{2}:\)
Then \(\text{Max}_{\theta}B(\theta)\)=\((w^{{}^{\prime\prime}}_{k33})^{2}\). Hence,
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)}=|w^{{}^{\prime\prime}}_{k33}|. \tag{37}\]
So, combining all cases, we get:
\[\text{Max}_{\theta,\phi}\sqrt{A(\theta,\phi)}=\text{Max}_{j=1}^{3}|w_{kjj}^{{}^{ \prime\prime}}|,\ k=1,2. \tag{38}\]
So, the unsteerability criterion (Eq.(29)) turns out to be:
\[\text{Max}_{j=1,2,3}|w_{kjj}^{{}^{\prime\prime}}|\leq\frac{1}{2}. \tag{39}\]
where \(k{=}1,2\) correspond to the states \(\rho_{AB}^{(2)}\) and \(\rho_{BC}^{(2)}\) respectively. So \(\rho_{AB}^{(2)}\) and \(\rho_{BC}^{(2)}\), and therefore \(\rho_{AB}\) and \(\rho_{BC}\), are unsteerable. Since the steerability of a state remains invariant under application of the linear map (Eq.(20)), we consider the canonical forms \(\rho_{AB}^{(2)}\) and \(\rho_{BC}^{(2)}\) as the initial states used in the network. Depending on the output of the BSM obtained by Bob (and the result communicated to Alice and Charlie), the conditional states shared between Alice and Charlie are given by \(\rho_{AC}^{ij}\), \(i,j{=}0,1\) (see Table 4). \(\forall i,j\), \(\rho_{AC}^{ij}\) has null local Bloch vectors and a diagonal correlation tensor.
Hence, for each of the conditional states, L.H.S. of Eq.(23) turns out to be:
\[\text{Max}_{x_{1},x_{2},x_{3}}\sqrt{\sum_{j=1}^{3}(x_{j}w_{1jj}^{{}^{\prime \prime}}w_{2jj}^{{}^{\prime\prime}})^{2}} \tag{40}\]
Following the same procedure of maximization as above, the optimal expression of the maximization problem(Eq.(40)) is given by:
\[\text{Max}_{j=1,2,3}|w_{1jj}^{{}^{\prime\prime}}w_{2jj}^{{}^{\prime\prime}}|\]
Using Eq.(39), the maximum value of Eq.(40) turns out to be at most \(\frac{1}{4}\). Each of the four conditional states thus satisfies the unsteerability criterion (Eq.(23)). So if both the initial states satisfy Eq.(23) and have a null local Bloch vector (corresponding to the first party), then none of the conditional states generated in the network is steerable. Hence genuine activation of steering does not occur.
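The closed form used above, \(\mathrm{Max}_{\hat{x}}\sqrt{\sum_{j}(x_{j}w_{jj})^{2}}=\mathrm{Max}_{j}|w_{jj}|\) (Eq.(38)), can also be confirmed numerically by dense sampling of unit vectors, as in this short sketch of our own (with randomly drawn diagonal entries).

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    w = rng.uniform(-1, 1, size=3)                     # diagonal correlation-tensor entries
    x = rng.normal(size=(100_000, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # random unit vectors
    sampled_max = np.sqrt(((x * w) ** 2).sum(axis=1)).max()
    # sampled maximum agrees with max_j |w_jj| up to sampling resolution
    assert abs(sampled_max - np.abs(w).max()) < 1e-2
```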
## XII Appendix C
_Details of the numerical observation given in Sec.VI:_ Without loss of any generality, of the two initial states, let \(\rho_{BC}\) have a non-null Bloch vector corresponding to the first party, i.e., \(\vec{u_{1}}{=}\Theta,\vec{u_{2}}{\neq}\Theta\). \(\rho_{BC}\) thus has the form:
\[\rho_{BC}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u_{2}}.\vec{\sigma}\otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{v_{2}}.\vec{\sigma}+\sum_{j_{1},j_{2}=1}^{3}w_{2j_{1}j_{2}}\sigma_{j_{1}}\otimes\sigma_{j_{2}}).\]
After applying \(\Lambda\)(Eq.(20))followed by local unitary operations, the canonical form \(\rho_{AB}^{(2)}\) of \(\rho_{AB}\) is given by Eq.(27) whereas that of \(\rho_{BC}\) is given by:
\[\rho_{BC}^{(2)}=\frac{1}{4}(\mathbb{I}_{2\times 2}+\vec{u_{2}}^{\,\prime\prime}.\vec{\sigma}\otimes\mathbb{I}_{2}+\sum_{j=1}^{3}w_{2jj}^{{}^{\prime\prime}}\sigma_{j}\otimes\sigma_{j}), \tag{41}\]
Now \(\rho_{AB}^{(2)}\) and \(\rho_{BC}^{(2)}\) both satisfy unsteerability criterion given by Eq.(23). This in turn gives:
\[\text{Max}_{x_{1},x_{2},x_{3}}\sqrt{\sum_{j=1}^{3}(x_{j}w_{1jj}^{{}^{\prime \prime}})^{2}}\leq\frac{1}{2} \tag{42}\]
and
\[\text{Max}_{x_{1},x_{2},x_{3}}\Big((\vec{u_{2}}^{\,\prime\prime}.\hat{x})^{2}+2\sqrt{\sum_{j=1}^{3}(x_{j}w_{2jj}^{{}^{\prime\prime}})^{2}}\Big)\leq 1 \tag{43}\]
with \(\hat{x}{=}(x_{1},x_{2},x_{3})\) denoting unit vector. While the closed form of Eq.(42) is given by Eq.(39) for \(k{=}1\), the same for Eq.(43) is hard to derive owing to the complicated form of the maximization problem involved in it. Now as \(\rho_{BC}^{(2)}\) satisfies an unsteerability criterion(Eq.(43)) so it is unsteerable and consequently violates Eq.(5):
\[\sum_{j=1}^{3}(w_{2jj}^{{}^{\prime\prime}})^{2}\leq 1 \tag{44}\]
As discussed above, the canonical forms \(\rho_{AB}^{(2)}\) and \(\rho_{BC}^{(2)}\) are considered as the initial states used in the network. Depending on Bob's output, the conditional states shared between Alice and Charlie are given by \(\rho_{AC}^{ij}\), \(i,j{=}0,1\) (see Table 5).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline State & \(\vec{X}_{1}\) & \(\vec{X}_{2}\) & \(T\) \\ \hline \(\rho_{AC}^{00}\) & \(\Theta\) & \(\Theta\) & \(\text{diag}(w_{111}^{\prime\prime}w_{211}^{\prime\prime},\,-w_{122}^{\prime\prime}w_{222}^{\prime\prime},\,w_{133}^{\prime\prime}w_{233}^{\prime\prime})\) \\ \hline \(\rho_{AC}^{01}\) & \(\Theta\) & \(\Theta\) & \(\text{diag}(w_{111}^{\prime\prime}w_{211}^{\prime\prime},\,w_{122}^{\prime\prime}w_{222}^{\prime\prime},\,w_{133}^{\prime\prime}w_{233}^{\prime\prime})\) \\ \hline \(\rho_{AC}^{10}\) & \(\Theta\) & \(\Theta\) & \(\text{diag}(w_{111}^{\prime\prime}w_{211}^{\prime\prime},\,w_{122}^{\prime\prime}w_{222}^{\prime\prime},\,-w_{133}^{\prime\prime}w_{233}^{\prime\prime})\) \\ \hline \(\rho_{AC}^{11}\) & \(\Theta\) & \(\Theta\) & \(\text{diag}(w_{111}^{\prime\prime}w_{211}^{\prime\prime},\,-w_{122}^{\prime\prime}w_{222}^{\prime\prime},\,-w_{133}^{\prime\prime}w_{233}^{\prime\prime})\) \\ \hline \end{tabular}
\end{table}
Table 4: State parameters of each of the four conditional states are specified here. \(\vec{X}_{1},\vec{X}_{2}\) denote the local bloch vectors corresponding to first and second party respectively whereas \(T\) denote the correlation tensor. diag\((*,*,*)\) stands for a diagonal matrix.
Let us consider \(\rho^{00}_{AC}\). Using state parameters(Table.5) of \(\rho^{00}_{AC}\), L.H.S. of Eq.(23) becomes:
\[\text{Max}_{x_{1},x_{2},x_{3}}\Big((x_{1}u_{21}^{{}^{\prime\prime}}w_{111}^{{}^{\prime\prime}}-x_{2}u_{22}^{{}^{\prime\prime}}w_{122}^{{}^{\prime\prime}}+x_{3}u_{23}^{{}^{\prime\prime}}w_{133}^{{}^{\prime\prime}})^{2}+\sqrt{\sum_{j=1}^{3}(x_{j}w_{1jj}^{{}^{\prime\prime}}w_{2jj}^{{}^{\prime\prime}})^{2}}\Big) \tag{45}\]
where \(u_{21}^{{}^{\prime\prime}},u_{22}^{{}^{\prime\prime}},u_{23}^{{}^{\prime\prime}}\) are the components of the real-valued Bloch vector \(\vec{u_{2}^{\prime\prime}}\). In Eq.(45), the maximization is to be performed over \(x_{1},x_{2},x_{3}\) whereas the state parameters are arbitrary. Now the expression in Eq.(45) is numerically maximized over all the state parameters involved and also \(x_{1},x_{2},x_{3}\) under the following restrictions:
* \(|w_{111}^{{}^{\prime\prime}}|\)\(\leq\)\(\frac{1}{2}\)
* \(|w_{122}^{{}^{\prime\prime}}|\)\(\leq\)\(\frac{1}{2}\)
* \(|w_{133}^{{}^{\prime\prime}}|\)\(\leq\)\(\frac{1}{2}\)
* \(\sum_{j=1}^{3}(w_{2jj}^{{}^{\prime\prime}})^{2}\)\(\leq\)\(1\).
While the first three restrictions are due to the unsteerability of \(\rho^{(2)}_{AB}\), i.e., given by Eq.(39) for \(k\)=1, the last restriction is provided by Eq.(44) (a consequence of the unsteerability of \(\rho^{(2)}_{BC}\)). The maximum value of the above maximization problem (Eq.(45)) turns out to be 0.75, with a corresponding maximum (alternate maxima exist) given by \(w_{111}^{{}^{\prime\prime}}\)=0.5, \(w_{122}^{{}^{\prime\prime}}\)=0.454199, \(w_{133}^{{}^{\prime\prime}}\)=0.46353, \(w_{211}^{{}^{\prime\prime}}\)=\(-1\), \(w_{222}^{{}^{\prime\prime}}\)=0, \(w_{233}^{{}^{\prime\prime}}\)=0, \(u_{21}^{{}^{\prime\prime}}\)=1, \(u_{22}^{{}^{\prime\prime}}\)=0, \(u_{23}^{{}^{\prime\prime}}\)=0, \(x_{1}\)=1, \(x_{2}\)=0 and \(x_{3}\)=0. A maximum value less than 1 implies that the original maximization problem (Eq.(45)), where maximization is to be performed only over \(x_{1},x_{2},x_{3}\) (for arbitrary state parameters) under the above restrictions (resulting from the unsteerability of \(\rho^{(2)}_{AB},\rho^{(2)}_{BC}\)), cannot render an optimal value greater than 1. Consequently the conditional state \(\rho^{00}_{AC}\) satisfies the unsteerability criterion (Eq.(23)) and is therefore unsteerable. So, in case Bob's particles get projected along \(|\phi^{+}\rangle\), genuine activation of steering does not occur in the linear network. In a similar way, considering the other three conditional states, it is checked that the unsteerability criterion (Eq.(23)) is satisfied in each case. Genuine activation of steering is thus impossible for all possible outputs of Bob. Hence when one of the initial states has a null local Bloch vector corresponding to the first party, genuine activation of steering does not occur.
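A sketch of how this constrained maximization can be set up numerically is given below. It uses SciPy's SLSQP routine with a simple multistart; note that, in addition to the restrictions listed above, we impose the physical Bloch-ball bound \(\|\vec{u_{2}^{\prime\prime}}\|\leq 1\), which the list leaves implicit, and the variable layout, bounds and starting points are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

# Variables: z = [w1 (3), w2 (3), u2 (3), x (3)]
def objective(z):
    w1, w2, u2, x = z[0:3], z[3:6], z[6:9], z[9:12]
    bloch_term = (x[0]*u2[0]*w1[0] - x[1]*u2[1]*w1[1] + x[2]*u2[2]*w1[2]) ** 2
    corr_term = np.sqrt(np.sum((x * w1 * w2) ** 2))
    return -(bloch_term + corr_term)                                  # minimise the negative

cons = [
    {"type": "eq",   "fun": lambda z: np.sum(z[9:12] ** 2) - 1.0},    # x is a unit vector
    {"type": "ineq", "fun": lambda z: 1.0 - np.sum(z[3:6] ** 2)},     # restriction from Eq.(44)
    {"type": "ineq", "fun": lambda z: 1.0 - np.sum(z[6:9] ** 2)},     # Bloch-ball bound (our assumption)
]
bounds = [(-0.5, 0.5)] * 3 + [(-1, 1)] * 9                            # Eq.(39) for k=1, plus box bounds

best = None
rng = np.random.default_rng(0)
for _ in range(20):                                                   # multistart to escape local optima
    z0 = rng.uniform(-0.5, 0.5, size=12)
    z0[9:12] /= np.linalg.norm(z0[9:12])
    res = minimize(objective, z0, method="SLSQP", bounds=bounds, constraints=cons)
    if res.success and (best is None or -res.fun > best):
        best = -res.fun
print("largest value found:", best)    # the text reports 0.75 for this problem
```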
## XIII Appendix D
|
2304.04416 | High Dynamic Range Imaging with Context-aware Transformer | Avoiding the introduction of ghosts when synthesising LDR images as high
dynamic range (HDR) images is a challenging task. Convolutional neural networks
(CNNs) are effective for HDR ghost removal in general, but are challenging to
deal with the LDR images if there are large movements or
oversaturation/undersaturation. Existing dual-branch methods combining CNN and
Transformer omit part of the information from non-reference images, while the
features extracted by the CNN-based branch are bound to the kernel size with
small receptive field, which are detrimental to the deblurring and the recovery
of oversaturated/undersaturated regions. In this paper, we propose a novel
hierarchical dual Transformer method for ghost-free HDR (HDT-HDR) images
generation, which extracts global features and local features simultaneously.
First, we use a CNN-based head with spatial attention mechanisms to extract
features from all the LDR images. Second, the LDR features are delivered to the
Hierarchical Dual Transformer (HDT). In each Dual Transformer (DT), the global
features are extracted by the window-based Transformer, while the local details
are extracted using the channel attention mechanism with deformable CNNs.
Finally, the ghost free HDR image is obtained by dimensional mapping on the HDT
output. Abundant experiments demonstrate that our HDT-HDR achieves the
state-of-the-art performance among existing HDR ghost removal methods. | Fangfang Zhou, Dan Zhang, Zhenming Fu | 2023-04-10T06:56:01Z | http://arxiv.org/abs/2304.04416v4 | # High Dynamic Range Imaging with Context-aware Transformer
###### Abstract
Avoiding the introduction of ghosts when synthesizing LDR images as high dynamic range (HDR) images is a challenging task. Convolutional neural networks (CNNs) are effective for HDR ghost removal in general, but are challenging to deal with the LDR images if there are large movements or oversaturation/undersaturation. Existing dual-branch methods combining CNN and Transformer omit part of the information from non-reference images, while the features extracted by the CNN-based branch are bound to the kernel size with small receptive field, which are detrimental to the deblurring and the recovery of oversaturated/undersaturated regions. In this paper, we propose a novel hierarchical dual Transformer method for ghost-free HDR (HDT-HDR) images generation, which extracts global features and local features simultaneously. First, we use a CNN-based head with spatial attention mechanisms to extract features from all the LDR images. Second, the LDR features are delivered to the Hierarchical Dual Transformer (HDT). In each Dual Transformer (DT), the global features are extracted by the window-based Transformer, while the local details are extracted using the channel attention mechanism with deformable CNNs. Finally, the ghost free HDR image is obtained by dimensional mapping on the HDT output. Abundant experiments demonstrate that our HDT-HDR achieves the state-of-the-art performance among existing HDR ghost removal methods.
HDR deghosting, Transformer, CNN, Attention
## I Introduction
HDR imaging methods aim to produce an image with a wide dynamic range, closer to the human eye's perception, generated from multiple low dynamic range (LDR) images with varying exposures. If the LDR images are perfectly aligned, i.e., there is no camera shaking or object motion, we can fuse the LDR images directly and obtain perfect HDR images without ghosts. However, this condition is extremely difficult to achieve in the real world.
Most traditional HDR algorithms need to discard the unaligned pixels [1, 2, 3, 4] or align all pixels [5, 6, 7] in LDR images before fusing them. The former is to register all pixels in the LDR images, mark the unaligned parts, and remove the unaligned areas or replace them with pixels in the reference image. The HDR image generated by these methods will lose a lot of information in areas where pixels are shifted. For the latter methods, the key to producing a high-quality HDR image is to find an appropriate way to align the other LDR images with the reference image. However, traditional image alignment methods, such as optical flow methods [8], patch-based optimization methods [10, 11] and mesh flow methods [9], will inevitably produce ghosts because they cannot strictly align the images when large movements occur.
In recent years, CNN-based algorithms [17, 21, 31, 32, 33, 34, 41, 45, 49] have been proved to be significantly superior to the traditional algorithms [42, 43, 44], in terms of both performance and computational cost. Liu et al. [46] proposed an improved YOLOv5 network architecture for agriculture image recognition. Especially, a tiny detector layer was introduced to enhance the performance. Bian et al. [47] and Song et al. [48] use CNN-based methods for medical images reconstruction. Kalantari et al. [12] were the first to propose using deep learning to generate HDR images. This method required first aligning LDR images using optical flow, and then synthesising HDR images with the aligned LDR images using a CNN-based method. Wu et al. [13] used the
Fig. 1: Qualitative comparison of three CNN-based methods with ours on the dataset of Kalantari et al. [12]. As shown above, our proposed HDT-HDR, which is used to extract global features and local features simultaneously, produces better results that are free from ghost artifacts and recover more details in the oversaturated regions.
alignment method in the early stage of HDR imaging, followed by a CNN-based method. Like traditional algorithms, such two-stage HDR methods would introduce ghosting due to the inability to strictly align the images when large moving, oversaturated, or occluded areas exist in the LDR images. To solve this problem, many researchers [14, 15, 16, 17, 18] no longer align LDR images before inputting them into the networks, but take the unaligned LDRs and their gamma-corrected images directly as inputs to produce the HDR image. These methods utilize massive data-driven models to implicitly learn to align LDR images and synthesise the corresponding HDR image. This type of end-to-end HDR deghosting algorithm can achieve high-quality deghosting in most cases. However, when there are big movements or oversaturation in LDR images, HDR imaging will be accompanied by ghosts or motion blur, as shown in Fig. 1. This is because the intrinsic properties of CNNs, such as parameter sharing and small receptive fields, determine their weak ability to deal with problems sensitive to global information [19, 20]. Therefore, to generate high-quality ghost-free HDR images when LDR images have large movements or oversaturation, CNN-based methods alone will not work.
Vision Transformer (ViT) [40] is known for its long-range modelling capability, and can flexibly adjust its receptive field. However, compared to CNN-based models, ViT requires a larger amount of data to train due to its lack of bias and weaker ability to extract local context. Therefore, some scholars [22, 39] have tried to combine Transformer with CNN to get better performance. Zhen Liu et al. [22] proposed a method called HDR-Transformer, which combines CNN and ViT (CA-ViT) for HDR deghosting. They divided the model into two branches, one branch using Transformer to extract global information, and the other branch using CNN-based channel attention mechanisms to supplement local information. However, HDR-Transformer did not apply spatial attention to the reference image during the shallow feature extraction, thus losing the opportunity for initial recovery of the oversaturated or undersaturated areas in the reference image. And the CA-ViT channel attention mechanism uses ordinary convolution, which can only extract features within a fixed kernel size, which is not conducive to recovering blur caused by small movements.
Based on these situations, we propose a novel HDR deghosting method, which mainly consists of hierarchical dual branches built with deformable CNNs and transformers. We take the medium exposure image as the reference image. Firstly, we use the spatial attention mechanism to extract the shallow features of all the LDR images, including the reference image, and concatenate them. It is worth noting that we use the spatial attention of the features of the long and short exposure images on the reference image. This is advantageous to make full use of the information in the long and short exposures to correct the oversaturated or undersaturated areas in the mid-exposure image. Secondly, the concatenated features are supplied into the HDT, where deformable convolutions can help the local branch to obtain different features with different receptive field to avoid the blur caused by small movements, while the Transformer in the global branch captures the global information, such as the long range movements. Finally, the local and global information are fused and passed through a convolution layer for channel transformation, and the final ghost-free HDR image is obtained. Our main contribution can be summarised as follows:
1. We show that adding spatial attention to the reference image can partially compensate for missing information caused by oversaturation or undersaturation.
2. We propose a novel hierarchical dual Transformer, named HDT-HDR, which uses deformable CNNs to extract the local texture information for small motion deblurring and Transformer to capture the global information for large motion deghosting.
3. Abundant experiments prove that our HDT-HDR outperforms the state-of-the-art in HDR deghosting algorithms both qualitatively and quantitatively.
## II Related works
To produce high-quality ghost-free HDR images, a large number of methods [29, 30, 31, 2, 12, 2, 14, 32] have been proposed by scholars. These methods can be classified into three broad categories: traditional methods [1, 23, 24, 27, 28], CNN-based methods [33, 34, 12, 32, 32] and Transformer-CNN-based methods [22].
**Traditional methods** Traditional methods are generally divided into two classes: motion rejection and motion registration. Under the assumption that most of the pixels in the LDR images are static and only a small number of pixels move, the first type is to register all the pixels in the LDR images, select the unaligned areas, replace them with pixels in the reference image or surrounding static pixels, and then fuse the aligned LDR images to generate HDR images. Grosch et al. calculated the predicted pixels according to the brightness consistency criterion and compared them with the real pixels to generate an error map to identify moving objects [1]. Gallo et al. used the consistency of patches in all exposures with the reference image to distinguish the moving pixels [2]. Jacobs et al. [23] and Reinhard et al. [24] used weighted irradiance map variance and intensity variance, respectively. Khan et al. calculated the probability maps [25]. Oh et al. [26] detected ghost regions by rank minimization. The second type focuses on the methods of aligning LDR images with the corresponding reference image, and then fusing the aligned LDR images. Bogoni et al. [27] predicted the motion vectors in multi-scale by optical flow to align LDR images. Kang et al. [28] mapped LDR image intensities to the luminance domain and used optical flow to align them. Jinno and Okuda [29] aligned the input images with dense corresponding maps predicted by Markov Random Field. Hu et al. [30] calculated the brightness and gradient consistencies to align input images in the transformed domain. For the first type of method, HDR imaging does not perform well due to the masking of a large amount of information. For the second type of methods, the resulting HDR image would
be accompanied by ghosting or motion blur when objects have large movements or saturation exists in the LDR images.
**CNN-based methods** In recent years, a large number of CNN-based methods have been put forward. Eilertsen et al. [31] adopted U-Net to map a single LDR image directly to an HDR image. Lee et al. [32] utilized a GAN to generate pseudo multi-exposed images from a single image, and the HDR image could be pixel-wisely fused from them. Kalantari et al. [12] first introduced a CNN-based method in HDR imaging after aligning multi-exposed LDR images with optical flow. Wu et al. [13] first used a simple homography transformation to align the background and generated an HDR image from the aligned multi-exposed LDR images with U-Net based networks. Pu et al. [33] applied deformable convolutions for pyramidal alignment, squeeze-and-excitation (SE) attention to fully exploit aligned features, and mask-based weighting for refining HDR image reconstruction. Prabhakar et al. [14] proposed an efficient method with a bilateral-guiding upsampler to generate HDR images. Yan et al. [17] adopted spatial attention to reduce ghosts in the HDR image, and Yan et al. [34] used non-local blocks to capture the global information of unaligned inputs to help the LDR images be better aligned. Niu et al. [18] first employed a GAN-based method to synthesize HDR images by fusing multi-exposed LDR images. Liu et al. [16] applied a spatial attention module to handle multi-saturation, and a Pyramid, Cascading and Deformable (PCD) alignment module to tackle misalignments. Chung et al. [15] transformed the problem of motion alignment into brightness adjustment to align images for subsequent fusion. Among the above methods, those using only a single LDR image will inevitably produce low-quality HDR images due to the lack of real exposure information. What's more, the inherent properties of CNN-based methods lack global information, and would not solve the ghosting problem for HDR imaging well when large objects are moving or extremely oversaturated/undersaturated regions exist in LDR images.
**Transformer-CNN-based methods** Transformers have been hugely successful in the field of natural language processing [20]. After embedding words, the attention mechanism of multiple heads is used to obtain a long range of connections for processing natural language. Recently, ViT [40] proved that the pure Transformer can achieve a comparable result to CNN networks in image classification tasks by treating image patches as words and adding tokens to denote the category. With the development of Swin-Transformer [35], which used the shift-window scheme to greatly reduce computational cost and made full use of image information, many Transformer-based algorithms emerged in computer vision. Liang et al. [19] proposed SwinIR for image super-resolution and denoising, and achieved the state-of-the-art performance. Liu et al. [22] not only used CNN, but also added Transformer in their HDR-Transformer architecture, combining the advantages of CNN in extracting local information and Transformer in capturing global information. However, HDR-Transformer did not do spatial attention on the reference image during the shallow feature extraction, which is not beneficial for recovering the oversaturation/undersaturation region, and the CNN-based branch has limited receptive field and is not good at local texture changes caused by small movements. Inspired by [22, 33], we propose HDT-HDR based on Transformers and deformable CNNs.
## III Method
Our goal is to use LDR images with multiple exposure times to produce the corresponding ghost-free HDR images. Following the previous studies [12, 13, 17], three LDR images with different exposure times (\(I_{i}\), i = 1, 2, 3) are used and the intermediate image \(I_{2}\) is used as the reference image in this paper.
Firstly, in order to make full use of the image information, the input images are mapped to the HDR space to obtain the corresponding gamma-corrected images \(\tilde{I}_{i}\).
\[\tilde{I}_{i}=\frac{(I_{i})^{\gamma}}{t_{i}},i=1,2,3 \tag{1}\]
where \(\gamma\) denotes the gamma correction parameter (\(\gamma=2.2\) in this paper), and \(t_{i}\) denotes the exposure time of \(I_{i}\). According to [12], we simply concatenate \(I_{i}\) and \(\tilde{I}_{i}\) along the channel dimension, forming the 6-channel network input \(I_{ci}\). This method of feature enhancement can not only employ the LDR images to reduce noise and identify saturated areas, but also use the gamma-corrected images to help with image alignment.
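A minimal sketch of this input construction (Eq.(1)): each LDR image is gamma-corrected, divided by its exposure time, and concatenated with the original image along the channel axis. The array shapes and example exposure times below are our own assumptions, not values from the paper.

```python
import numpy as np

GAMMA = 2.2

def ldr_to_input(ldr, exposure_time):
    """Build the 6-channel network input I_ci of Eq.(1).
    `ldr` is an HxWx3 array with values in [0, 1]."""
    hdr_domain = (ldr ** GAMMA) / exposure_time
    return np.concatenate([ldr, hdr_domain], axis=-1)   # HxWx6

# Example: three exposures with increasing (placeholder) exposure times
ldrs = [np.random.rand(256, 256, 3) for _ in range(3)]
inputs = [ldr_to_input(img, t) for img, t in zip(ldrs, [0.25, 1.0, 4.0])]
```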
Secondly, the HDR image \(I_{H}\) is generated with our HDR deghosting model \(F(.)\):
\[I_{H}=F(I_{ci};\theta),i=1,2,3 \tag{2}\]
where \(\theta\) represents the model parameters.
Finally, according to [11], HDR images usually need to be displayed after tonemapping in practical applications. To generate better HDR images, it is therefore preferable to optimize the model with a loss calculated in the tonemapped domain, obtained by tonemapping both the model output \(I_{H}\) and the label image \(I_{GT}\) with a fixed rule. This paper uses the \(\mu\)-law for tonemapping:
\[T(x)=\frac{log(1+\mu x)}{log(1+\mu)} \tag{3}\]
where \(\mu\) determines the degree of compression (\(\mu=5000\) in this paper). Our model is optimized with the \(L_{1}\) loss:
\[L=\parallel T(I_{GT})-T(I_{H})\parallel_{1} \tag{4}\]
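As a brief illustration of Eqs. (3)-(4), the following sketch implements the \(\mu\)-law tonemapping and the tonemapped \(L_{1}\) training loss in PyTorch; it is a minimal sketch rather than the released training code, and the per-pixel mean absolute error is used as the \(L_{1}\) reduction.

```python
import math
import torch

def mu_law(x, mu=5000.0):
    """mu-law tonemapping of Eq. (3); x is expected to lie in [0, 1]."""
    return torch.log(1.0 + mu * x) / math.log(1.0 + mu)

def tonemapped_l1_loss(pred_hdr, gt_hdr, mu=5000.0):
    """L1 loss in the tonemapped domain, Eq. (4)."""
    return torch.mean(torch.abs(mu_law(gt_hdr, mu) - mu_law(pred_hdr, mu)))

# toy check on random tensors
pred = torch.rand(1, 3, 64, 64)
gt = torch.rand(1, 3, 64, 64)
print(tonemapped_l1_loss(pred, gt).item())
```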
The entire network architecture is shown in Fig. 2. HDT-HDR mainly consists of a Feature-Extraction head and an HDR-Deghosting body. In the Feature-Extraction head, we first extract features with convolutional layers, and then use a spatial attention mechanism to reduce the interference of moving objects and to initially correct the saturated areas of the reference image. It is worth noting that this paper exploits the spatial attention between the non-reference images and the reference image, and performs attention filtering on the features of the reference image to better utilize the information of the non-reference images. The HDR-Deghosting body is realized
by hierarchical dual-branch Transformers, in each of which local features and global information are extracted and fused simultaneously. The global branch extracts global information to align the input images, while the local branch uses convolutional layers to extract local features and a channel attention mechanism with deformable convolutions to weight the importance of local textures. The fusion of the features from the two branches can thus not only use the global information to ensure that the HDR image is free of ghosts caused by long-distance object movement in the LDR images, but also use the local information to avoid small motion blur in the generated HDR image.
The details of our proposed DT structure are shown in Fig. 3.
### _Feature-Extraction head_
First, preliminary feature extraction and channel transformation are performed on the input \(I_{ci}\in R^{H\times W\times 6}(i=1,2,3)\) by three convolutional layers to obtain \(f_{i}\in R^{H\times W\times C}\), where \(C\) denotes the number of channels. Second, two spatial attention modules are used to calculate the attention maps between the non-reference image features \(f_{i}(i=1,3)\) and the reference image feature \(f_{2}\), respectively. Third, all non-reference features \(f_{i}(i=1,3)\) are multiplied by the corresponding attention maps \(m_{i}(i=1,3)\), so as to initially align the non-reference images with the reference image. The alignment process can be summarized as follows:
\[m_{i}=Att(f_{i},f_{2}),i=1,3 \tag{5}\]
\[fm_{i}=f_{i}\odot m_{i},i=1,3 \tag{6}\]
\(\odot\) denotes element-wise multiplication, and \(fm_{i}\) denotes the feature output of the spatial attention. Unlike [22], we also compute the average spatially attended feature of \(f_{2}\) using \(m_{1}\) and \(m_{3}\), so that the information missing from \(f_{2}\) due to saturated regions in the reference image can be supplemented; the output is denoted as \(fm_{2}\). We summarize the process in (7).
\[fm_{2}=(f_{2}\odot m_{1}+f_{2}\odot m_{3})/2 \tag{7}\]
Then, all \(fm_{i}(i=1,2,3)\) as well as \(f_{2}\) are concatenated along the channel dimension to obtain \(f_{init}\) for the subsequent processing.
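The following minimal PyTorch sketch illustrates Eqs. (5)-(7); the two-layer convolutional attention module and the weight sharing across frames are assumptions made for illustration, since the exact layer configuration is not specified above.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Att(f_i, f_2): predicts a per-pixel attention map from a pair of feature
    maps (a simple two-conv design; the exact layers are an assumption)."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_i, f_ref):
        return self.net(torch.cat([f_i, f_ref], dim=1))

def feature_extraction_head(f1, f2, f3, att):
    """Eqs. (5)-(7): align non-reference features and rectify the reference
    feature; sharing one attention module across frames is a simplification."""
    m1, m3 = att(f1, f2), att(f3, f2)              # Eq. (5)
    fm1, fm3 = f1 * m1, f3 * m3                    # Eq. (6)
    fm2 = (f2 * m1 + f2 * m3) / 2.0                # Eq. (7), SAR
    return torch.cat([fm1, fm2, fm3, f2], dim=1)   # f_init

# toy usage
att = SpatialAttention(64)
f = [torch.rand(1, 64, 32, 32) for _ in range(3)]
print(feature_extraction_head(f[0], f[1], f[2], att).shape)  # (1, 256, 32, 32)
```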
### _HDR-Deghosting body_
As in [22], our main module is the Hierarchical Dual-branch Transformer (HDT), and each DT is a parallel two-branch network structure composed of the Transformer-based global branch and the CNN-based local branch. The HDR-Deghosting body embeds \(f_{init}\) and feeds the embeddings \(Em_{0}\) into HDT. After several skip-connections and convolution layers we obtain the final output, as shown in Fig. 2.
**Transformer-based Global Branch** Following [22], we obtain long-range information through a window-based multi-head Transformer encoder, which is composed of layer normalization (LN) layers, a multi-head self-attention (MSA) module, a multi-layer perceptron (MLP) and several residual connections. The embedded feature \(Em_{0}\) is taken as the input, and the global feature extraction can be described as:
\[Em_{1}=MSA(LN(Em_{0}))+Em_{0} \tag{8}\]
Fig. 3: Illustration of the proposed dual-branch Transformer architecture DT. Multi-head Transformer encoder is used to obtain the context of all input images and extract the global information. The channel attention mechanism with deformable convolution is used to extract the local feature information of all images and the image connection between frames. In the end, the features from two branches are fused as DT output.
Fig. 2: The network architecture of HDT-HDR. The pipeline composed of two stages: (a) The feature extraction head, it uses the spatial attention module to extract the coarse features. (b) The HDR deghosting body, which consists of several DT based Blocks, the extracted coarse features are fed into it to recover the HDR images.
\[GF_{global}=MLP(LN(Em_{1}))+Em_{1} \tag{9}\]
where \(GF_{global}\) represents the global information.
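A compact sketch of the pre-norm encoder of Eqs. (8)-(9) is given below; for simplicity it uses plain multi-head self-attention over all tokens, whereas the actual model uses window-based attention, so this is an illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class GlobalBranch(nn.Module):
    """Pre-norm Transformer encoder of Eqs. (8)-(9). Plain multi-head
    self-attention stands in for the window-based (Swin-style) attention."""
    def __init__(self, dim, heads=4, mlp_ratio=2):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, em0):                                   # em0: (B, N_tokens, dim)
        x = self.ln1(em0)
        em1 = self.msa(x, x, x, need_weights=False)[0] + em0  # Eq. (8)
        return self.mlp(self.ln2(em1)) + em1                  # Eq. (9)

# toy usage
enc = GlobalBranch(dim=60)
tokens = torch.rand(2, 16 * 16, 60)
print(enc(tokens).shape)  # torch.Size([2, 256, 60])
```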
**CNN-based Local Branch** We extract the local information with a CNN-based module using the channel attention mechanism. For the token embedding \(Em_{0}\), we normalize it with an LN layer and reshape it to \(N\times H\times W\times C\). We then reduce the dimensionality with an ordinary convolution and obtain a feature vector of shape \(N\times H\times W\times C/10\), followed by another ordinary convolution to further process the features. Each convolution except the one after the LN layer is followed by a LeakyReLU activation layer to better select features. Next, we use two convolutions with deformable kernels [38] to fuse contextual details under changing surroundings with cross-channel features, which is a key design for avoiding small motion blur. This yields a vector of shape \(N\times H\times W\times 2C/5\). We apply average pooling to obtain a vector of shape \(N\times 2C/5\), and then a linear layer followed by a sigmoid to obtain a vector of shape \(N\times C\), which serves as the channel attention weight \(w_{c}\). This weight is multiplied with \(f_{in}\), a reshaped version of \(Em_{0}\) that accommodates the subsequent convolution operations, yielding the final local output. The local feature extraction process can be described as follows:
\[f_{in}=Reshape(LN(Em_{0})) \tag{10}\]
\[f_{local}=\sigma_{1}(D(\sigma_{1}(D(\sigma_{1}(Conv(\sigma_{1}(Conv(f_{in})))))))) \tag{11}\]
\[w_{c}=\sigma_{2}(FC(f_{local})) \tag{12}\]
\[LF_{local}=w_{c}\odot f_{in} \tag{13}\]
\(\sigma_{1}\) denotes LeakyReLU, \(\sigma_{2}\) denotes the sigmoid function, \(D\) denotes a deformable convolutional layer, and \(LF_{local}\) represents the local information extracted by this branch.
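The local branch of Eqs. (10)-(13) can be sketched as follows; the offset-predicting convolutions inside the deformable blocks and the exact intermediate channel counts are illustrative assumptions where the description above leaves them open.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """3x3 deformable convolution whose offsets are predicted by a small conv
    (a standard way of using torchvision's DeformConv2d)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.offset = nn.Conv2d(c_in, 2 * 3 * 3, 3, padding=1)
        self.deform = DeformConv2d(c_in, c_out, 3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class LocalBranch(nn.Module):
    """Channel attention with deformable convolutions, following Eqs. (10)-(13)."""
    def __init__(self, dim, h, w):
        super().__init__()
        self.h, self.w = h, w
        self.ln = nn.LayerNorm(dim)
        c1, c2 = dim // 10, 2 * dim // 5
        self.convs = nn.Sequential(
            nn.Conv2d(dim, c1, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(c1, c1, 3, padding=1), nn.LeakyReLU(0.1),
            DeformBlock(c1, c2), nn.LeakyReLU(0.1),
            DeformBlock(c2, c2), nn.LeakyReLU(0.1))
        self.fc = nn.Linear(c2, dim)

    def forward(self, em0):                               # em0: (B, H*W, dim)
        x = self.ln(em0)
        f_in = x.transpose(1, 2).reshape(-1, em0.shape[-1], self.h, self.w)  # Eq. (10)
        f_local = self.convs(f_in)                        # Eq. (11)
        pooled = f_local.mean(dim=(2, 3))                 # global average pooling
        w_c = torch.sigmoid(self.fc(pooled))              # Eq. (12)
        return f_in * w_c[:, :, None, None]               # Eq. (13)

# toy usage
branch = LocalBranch(dim=60, h=16, w=16)
print(branch(torch.rand(2, 16 * 16, 60)).shape)  # torch.Size([2, 60, 16, 16])
```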
The global and local feature information are fused by element-wise addition into \(f_{fusion}\), which completes the forward propagation of one DT module. \(f_{fusion}\) then serves as the input of the next DT. The DT is repeated N times (N = 6 in this paper) to form a DT group, and M DT groups (M = 3 in this paper) form the main body of HDT. The HDT output passes through a dilated convolutional layer that expands the receptive field, participates in two successive global residual connections, passes through several convolutional layers and a sigmoid layer, and finally yields the deghosted HDR image. The overall architecture is shown in Fig. 2.
## IV Experiments
### _Datasets, Metrics and Implementation Details_
**Datasets** Following [13, 17, 18, 22], we use Kalantari et al.'s dataset [12] as the training, validation and test sets for the experiments. Kalantari et al.'s dataset contains 74 training samples and 15 test samples, and each sample contains three LDR images with different exposure times as well as the corresponding ground truth HDR image. We also use Sen et al. [36]'s and Tursun et al. [37]'s datasets, which do not contain ground truth images, as test sets for visually evaluating the generalization ability of our model.
Before training, we apply a horizontal sliding window with a step size of 64 to crop the original images into 128×128 patches, and then use rotation and flipping for data augmentation.
**Metrics** We use PSNR and SSIM as evaluation metrics for quantitative comparison. We calculate \(\mathrm{PSNR}_{\mu}\) and \(\mathrm{SSIM}_{\mu}\) between the model output image \(I_{H}\) and the ground truth image \(I_{GT}\) in their tone-mapped domain, as well as \(\mathrm{PSNR}_{l}\) and \(\mathrm{SSIM}_{l}\) in the linear domain. To evaluate the visibility and quality of the synthesized HDR images under different luminance conditions, we also calculate HDR-VDP-2 according to [18].
**Implementation Details** All of our experiments are implemented in PyTorch with Python 3.8 and conducted on an NVIDIA Tesla T4 GPU with 16 GB of memory. Our HDR deghosting model is trained with the Adam optimizer. The initial learning rate is 1e-4, with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and \(\epsilon=10^{-8}\). The maximum number of epochs is 100, and the batch size is 16. Our model takes about 46 hours to train.
### _Comparison with State-of-the-art Methods_
**All Compared models** To evaluate our model, we compare it with several state-of-the-art HDR deghosting methods, including two traditional HDR algorithms, Sen et al. [36] and Hu et al. [11]; CNN-based algorithms, Kalantari et al. [12], DeepHDR [13], AHDRNet [17], NHDRRNet [14], and HDR-GAN [18]; a pure Transformer algorithm, SwinIR [19]; and a combined CNN-Transformer algorithm, HDR-Transformer [22].
**Quantitative comparison** To quantitatively compare our model with other state-of-the-art methods, we present the average results of each model on the 15 test samples from Kalantari et al.'s dataset in Table I; all values except ours are taken from HDR-Transformer [22]. Several conclusions can be drawn from Table I. First, the traditional models have obvious disadvantages. Second, the pure Transformer-based model can compete with most of the CNN-based models to some extent, but is inferior to HDR-GAN. Third, HDR-Transformer,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Methods & \(\mathrm{PSNR}_{\mu}\) & \(\mathrm{PSNR}_{l}\) & \(\mathrm{SSIM}_{\mu}\) & \(\mathrm{SSIM}_{l}\) & HDR-VDP-2 \\ \hline Sen et al. [36] & 40.80 & 38.1 & 0.9808 & 0.9721 & 59.38 \\ Hu et al. [11] & 35.79 & 30.76 & 0.9717 & 0.9503 & 57.05 \\ Kalantari et al. [12] & 42.67 & 41.23 & 0.9888 & 0.9846 & 65.05 \\ DeepHDR [13] & 41.65 & 40.88 & 0.9860 & 0.9858 & 64.90 \\ AHDRNet [17] & 43.63 & 41.14 & 0.9900 & 0.9702 & 64.61 \\ NHDRRNet [17] & 42.41 & - & 0.9387 & - & 61.21 \\ HDR-GAN [18] & 43.92 & 41.57 & 0.9905 & 0.9865 & 65.45 \\ SwinIR [19] & 43.42 & 41.68 & 0.9882 & 0.9861 & 64.52 \\ HDR-Transformer [22] & 44.21 & 42.17 & 0.9918 & 0.9889 & 66.03 \\ Ours & **44.36** & **42.73** & **0.9919** & **0.9898** & **66.08** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Quantitative comparison with state-of-the-art methods on Kalantari et al. [12]’s test samples. PSNR, SSIM, and HDR-VDP-2 are the metrics used to evaluate the models. ’\(\mu\)’ and ’\(l\)’ denote values calculated in the tonemapped domain and the linear domain, respectively. The best results are in **bold**, and the second best are underlined.
which combines Transformer and CNN, performs better than using either CNN or Transformer alone. Finally, our HDT-HDR outperforms HDR-Transformer [22].
**Qualitative comparison** To further verify the visual performance of our model, qualitative comparisons are made on test samples from Kalantari et al.'s, Sen et al.'s and Tursun et al.'s datasets. The images are generated using the authors' pre-trained models when they are available online; otherwise, we retrain the models ourselves using the official implementations. Fig. 4 shows the performance of several models when synthesizing HDR images with long-range object motion and oversaturation on Kalantari et al.'s test samples. The first row in the figure shows the LDR images, the HDR image generated by our model, and the zoomed LDR patches that contain large object movement and oversaturation, respectively. The second and third rows show the zoomed HDR patches generated by different models. In the second row with red borders, the other models are less successful in processing the oversaturated area at the red arrow, while the texture information generated by our model is closer to the GT image. In the third row with green borders, the images generated by Sen et al., Kalantari et al. and AHDRNet have severe ghosting, while DeepHDR and HDR-GAN lose much texture information in the oversaturated area. The remaining models also show slight ghosting and texture loss visually. Overall, our model performs better on Kalantari et al.'s dataset, in terms of both deblurring and deghosting.
Fig. 5 shows the visual comparison of HDR images generated by several models on Sen et al.'s and Tursun et al.'s datasets. Fig. 5(a) shows the HDR images generated by different methods when the LDR images contain large oversaturated regions. The second row of Fig. 5(a) shows that the small-motion deblurring of the HDR-GAN model on the oversaturated region is slightly better than ours. However, from the area pointed to by the red arrow, it can be seen that HDR-GAN performs poorly in the undersaturated region. Fig. 5(b) shows the HDR images generated by the same models when the LDR images contain large undersaturated regions. As can be seen from the green arrow, the image generated by AHDRNet contains slight ghosting, the image generated by HDR-GAN still contains burr-like artifacts, and the image generated by HDR-Transformer contains obvious large-area ghosting, while our method performs well in both oversaturated and undersaturated regions.
Fig. 4: Visual comparison of the state-of-the-art methods [12, 13, 17, 18, 22, 36] on Kalantari et al. [12]’s test set. As shown, the traditional method [36] has obvious disadvantages, while the CNN-based methods cannot remove long-range ghosts [12, 17] or generate local details in oversaturated regions [13, 18]. The Transformer-CNN combined method [22] performs better than the CNN-based methods but still faces similar problems. Among all methods, our proposed HDT-HDR achieves the best performance in both texture recovery and deghosting.
**Analysis of Computational Budgets** We compare the number of parameters of each model and the inference time on the same dataset in Table II. It can be seen from the table that the traditional HDR algorithms take more than one minute, which is unbearable in practical applications. The inference times of the CNN-based methods are significantly lower, and our model is smaller than most of them. Our model also runs inference faster than the CNN-based models. Although our model is slightly larger than HDR-Transformer, this overhead is negligible considering its superior HDR imaging performance.
### _Ablation Study_
All our ablation experiments are conducted on Kalantari et al.'s [12] dataset; PSNR and HDR-VDP-2 are used as quantitative evaluation metrics.
**Ablation on the network architecture** For the network design, we compare the proposed DT, the spatial attention on reference images (SAR) module, and the entire HDT-HDR with the baseline. In detail, we design the following variants:
**- Baseline.** We take HDR-Transformer [22], which consists of a spatial attention (SA) module for shallow feature extraction and Context-aware Vision Transformer (CA-ViT) encoders, as our baseline model. The baseline keeps the same training and testing settings as our proposed HDT-HDR.
**- + SAR.** In this variant, instead of using spatial attention only to align the non-reference images, we additionally apply spatial attention to the reference image (SAR).
**- + DT.** In this variant, the CA-ViT encoder used in the baseline model is replaced by the proposed dual-branch combined Transformer with deformable CNNs module.
**- + SAR + DT.** The entire network of the proposed HDT-HDR.
As shown in Table III, adding either SAR or DT to BL improves the performance of the model, and the gain from DT is more significant than that from SAR. When both SAR and DT are adopted, the model reaches its best performance.
## V Conclusions
In this paper, we demonstrate several HDR imaging results in different scenes. State-of-the-art methods do not perform well enough in small-motion deblurring or large-motion deghosting. We propose a dual-branch Transformer that combines Transformers and deformable CNNs: the local branch compensates for the lack of local detail in vanilla ViTs, which mainly extract global features, while the global branch overcomes the limited receptive field of local feature extraction. We first rectify the reference image by applying spatial attention in the shallow feature extraction, so that information from the non-reference images that could be used to recover oversaturated/undersaturated regions is not lost. Furthermore, we extend the channel attention by adding several deformable convolu
\begin{table}
\begin{tabular}{l c c c} \hline Methods & Environment & Time(s) & Parameters(M) \\ \hline Sen et al. [36] & CPU & 61.81 & - \\ Hu et al. [11] & CPU & 79.77 & - \\ Kalantari et al. [12] & CPU+GPU & 29.14 & 0.3 \\ Deep HDR [13] & GPU & 0.24 & 20.4 \\ AHDRNet [17] & GPU & 0.30 & 1.24 \\ NHDRRNet [34] & GPU & 0.31 & 38.1 \\ HDR-GAN [18] & GPU & 0.29 & 2.56 \\ HDR-Transformer [22] & GPU & 0.15 & 1.22 \\ Ours & GPU & 0.16 & 1.35 \\ \hline \end{tabular}
\end{table} TABLE II: The inference times and parameters of different models. Part of the values are from [22].
Fig. 5: Visual comparison of the state-of-the-art methods [12, 13, 17, 18, 22, 36] on Sen et al. [36]’s and Tursun et al. [37]’s datasets. When there are large areas of oversaturation and undersaturation in LDR images, our model generates HDR images with better detail restoration and boundary preservation than other models.
\begin{table}
\begin{tabular}{l c c c c c} \hline BL & SAR & DT & \(\mathrm{PSNR}_{\mu}\) & \(\mathrm{PSNR}_{l}\) & HDR-VDP-2 \\ \hline ✓ & & & 44.21 & 42.17 & 66.03 \\ ✓ & ✓ & & 44.28 & 42.26 & 66.01 \\ ✓ & & ✓ & 44.31 & 42.53 & 66.06 \\ ✓ & ✓ & ✓ & 44.36 & 42.73 & 66.08 \\ \hline \end{tabular}
\end{table} TABLE III: Quantitative results of the ablation experiments. BL: the baseline model, SAR: the spatial attention applied to the reference image, DT: the proposed dual-branch Transformer and CNN combined module.
tional layers in the local branch, so that feature extraction is not limited to a fixed kernel size and can capture small local movements. Finally, we present HDT-HDR, which combines the advantages of Transformers and CNNs: it can remove both ghosts caused by long-range movements and blur caused by small motions. Extensive experiments show that the proposed method is quantitatively and qualitatively superior to state-of-the-art methods.
|
2302.10935 | Complex path simulations of geometrically frustrated ladders | Quantum systems with geometrical frustration remain an outstanding challenge
for numerical simulations due to the infamous numerical sign problem. Here, we
overcome this obstruction via complex path integration in a geometrically
frustrated ladder of interacting bosons at finite density. This enables studies
of the many-body ground state properties, otherwise inaccessible with standard
quantum Monte Carlo methods. Specifically, we study a chemical potential tuned
quantum phase transition, along which we track the emergence of
quasi-long-range order and critical softening of the single particle gap. We
chart future methodological improvements and applications in generalized
geometrically frustrated lattice models. | Elyasaf Y. Cohen, Andrei Alexandru, Snir Gazit | 2023-02-21T19:00:02Z | http://arxiv.org/abs/2302.10935v2 | # Complex path simulations of geometrically frustrated ladders
###### Abstract
Quantum systems with geometrical frustration remain an outstanding challenge for numerical simulations due to the infamous numerical sign problem. Here, we overcome this obstruction via complex path integration in a geometrically frustrated ladder of interacting bosons at finite density. This enables studies of the many-body ground state properties, otherwise inaccessible with standard quantum Monte Carlo methods. Specifically, we study a chemical potential tuned quantum phase transition, along which we track the emergence of quasi long range order and critical softening of the single particle gap. We chart future methodological improvements and applications in generalized geometrically frustrated lattice models.
_Introduction -_ A paradigmatic instance of the numerical sign problem, in the context of condensed matter physics, is given by geometrically frustrated antiferromagnets. The appearance of non-positive (or even complex) quantum amplitudes renders numerical calculations via Monte Carlo techniques uncontrolled, with statistical errors that scale exponentially with system size and overwhelm the signal. Geometrical frustration enhances quantum fluctuations, promoting exotic and inherently quantum phenomena. Examples thereof include valence bond solids comprising a spatially ordered pattern of singlet dimers [1], unconventional magnetic textures [2], and most remarkably, spin liquids that defy ordering down to absolute zero temperature [3]. It is, therefore, desirable to devise novel methodologies that overcome the obstruction imposed by the numerical sign problem and provide an accurate numerical solution to geometrically frustrated quantum spin models.
More broadly, a generic solution to the numerical sign problem is likely infeasible. In fact, in some instances, no-go theorems preclude a complete elimination of the sign problem via local transformations [4; 5; 6; 7; 8]. Nevertheless, tremendous progress has been made in devising clever reformulations of the path integral representation, providing either a partial or complete elimination of the sign problem in specific models [9; 10; 11; 12]. A promising recent development in controlling the numerical sign problem is the complex path integration (CPI) approach [13; 14]. In this method, the path integral is deformed into the complex plane. An informed choice of the integration manifold can then achieve a significant reduction in the severity of the numerical sign problem. Indeed, numerically accurate investigations of a wide range of many-body problems have been demonstrated, involving bosonic [15; 16; 17; 18; 19], fermionic [20; 21; 22; 23], and spin [24] degrees of freedom. However, the applicability of the CPI approach to studies of collective many-body phenomena and quantum criticality in frustrated spin models remains an open question.
In this letter, we address this outstanding inquiry in a geometrically frustrated triangular chain of lattice bosons at a finite chemical potential, a setting for which the sign problem plagues standard quantum Monte Carlo (QMC) methods. Remarkably, we find that integration along complex plane manifolds allows taming the numerical sign problem and affords controlled numerical calculations. In particular, we track with high precision a chemical potential tuned order-disorder quantum many-body phase transition and analyze the associated finite system size and finite temperature scaling of pertinent physical observables and the numerical sign problem.

Figure 1: (a) An illustration of the frustrated (\(\pi\)-flux) bosonic triangular chain model of Eq. (1) and the corresponding global phase diagram as a function of boson mass \(m^{2}\) and chemical potential \(\mu\). The yellow dot marks the quantum phase transition at zero boson density (\(\mu=0\)) between gapped and gapless phases. (b) The average sign, \(\left|\left<e^{-iS_{I}}\right>\right|\), of configuration weights as a function of \(\mu\) along the chemical potential tuned condensation transition, as marked by the dashed black line and purple dot in (a). Different curves correspond to an increasing range of flow times \(T\). The grey line extrapolates the sign problem along the original integration manifold \(\mathbb{R}^{2N}\) to large \(\mu\). The vertical dashed line corresponds to the estimated critical chemical potential \(\mu_{c}=1.73\).
_Microscopic model and phase diagram -_ As a concrete lattice model for testing the applicability of the CPI in the context of geometrically frustrated many-body quantum systems, we consider a bosonic lattice model defined on a triangular chain with \(L\) sites, see Fig. 1a. The dynamics is governed by the Hamiltonian,
\[\mathcal{H} =\frac{1}{2}\sum_{r}\Pi_{r}^{*}\Pi_{r}-t\sum_{\langle r,r^{ \prime}\rangle}\psi_{r}^{*}\psi_{r^{\prime}}+\text{h.c.}+\sum_{r}m^{2}{|\psi_{ r}|}^{2}\] \[+\sum_{r}U{|\psi_{r}|}^{4}+i\mu\sum_{r}(\psi_{r}\Pi_{r}^{*}-\psi_ {r}^{*}\Pi_{r}). \tag{1}\]
Here, the complex scalar field operators, \(\psi_{r}\), reside on lattice sites, \(r\). We employ linear indexing of sites along an effective one-dimensional chain that alternates between the upper and lower legs of the ladder. The canonical momenta, \(\Pi_{r}\), follow the standard commutation relations \([\psi_{r},\Pi_{r^{\prime}}]=i\delta_{r,r^{\prime}}\). Corresponding relations apply to their complex counterparts, \(\psi^{*}\) and \(\Pi^{*}\). The Hamiltonian terms in the first line correspond to the quadratic ("free") part, comprising nearest-neighbor hoppings with amplitude \(t\) and a mass term of magnitude \(m^{2}\). The second line includes onsite quartic repulsive interactions of strength \(U\), and a chemical potential term. The Hamiltonian admits a global \(U(1)\) symmetry corresponding to particle number conservation, as defined by the boson charge operator, \(Q=\sum_{r}\left(\psi_{r}\Pi_{r}^{*}-\psi_{r}^{*}\Pi_{r}\right)\). For non-vanishing \(\mu\), particle-hole symmetry, \(\psi\rightarrow\psi^{*}\), is broken, which potentially induces a finite particle density, \(\langle Q\rangle\neq 0\).
Making contact with physical systems, we note that our model serves as a low energy effective description of various condensed matter systems with a conserved \(U(1)\) charge and broken particle-hole symmetry, see Ref. [25]. Two concrete examples are lattice bosons detuned from integer filling and easy plane (XY) magnets in an external magnetic field perpendicular to the magnetization axis. Our primary interest will be in cases where the hopping amplitude is negative, i.e., \(t<0\). On non-bipartite lattices, this choice leads to geometric frustration akin to antiferromagnetic interactions in lattice spin models. By contrast, the more standard case, \(t>0\), can be interpreted as ferromagnetic interactions and is commonly used in lattice regularizations of continuum quantum field theories [26].
For numerical simulations via QMC techniques, we evaluate the thermal partition function following the standard quantum to classical mapping, \(\mathcal{Z}=\int\mathcal{D}\psi\mathcal{D}\psi^{*}e^{-\mathcal{S}\left[\psi_ {r,r},\psi_{r,\tau}^{*}\right]}\), that sums over space-time histories of the complex scalar field \(\psi_{r,\tau}\). The imaginary time, \(\tau\), action then reads,
\[S =-\varepsilon t\sum_{\langle r,r^{\prime}\rangle,\tau}(\psi_{r,\tau}\psi_{r^{\prime},\tau}^{*}+\text{h.c.})\] \[-\frac{1}{2\varepsilon}\sum_{r,\tau}\left(\psi_{r,\tau}^{*}\psi_{r,\tau+\varepsilon}(1-\varepsilon\mu)+\psi_{r,\tau}\psi_{r,\tau+\varepsilon}^{*}(1+\varepsilon\mu)\right)\] \[+\sum_{r,\tau}\bigg{[}\bigg{(}\varepsilon m^{2}-\frac{\varepsilon\mu^{2}}{2}+\frac{1}{\varepsilon}\bigg{)}{|\psi_{r,\tau}|}^{2}+\varepsilon U{|\psi_{r,\tau}|}^{4}\bigg{]}. \tag{2}\]
In the above equation, the Trotter step equals \(\epsilon=\frac{\beta}{L_{\tau}}\), with \(\beta\) denoting the inverse temperature, and \(L_{\tau}\) is an integer that defines the length of the discrete imaginary time axis. Notably, the action involves complex weights for a finite \(\mu\), rendering direct Monte Carlo sampling uncontrolled due to the notorious numerical sign problem.
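For reference, the discretized action of Eq. (2) can be evaluated directly for a given field configuration; the sketch below assumes the zigzag site indexing described above (each site couples to its first and second neighbors) and periodic boundary conditions, and is meant only as an illustration of the lattice action.

```python
import numpy as np

def action(psi, t=-1.0, m2=2.25, U=1.0, mu=0.0, beta=4.0):
    """Discretized action of Eq. (2) for a complex field psi[r, tau] on the
    zigzag-indexed triangular chain; periodic boundary conditions in space
    and imaginary time are assumed here for simplicity."""
    L, Lt = psi.shape
    eps = beta / Lt
    S = 0.0 + 0.0j
    # spatial hopping: each site couples to its first and second neighbors
    for d in (1, 2):
        S += -eps * t * np.sum(psi * np.conj(np.roll(psi, -d, axis=0))
                               + np.conj(psi) * np.roll(psi, -d, axis=0))
    # temporal hopping with the chemical potential asymmetry
    psi_next = np.roll(psi, -1, axis=1)
    S += -0.5 / eps * np.sum(np.conj(psi) * psi_next * (1 - eps * mu)
                             + psi * np.conj(psi_next) * (1 + eps * mu))
    # local potential terms
    S += np.sum((eps * m2 - eps * mu**2 / 2 + 1 / eps) * np.abs(psi)**2
                + eps * U * np.abs(psi)**4)
    return S

psi = np.random.randn(6, 24) + 1j * np.random.randn(6, 24)
print(action(psi, mu=1.6))   # complex for a finite chemical potential
```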
Interestingly, for \(t>0\), the path integral can be reformulated in terms of the so-called dual variables [27; 28], for which configuration weights are real and strictly non-negative. Physically, this representation tracks boson world line configurations, such that the case \(\mu>(<)0\) favors particle (hole) worldlines propagating along the positive (negative) imaginary time axis. By contrast, for negative hopping amplitudes, \(t<0\), on non-bipartite lattices, boson world lines acquire a \(\pi\) phase associated with trajectories encircling an odd number of bonds, see Fig. 1a. This effect reintroduces the numerical sign problem, and addressing it using CPI is the main focus of this work.
We now briefly chart the zero-temperature phase diagram of our model (Eq. (1)) in its limiting cases. For a vanishing chemical potential, \(\mu=0\), a quantum phase transition separates a gapped phase for large and positive mass, \(m^{2}\), from a gapless phase with quasi long range order (QLRO) in the opposite limit of large and negative \(m^{2}\). The transition belongs to the Berezinskii-Kosterlitz-Thouless (BKT) universality class and occurs at a critical coupling \(m_{c}^{2}\). We note that, due to the \(\pi\)-flux pattern, for negative hopping amplitudes, \(t<0\), QLRO correlations develop an incommensurate spiral pattern at a finite Bragg wave vector \(\tilde{q}=\cos^{-1}(-1/4)\) [29], see Fig. 1a.
For a given hopping amplitude, starting from the disordered phase, \(m^{2}>m_{c}^{2}\), the single particle gap can also be closed by increasing the chemical potential \(\mu\). Physically, this induces a BEC-like transition, where the chemical potential provides the necessary energy for particles to overcome the gap and condense [30]. However, unlike the \(\mu=0\) transition, here, the transition is nonrelativistic with a dynamical critical exponent \(z=2\). In the context of matter fields at finite density, such transitions are commonly termed the "Silver Blaze" effect [31].
_Numerical methods and observables -_ We now briefly review the construction of our CPI scheme using the generalized thimble method (GTM) [13; 14; 15; 32; 33]. Within
this approach, the complex plane integration manifold is determined through the holomorphic flow equation,
\[\frac{\mathrm{d}\psi_{r,\tau}}{\mathrm{d}t}=\overline{\frac{\partial S}{\partial \psi_{r,\tau}}}, \tag{3}\]
We set \(\mathbb{R}^{2N}\), associated with the real and imaginary parts of the complex fields \(\psi_{r,\tau}\) residing on \(N=L\times L_{\tau}\) space-time points, as the initial condition of the above differential equation at flow time \(t=0\). The equation is then integrated up to a flow time \(t=T\), which induces a mapping between \(\mathbb{R}^{2N}\) (the original integration manifold) and \(\mathcal{M}_{T}\), which is embedded in \(\mathbb{C}^{2N}\).
This construction is motivated by the limiting manifolds at \(T\to\infty\), known as the Lefschetz thimbles [14]. Importantly, along each Lefschetz thimble, the imaginary part of the action is constant, which, at least formally, eliminates the numerical sign problem. However, the Lefschetz thimble structure may fracture into multiple disconnected thimbles that assign potentially distinct phases to their quantum amplitudes in the complex plane. Interference between different thimbles can then reintroduce the numerical sign problem. The flow time \(T\) presents a trade-off between reducing the numerical sign problem at long flow times and ergodicity issues in the Monte Carlo dynamics arising from the trapping of MC configurations in the vicinity of a specific thimble. To address this problem, we employ the parallel tempering technique that exchanges configurations at varying flow times \(T\). This allows for smooth interpolation between distinct thimbles [33; 34]. Residual phases of configuration weights are accounted for through the standard reweighting approach.
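To illustrate the construction, the following self-contained sketch applies Eq. (3) to a one-variable toy action \(S(z)=z^{2}/2+i\lambda z\) rather than the ladder action; it flows the real integration points together with the Jacobian \(dz/dx\) and evaluates the resulting reweighting (average) sign by direct quadrature. All parameter values are illustrative.

```python
import numpy as np

def flow(x_grid, T, n_steps=400, lam=1.5):
    """Holomorphic flow of Eq. (3) for the toy action S(z) = z^2/2 + i*lam*z.
    Real integration points and the Jacobian dz/dx are flowed together."""
    dt = T / n_steps
    z = x_grid.astype(complex)
    J = np.ones_like(z)
    for _ in range(n_steps):
        z = z + dt * np.conj(z + 1j * lam)   # dz/dt = conj(dS/dz)
        J = J + dt * np.conj(J)              # dJ/dt = conj(S''(z) J), S'' = 1
    return z, J

def average_sign(T, lam=1.5):
    x = np.linspace(-8.0, 8.0, 2001)
    dx = x[1] - x[0]
    z, J = flow(x, T, lam=lam)
    w = np.exp(-(0.5 * z**2 + 1j * lam * z)) * J   # complexified weight times Jacobian
    return abs(np.sum(w) * dx) / (np.sum(np.abs(w)) * dx)

for T in (0.0, 0.3, 1.0):
    print(f"T = {T:.1f}:  average sign = {average_sign(T):.3f}")
```

In this toy example the printed average sign grows towards unity with increasing flow time, mirroring the qualitative behavior shown in Fig. 1b.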
We now turn to define physical observables, characterizing the various phases and phase transitions appearing in our problem. We first consider the particle number density \(Q=\frac{1}{L\beta}\frac{\partial\ln Z}{\partial\mu}\), explicitly given by
\[Q=\frac{1}{L\beta}\left\langle\sum_{r,\tau}\frac{1}{2}\big{(}\psi_{r,\tau}^{ \ast}\psi_{r,\tau+\varepsilon}-\psi_{r,\tau+\varepsilon}^{\ast}\psi_{r,\tau} \big{)}+\varepsilon\mu\big{|}\psi_{r,\tau}\big{|}^{2}\right\rangle, \tag{4}\]
which tracks the breaking of particle-hole symmetry. To probe the evolution of space-time correlations, we compute the single particle Green's function, \(G(q,\omega_{m})=\frac{1}{L\beta}\left\langle\left|\int_{0}^{\beta}d\tau\sum_{ r}\psi_{r,\tau}e^{i(qr+\omega_{m}\tau)}\right|^{2}\right\rangle\). Here, \(\omega_{m}=\frac{2\pi m}{\beta}\), are the standard bosonic Matsubara frequencies for integer \(m\), and integration along the imaginary time axis is discretized, as defined above. The corresponding imaginary time correlations, \(G(q,\tau)\), are obtained by Fourier relations.
With the above definitions, in the condensed phase, we expect to find QLRO, which we detect by examining the equal time Green's function evaluated at the Bragg vector \(g(\bar{q})=G(q=\bar{q},\tau=0)\), with \(\bar{q}\) taken as the closest approximation to \(\tilde{q}\) on our finite-size lattice.
The low energy dynamics is studied through the expected long imaginary time exponential decay,
\[G(\tilde{q},\tau>0)\sim e^{-\Delta_{\tilde{q}}\tau} \tag{5}\]
We estimate the single particle gap for particles, \(\Delta_{\tilde{q}}\), by fitting to the above form. The gap for anti-particles (holes) can be extracted similarly from the decay at negative times \(\tau<0\).
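In practice, the gap extraction of Eq. (5) amounts to a linear fit of \(\log G\) versus \(\tau\) over a suitable fit window; the sketch below illustrates this on synthetic data, with the window boundary chosen arbitrarily.

```python
import numpy as np

def fit_gap(tau, G, tau_min=1.0):
    """Estimate the single-particle gap from the long-imaginary-time decay
    G(q, tau) ~ exp(-Delta * tau), Eq. (5), via a linear fit of log G.
    The fit window tau >= tau_min is an illustrative choice."""
    mask = tau >= tau_min
    slope, _ = np.polyfit(tau[mask], np.log(G[mask]), 1)
    return -slope

# synthetic example: Delta = 0.8 plus small multiplicative noise
tau = np.linspace(0.0, 4.0, 25)
G = np.exp(-0.8 * tau) * (1.0 + 0.01 * np.random.randn(tau.size))
print(fit_gap(tau, G))   # close to 0.8
```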
_Numerical results -_ For concreteness, we fix \(t=-1,U=1,m^{2}=2.25\). All energy scales are measured in units of \(U\). For this parameter choice and a vanishing chemical potential \(\mu=0\), we are located in the disordered (gapped) phase, \(m^{2}>m_{c}^{2}\approx-0.5\), see Fig. 1a and [29]. For the finite size and finite temperature analysis, we consider system sizes \(L=6,8,10\), and track the convergence to the ground state result by progressively increasing the inverse temperature, taking the values \(\beta=1,2,4,6\). We observe that within our parameter regime, \(\beta=4\) serves as a proxy for the ground state behavior. Throughout, the Trotter step equals \(\epsilon=0.17\).
We begin our analysis by probing the evolution of the numerical sign problem as a function of the chemical potential, \(\mu\), for an increasing range of the flow times \(T\), as shown in Fig. 1b. As expected, a naive integration over \(\mathbb{R}^{2N}\) displays a hard sign problem upon approach to the critical chemical potential \(\mu_{c}\approx 1.73\), as evidenced by the rapid drop of the average sign, \(|\left<e^{-iS_{I}}\right>|\), towards zero. Remarkably, flowing to finite flow times \(T\) progressively reduces the sign problem even in the challenging parameter regime \(\mu>\mu_{c}\).
To further substantiate the above result, in Fig. 2a, we examine the average sign for flow time \(T=0.2\), chemical potential \(\mu=1.6\), and an increasing range of system sizes and inverse temperatures. Crucially, at low temperatures and for all \(L\) values, the average sign along \(\mathcal{M}_{T}\) rises by almost three orders of magnitude compared with the one computed on \(\mathbb{R}^{2N}\) for \(L=6\). This key finding is one of our main results, which enables an accurate numerical study of our geometrically frustrated model via Monte Carlo sampling, as we demonstrate below.

Figure 2: (a) The average sign, \(\big{|}\big{\langle}e^{-iS_{I}}\big{\rangle}\big{|}\), evaluated at \(\mu=1.6\), as a function of the inverse temperature \(\beta\). Solid curves correspond to flow time \(T=0.2\). The dashed line represents a vanishing flow time, \(T=0\). Different curves are associated with different system sizes. (b) The average particle number \(Q\) as a function of \(\mu\) for \(L=6\). Different curves correspond to different inverse temperatures. Solid (dashed) lines depict finite (vanishing) flow times.
Turning to physical observables, we test the advantage of finite flow time (\(T>0\)) simulations against "brute force" integration on \(\mathbb{R}^{2N}\). To that end, we measure the average particle number \(Q\) as a function of the chemical potential \(\mu\) for \(\beta=1,2,4\). In all cases, due to the severe sign problem, we used a factor of 128 more Monte Carlo samples for the \(\mathbb{R}^{2N}\) integration than for the finite flow time simulations. The results of this analysis are shown in Fig. 2(b). Indeed, at low temperatures, converged \(\beta=4\) results are only obtained using finite flow times, demonstrating the utility of the GTM sampling. Even more impressively, this advantage is obtained despite the significantly smaller number of MC samples. We note that, for larger system sizes, a direct comparison is completely infeasible in realistic run times due to the vanishingly small average sign in \(\mathbb{R}^{2N}\) simulations.
After establishing control over the numerical sign problem within the parameter range of interest, we investigate the physical properties of our many-body problem in the vicinity of the chemical potential tuned quantum phase transition described above. In the following, we fix \(\beta=4\), and explore the finite size scaling properties with \(L=6,8\), and \(10\). First, we compute the evolution of the particle number, \(Q\), as a function of \(\mu\), in Fig. 3(a). As expected, we find that at small \(\mu\) the particle number vanishes and starts to increase only above a critical coupling, \(\mu_{c}=1.73\), consistent with the estimate of the single particle gap computed at \(\mu=0\) [29].
Next, we track the order parameter \(g(\bar{q})\) as a function of \(\mu\), as depicted in Fig. 3(b). We observe a rise of \(g(\bar{q})\) for \(\mu>\mu_{c}\), signaling the appearance of QLRO. Lastly, in Fig. 3(c), we address dynamical properties by computing the gap for particle excitations \(\Delta\). The numerical results agree with a predicted gap closing transition at \(\mu_{c}\), which nucleates the condensed phase.
_Discussion and summary -_ In this work, we have demonstrated the effectiveness of the CPI approach in controlling the numerical sign problem appearing in a geometrically frustrated quantum many-body system. In particular, we identify complex plane manifolds, \(\mathcal{M}_{T}\), over which the severity of the sign problem is progressively reduced as a function of the flow time \(T\). This methodological headway enabled access to an accurate numerical study of collective effects in the vicinity of a quantum critical point.
Looking to the future, despite a great deal of progress, standard application of the GTM approach remains computationally demanding, mainly due to the repeated numerical solution of the holomorphic flow equation Eq. (3). In that regard, it would be beneficial to explore optimization techniques over families of analytically defined complex plane manifolds [35] or machine learning based approaches for constructing integration manifolds [36].
From the physics front, our results open the door to studies of more involved geometrically frustrated lattice models. Natural extensions include approaching the hard-core limit \(U\gg t\), and the more audacious goal of addressing the two-dimensional triangular lattice [37]. We leave these exciting research directions to future studies.
Figure 3: The chemical potential tuned quantum phase transition. (a) Boson particle number density, \(Q\), (b) equal time Green’s function at the Bragg vector \(g(\bar{q})\), and (c) the single particle gap \(\Delta\). Different curves correspond to increasing values of system size \(L\). Simulations were carried out at inverse temperature \(\beta=4\). The flow time for \(\mu<1.4\) equals \(T=0.15\) and for \(1.4\leq\mu\leq 2.4\), \(T=0.2\). The vertical line at \(\mu=1.73\) marks the approximate position of the phase transition obtained from vanishing chemical potential simulations.

_Acknowledgments -_ We thank Erez Berg and Zohar Ringel for helpful discussions and Chris Rackauckas for support in employing the _DifferentialEquations.jl_ package. S.G. acknowledges support from the Israel Science Foundation (ISF) Grant no. 586/22 and the US-Israel Binational Science Foundation (BSF) Grant no. 2020264. E.C. acknowledges support from an anonymous donor from the United Kingdom. A.A. is supported in part by U.S. DOE Grant No. DE-FG02-95ER40907. This research used the Intel Labs Academic Compute Environment.
|
2307.04146 | Intrinsic Separation Principles | This paper is about output-feedback control problems for general linear
systems in the presence of given state-, control-, disturbance-, and
measurement error constraints. Because the traditional separation theorem in
stochastic control is inapplicable to such constrained systems, a novel
information-theoretic framework is proposed. It leads to an intrinsic
separation principle that can be used to break the dual control problem for
constrained linear systems into a meta-learning problem that minimizes an
intrinsic information measure and a robust control problem that minimizes an
extrinsic risk measure. The theoretical results in this paper can be applied in
combination with modern polytopic computing methods in order to approximate a
large class of dual control problems by finite-dimensional convex optimization
problems. | Boris Houska | 2023-07-09T10:32:22Z | http://arxiv.org/abs/2307.04146v1 | # Intrinsic Separation Principles
###### Abstract
This paper is about output-feedback control problems for general linear systems in the presence of given state-, control-, disturbance-, and measurement error constraints. Because the traditional separation theorem in stochastic control is inapplicable to such constrained systems, a novel information-theoretic framework is proposed. It leads to an intrinsic separation principle that can be used to break the dual control problem for constrained linear systems into a meta-learning problem that minimizes an intrinsic information measure and a robust control problem that minimizes an extrinsic risk measure. The theoretical results in this paper can be applied in combination with modern polytopic computing methods in order to approximate a large class of dual control problems by finite-dimensional convex optimization problems.
## 1 Introduction
The separation principle in stochastic control is a fundamental result in control theory [19, 26, 40], closely related to the certainty-equivalence principle [35]. It states that--under certain assumptions--the problem of optimal control and state estimation can be decoupled.
For general control systems, however, the separation theorem fails to hold. Thus, if one is interested in finding optimal output-feedback control laws for such systems, one needs to solve a rather complicated dual control problem [11]. There are two cases where such dual- or output-feedback control problems are of interest:
1. The first case is that we have an uncertain nonlinear system--in the easiest case, without state- and control constraints--for which the information content of future measurements depends on the control actions. In practice, this dependency can often be neglected, because, at least for small measurement errors and process noise, and under certain regularity assumptions, the separation theorem holds in a first order approximation [33]. Nevertheless, there are some nonlinear systems that can only be stabilized if this dependency is taken into account [12, 32].
2. And, the second case is that we have an uncertain linear system with state- and control constraints. Here, the process noise and future measurement errors have to be taken into account if one wants to operate the system safely, for instance, by designing output-feedback laws that ensure constraint satisfaction for all possible uncertainty scenarios.
The current paper is about the second case. This focus is motivated by the recent trend towards the development of safe learning and control methods [17, 42].
### Literature Review
Dual control problems have been introduced by Feldbaum in the early 1960s [11]. Mature game-theoretic and stochastic methods for analyzing such dual- and output-feedback control problems have, however, only been developed much later. They go back to the seminal work of N.N. Krasovskii [21, 22] and A.B. Kurzhanskii [23, 24]. Note that these historical articles are complemented by modern set-theoretic control theory [5, 6]. Specifically, in the context of constrained linear systems, set-theoretic notions of invariance under output feedback can be found in the work of Dorea [9, 1], which focuses on the invariance of a single information set, and in the work of Artstein and Rakovic [2], which focuses on the invariance of a collection of information sets. Moreover, a variety of set-theoretic output-feedback control methods for constrained linear systems have appeared in [3, 7, 10]. These have in common that they propose to take bounds on measurement errors into account upon designing a robust predictive controller. In this context, the work of Goulart and Kerrigan must be highlighted [15], who found a remarkably elegant way to optimize uncertainty-affine output feedback control laws for constrained linear systems. A general overview of methods for output-feedback and dual model predictive control (MPC) can be found in [13, 18, 27, 32], and the references therein.
### Contribution
The three main contributions of this paper can be outlined as follows.
**Meta Information Theory.** While traditional information theories are based on the assumption that one can learn from accessible data, models for predicting the evolution of an uncertain control system require a higher level of abstraction. Here, one needs a prediction structure that is capable of representing the set of all possible future information states of a dynamic learning process without having access to future measurement data. Note that a comprehensive and thorough discussion of this aspect can be found in the above
mentioned article by Artstein and Rakovic [2], in which notions of invariance under output-feedback for collections of information sets are introduced. Similar to their construction, the current article proposes a meta information theoretic framework that is based on a class of information set collections, too. A novel idea of the current article in this regard, however, is the introduction of intrinsic equivalence relations that can be used to categorize information sets with respect to their geometric properties. This leads to an algebraic-geometric definition of meta information spaces in which one can distinguish between extrinsic and intrinsic information measures. Here, intrinsic information about a system is needed to predict what we will know about its states, while extrinsic information is needed to predict and assess the risk that is associated to control decisions.
**Intrinsic Separation Principle.** The central contribution of this paper is the introduction of the _intrinsic separation principle_. It formalizes the fact that the intrinsic information content of a constrained linear system does not depend on the choice of the control law. An important consequence of this result is that a large class of dual receding horizon control problems can be solved by separating them into a meta learning problem that predicts intrinsic information and a robust control problem that minimizes extrinsic risk measures. Moreover, the intrinsic separation principle can be used to analyze the existence of solutions to dual control problems under certain assumptions on the continuity and monotonicity of the objective function of the dual control problem.
**Polytopic Dual Control.** The theoretical results in this paper are used to develop practical methods to approximately solve dual control problems for linear systems with convex state- and control constraints as well as polytopic process noise and polytopic measurement error bounds. In order to appreciate the novelty of this approach, it needs to be recalled first that many existing robust output-feedback control methods, for instance the state-of-the-art output-feedback model predictive control methods in [13, 27], are based on a set-theoretic or stochastic analysis of a coupled system-observer dynamics, where the control law depends on a state estimate. This is in contrast to the presented information theoretic approach to dual control, where control decisions are made based on the system's true information state rather than a state estimate. In fact, for the first time, this paper presents a polytopic dual control method that neither computes vector-valued state estimates nor introduces an affine observer structure. Instead, the discretization of the control law is based on optimizing a finite number of control inputs that are associated to so-called extreme polytopes. The shapes, sizes, and orientations of these extreme polytopes encode the system's intrinsic information while their convex hull encodes the system's extrinsic information. The result of this discretization is a finite dimensional convex optimization problem that approximates the original dual control problem.
### Overview
The paper is structured as follows.
* Section 2 reviews the main idea of set-theoretic learning and introduces related notation.
* Section 3 establishes the technical foundation of this article. This includes the introduction of meta information spaces and a discussion of the difference between intrinsic and extrinsic information measures.
* Section 4 introduces the intrinsic separation principle for constrained linear systems, see Theorem 1.
* Section 5 discusses how to resolve dual control problems by intrinsic separation, see Theorem 2.
* Section 6 presents methods for discretizing dual control problems using polytopic information set approximations. The main technical result is summarized in Theorem 3. A numerical case study is presented. And,
* Section 7 summarizes the highlights of this paper.
### Notation
Throughout this paper, \(\mathbb{K}^{n}\) denotes the set of closed subsets of \(\mathbb{R}^{n}\), while \(\mathbb{K}^{n}_{\mathrm{c}}\) denotes the set of compact subsets of \(\mathbb{R}^{n}\). It is equipped with the Hausdorff distance
\[d_{\mathrm{H}}(X,Y)\stackrel{{\mathrm{def}}}{{=}}\max\left\{\max \limits_{x\in X}\min\limits_{y\in Y}\|x-y\|,\max\limits_{y\in Y}\min\limits_{ x\in X}\|x-y\|\right\}\]
for all \(X,Y\in\mathbb{K}^{n}_{\mathrm{c}}\), where \(\|\cdot\|:\mathbb{R}^{n}\to\mathbb{R}\) denotes a norm on \(\mathbb{R}^{n}\), such that \((\mathbb{K}^{n}_{\mathrm{c}},d_{\mathrm{H}})\) is a metric space. This definition can be extended to \(\mathbb{K}^{n}\) as follows: if the maxima in the above definition do not exist for \(X,Y\in\mathbb{K}^{n}\), we set \(d_{\mathrm{H}}(X,Y)=\infty\). The pair \((\mathbb{K}^{n},d_{\mathrm{H}})\) is called an extended metric space. Finally, the notation \(\mathrm{cl}(\cdot)\) is used to denote the closure, assuming that it is clear from the context what the underlying metric distance function is. For instance, if \(\mathfrak{X}\subseteq\mathbb{K}^{n}\) denotes a set of closed sets, \(\mathrm{cl}(\mathfrak{X})\) denotes the closure of \(\mathfrak{X}\) in \((\mathbb{K}^{n},d_{\mathrm{H}})\).
## 2 Information Spaces
An information space \((\mathcal{I},d,\sqcap)\) is a space in which learning can take place. This means that \((\mathcal{I},d)\) is an extended metric space that is equipped with a learning operator
\[\sqcap:\mathcal{I}\times\mathcal{I}\rightarrow\mathcal{I}\;,\]
such that \((\mathcal{I},\sqcap)\) is a semi-group. Among the most important examples for such spaces is the so-called set-theoretic information space, which is introduced below.
### Set-Theoretic Learning
In the context of set-theoretic learning [4, 5, 39], \(\mathcal{I}=\mathbb{K}^{n}\) denotes the set of closed subsets of the vector space \(\mathbb{R}^{n}\), while \(d=d_{\mathrm{H}}\) denotes the (extended) Hausdorff distance. Here, the standard intersection operator takes the role of a learning operator,
\[\sqcap=\cap\;,\]
recalling that the intersection of closed sets is closed. The motivation behind this definition can be outlined as follows: let us assume that we currently know that a vector \(x\in\mathbb{R}^{n}\) is contained in a given set \(X\in\mathbb{K}^{n}\). If we receive additional information, for instance, that the vector \(x\) is also contained in the set \(Y\in\mathbb{K}^{n}\), our posterior information is that \(x\) is contained in the intersection of the sets \(X\) and \(Y\), which is denoted by \(X\cap Y\).
Note that the above set-theoretic framework is compatible with continuous functions. If \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) denotes such a continuous function, the notation
\[\forall X\in\mathbb{K}^{n},\qquad f(X)\ \stackrel{{\mathrm{def}}}{{=}}\ \{f(x)\mid x\in X\}\]
is used to denote its associated continuous image map. It maps closed sets in \(\mathbb{R}^{n}\) to closed sets in \(\mathbb{R}^{m}\). Similarly, for affine functions of the form \(f(x)=Ax+b\), the notation
\[AX+b=\{Ax+b\mid x\in X\}\]
is used, where \(A\) and \(b\) are a matrix and a vector with compatible dimensions. And, finally, the Minkowski sum
\[X+Y\ \stackrel{{\mathrm{def}}}{{=}}\ \{x+y\mid x\in X,y\in Y\},\]
is defined for all sets \(X,Y\in\mathbb{K}^{n}\).
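As an elementary illustration of these operations, the following sketch uses axis-aligned boxes as a simple stand-in for general compact sets; boxes admit closed-form intersection (the learning update) and Minkowski sums, although they are of course not closed under arbitrary continuous maps.

```python
import numpy as np

class Box:
    """Axis-aligned box [lo, hi] in R^n, a simple member of K^n_c."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)

    def intersect(self, other):
        """Learning operator: X cap Y (returns None for the empty set)."""
        lo, hi = np.maximum(self.lo, other.lo), np.minimum(self.hi, other.hi)
        return Box(lo, hi) if np.all(lo <= hi) else None

    def minkowski_sum(self, other):
        """X + Y = {x + y | x in X, y in Y} for axis-aligned boxes."""
        return Box(self.lo + other.lo, self.hi + other.hi)

# Prior information: x in [0,4] x [0,4]; new measurement: x in [3,6] x [-1,2].
prior = Box([0, 0], [4, 4])
measurement = Box([3, -1], [6, 2])
posterior = prior.intersect(measurement)
print(posterior.lo, posterior.hi)   # [3. 0.] [4. 2.]
```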
**Remark 1**: _Set theoretic learning models can be augmented by probability measures in order to construct statistical information spaces [8]. In such a context, every element of \(\mathcal{I}\) consists of a set \(X\) and a probability distribution \(\rho_{X}\) on \(X\). A corresponding metric is then constructed by using the Wasserstein distance [36]. Moreover, if \((X,\rho_{X})\in\mathcal{I}\) and \((Y,\rho_{Y})\in\mathcal{I}\) are two independent random variables, the learning operation_
\[(X,\rho_{X})\sqcap(Y,\rho_{Y})\ \stackrel{{\rm def}}{{=}}\ (\ X\cap Y,\ \rho_{XY}\ )\]
_has the form of a Bayesian learning update,_
\[\rho_{XY}(x)\ \stackrel{{\rm def}}{{=}}\ \frac{\rho_{X}(x)\rho_{Y}(x)}{ \int_{X\cap Y}\rho_{X}(y)\rho_{Y}(y)\,{\rm d}y}\;.\]
_Thus, as much as the current paper focuses--for simplicity of presentation--on set-theoretic learning, most of the developments below can be generalized to statistical learning processes by augmenting the support sets with probability distributions or probability measures [34, 41]._
### Expectation and Deviation
Expectation and deviation functions are among the most basic tools for analyzing learning processes [8]. The expectation function is defined by
\[\forall X\in\mathbb{K}_{\rm c}^{n},\qquad E(X)\ \stackrel{{\rm def }}{{=}}\ \int_{X}x\,{\rm d}x\;.\]
It is a continuous function on \(\mathbb{K}_{\rm c}^{n}\) that satisfies
\[E(AX+b)\ =\ AE(X)+b\;.\]
For the special case that the compact set \(X\) is augmented by its associated uniform probability distribution, as discussed in Remark 1, the above definition of \(E(X)\) corresponds to the traditional definition of expected value functions in statistics. Similarly, a deviation function \(D:\mathbb{K}_{\rm c}^{n}\to\mathbb{R}\) is a continuous and radially unbounded function that satisfies
1. \(D(X)\geq 0\),
2. \(D(X)=0\) if and only if \(X=\{E(X)\}\),
3. \(D(X)=D(X-E(X))\), and
4. \(D(X\cap Y)\leq D(X)\),
for all \(X,Y\in\mathbb{K}_{\rm c}^{n}\). While statistical learning models often use the variance of a random variable as a deviation measure, a more natural choice for \(D\) in the context of set theoretic learning is given by the diameter,
\[D(X)\ =\ {\rm diam}(X)\ \stackrel{{\rm def}}{{=}}\ \max_{x,y\in X}\ \|x-y\|\;.\]
A long and creative list of other possible choices for \(D\) can be found in [31].
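For sets represented by finitely many points, both quantities are straightforward to evaluate; in the sketch below the vertex mean is used for \(E(X)\), which coincides with the centroid only for simplices and is meant purely as an illustration, while the diameter of a polytope is indeed attained at its vertices.

```python
import numpy as np

def expectation(points):
    """Vertex mean as a stand-in for E(X); exact for simplices only."""
    return points.mean(axis=0)

def deviation(points):
    """Diameter diam(X) = max_{x,y in X} ||x - y||, a valid deviation measure D."""
    diffs = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

X = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # vertices of a triangle
print(expectation(X), deviation(X))                   # centroid and diameter 5.0
```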
## 3 Meta Learning
The above definition of information spaces assumes that information or data is accessible at the time at which learning operations take place. If one wishes to predict the future evolution of a learning process, however, one faces the problem that such data is not available yet. Therefore, this section proposes to introduce a meta information space in which one can represent the set of all possible posterior information states of a learning process without having access to its future data. Informally, one could say that a meta information space is an abstract space in which one can "learn how to learn".
### Information Ensembles
The focus of this and the following sections is on the set-theoretic framework recalling that \(\mathbb{K}_{\rm c}^{n}\) denotes the set of compact subsets of \(\mathbb{R}^{n}\). A set \(\mathfrak{X}\subseteq\mathbb{K}_{\rm c}^{n}\) is called an information ensemble of \(\mathbb{K}_{\rm c}^{n}\) if
\[\forall Y\in\mathbb{K}_{\rm c}^{n},\quad X\cap Y\in\mathfrak{X} \tag{1}\]
for all \(X\in\mathfrak{X}\). Because \(\varnothing=X\cap\varnothing\in\mathfrak{X}\) for any \(X\in\mathfrak{X}\), every non-empty information ensemble contains the empty set.
**Proposition 1**: _If \(\mathfrak{X}\subseteq\mathbb{K}_{\rm c}^{n}\) is an information ensemble, then \({\rm cl}(\mathfrak{X})\) is an information ensemble, too._
**Proof.** Let \(\mathfrak{X}\) be a given information ensemble and let \(X_{\infty}\in{\rm cl}(\mathfrak{X})\) be a given set in its closure. Then there exists a Cauchy sequence \(X_{1},X_{2},\ldots\in\mathfrak{X}\) such that
\[X_{\infty}\ \stackrel{{\rm def}}{{=}}\ \lim_{k\to\infty}X_{k}\ \in\ {\rm cl}(\mathfrak{X})\;.\]
Next, let \(Y\in\mathbb{K}_{\rm c}^{n}\) be an arbitrary compact set. The case \(X_{\infty}\cap Y=\varnothing\) is trivial, since \(\varnothing\in\mathfrak{X}\subseteq{\rm cl}(\mathfrak{X})\). Next, if \(X_{\infty}\cap Y\neq\varnothing\), then there exists for every \(\xi\in X_{\infty}\cap Y\) an associated sequence \(z_{1}(\xi)\in X_{1}\), \(z_{2}(\xi)\in X_{2}\),... with
\[\lim_{k\to\infty}z_{k}(\xi)\ =\ \xi\;.\]
This construction is such that the sets
\[Z_{k}\ \stackrel{{\rm def}}{{=}}\ {\rm cl}\,(\ \{z_{k}(\xi)\ \mid\ \xi\in X _{\infty}\cap Y\ \}\ )\]
satisfy \(Z_{k}=X_{k}\cap Z_{k}\in\mathfrak{X}\), since \(\mathfrak{X}\) is an information ensemble. Consequently, it follows that
\[X_{\infty}\cap Y\ =\ \lim_{k\to\infty}Z_{k}\ \in\ {\rm cl}(\mathfrak{X})\;.\]
Thus, \({\rm cl}(\mathfrak{X})\) is an information ensemble, as claimed by the statement of the proposition. \(\diamond\)

Information ensembles can be used to construct information spaces, as pointed out below.
**Proposition 2**: _Let \(\mathfrak{X}\) be an information ensemble of \(\mathbb{K}_{\rm c}^{n}\). Then \((\mathfrak{X},d_{\rm H},\cap)\) is an information space._
**Proof.** Condition (1) implies that \(X\cap Y\in\mathfrak{X}\) for all \(X,Y\in\mathfrak{X}\). Thus, \((\mathfrak{X},\cap)\) is a subsemigroup of \((\mathbb{K}_{\rm c}^{n},\cap)\). Moreover, \(d_{\rm H}\) defines a metric on \(\mathfrak{X}\). Consequently, \((\mathfrak{X},d_{\rm H},\cap)\) is an information space.\(\diamond\)
**Remark 2**: _The difference between information ensembles and more general set collections, as considered in [2], is that Property (1) is enforced. Note that this property makes a difference in the context of developing a coherent learning algebra: if (1) did not hold, \((\mathfrak{X},\cap)\) would, in general, not be a subsemigroup of \((\mathbb{K}_{\rm c}^{n},\cap)\)._
### Extreme Sets
A set \(X\in{\rm cl}(\mathfrak{X})\) of a given information ensemble \(\mathfrak{X}\subseteq\mathbb{K}_{\rm c}^{n}\) is called an extreme set of \(\mathfrak{X}\) if
\[\forall Y\in{\rm cl}(\mathfrak{X})\setminus\{X\},\qquad X\cap Y\neq X\;.\]
The set of extreme sets of \(\mathfrak{X}\) is denoted by \(\partial\mathfrak{X}\). It is called the boundary of the information ensemble \(\mathfrak{X}\). Clearly, we have \(\partial\mathfrak{X}\subseteq{\rm cl}(\mathfrak{X})\), but, in general, \(\partial\mathfrak{X}\) is not an information ensemble. Instead, \(\partial\mathfrak{X}\) can be interpreted as a minimal representation of the closure of \(\mathfrak{X}\), because
\[{\rm cl}(\mathfrak{X})\ =\ \{\ Y\in\mathbb{K}_{\rm c}^{n}\ \mid\ \exists X\in \partial\mathfrak{X},\ Y\subseteq X\ \}\;.\]
Reversely, the closure of \(\mathfrak{X}\) can be interpreted as the smallest information ensemble that contains \(\partial\mathfrak{X}\).
### Meta Information Spaces
Let \(\mathbb{I}^{n}\) denote the set of closed information ensembles of \(\mathbb{K}^{n}_{\rm c}\); that is, the set of closed subsemigroups of \((\mathbb{K}^{n}_{\rm c},\cap)\) that are closed under intersection with sets in \(\mathbb{K}^{n}_{\rm c}\). Similarly, the notation \(\mathbb{I}^{n}_{\rm c}\) will be used to denote the set of compact information ensembles of the information space \((\mathbb{K}^{n}_{\rm c},d_{\rm H},\cap)\). Next, the meta learning operator \(\sqcap:\mathbb{I}^{n}\times\mathbb{I}^{n}\rightarrow\mathbb{I}^{n}\) is introduced by defining
\[\mathfrak{X}\sqcap\mathfrak{Y}\ \stackrel{{\rm def}}{{=}}\ \{\ X\cap Y\ \mid\ X\in\mathfrak{X},\ Y\in\mathfrak{Y}\ \}\]
for all \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}^{n}\). A corresponding metric distance function, \(\Delta_{\rm H}\), is given by
\[\Delta_{\rm H}(\mathfrak{X},\mathfrak{Y})\] \[\stackrel{{\rm def}}{{=}}\ \max\left\{\max_{X\in \mathfrak{X}}\min_{Y\in\mathfrak{Y}}d_{\rm H}(X,Y),\max_{Y\in\mathfrak{Y}}\min_ {X\in\mathfrak{X}}d_{\rm H}(X,Y)\right\}\]
for all \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}^{n}_{\rm c}\), so that \((\mathbb{I}^{n}_{\rm c},\Delta_{\rm H})\) is a metric space. Similar to the construction of the Hausdorff distance \(d_{\rm H}\), the definition of \(\Delta_{\rm H}\) can be extended to \(\mathbb{I}^{n}\) by understanding the above definition in the extended value sense. The following proposition shows that the triple \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\) is an information space. It is called the meta information space of \((\mathbb{K}^{n}_{\rm c},d_{\rm H},\cap)\).
**Proposition 3**: _The triple \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\) is an information space. It can itself be interpreted as a set-theoretic information space in the sense that we have_
\[\mathfrak{X}\sqcap\mathfrak{Y}\ =\ \mathfrak{X}\cap\mathfrak{Y} \tag{2}\]
_for all \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}^{n}\)._
**Proof.** The proof of this proposition is divided into two parts: the first part shows that (2) holds and the second part uses this result to conclude that \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\) is an information space.
_Part I._ Let \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}^{n}\) be given information ensembles. For any \(X\in\mathfrak{X}\cap\mathfrak{Y}\) the intersection relation
\[X\cap X=X\in\mathfrak{X}\cap\mathfrak{Y}\]
holds. But this implies that
\[\mathfrak{X}\cap\mathfrak{Y}\ \subseteq\ \{\ X\cap Y\ |\ X\in\mathfrak{X},\ Y \in\mathfrak{Y}\ \}\ =\ \mathfrak{X}\sqcap\mathfrak{Y}\.\]
In order to also establish the reverse inclusion, assume that \(Z\in\mathfrak{X}\sqcap\mathfrak{Y}\) is a given set. It can be written in the form \(Z=X\cap Y\) with \(X\in\mathfrak{X}\) and \(Y\in\mathfrak{Y}\). Clearly, we have \(Z\subseteq X\) and \(Z\subseteq Y\). Moreover, we have \(Z\in\mathbb{K}_{\rm c}^{n}\), since the intersection of compact sets is compact. Thus, since \(\mathfrak{X}\) and \(\mathfrak{Y}\) are information ensembles, (1) implies that \(Z\in\mathfrak{X}\) and \(Z\in\mathfrak{Y}\). But this is the same as saying that \(Z\in\mathfrak{X}\cap\mathfrak{Y}\), which implies \(\mathfrak{X}\cap\mathfrak{Y}\supseteq\mathfrak{X}\sqcap\mathfrak{Y}\). Together with the above reverse inclusion, this yields (2).
_Part II._ Note that \((\mathbb{I}^{n},\cap)\) is a semigroup, which follows from the definition of intersection operations. Moreover, \((\mathbb{I}^{n},\Delta_{\rm H})\) is, by construction, an extended metric space. Thus, \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\) is indeed an information space, as claimed by the statement of this proposition.\(\diamond\)
**Corollary 1**: _The triple \((\mathbb{I}_{\rm c}^{n},\Delta_{\rm H},\sqcap)\) is also an information space. It can be interpreted as a sub-meta information space of \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\)._
**Proof.** The statement of this corollary follows immediately from the previous proposition, since the intersection of compact sets is compact; that is, \((\mathbb{I}_{\rm c}^{n},\sqcap)\) is a subsemigroup of \((\mathbb{I}^{n},\sqcap)\).\(\diamond\)
The interpretation of \((\mathbb{I}^{n},\Delta_{\rm H},\sqcap)\) as a set-theoretic information space is further supported by the observation that this space is naturally compatible with continuous functions, too. Throughout this paper, the notation
\[f(\mathfrak{X})\ \stackrel{{\rm def}}{{=}}\ \{\ f(X)\ \mid\ X\in\mathfrak{X}\ \}\]
is used for any \(\mathfrak{X}\in\mathbb{I}^{n}\), recalling that \(f(X)\) denotes the compact image set of a continuous function \(f\) on a compact set \(X\in\mathbb{K}_{\rm c}^{n}\). Due to this continuity assumption on \(f\), closed information ensembles are mapped to closed information ensembles.
### Interpretation of Meta Learning Processes
Meta information spaces can be used to analyze the evolution of learning processes without having access to data. In order to discuss why this is so, a guiding example is introduced: let us consider a set-theoretic sensor, which returns at each time instance a compact information set \(X\in\mathbb{K}_{\rm c}^{1}\) containing the scalar state \(x\) of a physical system, \(x\in X\). If the absolute value of the measurement error of the sensor is bounded by \(1\), this means that \(X\subseteq[a,a+2]\) for at least one lower bound \(a\in\mathbb{R}\). The closed but unbounded information ensemble that is associated with such a sensor is given by
\[\mathfrak{Y}=\{\ X\in\mathbb{K}_{\rm c}^{1}\ \mid\ \exists a\in\mathbb{R}:\ X \subseteq[a,a+2]\ \}\in\mathbb{I}^{1}. \tag{3}\]
It can be interpreted as the set of all information sets that the sensor could return when taking a measurement.
Next, in order to illustrate how an associated meta learning process can be modeled, one needs to assume that prior information about the physical state \(x\) is available. For instance, if \(x\) is known to satisfy \(x\in[-3,3]\), this would mean that our prior is given by
\[\mathfrak{X}=\{\ X\in\mathbb{K}^{1}_{\rm c}\ \mid\ X\subseteq[-3,3]\ \}\;.\]
In such a situation, a meta learning process is--due to Proposition 3--described by an update of the form
\[\mathfrak{X}^{+}\ =\ \mathfrak{X}\sqcap\mathfrak{Y}\ =\ \mathfrak{X}\cap \mathfrak{Y}\;,\]
where \(\mathfrak{X}^{+}\) denotes the posterior,
\[\mathfrak{X}^{+}=\left\{\ X\in\mathbb{K}^{1}_{\rm c}\ \middle|\ \exists a\in\mathbb{R}:\ X\subseteq[\max\{a,-3\},\,2+\min\{1,a\}]\ \right\}\;.\]
It is computed without having access to any sensor data.
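For concreteness, the guiding example can be coded up in a few lines; the sketch below restricts all information sets to closed intervals and represents each ensemble by its largest member, which is a simplification introduced only for this illustration.

```python
import numpy as np

PRIOR = (-3.0, 3.0)       # prior ensemble: all compact X with X ⊆ [-3, 3]
SENSOR_HALFWIDTH = 1.0    # sensor ensemble: all compact X with X ⊆ [a, a + 2]

def largest_posterior_set(a, prior=PRIOR, hw=SENSOR_HALFWIDTH):
    """Largest posterior set for the sensor interval [a, a + 2*hw]:
    the intersection [max(a, prior_lo), min(a + 2*hw, prior_hi)]."""
    lo, hi = max(a, prior[0]), min(a + 2.0 * hw, prior[1])
    return (lo, hi) if lo <= hi else None     # None encodes the empty set

# Meta learning: enumerate possible posteriors without access to sensor data.
for a in np.linspace(-4.0, 3.0, 8):
    X_plus = largest_posterior_set(a)
    width = 0.0 if X_plus is None else X_plus[1] - X_plus[0]
    print(f"a = {a:+.1f}:  largest posterior set {X_plus},  width {width:.1f}")
```

Every posterior set obtained this way has width at most \(2\), in agreement with the intrinsic deviation bound worked out in Example 1 below.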
### Intrinsic Equivalence
Equivalence relations can be used to categorize compact information sets with respect to their geometric properties. In the following, we focus on a particular equivalence relation. Namely, we consider two sets \(X,Y\in\mathbb{K}^{n}_{\rm c}\) equivalent, writing \(X\simeq Y\), if they have the same shape, size, and orientation. This means that
\[X\simeq Y\qquad\Longleftrightarrow\qquad\exists a\in\mathbb{R}^{n}:\quad X +a=Y\;.\]
The motivation for introducing this particular equivalence relation is that two information sets \(X\) and \(Y\) can be considered equally informative if they coincide after a translation.
**Definition 1**: _Two information ensembles \(\mathfrak{X},\mathfrak{Y}\subseteq\mathbb{K}^{n}_{\rm c}\) are called intrinsically equivalent, \(\mathfrak{X}\sim\mathfrak{Y}\), if their quotient spaces coincide,_
\[(\mathfrak{X}/\simeq)\ =\ (\mathfrak{Y}/\simeq)\;. \tag{4}\]
The intrinsic equivalence relation \(\sim\) from the above definition is--as the name suggests--an equivalence relation. This follows from the fact that \(\mathfrak{X}\sim\mathfrak{Y}\) if and only if
\[\forall X\in\mathfrak{X}, \exists a\in\mathbb{R}^{n}:\ \ X+a \in \mathfrak{Y}\] \[\mbox{and}\qquad\forall Y\in\mathfrak{Y}, \exists b\in\mathbb{R}^{n}:\ \ Y+b \in \mathfrak{X}\;,\]
which, in turn, follows after substituting the above definition of \(\simeq\) in (4).
**Proposition 4**: _If \(\mathfrak{X},\mathfrak{Y}\subseteq\mathbb{I}_{\rm c}^{n}\) are intrinsically equivalent information ensembles, \(\mathfrak{X}\sim\mathfrak{Y}\), their closures are intrinsically equivalent, too,_
\[{\rm cl}(\mathfrak{X})\ \sim\ {\rm cl}(\mathfrak{Y})\;.\]
**Proof.** Proposition 1 ensures that the closures of \(\mathfrak{X}\) and \(\mathfrak{Y}\) are information ensembles, \({\rm cl}(\mathfrak{X})\in\mathbb{I}^{n}\) and \({\rm cl}(\mathfrak{Y})\in\mathbb{I}^{n}\). Next, there exists for every \(X_{\infty}\in{\rm cl}(\mathfrak{X})\) a convergent sequence of sets \(X_{1},X_{2},\ldots\in\mathfrak{X}\) such that
\[X_{\infty}=\lim_{k\to\infty}X_{k}\;.\]
Moreover, since \(\mathfrak{X}\sim\mathfrak{Y}\), there also exists a sequence \(a_{1},a_{2},\ldots\in\mathbb{R}^{n}\) such that the sequence
\[Y_{k}\ \stackrel{{\rm def}}{{=}}\ X_{k}+a_{k}\ \in\ \mathfrak{Y}\]
remains in \(\mathfrak{Y}\). Because \(\mathfrak{X}\) and \(\mathfrak{Y}\) are compact, the sequence of offsets \(a_{k}\) must be bounded. Thus, it has a convergent subsequence, \(a_{j_{1}},a_{j_{2}},\ldots\in\mathbb{R}^{n}\), with limit
\[a_{\infty}\ \stackrel{{\rm def}}{{=}}\ \lim_{k\to\infty}a_{j_{k}}\ \in\ \mathbb{R}^{n}\;.\]
This construction is such that
\[X_{\infty}+a_{\infty}\ =\ \lim_{k\to\infty}\ \{X_{j_{k}}+a_{j_{k}}\}\ \in\ {\rm cl}(\mathfrak{Y})\;.\]
A completely analogous statement holds after replacing the roles of \(\mathfrak{X}\) and \(\mathfrak{Y}\). Consequently, the closures of \(\mathfrak{X}\) and \(\mathfrak{Y}\) are intrinsically equivalent, which corresponds to the statement of the proposition. \(\diamond\)
### Extrinsic versus Intrinsic Information
Throughout this paper, it will be important to distinguish between extrinsic and intrinsic information. Here, the extrinsic information of an information ensemble is encoded by the union of its elements, namely, the extrinsic information set. It describes present information. The extrinsic information content of an information ensemble can be quantified by extrinsic information measures:
**Definition 2**: _An information measure \(f:\mathbb{I}_{\rm c}^{n}\to\mathbb{R}\) is called extrinsic, if there exists a function \(g:\mathbb{K}_{\rm c}^{n}\to\mathbb{R}\) with_
\[\forall\mathfrak{X}\in\mathbb{I}_{\rm c}^{n},\qquad f(\mathfrak{X})\ =\ g\left(\bigcup_{X\in\mathfrak{X}}X\right)\;.\]
In contrast to extrinsic information, the intrinsic information of an information ensemble \(\mathfrak{X}\) is encoded by its quotient space, \(\mathfrak{X}/\simeq\). It describes future information. In order to formalize this definition, it is helpful to introduce a shorthand for the meta quotient space
\[\mathbb{Q}^{n}_{\mathrm{c}}\ \stackrel{{\mathrm{def}}}{{=}}\ \mathbb{I}^{n}_{\mathrm{c}}/\sim\;.\]
In analogy to Definition 2, the intrinsic information of an information ensemble can be quantified by intrinsic information measures:
**Definition 3**: _An information measure \(f:\mathbb{I}^{n}_{\mathrm{c}}\to\mathbb{R}\) is called intrinsic, if there exists a function \(g:\mathbb{Q}^{n}_{\mathrm{c}}\to\mathbb{R}\) with_

\[\forall\mathfrak{X}\in\mathbb{I}^{n}_{\mathrm{c}},\qquad f(\mathfrak{X})\ =\ g(\mathfrak{X}/\simeq)\;.\]
In order to develop a stronger intuition about the difference between extrinsic and intrinsic information measures, it is helpful to extend the definitions of the expectation and deviation functions \(E\) and \(D\) from the original information space setting in Section 2.2. These original definitions can be lifted to the meta information space setting by introducing their associated extrinsic expectation \(\mathfrak{E}\) and extrinsic deviation \(\mathfrak{D}\), given by
\[\mathfrak{E}(\mathfrak{X})\ \stackrel{{\mathrm{def}}}{{=}}\ E \left(\bigcup_{X\in\mathfrak{X}}X\right)\quad\text{and}\quad\mathfrak{D}( \mathfrak{X})\ \stackrel{{\mathrm{def}}}{{=}}\ D\left(\bigcup_{X\in \mathfrak{X}}X\right)\]
for all \(\mathfrak{X}\in\mathbb{I}^{n}_{\mathrm{c}}\). Note that \(\mathfrak{E}\) and \(\mathfrak{D}\) are continuous functions, which inherit the properties of \(E\) and \(D\). Namely, the relation
\[\mathfrak{E}(A\mathfrak{X}+b)=A\mathfrak{E}(\mathfrak{X})+b\]
holds. Similarly, \(\mathfrak{D}\) satisfies all axioms of a deviation measure in the sense that
1. \(\mathfrak{D}(\mathfrak{X})\geq 0\),
2. \(\mathfrak{D}(\mathfrak{X})=0\) if and only if \(\mathfrak{X}=\{\{\mathfrak{E}(\mathfrak{X})\}\}\),
3. \(\mathfrak{D}(\mathfrak{X})=\mathfrak{D}(\mathfrak{X}-\mathfrak{E}(\mathfrak{ X}))\), and
4. \(\mathfrak{D}(\mathfrak{X}\sqcap\mathfrak{Y})\leq\mathfrak{D}(\mathfrak{X})\),
for all \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}^{n}_{\mathrm{c}}\). Note that such extrinsic deviation measures need to be distinguished carefully from intrinsic deviation measures. Here, a function \(\mathfrak{D}^{\circ}:\mathbb{I}^{n}_{\mathrm{c}}\to\mathbb{R}\) is called an intrinsic deviation measure if it is a continuous and intrinsic function that satisfies
1. \(\mathfrak{D}^{\circ}(\mathfrak{X})\geq 0\),
2. \(\mathfrak{D}^{\circ}(\mathfrak{X})=0\) if and only if \(\mathfrak{X}\sim\{\{\mathfrak{E}(\mathfrak{X})\}\}\),
3. \(\mathfrak{D}^{\circ}(\mathfrak{X})=\mathfrak{D}^{\circ}(\mathfrak{X}- \mathfrak{E}(\mathfrak{X}))\), and
4. \(\mathfrak{D}^{\circ}(\mathfrak{X}\sqcap\mathfrak{Y})\leq\mathfrak{D}^{\circ} (\mathfrak{X})\),
for all \(\mathfrak{X},\mathfrak{Y}\in\mathbb{I}_{\mathrm{c}}^{n}\). The second axiom is equivalent to requiring that \(\mathfrak{D}^{\circ}\) is positive definite on the quotient space \(\mathbb{Q}_{\mathrm{c}}^{n}\). In order to have a practical example in mind, we introduce the particular function
\[\forall\mathfrak{X}\in\mathbb{I}_{\mathrm{c}}^{n},\qquad\mathfrak{D}^{\circ} _{\infty}(\mathfrak{X})\ =\ \max_{X\in\mathfrak{X}}\ \max_{x,y\in X}\ \|x-y\|, \tag{5}\]
which turns out to be an intrinsic information measure, as pointed out by the following lemma.
**Lemma 1**: _The function \(\mathfrak{D}^{\circ}_{\infty}\), defined by (5), is an intrinsic deviation measure on \(\mathbb{I}_{\mathrm{c}}^{n}\)._
**Proof.** Let \(\mathfrak{X}\in\mathbb{I}_{\mathrm{c}}^{n}\) be a given information ensemble and let \(X^{\star}\) be a maximizer of (5), such that
\[\mathfrak{D}^{\circ}_{\infty}(\mathfrak{X})\ =\ \mathrm{diam}(X^{\star})\ =\ \max_{x,y\in X^{\star}}\ \|x-y\|\;.\]
If \(\mathfrak{Y}\in\mathbb{I}_{\mathrm{c}}^{n}\) is an intrinsically equivalent ensemble with \(\mathfrak{X}\sim\mathfrak{Y}\), then there exists an offset vector \(a^{\star}\in\mathbb{R}^{n}\) such that \(X^{\star}+a^{\star}\in\mathfrak{Y}\). Thus, we have
\[\mathfrak{D}^{\circ}_{\infty}(\mathfrak{Y}) = \max_{Y\in\mathfrak{Y}}\ \mathrm{diam}(Y)\ \geq\ \mathrm{diam}(X^{\star}+a^{\star})\] \[= \mathrm{diam}(X^{\star}+a^{\star}-E(X^{\star}+a^{\star}))\] \[= \mathrm{diam}(X^{\star}-E(X^{\star}))\] \[= \mathrm{diam}(X^{\star})=\mathfrak{D}^{\circ}_{\infty}(\mathfrak{X})\;,\]
where the equations in the second, third, and fourth line follow by using the axioms of \(D\) and \(E\) from Section 2.2. The corresponding reverse inequality follows by using an analogous argument exchanging the roles of \(\mathfrak{X}\) and \(\mathfrak{Y}\). Thus, we have \(\mathfrak{D}^{\circ}_{\infty}(\mathfrak{X})=\mathfrak{D}^{\circ}_{\infty}( \mathfrak{Y})\). This shows that \(\mathfrak{D}^{\circ}_{\infty}\) is an intrinsic information measure. The remaining required properties of \(\mathfrak{D}^{\circ}_{\infty}\) are directly inherited from the diameter function, recalling that the diameter is a continuous deviation function that satisfies the corresponding axioms from Section 2.2. This yields the statement of the lemma. \(\diamond\)
**Example 1**: _Let us revisit the tutorial example from Section 3.4, where we had considered the case that_
\[\mathfrak{X} =\{\ X\in\mathbb{K}^{1}_{\rm c}\ \mid\ X\subseteq[-3,3]\ \}\qquad\text{and}\] \[\mathfrak{X}^{+} =\left\{\ X\in\mathbb{K}^{1}_{\rm c}\ \middle|\ \begin{array}{l}\exists a\in \mathbb{R}:\\ X\subseteq[\max\{a,-3\},2+\min\{1,a\}]\end{array}\right\}\]
_denote the prior and posterior of a data-free meta learning process. If we set \(D(X)={\rm diam}(X)\) and define \(\mathfrak{D}\) and \(\mathfrak{D}^{\circ}_{\infty}\) as above, then_
\[\mathfrak{D}(\mathfrak{X})\ =\ \mathfrak{D}(\mathfrak{X}^{+})\ =\ 6\;.\]
_An interpretation of this equation can be formulated as follows: since our meta learning process is not based on actual data, the extrinsic information content of the prior \(\mathfrak{X}\) and the posterior \(\mathfrak{X}^{+}\) must be the same, which implies that their extrinsic deviations must coincide. This is in contrast to the intrinsic deviation measure,_
\[\mathfrak{D}^{\circ}_{\infty}(\mathfrak{X})\ =\ 6\ >\ 2\ =\ \mathfrak{D}^{\circ}_{\infty}( \mathfrak{X}^{+}),\]
_which predicts that no matter what our next measurement will be, the diameter of our posterior information set will be at most \(2\)._
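The numbers in Example 1 can be verified numerically; the short sketch below reuses the interval representation from the sensor example in Section 3.4 and approximates the ensembles by sampling the offset \(a\) on a fine grid.

```python
import numpy as np

# Largest posterior set for each sensor offset a (empty intersections skipped).
posteriors = []
for a in np.linspace(-6.0, 4.0, 2001):
    lo, hi = max(a, -3.0), min(a + 2.0, 3.0)
    if lo <= hi:
        posteriors.append((lo, hi))

# Extrinsic deviation: diameter of the union of all posterior sets.
extrinsic = max(hi for _, hi in posteriors) - min(lo for lo, _ in posteriors)

# Intrinsic deviation: largest diameter among the posterior sets.
intrinsic = max(hi - lo for lo, hi in posteriors)

print(extrinsic, intrinsic)     # 6.0 and (approximately) 2.0
```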
## 4 Intrinsic Separation Principle
The goal of this section is to formulate an intrinsic separation principle for constrained linear systems.
### Constrained Linear Systems
The following considerations concern uncertain linear discrete-time control systems of the form
\[x_{k+1} = Ax_{k}+Bu_{k}+w_{k} \tag{6}\] \[\eta_{k} = Cx_{k}+v_{k}\.\]
Here, \(x_{k}\in\mathbb{R}^{n}\) denotes the state, \(u_{k}\in\mathbb{U}\) the control, \(w_{k}\in\mathbb{W}\) the disturbance, \(\eta_{k}\in\mathbb{R}^{n_{v}}\) the measurement, and \(v_{k}\in\mathbb{V}\) the measurement error at time \(k\in\mathbb{Z}\). The system matrices \(A\), \(B\), and \(C\) as well as the state, control, disturbance, and measurement error constraint sets, \(\mathbb{X}\in\mathbb{K}^{n}\), \(\mathbb{U}\in\mathbb{K}^{n_{u}}_{\rm c}\), \(\mathbb{W}\in\mathbb{K}^{n}_{\rm c}\), and \(\mathbb{V}\in\mathbb{K}^{n_{v}}_{\rm c}\), are assumed to be given.
### Information Tubes
The sensor that measures the outputs \(Cx_{k}\) of (6) can be represented by the information ensemble
\[\mathfrak{V}\ \stackrel{{\rm def}}{{=}}\ \left\{\,X\in\mathbb{K}_{ \rm c}^{n}\ |\ \exists\eta\in\mathbb{R}^{n_{v}}\!:\eta-CX\subseteq\mathbb{V}\,\right\}. \tag{7}\]
Since \(\mathbb{V}\) is compact, \(\mathfrak{V}\) is closed but potentially unbounded, \(\mathfrak{V}\in\mathbb{I}^{n}\). If \(\mathfrak{X}\in\mathbb{I}^{n}\) denotes a prior information ensemble of the state of (6), an associated posterior is given by \(\mathfrak{X}\sqcap\mathfrak{V}\). This motivates introducing the function
\[F(\mathfrak{X},\mu)\ \stackrel{{\rm def}}{{=}}\ \left\{X^{+}\in\mathbb{K}_{\rm c}^{n}\ \middle|\ \begin{array}{l}\exists X\in\mathfrak{X}\sqcap\mathfrak{V}:\\ X^{+}\subseteq AX+B\mu(X)+\mathbb{W}\end{array}\right\},\]
which is defined for all \(\mathfrak{X}\in\mathbb{I}^{n}\) and all control laws \(\mu\!:\!\mathbb{K}_{\rm c}^{n}\!\to\!\mathbb{U}\) that map the system's posterior information state to a feasible control input. Let \(\mathcal{U}\) denote the set of all such maps from \(\mathbb{K}_{\rm c}^{n}\) to \(\mathbb{U}\). It is equipped with the supremum norm,

\[\|\mu\|\ \stackrel{{\rm def}}{{=}}\ \sup_{X\in\mathbb{K}_{\rm c}^{n}}\ \|\mu(X)\|\;,\]
such that \((\mathcal{U},\|\cdot\|)\) is a Banach space. As \(\mu\in\mathcal{U}\) is potentially discontinuous, \(F(\mathfrak{X},\mu)\) is not necessarily closed. Instead, the following statement holds.
**Proposition 5**: _If \(\mathfrak{X}\), \(\mathbb{U}\), \(\mathbb{V}\), and \(\mathbb{W}\) are closed, then the closure of the set \(F(\mathfrak{X},\mu)\) is for every given \(\mu\in\mathcal{U}\) a closed information ensemble,_
\[\overline{F}(\mathfrak{X},\mu)\ \stackrel{{\rm def}}{{=}}\ \mathrm{cl}(\ F(\mathfrak{X},\mu)\ )\in\mathbb{I}^{n}\;.\]
**Proof.** The statement of this proposition follows from Proposition 1 and the above definition of \(F\). \(\diamond\)
The functions \(F\) and \(\overline{F}\) are the basis for the following definitions.
**Definition 4**: _An information ensemble \(\mathfrak{X}_{\rm s}\in\mathbb{I}^{n}\) is called control invariant for (6) if there exists a \(\mu_{\rm s}\in\mathcal{U}\) such that_
\[\mathfrak{X}_{\rm s}\supseteq F(\mathfrak{X}_{\rm s},\mu_{\rm s})\;.\]
**Definition 5**: _A sequence \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots\in\mathbb{I}^{n}\) of information ensembles is called an information tube for (6) if there exists a sequence \(\mu_{0},\mu_{1},\ldots\in\mathcal{U}\) such that_
\[\forall k\in\mathbb{N},\quad\mathfrak{X}_{k+1}\supseteq F(\mathfrak{X}_{k},\mu _{k})\;.\]
**Definition 6**: _An information tube \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots\in\mathbb{I}^{n}\) is called tight if it satisfies_
\[\forall k\in\mathbb{N},\qquad\mathfrak{X}_{k+1}=\overline{F}(\mathfrak{X}_{k}, \mu_{k})\]
_for at least one control policy sequence \(\mu_{k}\in\mathcal{U}\)._
### Intrinsic Separation
The following theorem establishes the fact that the intrinsic equivalence class of tight information tubes does not depend on the control policy sequence.
**Theorem 1**: _Let \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots\in\mathbb{I}_{\mathrm{c}}^{n}\) and \(\mathfrak{Y}_{0},\mathfrak{Y}_{1},\ldots\in\mathbb{I}_{\mathrm{c}}^{n}\) be tight information tubes with compact elements. If the initial information ensembles are intrinsically equivalent, \(\mathfrak{X}_{0}\sim\mathfrak{Y}_{0}\), then all information ensembles are intrinsically equivalent; that is, \(\mathfrak{X}_{k}\sim\mathfrak{Y}_{k}\) for all \(k\in\mathbb{N}\)._
**Proof.** Because \(\mathfrak{X}\) and \(\mathfrak{Y}\) are tight information tubes, there exist control policies \(\mu_{k}:\mathfrak{X}_{k}\cap\mathfrak{V}\to\mathbb{U}\) and \(\nu_{k}:\mathfrak{Y}_{k}\cap\mathfrak{V}\to\mathbb{U}\) such that
\[\mathfrak{X}_{k+1}=\overline{F}(\mathfrak{X}_{k},\mu_{k})\quad\text{and}\quad \mathfrak{Y}_{k+1}=\overline{F}(\mathfrak{Y}_{k},\nu_{k}) \tag{8}\]
for all \(k\in\mathbb{N}\). Next, the statement of the theorem can be proven by induction over \(k\): since we assume \(\mathfrak{X}_{0}\sim\mathfrak{Y}_{0}\), this assumption can be used directly as the induction start. Next, if \(\mathfrak{X}_{k}\sim\mathfrak{Y}_{k}\), there exists for every \(X_{k}\in\mathfrak{X}_{k}\cap\mathfrak{V}\) an offset vector \(a_{k}\in\mathbb{R}^{n}\) such that \(Y_{k}=X_{k}+a_{k}\in\mathfrak{Y}_{k}\). Because \(\mathfrak{V}\) satisfies

\[\forall a\in\mathbb{R}^{n},\ \forall V\in\mathfrak{V},\qquad V+a\in\mathfrak{V},\]

it follows that \(Y_{k}=X_{k}+a_{k}\in\mathfrak{Y}_{k}\cap\mathfrak{V}\). Consequently, a relation of the form
\[AX_{k}+B\mu_{k}(X_{k})+\mathbb{W} = AY_{k}+(B\mu_{k}(X_{k})-Aa_{k})+\mathbb{W}\] \[= AY_{k}+B\nu_{k}(Y_{k})+\mathbb{W}-a_{k+1},\]
can be established, where the next offset vector, \(a_{k+1}\), is given by
\[a_{k+1}\ \stackrel{{\mathrm{def}}}{{=}}\ Aa_{k}+B\nu_{k}(Y_{k})-B\mu_{k}(X_{k})\ \in\ \mathbb{R}^{n}\;.\]
Note that a completely symmetric relation holds after exchanging the roles of \(\mathfrak{X}_{k}\) and \(\mathfrak{Y}_{k}\). In summary, it follows that an implication of the form
\[\mathfrak{X}_{k}\sim\mathfrak{Y}_{k}\qquad\Longrightarrow\qquad F(\mathfrak{X }_{k},\mu_{k})\sim F(\mathfrak{Y}_{k},\nu_{k})\]
holds. An application of Proposition 4 to the latter equivalence relation yields the desired induction step. This completes the proof of the theorem.\(\diamond\)
The above theorem allows us to formulate an intrinsic separation principle. Namely, Theorem 1 implies that the predicted future information content of a tight information tube does not depend on the choice of the control policy sequence with which it is generated. In particular, the tight information tubes from (8) satisfy
\[\forall k\in\mathbb{N},\qquad\mathfrak{D}^{\circ}(\mathfrak{X}_{k})=\mathfrak{ D}^{\circ}(\mathfrak{Y}_{k})\]
for any intrinsic information measure \(\mathfrak{D}^{\circ}\). Note that this property is independent of the choice of the control policy sequences \(\mu_{k}\) and \(\nu_{k}\) that are used to generate these tubes.
### Control Invariance
As mentioned in the introduction, different notions of invariance under output-feedback control have been analyzed by various authors [1, 2]. This section briefly discusses how a similar result can be recovered by using the proposed meta learning based framework. For this aim, we assume that
1. the sets \(\mathbb{V}\in\mathbb{K}_{\mathrm{c}}^{n_{v}}\) and \(\mathbb{W}\in\mathbb{K}_{\mathrm{c}}^{n_{w}}\) are compact,
2. the set \(\mathbb{U}\in\mathbb{K}^{n_{u}}\) is closed and convex,
3. the pair \((A,C)\) is observable, and
4. \((A,B,\mathbb{U},\mathbb{W})\) admits a robust control invariant set.
The first two assumptions are standard. The third assumption on the observability of \((A,C)\) could also be replaced by a weaker detectability condition. However, since one can always use a Kalman decomposition to analyze the system's invariant subspaces separately [20], it is sufficient to focus on observable systems. And, finally, the fourth assumption is equivalent to requiring the existence of a state-feedback law \(\overline{\mu}:\mathbb{R}^{n}\to\mathbb{U}\) and a set \(\overline{X}\in\mathbb{K}_{\mathrm{c}}^{n}\) such that
\[\forall x\in\overline{X},\ \forall w\in\mathbb{W},\qquad Ax+B\overline{\mu}(x)+w \in\overline{X}\,,\]
which is clearly necessary: if we cannot even keep the system inside a bounded region by relying on exact state measurements, there is no hope that we can do so without such exact data.
**Lemma 2**: _If the above four assumptions hold, (6) admits a compact control invariant information ensemble._
**Proof.** The proof of this lemma is divided into two parts, which aim at constructing an information tube that converges to a control invariant information ensemble.
_Part I._ The goal of the first part is to show, by induction over \(k\), that the recursion
\[\forall k\in\mathbb{N},\qquad\mathfrak{X}_{k+1}^{\circ}\ \stackrel{{ \rm def}}{{=}}\ A(\mathfrak{X}_{k}^{\circ}\cap\mathfrak{V}),\quad \mathfrak{X}_{0}^{\circ}\ \stackrel{{\rm def}}{{=}}\ \mathbb{K}_{ \rm c}^{n}\]
is monotonically decreasing with respect to set inclusion. Since \(\mathfrak{X}_{0}^{\circ}=\mathbb{K}_{\rm c}^{n}\), \(\mathfrak{X}_{1}^{\circ}\subseteq\mathfrak{X}_{0}^{\circ}\) holds. This is the induction start. Next, if \(\mathfrak{X}_{k+1}^{\circ}\subseteq\mathfrak{X}_{k}^{\circ}\) holds for a given integer \(k\geq 0\), it follows that
\[\mathfrak{X}_{k+2}^{\circ}\ =\ A(\mathfrak{X}_{k+1}^{\circ}\cap\mathfrak{V}) \ \subseteq\ A(\mathfrak{X}_{k}^{\circ}\cap\mathfrak{V})=\mathfrak{X}_{k+1}^{ \circ}\;, \tag{9}\]
where the inclusion in the middle follows directly by substituting the induction assumption. In summary, the monotonicity relation \(\mathfrak{X}_{k+1}^{\circ}\subseteq\mathfrak{X}_{k}^{\circ}\) holds for all \(k\in\mathbb{N}\).
_Part II._ The goal of the second part is to show that the sequence
\[\mathfrak{X}_{k}\ \stackrel{{\rm def}}{{=}}\ \left\{\ X-E(X)+ \overline{x}\ \big{|}\ X\in\mathfrak{X}_{k}^{\circ},\ \overline{x}\in{\rm cvx}(\overline{X})\ \right\}, \tag{10}\]
converges to an invariant information ensemble. Here, \({\rm cvx}(\overline{X})\) denotes the convex hull of the robust control invariant set \(\overline{X}\). Because we assume that \(\mathbb{U}\) is convex, \({\rm cvx}(\overline{X})\) is robust control invariant, too. This means that there exists a \(\overline{\mu}:\mathbb{R}^{n}\to\mathbb{U}\) such that
\[\forall x\in{\rm cvx}(\overline{X}),\forall w\in\mathbb{W},\quad Ax+B\overline{ \mu}(x)+w\in{\rm cvx}(\overline{X})\;.\]
Since \(E\) satisfies \(E(X)\in{\rm cvx}(X)\) for all \(X\in\mathbb{K}_{\rm c}^{n}\), (10) and the definitions of \(\mathfrak{X}_{k}\) and \(\mathfrak{V}\) imply that
\[\begin{array}{llll}&\forall X\in\mathfrak{X}_{k}\cap\mathfrak{V},&E(X)&\in {\rm cvx}(\overline{X}),\\ &\forall X\in\mathfrak{X}_{k},&X-E(X)&\in\mathfrak{X}_{k}^{\circ}\\ \mbox{and}&\forall X\in\mathfrak{V},&X-E(X)&\in\mathfrak{V}\end{array}\]
for all \(k\in\mathbb{N}\). Thus, the state estimation based auxiliary feedback law
\[\forall X\in\mathbb{K}_{\rm c}^{n},\qquad\mu(X)\ \stackrel{{\rm def}}{{=}}\ \overline{\mu}(E(X)) \tag{11}\]
ensures that the recursive feasibility condition
\[AX+B\mu(X)+\mathbb{W}\] \[=\ A(X-E(X))+\underbrace{AE(X)+B\overline{\mu}(E(X))+\mathbb{W} }_{\subseteq\,{\rm cvx}(\overline{X})}\in\mathfrak{X}_{k+1}\]
holds for all \(X\in\mathfrak{X}_{k}\cap\mathfrak{V}\). Consequently, the auxiliary sequence \(\mathfrak{X}_{k}\) is a monotonous information tube,
\[\forall k\in\mathbb{N},\qquad\mathfrak{X}_{k}\ \supseteq\ \mathfrak{X}_{k+1}\ \supseteq\ F(\mathfrak{X}_{k},\mu)\;,\]
where monotonicity follows from (9) and the considerations from Part I. Moreover, since \((A,C)\) is observable, \(\mathfrak{X}_{k}\) is compact for all \(k\geq n-1\). In summary, \(\mathfrak{X}_{k}\) is a monotonously decreasing sequence of information ensembles, which--due to the monotone convergence theorem--converges to a compact control invariant information ensemble,
\[\mathfrak{X}_{\infty}\ =\ \lim_{k\to\infty}\ \mathfrak{X}_{k}\ \in\ \mathbb{I}_{\rm c}^{n}\quad\mbox{ and }\quad F(\mathfrak{X}_{\infty},\mu)\ \subseteq\ \mathfrak{X}_{\infty}\;.\]
This corresponds to the statement of the lemma. \(\diamond\)
**Remark 3**: _The purpose of Lemma 2 is to elaborate on the relation between control invariant information ensembles and existing notions in linear control theory--such as observability and robust stabilizability. Lemma 2 does, however, not make statements about feasibility: the state constraint set \(\mathbb{X}\) is not taken into account. Moreover, the construction of the feedback law \(\mu\) in (11) is based on the vector-valued state estimate \(E(X)\) rather than the information state \(X\), which is, in general, sub-optimal. Note that these problems regarding feasibility and optimality are resolved in the following section by introducing optimal dual control laws._
## 5 Dual Control
This section is about dual control problems for constrained linear systems. It is discussed under which assumptions such problems can be separated into a meta learning and a robust control problem.
### Receding Horizon Control
Dual control problems can be implemented in analogy to traditional model predictive control (MPC) methods. Here, one solves the online optimization problem
\[J(X_{0})\ =\ \inf_{\mathfrak{X},\mu}\ \sum_{k=0}^{N-1}L(\mathfrak{X}_{k},\mu_{k})+M(\mathfrak{X}_{N})\quad\text{s.t.}\quad\left\{\begin{array}{l}\forall k\in\{0,1,\ldots,N-1\},\\ F(\mathfrak{X}_{k},\mu_{k})\subseteq\mathfrak{X}_{k+1},\ \ X_{0}\in\mathfrak{X}_{0},\\ \mu_{k}\in\mathcal{U},\\ \forall X_{k}\in\mathfrak{X}_{k},\ \ X_{k}\subseteq\mathbb{X}\end{array}\right. \tag{12}\]
on a finite time horizon \(\{0,1,\ldots,N\}\), where \(0\) is the current time. The optimization variables are the feedback policies \(\mu_{0},\mu_{1},\ldots,\mu_{N-1}\in\mathcal{U}\) and their associated information tube, \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots,\mathfrak{X}_{N}\in\mathbb{I}_{\rm c}^ {n}\). In the most general setting, the stage and terminal cost functions,
\[L:\mathbb{I}_{\rm c}^{n}\times\mathcal{U}\to\mathbb{R}\qquad\text{and}\qquad M :\mathbb{I}_{\rm c}^{n}\to\mathbb{R},\]
are assumed to be lower semi-continuous, although some of the analysis results below will be based on stronger assumptions. We recall that \(\mathbb{X}\) denotes the closed state constraint set. The parameter \(X_{0}\in\mathbb{K}_{\rm c}^{n}\) corresponds to the current information set. It is updated twice per sampling time by repeating the following steps online:
* Wait for the next measurement \(\eta\).
* Update the information set, \[X_{0}\gets X_{0}\cap\{x\in\mathbb{R}^{n}\mid\eta-Cx\in\mathbb{V}\}\;.\]
* Solve (12) and denote the first element of the optimal feedback sequence by \(\mu_{0}^{\star}\in\mathcal{U}\).
* Send \(u^{\star}=\mu_{0}^{\star}(X_{0})\) to the real process.
* Propagate the information set, \[X_{0}\gets AX_{0}+Bu^{\star}+\mathbb{W}\;.\]
* Set the current time to \(0\) and continue with Step i).
Note that Step iii) assumes that the "inf" operator in (12) can be replaced by a "min" and that an associated optimal feedback policy exists. Conditions under which this can be guaranteed are discussed in Section 5.4.
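The following Python fragment sketches the online loop in Steps i)-vi) for a scalar system with interval-valued information sets. The function `solve_dual_ocp` merely stands in for a solver of problem (12) and is replaced here by a certainty-equivalent rule; all numerical values are illustrative assumptions rather than data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar data (not taken from the text).
A, B, C = 1.0, 1.0, 1.0
W = (-0.05, 0.05)        # process-noise set
V = (-0.10, 0.10)        # measurement-error set

def solve_dual_ocp(X0):
    """Placeholder for problem (12): returns the first feedback policy.
    Here: a certainty-equivalent regulator acting on the interval midpoint."""
    return lambda X: -0.5 * (X[0] + X[1]) / 2.0

x_true = 1.3             # true state, unknown to the controller
X0 = (-3.0, 3.0)         # current information set

for k in range(5):
    eta = C * x_true + rng.uniform(*V)                         # step i)
    X0 = (max(X0[0], (eta - V[1]) / C),                        # step ii)
          min(X0[1], (eta - V[0]) / C))
    u = solve_dual_ocp(X0)(X0)                                 # steps iii)-iv)
    x_true = A * x_true + B * u + rng.uniform(*W)              # simulated plant
    X0 = (A * X0[0] + B * u + W[0], A * X0[1] + B * u + W[1])  # step v)
    print(f"k={k}:  u={u:+.3f},  information set [{X0[0]:+.3f}, {X0[1]:+.3f}]")
```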
### Objectives
Tube model predictive control formulations [30, 37, 38] use risk measures as stage cost functions. In principle, any lower semi-continuous function of the form
\[R:\mathbb{K}_{\mathrm{c}}^{n}\to\mathbb{R}\cup\{\infty\},\]
can be regarded as such a risk measure, although one would usually require that the monotonicity condition
\[X\ \subseteq\ Y\qquad\Longrightarrow\qquad R(X)\ \leq\ R(Y) \tag{13}\]
holds for all \(X,Y\in\mathbb{K}_{\mathrm{c}}^{n}\). Similarly, this paper proposes to call \(\mathfrak{R}:\mathbb{I}_{\mathrm{c}}^{n}\to\mathbb{R}\cup\{\infty\}\) an extrinsic risk measure if
\[\mathfrak{R}(\mathfrak{X})\ =\ R\left(\ \bigcup_{X\in\mathfrak{X}}X\ \right) \tag{14}\]
for a lower semi-continuous function \(R\) that satisfies (13).
**Remark 4**: _Problem (12) enforces state constraints explicitly. Alternatively, one can move them to the objective by introducing the indicator function \(I_{\mathbb{X}}\) of the state constraint set \(\mathbb{X}\). Because we have_

\[(\ \forall X_{k}\in\mathfrak{X}_{k},\ \ X_{k}\subseteq\mathbb{X}\ )\quad\Longleftrightarrow\quad I_{\mathbb{X}}\left(\bigcup_{X\in\mathfrak{X}_{k}}X\right)<\infty,\]

_enforcing state constraints is equivalent to adding an extrinsic risk measure to the stage cost; here with \(R=I_{\mathbb{X}}\)._
By using the language of this paper, the traditional objective of dual control [11] is to trade off between extrinsic risk and intrinsic deviation. This motivates considering stage cost functions of the form
\[L(\mathfrak{X},\mu)\ =\ \mathfrak{R}(\mathfrak{X})+\tau\cdot\mathfrak{D}^{ \circ}(\mathfrak{X}). \tag{15}\]
Here, \(\mathfrak{R}\) denotes a lower semi-continuous extrinsic risk measure and \(\mathfrak{D}^{\circ}\) a lower semi-continuous intrinsic information measure. For general nonlinear systems, the parameter \(\tau>0\) can be used to tradeoff between risk and deviation. In the context of constrained linear systems, however, such a tradeoff is superfluous, as formally proven in the sections below.
**Remark 5**: _The stage cost function (15) can be augmented by a control penalty. For example, one could set_
\[L(\mathfrak{X},\mu)\ =\ \mathfrak{R}(\mathfrak{X})+\tau\cdot\mathfrak{D}^{\circ}( \mathfrak{X})+\mathfrak{C}(\mu)\;, \tag{16}\]
_where \(\mathfrak{C}:\mathcal{U}\to\mathbb{R}\) models a lower semi-continuous control cost. This additional term does, however, not change the fact that the parameter \(\tau\) does not affect the optimal solution of (12). Details about how to construct \(\mathfrak{C}\) in practice will be discussed later on in this paper, see Section 6._
### Separation of Meta-Learning and Robust Control
The goal of this section is to show that one can break the dual control problem (12) into an intrinsic meta learning problem and an extrinsic robust control problem. We assume that
1. the stage cost function \(L\) has the form (15),
2. the function \(\mathfrak{R}\) is an extrinsic risk measure,
3. the function \(\mathfrak{D}^{\circ}\) is intrinsic and \(\tau\geq 0\), and
4. the function \(M\) is extrinsic and monotonous, \[\mathfrak{X}\subseteq\mathfrak{Y}\qquad\Longrightarrow\qquad M(\mathfrak{X} )\leq M(\mathfrak{Y})\;.\]
In this context, the meta learning problem consists of computing a constant information tube that is found by evaluating the recursion
\[\begin{array}{ccc}\forall k\in\mathbb{N},&\mathfrak{Y}_{k+1}&\stackrel{{ \rm def}}{{=}}&\overline{F}(\mathfrak{Y}_{k},\nu_{k})\\ \mbox{with}&\mathfrak{Y}_{0}&\stackrel{{\rm def}}{{=}}&\{X\in \mathbb{K}_{\rm c}^{n}\mid X\subseteq X_{0}\},\end{array} \tag{17}\]
for a constant sequence \(\nu_{0},\nu_{1},\ldots\in\mathcal{U}\). For simplicity of presentation, we assume \(0\in\mathbb{U}\) such that we can set \(\nu_{k}(X)=0\) without loss of generality. Due to Theorem 1, \(L\) satisfies
\[L(\mathfrak{X}_{k},\mu_{k})\ =\ \mathfrak{R}(\mathfrak{X}_{k})+\tau\cdot\mathfrak{D}^{\circ}(\mathfrak{Y}_{k})\]
along any optimal tube of (12). Consequently, (12) reduces to a robust control problem in the sense that all objective and constraint functions are extrinsic, while the shapes, sizes and orientations of the sets of the optimal information tube are constants, given by (17).
In summary, the contribution of intrinsic information to the objective value of (12), denoted by
\[J_{\rm I}(X_{0})\ \stackrel{{\rm def}}{{=}}\ \tau\cdot\sum_{k=0}^{N-1} \mathfrak{D}^{\circ}(\mathfrak{Y}_{k}),\]
depends on \(X_{0}\) but it does not depend on the choice of the control law. It can be separated from the contribution of extrinsic objective terms, as elaborated below.
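In the scalar interval setting used in the examples above, the recursion (17) and the intrinsic cost contribution \(J_{\rm I}(X_{0})\) can be precomputed explicitly, because under this simplification the diameters of the extreme sets of \(\mathfrak{Y}_{k}\) obey a simple scalar recursion; the sketch below is specific to that simplification and involves no control policy at all.

```python
# Illustrative scalar data, matching the scalar example used above.
A, C = 1.0, 1.0
diam_W, diam_V = 0.10, 0.20        # diameters of the noise sets W and V
diam_X0 = 6.0                      # diameter of the current information set X0
tau, N = 1.0, 10

# Diameters of the extreme sets of the precomputed tube Y_0, Y_1, ..., Y_N.
d = [diam_X0]
for _ in range(N):
    post = min(d[-1], diam_V / abs(C))   # measurement update at the meta level
    d.append(abs(A) * post + diam_W)     # propagation through the dynamics

J_I = tau * sum(d[:N])                   # intrinsic cost contribution
print(d)       # [6.0, 0.3, 0.3, ...] -- independent of the control policy
print(J_I)     # 8.7
```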
### Existence of Solutions
In order to discuss how one can--after evaluating the meta-learning recursion (17)--rewrite (12) in the form of an extrinsic robust control problem, a change of variables is introduced. Let \(\mathcal{B}_{k}\) denote the set of bounded functions of the form \(c_{k}:\mathfrak{Y}_{k}\to\mathbb{R}^{n}\). It is a Banach space with respect to its supremum norm
\[\|c_{k}\|\ \stackrel{{\rm def}}{{=}}\ \sup_{X\in\mathfrak{Y}_{k}}\|c_{k}(X)\|.\]
Due to Theorem 1, any tight information tube \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots\in\mathbb{I}_{\rm c}^{n}\), started at \(\mathfrak{X}_{0}=\mathfrak{Y}_{0}\), is intrinsically equivalent to the precomputed tube \(\mathfrak{Y}_{0},\mathfrak{Y}_{1},\ldots\in\mathbb{I}_{\rm c}^{n}\) and can be written in the form
\[\mathfrak{X}_{k}=\{\ Y+c_{k}(Y)\ |\ Y\in\mathfrak{Y}_{k}\ \} \tag{18}\]
for suitable translation functions \(c_{k}\in\mathcal{B}_{k}\). In the following, we introduce the auxiliary set
\[\mathcal{C}_{k}\ \stackrel{{\rm def}}{{=}}\ \left\{\ (c,c^{+})\ \middle|\ \begin{array}{l}\forall X\in\partial\left[\mathfrak{Y}_{k}\cap\mathfrak{V}\right],\\ Ac(X)-c^{+}(AX+\mathbb{W})\in(-B\mathbb{U})\end{array}\right\}\]
recalling that \(\partial\) denotes the boundary operator that returns the set of extreme sets of a given information ensemble. Because \(\mathbb{U}\) is compact, \(\mathcal{C}_{k}\subseteq\mathcal{B}_{k}\times\mathcal{B}_{k+1}\) is a closed set. Additionally, we introduce the shorthands
\[\mathcal{R}_{k}(c_{k}) \ \stackrel{{\rm def}}{{=}}\ \mathfrak{R}\left(\begin{array}{l} \{Y+c_{k}(Y)\ |\ Y\in\mathfrak{Y}_{k}\}\end{array}\right)\] \[\text{and}\ \ \ \mathcal{R}_{N}(c_{N}) \ \stackrel{{\rm def}}{{=}}\ M\left(\begin{array}{l} \{Y+c_{N}(Y)\ |\ Y\in\mathfrak{Y}_{N}\}\end{array}\right)\.\]
Since we assume that \(\mathfrak{R}\) and \(M\) are lower-semicontinuous on \(\mathbb{I}_{\rm c}^{n}\), the functions \(\mathcal{R}_{k}:\mathcal{B}_{k}\to\mathbb{R}\) are lower semi-continuous on the Banach spaces \(\mathcal{B}_{k}\). They can be used to formulate the
extrinsic robust control problem (a convex optimization problem whenever the sets \(\mathbb{U}\) and \(\mathbb{X}\) and the functions \(\mathcal{R}_{k}\) are convex)
\[J_{\rm E}(X_{0})\ =\min_{c_{0},c_{1},\ldots,c_{N}} \ \ \sum_{k=0}^{N-1}\mathcal{R}_{k}(c_{k})+\mathcal{R}_{N}(c_{N}) \tag{19}\] \[\mbox{s.t.} \left\{\begin{array}{l}\forall k\in\{0,1,\ldots,N-1\},\\ (c_{k},c_{k+1})\in\mathcal{C}_{k},\ c_{0}\equiv 0,\\ \forall Y\in\mathfrak{Y}_{k},\ \ Y+c_{k}(Y)\subseteq\mathbb{X},\end{array}\right.\]
which can be used to find the optimal solution of (12). In detail, this result can be summarized as follows.
**Theorem 2**: _Let \(\mathbb{X}\in\mathbb{K}^{n}\) be a closed set, let \(\mathbb{U}\in\mathbb{K}_{\rm c}^{n_{u}}\), \(\mathbb{V}\in\mathbb{K}_{\rm c}^{n_{v}}\), and \(\mathbb{W}\in\mathbb{K}_{\rm c}^{n_{w}}\) be compact sets, let \(L\) be given by (15) with \(\mathfrak{R}\) and \(M\) being set-monotonous and lower semi-continuous extrinsic risk measures, and let \(\mathfrak{D}^{\circ}\) be an intrinsic lower semi-continuous information measure. Then the following statements hold._
1. _Problem (_19_) admits a minimizer or is infeasible._
2. _Problem (_12_) is intrinsically separable; that is,_ \[J(X_{0})\ =\ J_{\rm E}(X_{0})+J_{\rm I}(X_{0}).\]
3. _If_ \(c_{0},c_{1},\ldots,c_{N}\) _is a minimizer of (_19_), its associated sequence of information ensembles, given by (_18_), is an optimal information tube of (_12_)._
**Proof.** Because the objective functions \(\mathcal{R}_{k}\) of (19) are lower semicontinuous and since the feasible set of (19) is closed under the listed assumptions, it follows directly from Weierstrass' theorem that this optimization problem admits a minimizer or is infeasible. Next, a relation between (12) and (19) needs to be established. For this aim, we divide the proof into three parts.
_Part I._ Let us assume that \(\mathfrak{X}_{0},\mathfrak{X}_{1},\ldots,\mathfrak{X}_{N}\in\mathbb{I}_{\rm c} ^{n}\) is a tight information tube for given \(\mu_{0},\mu_{1},\ldots,\mu_{N-1}\in\mathcal{U}\),
\[\forall k\in\{0,1,\ldots,N-1\},\qquad\mathfrak{X}_{k+1}\ =\ \overline{F}( \mathfrak{X}_{k},\mu_{k})\;.\]
Due to Theorem 1, there exist functions \(c_{k}\in\mathcal{B}_{k}\) such that \(\mathfrak{X}_{k}=\{Y+c_{k}(Y)\mid Y\in\mathfrak{Y}_{k}\}\). The goal of the first part of this proof is to show that \((c_{k},c_{k+1})\in\mathcal{C}_{k}\). Because the information tube is tight, we have
\[AX+B\mu_{k}(X)+\mathbb{W}\ \in\ \partial\mathfrak{X}_{k+1}\]
for all \(X\in\partial[\mathfrak{X}_{k}\cap\mathfrak{V}]\). Since any set \(Y\in\partial[\mathfrak{Y}_{k}\cap\mathfrak{V}]\) is mapped to an extreme set

\[X=Y+c_{k}(Y)\in\partial[\mathfrak{X}_{k}\cap\mathfrak{V}],\]
it follows that
\[A(Y+c_{k}(Y))+B\mu_{k}(X)+\mathbb{W}\in\partial\mathfrak{X}_{k+1}\] \[\Longrightarrow \underbrace{(AY+\mathbb{W})}_{\in\ \partial\mathfrak{Y}_{k+1}}+ \underbrace{(Ac_{k}(Y)+B\mu_{k}(X))}_{\in\ \mathbb{R}^{n}}\ \in\ \partial\mathfrak{X}_{k+1}\]
for any such pair \((X,Y)\). But this is only possible if
\[Ac_{k}(Y)+B\mu_{k}(X)\ =\ c_{k+1}(AY+\mathbb{W}). \tag{20}\]
Since \(\mu_{k}(X)\in\mathbb{U}\) and since the choice of \(Y\in\partial[\mathfrak{Y}_{k}\cap\mathfrak{V}]\) is arbitrary, it follows from (20) that \((c_{k},c_{k+1})\in\mathcal{C}_{k}\).
_Part II._ The goal of the second part of this proof is to reverse the construction from the first part. For this aim, we assume that we have functions \(c_{k}\in\mathcal{B}_{k}\) that satisfy the recursivity condition \((c_{k},c_{k+1})\in\mathcal{C}_{k}\) for all \(k\in\{0,1,\ldots,N-1\}\) while the sets \(\mathfrak{X}_{k}\) are given by (18). Since every set \(X\in\mathfrak{X}_{k}\cap\mathfrak{V}\) is contained in at least one extreme set \(\overline{X}\in\partial\left[\mathfrak{X}_{k}\cap\mathfrak{V}\right]\), there exists for every such \(X\) a set \(\overline{Y}\in\partial[\mathfrak{Y}_{k}\cap\mathfrak{V}]\) with
\[X\ \subseteq\ \overline{X}\ =\ \overline{Y}+c_{k}(\overline{Y}).\]
Note that this is equivalent to stating that there exists a function \(\Sigma_{k}:\mathfrak{X}_{k}\cap\mathfrak{V}\to\partial[\mathfrak{Y}_{k}\cap\mathfrak{V}]\) that satisfies

\[\forall X\in\mathfrak{X}_{k}\cap\mathfrak{V},\quad X\subseteq\Sigma_{k}(X)+c_{k}(\Sigma_{k}(X)). \tag{21}\]
It can be used to define the control laws
\[\mu_{k}(X)\stackrel{{\rm def}}{{=}}B^{\dagger}[c_{k+1}(A\Sigma_{ k}(X)+\mathbb{W})-Ac_{k}(\Sigma_{k}(X))], \tag{22}\]
where \(B^{\dagger}\) denotes the pseudo-inverse of \(B\). Because we assume \((c_{k},c_{k+1})\in\mathcal{C}_{k}\), we have \(\mu_{k}(X)\in\mathbb{U}\) and
\[AX+B\mu_{k}(X)+\mathbb{W}\ \overset{(21),(22)}{\subseteq}\ A\Sigma_{k}(X)+\mathbb{W}+c_{k+1}(A\Sigma_{k}(X)+\mathbb{W})\ \in\ \mathfrak{X}_{k+1}\]
for all \(X\in\mathfrak{X}_{k}\cap\mathfrak{V}\), where the latter inclusion follows from (18) and the fact that \(A\Sigma_{k}(X)+\mathbb{W}\in\mathfrak{Y}_{k+1}\). Consequently, we have \(\mathfrak{X}_{k+1}\supseteq F(\mathfrak{X}_{k},\mu_{k})\).
_Part III._ The construction from Part I can be used to start with any feasible information tube of (12) to construct a feasible sequence \(c_{0},c_{1},\ldots,c_{N}\) such that
\[J_{\mathrm{E}}(X_{0}) \leq\ \sum_{k=0}^{N-1}\mathcal{R}_{k}(c_{k})+\mathcal{R}_{N}(c_{N})\] \[=\ \sum_{k=0}^{N-1}L(\mathfrak{X}_{k},\mu_{k})+M(\mathfrak{X}_{N} )-J_{\mathrm{I}}(X_{0})\;. \tag{23}\]
Thus, we have \(J_{\mathrm{E}}(X_{0})+J_{\mathrm{I}}(X_{0})\leq J(X_{0})\). Similarly, the construction from Part II can be used to start with an optimal solution of (19) to construct a feasible point of (12), which implies \(J_{\mathrm{E}}(X_{0})+J_{\mathrm{I}}(X_{0})\geq J(X_{0})\). Thus, the second and the third statement of the theorem hold. \(\diamond\)
### Recursive Feasibility and Stability
Feasible invariant information ensembles \(\mathfrak{X}_{\mathrm{s}}\in\mathbb{I}_{\mathrm{c}}^{n}\) exist if and only if the optimization problem
\[\min_{\mathfrak{X}_{\mathrm{s}},\mu_{\mathrm{s}}}\ L(\mathfrak{X}_{\mathrm{s}},\mu_{\mathrm{s}})\quad\text{s.t.}\quad\left\{\begin{array}{l}F(\mathfrak{X }_{\mathrm{s}},\mu_{\mathrm{s}})\subseteq\mathfrak{X}_{\mathrm{s}},\\ \mu_{\mathrm{s}}\in\mathcal{U},\\ \forall X\in\mathfrak{X}_{\mathrm{s}},\ \ X\subseteq\mathbb{X}\end{array}\right. \tag{24}\]
is feasible. By solving this optimization problem, one can find optimal invariant information ensembles while avoiding the constructions from the proof of Lemma 2; see Remark 3. In analogy to terminal regions in traditional MPC formulations [30], invariant information ensembles can be used as a terminal constraint,
\[\mathfrak{X}_{N}\subseteq\mathfrak{X}_{\mathrm{s}}.\]
If (12) is augmented by such a terminal constraint, recursive feasibility can be guaranteed. Similarly, if one chooses the terminal cost \(M\) such that
\[\min_{\mu\in\mathcal{U}}\ L(\mathfrak{X},\mu)+M\left(\overline{F}(\mathfrak{X}, \mu)\right)\ \leq\ M(\mathfrak{X})\]
for all \(\mathfrak{X}\in\mathbb{I}_{\mathrm{c}}^{n}\), the objective value of (12) descends along the trajectories of its associated closed-loop system. Under additional assumptions on the continuity and positive definiteness of \(L\), this condition can be used as a starting point for the construction of Lyapunov functions. The details of these constructions are, however, not further elaborated at this point, as they are analogous to the construction of terminal costs for traditional Tube MPC schemes [30, 37].
## 6 Polytopic Approximation Methods
This section discusses how to solve the dual control problem (12) by using a polytopic approximation method. For this aim, we assume that \(\mathbb{V}\) and \(\mathbb{W}\) are given convex polytopes, while \(\mathbb{X}\) and \(\mathbb{U}\) are convex sets.
### Configuration-Constrained Polytopes
Polytopic computing [14] is the basis for many set-theoretic methods in control [6]. Specifically, tube model predictive control methods routinely feature parametric polytopes with frozen facet directions [28, 29]. In this context, configuration-constrained polytopes are of special interest, as they admit a joint parameterization of their facets and vertices [38]. They are defined as follows.
Let \(Y\in\mathbb{R}^{m\times n}\) and \(G\in\mathbb{R}^{n_{G}\times m}\) be matrices that define the parametric polytope
\[\forall y\in\mathcal{G},\qquad P(y)\ \stackrel{{\mathrm{def}}}{{=}}\ \{x\in\mathbb{R}^{n}\mid Yx\leq y\}\qquad\text{on}\qquad\mathcal{G}\ \stackrel{{\mathrm{def}}}{{=}}\ \{y\in\mathbb{R}^{m}\mid Gy\leq 0\};\]
and let \(\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{\nu}\in\mathbb{R}^{n\times m}\) be vertex maps, such that
\[P(y)=\mathrm{conv}(\Lambda_{1}y,\Lambda_{2}y,\ldots,\Lambda_{\nu}y)\quad \Longleftrightarrow\quad y\in\mathcal{G}\,\]
where \(\mathrm{conv}(\cdot)\) denotes the convex hull. The condition \(y\in\mathcal{G}\) is called a configuration-constraint. It restricts the parameter domain of \(P\) to a region on which both a halfspace and a vertex representation is possible. Details on how to construct the template matrix \(Y\) together with the cone \(\mathcal{G}\) and the matrices \(\Lambda_{i}\) can be found in [38].
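As a minimal illustration of this parameterization (a hand-made one-dimensional example, not taken from [38]), consider the template \(Y=(1,-1)^{\intercal}\), for which \(P(y)=[-y_{2},y_{1}]\) whenever \(y_{1}+y_{2}\geq 0\); the sketch below checks the configuration constraint and evaluates both the halfspace and the vertex representation.

```python
import numpy as np

# 1-D template: P(y) = {x | x <= y1, -x <= y2} = [-y2, y1].
Y = np.array([[1.0], [-1.0]])       # facet directions (m = 2, n = 1)
G = np.array([[-1.0, -1.0]])        # configuration cone: y1 + y2 >= 0
Lam = [np.array([[1.0, 0.0]]),      # vertex maps:  Lam1 @ y =  y1
       np.array([[0.0, -1.0]])]     #               Lam2 @ y = -y2

def in_configuration_cone(y):
    return bool(np.all(G @ y <= 1e-12))

def halfspace_members(y, xs):
    """Grid points that satisfy the halfspace representation Y x <= y."""
    return xs[np.all(Y @ xs[None, :] <= y[:, None], axis=0)]

y = np.array([2.0, 1.0])            # P(y) = [-1, 2]
assert in_configuration_cone(y)

vertices = sorted((L @ y).item() for L in Lam)
print(vertices)                                       # [-1.0, 2.0]
print(halfspace_members(y, np.linspace(-3, 3, 13)))   # grid points in [-1, 2]
```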
### Polytopic Information Ensembles
As pointed out in Section 3.2, the minimal representation of a closed information ensemble \(\mathfrak{X}\in\mathbb{I}_{\mathrm{c}}^{n}\) is given by its set of extreme sets, denoted by \(\partial\mathfrak{X}\). This motivates discretizing (12) by introducing a suitable class of information ensembles whose extreme sets are configuration-constrained polytopes. In detail,
\[\mathfrak{P}(z)\ \stackrel{{\mathrm{def}}}{{=}}\ \{\ X\in\mathbb{K}_{ \mathrm{c}}^{n}\ |\ \exists y\in\mathbb{P}(z):\ X\subseteq P(y)\ \}\]
defines such a class of information ensembles with the polytope
\[\mathbb{P}(z)\ \stackrel{{\mathrm{def}}}{{=}}\ \{\ y\in\mathbb{R}^{m}\ \ |\ \ Gy \leq 0,\ Zy\leq z\ \}\subseteq\mathcal{G}\]
being used to parameterize convex subsets of \(\mathcal{G}\). The choice of \(Z\in\mathbb{R}^{l\times m}\) influences the polytopic discretization accuracy and \(z\in\mathbb{R}^{l}\) denotes its associated discretization parameter. Note that \(\mathfrak{P}(z)\in\mathbb{I}_{\mathrm{c}}^{n}\) is for any such \(z\) a compact but potentially empty information ensemble.
### Polytopic Meta Learning
Traditional set-theoretic methods face a variety of computational difficulties upon dealing with output feedback problems, as summarized concisely in [6, Chapter 11]. The goal of this and the following sections is to show that the proposed meta learning framework has the potential to overcome these difficulties. Here, the key observation is that Proposition 3 alleviates the need to intersect infinitely many information sets for the sake of predicting the evolution of a learning process. Instead, it is sufficient to compute one intersection at the meta level in order to pass from a prior to a posterior information ensemble.
In detail, if we assume that our prior information about the system's state is represented by a polytopic ensemble, \(\mathfrak{P}(z)\), the posterior
\[\mathfrak{P}(z)\sqcap\mathfrak{V}\ =\ \mathfrak{P}(z)\cap\mathfrak{V}\]
needs to be computed, where \(\mathfrak{V}\) is given by (7). Since \(\mathbb{V}\) is assumed to be a polytope, \(\mathfrak{V}\) can be written in the form
\[\mathfrak{V}\ =\ \{\ X\in\mathbb{K}_{\mathrm{c}}^{n}\ |\ \exists y\in \mathcal{G}:\ X\subseteq P(y),\ Z_{1}y\leq\overline{v}\ \}\,,\]
as long as the template matrices \(Y\), \(G\), and \(Z_{1}\in\mathbb{R}^{l_{1}\times m}\) as well as the vector \(\overline{v}\in\mathbb{R}^{l_{1}}\) are appropriately chosen. Here, we construct the matrix \(Z=(Z_{1}^{\intercal},Z_{2}^{\intercal})^{\intercal}\) such that its first \(l_{1}\leq l\) rows coincide with \(Z_{1}\). This particular construction of \(Z\) ensures that the intersection
\[\mathfrak{P}(z)\cap\mathfrak{V}\ =\ \mathfrak{P}(\zeta)\qquad\text{with} \qquad\left\{\begin{array}{l}\zeta_{1}=\min(z_{1},\overline{v})\\ \zeta_{2}=z_{2}\end{array}\right.\]
can be computed explicitly, where \(\min(z_{1},\overline{v})\) denotes the componentwise minimum of the vectors \(z_{1}\) and \(\overline{v}\). The latter condition is not jointly convex in \(z\) and \(\zeta\). Therefore, the following constructions are based on the convex relaxation
\[\left(\begin{array}{c}\overline{v}\\ z_{2}\end{array}\right)\ \leq\ \left(\begin{array}{c}\zeta_{1}\\ \zeta_{2}\end{array}\right)\quad\Longrightarrow\quad\mathfrak{P}(z)\cap \mathfrak{V}\ \subseteq\ \mathfrak{P}(\zeta)\;. \tag{25}\]
Note that the conservatism that is introduced by this convex relaxation is negligible if the measurement error set \(\mathbb{V}\) is small. In fact, for the exact output feedback case, \(\mathbb{V}=\{0\}\), we have \(\min(z_{1},\overline{v})=\overline{v}\), since the measurements are exact and, as such, always informative.
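Continuing the one-dimensional interval template, a possible choice is \(Z_{1}=(1,1)\), so that \(Z_{1}y\) measures the width of \(P(y)\) and \(\overline{v}\) bounds the width of the sensor sets; with this (assumed) choice the meta-level measurement update becomes a componentwise minimum, as sketched below.

```python
import numpy as np

# Interval template P(y) = [-y2, y1]; parameters z = (z_width, z_up, z_lo) describe
# P(z) = { X | exists y: y1+y2 <= z[0], y1 <= z[1], y2 <= z[2],
#                        y1+y2 >= 0,  X contained in P(y) }.
v_bar = np.array([2.0])       # sensor sets have width at most 2  (Z1 = (1, 1))

def meta_update(z):
    """Posterior parameter: zeta_1 = min(z_1, v_bar) componentwise, zeta_2 = z_2."""
    zeta = z.copy()
    zeta[:1] = np.minimum(z[:1], v_bar)
    return zeta

z_prior = np.array([6.0, 3.0, 3.0])     # prior: all X contained in [-3, 3]
z_post = meta_update(z_prior)
print(z_post)                                           # [2. 3. 3.]

# Extrinsic information (union of all member sets) is unchanged: still [-3, 3].
print([float(-z_post[2]), float(z_post[1])])
# Intrinsic deviation (largest achievable set width) drops from 6 to 2,
# in agreement with Example 1.
print(float(min(z_post[0], z_post[1] + z_post[2])))
```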
### Extreme Vertex Polytopes
In analogy to the construction of the domain \(\mathcal{G}\), a configuration domain
\[\mathcal{H}\ =\ \{\ z\in\mathbb{R}^{l}\ \mid\ Hz\leq 0\ \}\]
can be chosen. In detail, by using the methods from [38], a matrix \(H\in\mathbb{R}^{n_{H}\times l}\) and matrices \(\Omega_{1},\ldots,\Omega_{\overline{\nu}}\in\mathbb{R}^{m\times l}\) can be pre-computed, such that
\[\mathbb{P}(\zeta)=\operatorname{conv}(\Omega_{1}\zeta,\Omega_{2}\zeta,\ldots, \Omega_{\overline{\nu}}\zeta)\quad\Longleftrightarrow\quad\zeta\in\mathcal{ H}\;.\]
This has the advantage that the vertices \(\Omega_{j}\zeta\) of the polytope \(\mathbb{P}(\zeta)\) are known. In order to filter the vertices that are associated to extreme polytopes, the index set
\[\mathbb{J}\ \stackrel{{\mathrm{def}}}{{=}}\ \{\ j\in\{1,2,\ldots, \overline{\nu}\}\ |\ P(\Omega_{j}\zeta)\ \in\ \partial[\mathfrak{P}(\zeta)]\ \}\]
is introduced. Its definition does not depend on the choice of the parameter \(\zeta\in\mathcal{H}\). This follows from the fact that the normal cones of the vertices of \(\mathbb{P}(\zeta)\) do not depend on \(\zeta\in\mathcal{H}\)--recalling that the facet normals of \(\mathbb{P}(\zeta)\) are given constants.
**Definition 7**: _The polytopes \(P(\Omega_{j}\zeta)\), with \(j\in\mathbb{J}\), are called the extreme vertex polytopes of \(\mathfrak{P}(\zeta)\)._
Extreme vertex polytopes play a central role in the context of designing polytopic dual control methods. This is because their shapes, sizes, and orientations can be interpreted as representatives of the intrinsic information of \(\mathfrak{P}(\zeta)\). Moreover, the convex hull of the extreme vertex polytopes encodes the extrinsic information of \(\mathfrak{P}(\zeta)\),
\[\operatorname{conv}\left(\ \{\ P(\Omega_{j}\zeta)\ \mid\ j\in\mathbb{J}\ \}\ \right)\ =\ \bigcup_{X\in\mathfrak{P}(\zeta)}X\;. \tag{26}\]
The latter equation follows from the fact that the vertices of the extreme polytopes of \(\mathfrak{P}(\zeta)\) are contained in the convex hull of the vertices \(\Lambda_{i}\Omega_{j}\zeta\) of the extreme vertex polytopes, with \(i\in\{1,\ldots,\nu\}\) and \(j\in\mathbb{J}\).
### Polytopic Information Tubes
The goal of this section is to show that it is sufficient to assign one extreme control input \(u_{j}\in\mathbb{U}\) to each extreme vertex polytope \(P(\Omega_{j}\zeta)\) in order to discretize the control law, without introducing conservatism. This construction is similar in essence to the introduction of the vertex control inputs that are routinely used to compute robust control invariant polytopes [16, 25, 38]. The key difference here, however, is that the "vertices" \(P(\Omega_{j}\zeta)\) of the information ensemble \(\mathfrak{P}(\zeta)\) are sets rather than vectors. They represent possible realizations of the system's information state, not a state estimate.
Let us assume that \(\mathbb{W}=P(\overline{w})\) is a polytope with given parameter \(\overline{w}\in\mathcal{G}\). Moreover, we assume that the vertices of \(\mathbb{P}(\cdot)\) are enumerated in such a way that
\[\mathbb{J}=\{1,2,\ldots,|\mathbb{J}|\},\]
where \(|\mathbb{J}|\leq\overline{\nu}\) denotes the cardinality of \(\mathbb{J}\). Let us introduce the convex auxiliary set
\[\mathcal{F} \stackrel{{\mathrm{def}}}{{=}} \left\{(z,z^{+})\left|\begin{array}{l}\exists(\zeta,\xi,u)\in \mathbb{R}^{l}\times(\mathbb{R}^{m})^{|\mathbb{J}|}\times\mathbb{U}^{| \mathbb{J}|}\\ \forall i\in\{1,\ldots,\nu\},\forall j\in\mathbb{J},\\ \overline{v}\leq\zeta_{1},\ z_{2}\leq\zeta_{2},\\ YA\Lambda_{i}\Omega_{j}\zeta+YBu_{j}+\overline{w}\leq\xi_{j},\\ G\xi_{j}\leq 0,\ H\zeta\leq 0,\ Z\xi_{j}\leq z^{+}\end{array}\right.\right\}.\]
The rationale behind the convex constraints in this definition can be summarized as follows.
* We start with the current information ensemble \(\mathfrak{P}(z)\).
* The constraints \(\overline{v}\leq\zeta_{1}\) and \(z_{2}\leq\zeta_{2}\) subsume (25).
* The constraint \(H\zeta\leq 0\) ensures that the vertices of \(P(\Omega_{j}\zeta)\) are given by \(\Lambda_{i}\Omega_{j}\zeta\), with \(i\in\{1,\ldots,\nu\}\).
* The extreme controls \(u_{j}\) are used to steer all vertices of \(P(\Omega_{j}\zeta)\) into the auxiliary polytope \(P(\xi_{j})\).
* And, finally, the constraints \(G\xi_{j}\leq 0\) and \(Z\xi_{j}\leq z^{+}\) ensure that \(P(\xi_{j})\) is contained in \(\mathfrak{P}(z^{+})\).
The above points can be used as road-map for the rather technical proof of the following theorem.
**Theorem 3**: _Let \({\cal F}\) and \({\mathfrak{V}}\) be defined as above, recalling that \({\mathbb{W}}=P(\overline{w})\) denotes the uncertainty set and that \({\mathbb{U}}\) is assumed to be convex. Then, the implication_
\[(z,z^{+})\in{\cal F}\quad\Longrightarrow\quad\exists\mu\in{\cal U},\quad{ \mathfrak{P}}(z^{+})\supseteq F({\mathfrak{P}}(z),\mu)\]
_holds for all \(z,z^{+}\in{\mathbb{R}}^{l}\)._
**Proof.** Let us assume that \((z,z^{+})\in{\cal F}\). As discussed in Section 6.3, the inequalities \(\overline{v}\leq\zeta_{1}\) and \(z_{2}\leq\zeta_{2}\) in the definition of \({\cal F}\) ensure that \({\mathfrak{P}}(z)\sqcap{\mathfrak{V}}\ \subseteq\ {\mathfrak{P}}(\zeta)\). Moreover, there exists for every \(X\in{\mathfrak{P}}(\zeta)\) a \(y\in{\mathbb{P}}(\zeta)\) with
\[X\ \subseteq\ P(y)\ \in\ \partial[{\mathfrak{P}}(\zeta)]\;.\]
Next, since we enforce \(H\zeta\leq 0\), \(y\) is in the convex hull of the extreme vertices. That is, there exist scalar weights \(\theta_{1},\theta_{2},\ldots,\theta_{|\mathbb{J}|}\in[0,1]\) with
\[\sum_{j\in\mathbb{J}}\theta_{j}\ =\ 1\qquad\mbox{and}\qquad y\ =\ \sum_{j\in\mathbb{J}}\theta_{j}\Omega_{j}\zeta\;,\]
keeping in mind that these weights depend on \(X\). They can be used to define the control law
\[\mu(X)\ \stackrel{{\rm def}}{{=}}\ \sum_{j\in\mathbb{J}}\theta_{j}u_{j}\ \in\ {\mathbb{U}}\quad\mbox{and}\quad\xi\ \stackrel{{\rm def}}{{=}}\ \sum_{j\in\mathbb{J}}\theta_{j}\xi_{j}\ \in\ {\cal G}\]
where \(u_{1},u_{2},\ldots,u_{|\mathbb{J}|}\in{\mathbb{U}}\) are the extreme control inputs and \(\xi_{1},\xi_{2},\ldots,\xi_{|\mathbb{J}|}\in{\cal G}\) are the auxiliary variables that satisfy the constraints from the definition of \({\cal F}\). Note that this construction is such that the vertices of the polytope \(P(y)\), which are given by \(\Lambda_{i}y\), satisfy
\[A\Lambda_{i}y+B\mu(X)+w= \sum_{j\in\mathbb{J}}\theta_{j}[A\Lambda_{i}\Omega_{j}\zeta+Bu_{j}+w]\in P(\xi),\]
where the latter inclusion holds for all \(w\in\mathbb{W}\). Consequently, since this holds for all vertices of \(P(y)\), we have
\[AX+B\mu(X)+{\mathbb{W}}\quad\subseteq\quad AP(y)+B\mu(X)+{\mathbb{W}}\ \subseteq\ P(\xi).\]
Moreover, the above definition of \(\xi\) and the constraints \(G\xi_{j}\leq 0\) and \(Z\xi_{j}\leq z^{+}\) from the definition of \({\cal F}\) imply that \(Z\xi\leq z^{+}\) and \(P(\xi)\in{\mathfrak{P}}(z^{+})\). But this yields
\[F({\mathfrak{P}}(z),\mu)\ \subseteq\ {\mathfrak{P}}(z^{+})\;,\]
which completes the proof. \(\diamond\)
### Polytopic Dual Control
In order to approximate the original dual control problem (12) with a convex optimization problem, we assume that the stage and terminal cost functions have the form
\[L(\mathfrak{P}(z),\mu)=\mathfrak{l}(z,u)\quad\text{and}\quad M(\mathfrak{P}(z))= \mathfrak{m}(z)\]
for given convex functions \(\mathfrak{l}\) and \(\mathfrak{m}\), where the stacked vector \(u=\big{(}u_{1}^{\intercal},u_{2}^{\intercal},\ldots,u_{|\mathbb{J}|}^{\intercal}\big{)}^{\intercal}\) collects the extreme control inputs. Due to Theorem 3, a conservative approximation of (12) is given by
\[\begin{array}{rl}\min_{z,\zeta,\xi,u}&\sum_{k=0}^{N-1}\mathfrak{l}(z_{k},u_{k})+\mathfrak{m}(z_{N})\\ \\ \mbox{s.t.}&\left\{\begin{array}{l}\forall k\in\{0,\ldots,N-1\},\\ \forall i\in\{1,\ldots,\nu\},\ \forall j\in\mathbb{J},\\ \overline{v}\leq\zeta_{k,1},\ z_{k,2}\leq\zeta_{k,2},\\ YA\Lambda_{i}\Omega_{j}\zeta_{k}+YBu_{k,j}+\overline{w}\leq\xi_{k,j},\\ G\xi_{k,j}\leq 0,\ H\zeta_{k}\leq 0,\ Z\xi_{k,j}\leq z_{k+1},\\ u_{k,j}\in\mathbb{U},\ \Lambda_{i}\Omega_{j}\zeta_{k}\in\mathbb{X},\ Z\hat{y}\leq z_{0}\;.\end{array}\right.\end{array} \tag{27}\]
Since \(\mathbb{U}\) and \(\mathbb{X}\) are convex sets, this is a convex optimization problem. Its optimization variables are the parameters \(z_{k}\in\mathbb{R}^{l}\) of the polytopic information tube, the associated extreme control inputs \(u_{k,j}\in\mathbb{U}\), and the auxiliary variables \(\zeta_{k}\in\mathcal{H}\) and \(\xi_{k,j}\in\mathcal{G}\), needed to ensure that
\[\forall k\in\{0,1,\ldots,N-1\},\quad F(\mathfrak{P}(z_{k}),\mu_{k})\ \subseteq\ \mathfrak{P}(z_{k+1})\;.\]
Here, \(X_{0}=P(\hat{y})\) denotes the information set at the current time, modeled by the parameter \(\hat{y}\in\mathcal{G}\). The constraint \(Z\hat{y}\leq z_{0}\) corresponds to the initial value condition \(X_{0}\in\mathfrak{P}(z_{0})\). Additionally, it is pointed out that the extrinsic information content of the auxiliary ensemble \(\mathfrak{P}(\zeta)\supseteq\mathfrak{P}(z)\sqcap\mathfrak{V}\) overestimates the extrinsic information content of \(\mathfrak{P}(z)\). Thus, the extrinsic state constraints can be enforced by using the implication chain
\[\forall i\in\{1,\ldots,\nu\},\ \forall j\in\mathbb{J},\quad\Lambda_{i}\Omega_{j}\zeta_{k}\in\mathbb{X} \tag{28}\] \[\implies\ \bigcup_{X\in\mathfrak{P}(\zeta)}X\ \subseteq\ \mathbb{X}\quad\Longrightarrow\quad\bigcup_{X\in\mathfrak{P}(z)}X\ \subseteq\ \mathbb{X}.\]
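The following sketch shows how the constraints of (27) can be assembled with CVXPY. It is only a schematic rendering under stated assumptions: the template data \(Y\), \(G\), \(H\), \(Z\), the vertex maps \(\Lambda_{i}\), the maps \(\Omega_{j}\), the support vector \(\overline{w}\), and the bound \(\overline{v}\) are assumed to be precomputed arrays; \(\mathbb{U}\) is taken to be a box and \(\mathbb{X}\) a polyhedron \(\{x:F_{x}x\leq g_{x}\}\); the quadratic costs are mere placeholders for \(\mathfrak{l}\) and \(\mathfrak{m}\); the function and variable names are illustrative, not part of the paper.

```python
import cvxpy as cp


def build_dual_mpc(A, B, Y, G, H, Zmat, Lam, Om, w_bar, v_bar,
                   u_lo, u_hi, Fx, gx, y_hat, N):
    """Lam: list of (n x m) vertex maps Lambda_i; Om: list of (m x l) maps Omega_j;
    Zmat: (l x m) template of the information ensemble; X = {x : Fx @ x <= gx}."""
    n_u = B.shape[1]
    l = Zmat.shape[0]
    m = Y.shape[0]
    l1 = len(v_bar)                          # rows of Z that form the block Z_1
    J = range(len(Om))                       # indices of the extreme vertex polytopes

    z = [cp.Variable(l) for _ in range(N + 1)]
    zeta = [cp.Variable(l) for _ in range(N)]
    xi = [[cp.Variable(m) for _ in J] for _ in range(N)]
    u = [[cp.Variable(n_u) for _ in J] for _ in range(N)]

    cons = [Zmat @ y_hat <= z[0]]            # initial condition X_0 in P(z_0)
    cost = 0
    for k in range(N):
        cons += [v_bar <= zeta[k][:l1],      # convex relaxation (25)
                 z[k][l1:] <= zeta[k][l1:],
                 H @ zeta[k] <= 0]           # meta configuration constraint
        for j in J:
            cons += [G @ xi[k][j] <= 0,
                     Zmat @ xi[k][j] <= z[k + 1],
                     u_lo <= u[k][j], u[k][j] <= u_hi]
            for Li in Lam:
                vert = Li @ Om[j] @ zeta[k]              # vertex Lambda_i Omega_j zeta_k
                cons += [Y @ (A @ vert + B @ u[k][j]) + w_bar <= xi[k][j],
                         Fx @ vert <= gx]                # extrinsic state constraint
            cost += cp.sum_squares(u[k][j])              # placeholder control penalty
        cost += cp.sum_squares(z[k])                     # placeholder for l(z_k, u_k)
    cost += cp.sum_squares(z[N])                         # placeholder for m(z_N)
    return cp.Problem(cp.Minimize(cost), cons)
```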
Finally, (27) is solved online whenever a new information set \(X_{0}=P(\hat{y})\) becomes available, denoting the optimal extreme controls by \(u_{k,j}^{\star}\). A corresponding feasible control law can then be recovered by setting
\[\mu_{0}^{\star}(X_{0})\ \stackrel{{\rm def}}{{=}}\ \sum_{j\in\mathbb{J}}\theta_{j}^{\star}(X_{0})u_{0,j}^{\star},\]
where the scalar weights \(\theta^{\star}_{j}(X_{0})\) can, for instance, be found by solving the convex quadratic programming problem
\[\theta^{\star}(X_{0}) \stackrel{{\mathrm{def}}}{{=}} \operatorname*{argmin}_{\theta\geq 0}\left\|\sum_{j\in\mathbb{J}}\theta_{j}u_{0,j}^{\star}\right\|^{2}\] \[\text{s.t.}\ \left\{\begin{array}{l}\sum_{j\in\mathbb{J}}\theta_{j}\Omega_{j}\zeta^{\star}_{0}=\hat{y}\\ \sum_{j\in\mathbb{J}}\theta_{j}=1,\end{array}\right.\]
although, clearly, other choices for the weights \(\theta^{\star}_{j}\) are possible, too. Finally, the receding horizon control loop from Section 5.1 can be implemented by using the above expression for \(\mu^{\star}_{0}\), while the information set update and propagation step can be implemented by using standard polytopic computation routines [6].
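A corresponding sketch of this weight-recovery step, again in CVXPY; the argument names are illustrative and the optimal extreme controls from (27) are assumed to be passed in as a stacked array.

```python
import cvxpy as cp
import numpy as np


def recover_weights(Om, zeta0, U0, y_hat):
    """Om: list of (m x l) maps Omega_j; zeta0: optimal zeta_0 from (27);
    U0: (|J| x n_u) array of optimal extreme controls; y_hat: current parameter."""
    nJ = len(Om)
    theta = cp.Variable(nJ, nonneg=True)
    vertices = np.stack([Omj @ zeta0 for Omj in Om])   # rows Omega_j zeta_0
    blended = theta @ U0                               # sum_j theta_j u_{0,j}
    cons = [theta @ vertices == y_hat, cp.sum(theta) == 1]
    cp.Problem(cp.Minimize(cp.sum_squares(blended)), cons).solve()
    return theta.value
```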
**Remark 6**: _By solving the convex optimization problem_
\[\begin{array}{rl}\min_{z^{\mathrm{s}},\zeta^{\mathrm{s}},\xi^{\mathrm{s}},u^{\mathrm{s}}}&\mathfrak{l}(z^{\mathrm{s}},u^{\mathrm{s}})\\ \\ \mbox{s.t.}&\left\{\begin{array}{l}\forall i\in\{1,\ldots,\nu\},\ \forall j\in\mathbb{J},\\ \overline{v}\leq\zeta^{\mathrm{s}}_{1},\ z^{\mathrm{s}}_{2}\leq\zeta^{\mathrm{s}}_{2},\\ YA\Lambda_{i}\Omega_{j}\zeta^{\mathrm{s}}+YBu^{\mathrm{s}}_{j}+\overline{w}\leq\xi^{\mathrm{s}}_{j},\\ G\xi^{\mathrm{s}}_{j}\leq 0,\ H\zeta^{\mathrm{s}}\leq 0,\ Z\xi^{\mathrm{s}}_{j}\leq z^{\mathrm{s}},\\ u^{\mathrm{s}}_{j}\in\mathbb{U},\ \Lambda_{i}\Omega_{j}\zeta^{\mathrm{s}}\in\mathbb{X}\end{array}\right.\end{array} \tag{29}\]
_an optimal control invariant polytopic information ensemble can be computed._
### Structure and Complexity
Problem (27) admits a tradeoff between the computational complexity and the conservatism of polytopic dual control. In detail, this tradeoff can be adjusted by the choice of the following variables.
1. The number of facets, \(m\), the number of vertices, \(\nu\), and the number of configuration constraints, \(n_{G}\), depend on our choice of \(Y\) and \(G\). The larger \(m\) is, the more accurately we can represent the system's intrinsic information content.
2. The number of information ensemble parameters, \(l\), the number of extreme vertex polytopes, \(|\mathbb{J}|\), and the number of meta configuration constraints, \(n_{H}\), depend on how we choose \(Z\) and \(H\). The larger \(|\mathbb{J}|\) is, the more degrees of freedom we have to parameterize the optimal dual control law.
In contrast to these numbers, the number of controls, \(n_{u}\), is given. If we assume that \(\mathbb{U}\) and \(\mathbb{X}\) are polyhedra with \(n_{\mathbb{U}}\) and \(n_{\mathbb{X}}\) facets, these numbers are given by the problem formulation, too. Additionally, we recall that \(N\) denotes the prediction horizon of the dual controller. The number of optimization variables, \(n_{\mathrm{opt}}\), and the number of constraints, \(n_{\mathrm{con}}\), of Problem (27) are given by
\[n_{\mathrm{opt}}=(2N+1)l+N|\mathbb{J}|(n_{u}+m)\] \[n_{\mathrm{con}}=N\left(l+|\mathbb{J}|\big{(}n_{G}+n_{H}+l+n_{\mathbb{U}}+\nu(m+n_{\mathbb{X}})\big{)}\right)+l\;.\]
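As a quick sanity check, the two counts can be evaluated for the numbers used in the numerical illustration below; the facet counts \(n_{\mathbb{U}}=2\) and \(n_{\mathbb{X}}=1\) are our reading of that example, and the formulas are transcribed exactly as printed above.

```python
def problem_size(N, l, nJ, n_u, m, n_G, n_H, n_U, nu, n_X):
    # n_opt and n_con as stated above, with nJ = |J| and nu = number of vertices
    n_opt = (2 * N + 1) * l + N * nJ * (n_u + m)
    n_con = N * (l + nJ * (n_G + n_H + l + n_U + nu * (m + n_X))) + l
    return n_opt, n_con


# numbers of the numerical illustration: N=10, l=8, |J|=60, n_u=1, m=nu=n_G=6, n_H=9
print(problem_size(N=10, l=8, nJ=60, n_u=1, m=6, n_G=6, n_H=9, n_U=2, nu=6, n_X=1))
# -> (4368, 40288)
```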
In this context, however, it should also be taken into account that the constraints of (27) are not only sparse but also possess further structure that can be exploited via intrinsic separation. For instance, the algebraic-geometric consistency conditions
\[\begin{array}{rcl}GY&=&0,\quad\Lambda_{i}Y=\mathbb{1},\\ HZ&=&0,\quad\Omega_{j}Z=\mathbb{1},\quad\mbox{and}\quad Z_{1}Y=0\end{array} \tag{30}\]
hold for all \(i\in\{1,\ldots,\nu\}\) and all \(j\in\mathbb{J}\), which can be used to re-parameterize (27), if a separation of the centers and shapes of the information sets is desired.
Last but not least, more conservative but computationally less demanding variants of (27) can be derived by freezing some of its degrees of freedom. For instance, in analogy to Rigid Tube MPC [25, 28], one can implement a _Rigid Dual MPC_ controller by pre-computing a feasible point \((z^{\mathrm{s}},\zeta^{\mathrm{s}},\xi^{\mathrm{s}},u^{\mathrm{s}})\) of (29). Next, we set
\[\begin{array}{rclrcl}z_{k}&=&ZY\overline{x}_{k}+z^{\mathrm{s}},\quad\zeta_{k }&=&ZY\overline{x}_{k}+\zeta^{\mathrm{s}},\\ \mbox{and}\quad u_{j,k}&=&\overline{u}_{k}+u^{\mathrm{s}}_{j},\qquad\xi_{k,j} &=&Y\overline{x}_{k+1}+\xi^{\mathrm{s}}_{j}\,,\end{array}\]
where \(\overline{x}\) and \(\overline{u}\) denote a central state- and a central control trajectory that are optimized online, subject to \(\overline{x}_{k+1}=A\overline{x}_{k}+B\overline{u}_{k}\). By substituting these restrictions in (27) and by using (30), the resulting online optimization problem can be simplified and written in the form
\[\begin{array}{rl}\min_{\overline{x},\overline{u}}&\sum_{k=0}^{N-1}\overline {\ell}(\overline{x}_{k},\overline{u}_{k})+\overline{m}(\overline{x}_{N})\\ \\ \mbox{s.t.}&\left\{\begin{array}{l}\forall k\in\{0,\ldots,N-1\},\\ A\overline{x}_{k}+B\overline{u}_{k}=\overline{x}_{k+1},\\ Z\hat{y}\leq ZY\overline{x}_{0}+z^{\mathrm{s}},\\ \overline{u}_{k}\in\overline{\mathbb{U}},\ \overline{x}_{k}\in\overline{ \mathbb{X}}.\end{array}\right.\end{array} \tag{31}\]
Problem (31) can be regarded as a conservative approximation of (27). The sets
\[\overline{\mathbb{X}} \stackrel{{\rm def}}{{=}} \left\{\begin{array}{l}\overline{x}\in\mathbb{R}^{n}\ \left|\begin{array}{l}\forall i\in\{1,\ldots,\nu\},\forall j\in\mathbb{J},\\ \overline{x}+\Lambda_{i}\Omega_{j}\zeta^{\rm s}\in\mathbb{X}\end{array}\right.\right\}\] \[\mbox{and}\quad\overline{\mathbb{U}} \stackrel{{\rm def}}{{=}} \left\{\begin{array}{l}\overline{u}\in\mathbb{R}^{n_{u}}\ \left|\begin{array}{l}\forall j\in\mathbb{J}, \quad\overline{u}+u_{j}^{\rm s}\in\mathbb{U}\end{array}\right.\right\}\]
take the robustness constraint margins into account, while \(\overline{\ell}\) and \(\overline{m}\) are found by re-parameterizing the objective function of (27). Problem (31) is a conservative dual MPC controller that has--apart from the initial value constraint--the same complexity as certainty-equivalent MPC. Its feedback law depends on the parameter \(\hat{y}\) of the initial information set \(X_{0}=P(\hat{y})\).
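A minimal sketch of the resulting Rigid Dual MPC problem (31), assuming that a feasible point of (29) has already been turned into constant vertex offsets \(\Lambda_{i}\Omega_{j}\zeta^{\mathrm{s}}\) and control offsets \(u^{\mathrm{s}}_{j}\); the quadratic tracking cost merely stands in for the re-parameterized objective terms \(\overline{\ell}\) and \(\overline{m}\), and all names are illustrative.

```python
import cvxpy as cp


def rigid_dual_mpc(A, B, Y, Zmat, z_s, y_hat, state_offsets, control_offsets,
                   Fx, gx, u_lo, u_hi, N):
    """state_offsets: vectors Lambda_i Omega_j zeta_s; control_offsets: vectors u_s_j."""
    n, n_u = B.shape
    xbar = [cp.Variable(n) for _ in range(N + 1)]
    ubar = [cp.Variable(n_u) for _ in range(N)]

    cons = [Zmat @ y_hat <= Zmat @ (Y @ xbar[0]) + z_s]       # initial value constraint
    for k in range(N):
        cons += [xbar[k + 1] == A @ xbar[k] + B @ ubar[k]]    # central dynamics
        for d in state_offsets:                               # xbar_k in the tightened X
            cons += [Fx @ (xbar[k] + d) <= gx]
        for us in control_offsets:                            # ubar_k in the tightened U
            cons += [u_lo <= ubar[k] + us, ubar[k] + us <= u_hi]
    cost = sum(cp.sum_squares(xbar[k]) + cp.sum_squares(ubar[k]) for k in range(N))
    return cp.Problem(cp.Minimize(cost + cp.sum_squares(xbar[N])), cons)
```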
### Numerical Illustration
We consider the constrained linear control system
\[A=\frac{1}{4}\left(\begin{array}{cc}6&4\\ 1&3\end{array}\right),\quad B=\left(\begin{array}{c}0\\ 1\end{array}\right),\quad C=(1\ 0)\,,\] \[\mathbb{X}=\{x\in\mathbb{R}^{2}\mid x_{2}\geq-45\},\quad\mathbb{ U}=[-55,55],\] \[\mathbb{W}=[-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}]^{2}\subseteq \mathbb{R}^{2},\quad\mathbb{V}=[-1,1]\;. \tag{32}\]
In order to setup an associated polytopic dual controller for this system, we use the template matrices
\[Y=\left(\begin{array}{rr}1&\\ 1&1\\ &1\\ -1&\\ -1&-1\\ &-1\end{array}\right)\quad\mbox{and}\quad\ G=\left(\begin{array}{rrrrrr}-1&1&-1&&&\\ &-1&1&-1&&\\ &&-1&1&-1&\\ &&&-1&1&-1\\ -1&&&&-1&1\\ 1&-1&&&&-1\end{array}\right),\]
setting \(m=\nu=n_{G}=6\). Here, \(Y\) and \(G\) are stored as sparse matrices: the empty spaces are filled with zeros. By using analogous notation, we set
\[Z=\left(\begin{array}{ccccc}1&&&1&&\\ &&1&&&1\\ 1&&&&\\ &1&&&&\\ &&1&&&\\ &&&1&&\\ &&&1&&\\ &&&&1&\\ &&&&1\end{array}\right),\ \ H=\left(\begin{array}{ccccc}&-1&-1&1&&1\\ -1&1&1&-1&&-1\\ 1&&-1&1&-1&&\\ 1&&1&-1&&&-1\\ 1&1&-1&&&-1&\\ &-1&&&1&-1&1\\ -1&1&&&-1&1&-1\\ 1&&&&-1&1&-1\end{array}\right),\]
\(l=8\), and \(n_{H}=9\), which can be used to represent six dimensional meta polytopes with \(6+8=14\) facets and \(\overline{\nu}=68\) vertices. They have up to \(|\mathbb{J}|=60\) extreme vertex polytopes. The first row of \(Z\) corresponds to the block matrix \(Z_{1}\). It can be used to represent the set \(\mathfrak{V}\) by setting \(\overline{v}=2\), since the diameter of \(\mathbb{V}\) is equal to \(2\). Moreover, due to our choice of \(Y\) and \(\mathbb{W}\), we have
\[\overline{w}\ =\ \left[\ \mbox{\tiny$\frac{1}{2}$},\ \ 1,\ \ \mbox{\tiny$\frac{1}{2}$},\ \ \mbox{\tiny$\frac{1}{2}$},\ \ 1,\ \ \mbox{\tiny$\frac{1}{2}$}\ \right]^{\intercal}.\]
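This vector can be reproduced directly from \(Y\) and \(\mathbb{W}\): the \(i\)-th entry is the support function of the box \([-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}]^{2}\) in the direction of the \(i\)-th row of \(Y\), which for a centered box equals one half of the \(\ell_{1}\)-norm of that row.

```python
import numpy as np

Y = np.array([[1, 0], [1, 1], [0, 1], [-1, 0], [-1, -1], [0, -1]])
w_bar = 0.5 * np.abs(Y).sum(axis=1)   # support of [-1/2, 1/2]^2 along each row of Y
print(w_bar)                          # [0.5 1.  0.5 0.5 1.  0.5]
```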
Last but not least, a control penalty function that depends on the extreme control inputs needs to be introduced. For instance, we can set
\[\mathfrak{c}(u)=\sum_{i=1}^{|\mathbb{J}|}\left[u_{i}^{2}+50\cdot\left(u_{i}-\frac{1}{|\mathbb{J}|}\sum_{j=1}^{|\mathbb{J}|}u_{j}\right)^{2}\,\right]\]
in order to penalize both the extreme inputs and the distances of these extreme inputs to their average value. The final stage cost is given by
\[\mathfrak{l}(z,u)=\mathfrak{r}(z)+\tau\cdot\mathfrak{d}^{\circ}(z)+\mathfrak{ c}(u),\]
where we set the intrinsic regularization to \(\tau=0.01\).
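A direct transcription of this stage cost; the extrinsic term \(\mathfrak{r}\) and the intrinsic term \(\mathfrak{d}^{\circ}\) are defined earlier in the paper and are therefore only passed in as callables here, and the function names are illustrative.

```python
import numpy as np


def control_penalty(u):
    """u: array of shape (|J|,) with the extreme control inputs (scalar inputs here)."""
    u = np.asarray(u, dtype=float)
    return float(np.sum(u ** 2) + 50.0 * np.sum((u - u.mean()) ** 2))


def stage_cost(z, u, extrinsic_r, intrinsic_d, tau=0.01):
    # l(z, u) = r(z) + tau * d(z) + c(u), with tau the intrinsic regularization weight
    return extrinsic_r(z) + tau * intrinsic_d(z) + control_penalty(u)
```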
The optimal invariant information ensemble \(\mathfrak{P}(z_{\mathrm{s}})\) is found by solving (29). It is visualized in the left plot of Figure 1. Note that the light blue shaded hexagon corresponds to the union of all sets in \(\mathfrak{P}(z_{\mathrm{s}})\), which can be interpreted as a visualization of its extrinsic information content. The 60 extreme vertex polytopes of \(\mathfrak{P}(\zeta_{\mathrm{s}})\), given by \(P(\Omega_{j}\zeta_{\mathrm{s}})\) for
Figure 1: LEFT: Visualization of the 60 extreme vertex polytopes of \(\mathfrak{P}(\zeta_{\mathrm{s}})\), colored in gray. The convex hull of these extreme polytopes is colored in light blue. It corresponds to the union of all sets in \(\mathfrak{P}(z_{\mathrm{s}})\), as predicted by (26). RIGHT: Visualization of the extrinsic tube (light blue shaded polytopes) that corresponds to the first prediction of the dual controller (27) with horizon \(N=10\). These sets correspond to the union of the sets of the information ensembles \(\mathfrak{P}(z_{k})\). The union of the sets in the first information ensemble, \(\mathfrak{P}(z_{0})\), happens to coincide with the initial information set \(X_{0}=[17,23]\times[17,23]\). The extrinsic state constraints are active at the third polytope; the vertices at which this constraint is active are colored red. The union of the sets of the optimal invariant information ensemble is shown in dark blue. It is used as the terminal region.
\(j\in\{1,2,\ldots,60\}\), are difficult to plot, as they are all clustered at the vertices of the extrinsic hexagon and partly obscure each other; they are nevertheless visualized in different shades of gray. As the optimal solution happens to satisfy \(\mathfrak{P}(z_{\mathrm{s}})\cap\mathfrak{V}=\mathfrak{P}(\zeta_{\mathrm{s}})\), the convex relaxation (25) does not introduce conservatism, at least for this example.
Next, a closed-loop simulation of the polytopic dual controller (27) is started with the initial information set
\[X_{0}=[17,23]\times[17,23]\]
using the prediction horizon \(N=10\) while the terminal cost is set to
\[\mathfrak{m}(z_{N})=\left\{\begin{array}{ll}0&\mbox{if}\quad z_{N}\leq z_{ \mathrm{s}}\\ \infty&\mbox{otherwise}\end{array}\right.\]
in order to enforce recursive feasibility. The right plot of Figure 1 shows an extrinsic image of the first predicted tube; that is, the union of the sets \(\mathfrak{P}(z_{k})\) along the optimal solution of (27) for the above choice of \(X_{0}\), which are shown in light blue. The dark blue shaded polytope corresponds to the terminal region that is enforced by the above choice of \(\mathfrak{m}\).
**Remark 7**: _The proposed polytopic dual control method optimizes feedback laws that depend on the system's information state. Note that such dual control laws are, in general, less conservative than robust output feedback laws that are based on state estimation or observer equations with affine structure, as considered in [15] or [27]._
## 7 Conclusions
This paper has presented a set-theoretic approach to dual control. It is based on meta information spaces that enable a data-free algebraic characterization of both the present and the future information content of learning processes. In detail, an intrinsic equivalence relation has been introduced in order to separate the computation of the future information content of a constrained linear system from the computation of its robust optimal control laws. An associated intrinsic separation principle is summarized in Theorem 1. It is the basis for analyzing the existence of solutions of a large class of dual control problems under certain structural and continuity assumptions that are summarized in Theorem 2.
For the first time, this paper has presented a polytopic dual control method for constrained linear systems that is based on convex optimization. In contrast to existing robust output-feedback control schemes, this method optimizes control laws that depend on the system's
information state. This alleviates the need to make control decisions based on state estimates or observer equations that induce a potentially sub-optimal feedback structure. Instead, (27) optimizes a finite number of control inputs that are associated to the extreme vertex polytopes of the predicted information ensembles.
A numerical case study for a system with two states has indicated that (27) can be solved without numerical problems for moderately sized problems. For larger systems, however, the computational complexity of accurate dual control can become exorbitant. In anticipation of this problem, this paper has outlined strategies towards reducing the computational complexity at the cost of more conservatism. For instance, the Rigid Dual MPC problem (31) has essentially the same online complexity as a comparable certainty-equivalent MPC problem. The development of more systematic methods to tradeoff conservatism and computational complexity of polytopic dual control methods as well as extensions of polytopic dual control for constrained linear systems that aim at simultaneously learning their state and their system matrices \(A\), \(B\), and \(C\) appear to be challenging and practically relevant directions for future research.
|
2306.03500 | Towards Adaptable and Interactive Image Captioning with Data
Augmentation and Episodic Memory | Interactive machine learning (IML) is a beneficial learning paradigm in cases
of limited data availability, as human feedback is incrementally integrated
into the training process. In this paper, we present an IML pipeline for image
captioning which allows us to incrementally adapt a pre-trained image
captioning model to a new data distribution based on user input. In order to
incorporate user input into the model, we explore the use of a combination of
simple data augmentation methods to obtain larger data batches for each newly
annotated data instance and implement continual learning methods to prevent
catastrophic forgetting from repeated updates. For our experiments, we split a
domain-specific image captioning dataset, namely VizWiz, into non-overlapping
parts to simulate an incremental input flow for continually adapting the model
to new data. We find that, while data augmentation worsens results, even when
relatively small amounts of data are available, episodic memory is an effective
strategy to retain knowledge from previously seen clusters. | Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag | 2023-06-06T08:38:10Z | http://arxiv.org/abs/2306.03500v1 | # Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory
###### Abstract
Interactive machine learning (IML) is a beneficial learning paradigm in cases of limited data availability, as human feedback is incrementally integrated into the training process. In this paper, we present an IML pipeline for image captioning which allows us to incrementally adapt a pre-trained image captioning model to a new data distribution based on user input. In order to incorporate user input into the model, we explore the use of a combination of simple data augmentation methods to obtain larger data batches for each newly annotated data instance and implement continual learning methods to prevent catastrophic forgetting from repeated updates. For our experiments, we split a domain-specific image captioning dataset, namely VizWiz, into non-overlapping parts to simulate an incremental input flow for continually adapting the model to new data. We find that, while data augmentation worsens results, even when relatively small amounts of data are available, episodic memory is an effective strategy to retain knowledge from previously seen clusters.
## 1 Introduction
Image Captioning (IC) is the task of generating a description in natural language for a given image (Stefanini et al., 2021). For the training of most state-of-the-art IC models, large amounts of annotated training data are required (Zhou et al., 2020). However, whenever models need to caption user-specific images without large-scale annotations, this is an impractical requirement. In this case, a potential solution can be found in an _interactive_ framework, in which the model can be efficiently adapted to new data based on user feedback (Ling and Fidler, 2017; Shen et al., 2019). Additionally, interactivity renders AI/ML-systems more user-friendly and trustworthy (Bussone et al., 2015; Guo et al., 2022).
In interactive ML settings, training takes place with small amounts of data, and often in an incremental manner. These properties can lead to _overfitting_, on the one hand, which is the lack of generalization ability of the model, and _catastrophic forgetting_, on the other hand, which refers to the drop in performance on older tasks, when a model is trained on new data. For our interactive approach, we tackle these problems using a combination of methods previously proposed in the literature. To tackle overfitting, we apply data augmentation to each instance of user feedback to obtain larger batches of data, which the model is then updated on (Wang et al., 2021). Nevertheless, we find that this strategy fails to improve results in our image captioning task, indicating that the data augmentation methods we used are not suitable for this kind of task. In order to prevent catastrophic forgetting, we rely on continual learning methods. In the following, we present and test an IC pipeline that can be used in an interactive setting. Our work is guided by the following research questions:
1. How does data augmentation benefit a system which is trained incrementally with (simulated) user feedback? How does this system perform in few-shot scenarios?
2. How effective is an episodic memory replay module (de Masson d'Autume et al., 2019) for knowledge retention from previous trainings?
Our contributions are as follows:
* We propose a lightweight continual learning IC pipeline that leverages data augmentation, which can be used in an interactive machine learning setting.
* We adapt a continual learning method, namely sparse memory replay, proposed by de Masson d'Autume et al. (2019), for IC.
* We test a combination of data augmentation
methods for interactive IC in both image and text modalities.
* Since we report negative results for the system using data augmentation methods on the user feedback, we additionally investigate why these methods do not work in our case and offer some possible explanations for the deteriorated performance.
* We propose a method based on nominal phrase similarity between captions of different images for splitting a dataset into different tasks suitable for evaluating task-incremental continual learning when only image captions are given.
For our simulated user feedback, we use a domain-specific dataset, namely VizWiz Gurari et al. (2020); Simons et al. (2020), which consists of images taken by visually impaired people. We choose this dataset exactly because of this property: the quality of the images is lower than in most general-use IC datasets, resembling the image quality of user images.
## 2 Related work
Image captioning (IC)Deep-learning based IC models Xu et al. (2015); Anderson et al. (2018) traditionally consist of two parts: an _encoder_ and a _decoder_. The visual encoder breaks the image down into features or creates an intermediate representation. The decoder is a language model, which takes the encoder output as input and generates a caption. For _grounded_ approaches, more supervision is required: image features, such as regions, are additionally inserted into the visual encoder Lu et al. (2018). Following the trend in other deep learning tasks, recent approaches include large-scale vision-language pre-training, as well as generalized models that work for a variety of computer vision and vision-language tasks, including image retrieval, referring segmentation, and visual question answering Zou et al. (2022); Li et al. (2022).
**Interactive IC:** Interactive IC has not gained as much attention as other ML tasks. Jia and Li (2020) involve the human-in-the-loop by providing incomplete sequences as input, in addition to each image, during inference time. Biswas et al. (2020) extend the Show, Attend, and Tell architecture by combining high-level and low-level features, which provide explainability, as well as beam search during decoding time.
**Data augmentation:** Data augmentation (DA) is widely applied to tasks that involve learning from large amounts of data whenever there is a lack of annotated instances. It can additionally be used as a regularization technique to avoid overfitting by introducing noise into the dataset. In Computer Vision, transformations like cropping, warping, and horizontal/vertical flipping are often applied Takahashi et al. (2019); Katiyar and Borgohain (2021).
For text, augmentation methods need to be more elaborate, since image-inspired techniques often change the semantics of the text drastically. Popular methods include, but are not restricted to, EDA Wei and Zou (2019) (including random insertion, deletion, word swap), back-translation Sennrich
Figure 1: Our pipeline. Following the pre-training/fine-tuning paradigm, we first train our IC model on the MS COCO dataset. We then continue to train our model incrementally, by adding a new cluster each time from the VizWiz dataset, after applying DA methods on it to obtain more training data. During training on the VizWiz data for each cluster, an episodic memory module is activated, which is used to retrieve old data points from previously seen clusters.
et al., 2016; Turkerud and Mengshoel, 2021), synonym replacement and contextual augmentation Kobayashi (2018); Atliha and Sesok (2020), often using a pre-trained language model Devlin et al. (2019). For both modalities, retrieval-based augmentation from additional resources is possible as well Li et al. (2021).
Continual LearningIn cases where a model is trained repeatedly on new data, _catastrophic forgetting_Kirkpatrick et al. (2017) can be observed. This refers to the degradation of model performance on older tasks when it is trained on new ones. In order to overcome this, continual learning methods are often applied. Methods such as weight regularization, encoder/decoder freezing, pseudo-labeling, and knowledge distillation, have been previously applied to IC models Nguyen et al. (2019); Del Chiaro et al. (2020). In the natural language processing domain, de Masson d'Autume et al. (2019) use a combination of episodic memory replay during training and local adaptation of the model during inference.
## 3 Method
In this section, we describe the approach followed, including our benchmark strategy, our DA methods, as well as the episodic memory module. Our pipeline is illustrated in Figure 1.
### Interactive IC pipeline
ArchitectureWe experiment with a concrete implementation of the interactive approach outlined in Hartmann et al. (2022). We use a PyTorch implementation of _Show, Attend and Tell_Xu et al. (2015). This architecture consists of a convolutional neural network (CNN) encoder, which is used to extract feature vectors from images, and a long-short-term memory (LSTM) decoder, which generates a caption conditioned on these vectors, with the use of attention. Following Dognin et al. (2022), we replace the ResNet encoder with a ResNext network Xie et al. (2016).
For the decoder, an LSTM network is used. A problem arising from incremental training here is the expansion of the vocabulary. In order to tackle this problem, we rely on the subword vocabulary given by the BERT Devlin et al. (2019) tokenizer provided by Huggingface1. By using a pre-trained subword tokenizer, we account for new words learned incrementally, without the need to expand the model size. The training strategy used is cross-entropy loss.
Footnote 1: We use bert-base-uncased.
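A minimal sketch of this tokenization choice; it only illustrates how a caption is mapped onto the fixed subword vocabulary, so that new words arriving during incremental training do not require growing the decoder output layer. The example caption and output are illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
caption = "A blue coffee mug sitting on a wooden table."
ids = tokenizer.encode(caption, add_special_tokens=True)
print(tokenizer.convert_ids_to_tokens(ids))
# e.g. ['[CLS]', 'a', 'blue', 'coffee', 'mug', 'sitting', 'on', 'a', 'wooden', 'table', '.', '[SEP]']
```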
While current state-of-the-art architectures achieve better scores, we adapt this particular architecture because of its simplicity, and because its inputs are raw images, as opposed to image features like bounding boxes and labels from object recognition models, which further decreases pre-processing time. The pipeline can potentially be adapted to any IC model that takes images as input, rather than image regions and classes.
**Data augmentation methods:** For our experiments, we use DA on Image (img), Text (txt), and both modalities simultaneously (both). For img, we use the Albumentations Buslaev et al. (2020) library. We create a pipeline of different operations, including CLAHE, optical and grid distortion, blur, flip, and rotation. Our goal here is to introduce noise to the input data, in order to help the model generalize better to unseen data. For the
Figure 2: Generated data points generated based on the DA methods described in subsection 3.1. Top: image DA (combination of several DA methods). Bottom: text DA.
txt modality, we aim at generating meaningful captions. For this reason, we employ two paraphrasing models provided by Huggingface, namely pegasus_paraphrase, a PEGASUS (Zhang et al., 2019) model fine-tuned for paraphrasing, and paws_paraphrase, a T5 (Raffel et al., 2020) model trained on the PAWS (Zhang et al., 2019; Yang et al., 2019) dataset. We use two different paraphrasing tools because the quality of their generated samples differs, and the paraphrasing quality of each tool drops as the number of requested paraphrases increases; combining both tools therefore introduces more variety without compromising quality. In the case of combined (both) DA, img-augmented images are combined with synthetically generated captions. In every case, we generate batches that are 10 times larger than the initial ones. Examples of generated data points can be found in Figure 2.
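The following sketch illustrates the two augmentation branches under stated assumptions: the Albumentations pipeline simply chains the operations listed above with illustrative probabilities, and the paraphraser checkpoint name is only a placeholder for the fine-tuned PEGASUS model used in the paper.

```python
import albumentations as A
from transformers import pipeline

# image branch: the operations named above, with illustrative probabilities
image_augment = A.Compose([
    A.CLAHE(p=0.5),
    A.OpticalDistortion(p=0.3),
    A.GridDistortion(p=0.3),
    A.Blur(p=0.3),
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=20, p=0.5),
])

# text branch: placeholder checkpoint name for the fine-tuned PEGASUS paraphraser
paraphraser = pipeline("text2text-generation", model="tuner007/pegasus_paraphrase")


def augment_instance(image, caption, n_copies=10):
    """Turn one annotated feedback item into n_copies augmented (image, caption) pairs."""
    images = [image_augment(image=image)["image"] for _ in range(n_copies)]
    outputs = paraphraser(caption, num_beams=n_copies,
                          num_return_sequences=n_copies, max_length=60)
    captions = [o["generated_text"] for o in outputs]
    return list(zip(images, captions))
```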
Episodic memory for lifelong learningIn order to help the model retain old knowledge when being adapted to new data, we implement a continual learning method, more specifically a sparse memory replay that operates during training. We adapt the method described by de Masson d'Autume et al. (2019): During training, some samples/experiences are written into the memory. Every training sample has a certain probability to be selected for memory writing. These experiences are then sparsely replayed (i.e. 1 sample from memory for every 200 new data points, see subsection 3.2) while the model is trained on new data. This way, the model retains information from previous training iterations with very low additional computational effort.
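A minimal sketch of such a memory module, using the write probability and replay frequency reported in subsection 3.2; the class and method names are illustrative.

```python
import random


class EpisodicMemory:
    def __init__(self, write_prob=0.2, replay_every=200):
        self.buffer = []                 # stored training batches from earlier clusters
        self.write_prob = write_prob
        self.replay_every = replay_every
        self.steps = 0

    def observe(self, batch):
        """Maybe store the current batch; return a stored batch to replay, if one is due."""
        self.steps += 1
        if random.random() < self.write_prob:
            self.buffer.append(batch)
        if self.buffer and self.steps % self.replay_every == 0:
            return random.choice(self.buffer)
        return None
```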
### Procedure and training details
We follow the pre-training/fine-tuning paradigm, where we first train the model on a _supervised pre-training_ task using a large, generic dataset, namely MS COCO (Lin et al., 2014) (details below). During (supervised) pre-training, we do not use any DA or continual learning method. After obtaining the best model, we continue with our incremental _model adaptation_, during which we apply DA and continual learning.
**Training details:** For the supervised pre-training step, we train our model on MS COCO in two stages: during the first training, we freeze the encoder and only train the decoder. The encoder is then trained in the second stage. For the adaptation step, we train our models on each task once.
We train with a batch size of 32 and a learning rate of 4e-4 for the decoder. For our memory module, the replay frequency is 200, as mentioned in subsection 3.1; that means that for every 200 batches, one batch is drawn from the memory and added to the current training batch. The memory writing probability is 20%.
We use early stopping. In our initial experiments, we trained with higher (p=10) and lower (p=2) patience values for early stopping and found that the lower patience produced better results, hence we adopt this value for our adaptation training. During supervised pre-training, we used a default patience value of 20.
### Datasets
**Supervised pre-training step:** We first train our model on the MS COCO dataset (Lin et al., 2014). It contains 328k images, and it is broadly used as a pre-training dataset for vision tasks, including object recognition, object segmentation, and IC. We use the 2014 release, which contains 82,783 training and 40,504 validation images. Each image is annotated with five captions, describing the content of each image. We make use of the Karpathy splits (Karpathy and Fei-Fei, 2017).
**Adaptation:** After obtaining the best possible captioning model trained on MS COCO, we train our model incrementally using VizWiz (Gurari et al., 2020; Simons et al., 2020), a dataset consisting of images taken by visually impaired people. Since there are no test captions available, we use the validation set as our test set. A part of the training samples is used as our validation set.
\begin{table}
\begin{tabular}{c|r r r|r|r} \hline \hline & train & val & test & all & WT \\ \hline
1 & 3,332 & 954 & 2,476 & 6,762 & 10,047 \\
2 & 1,535 & 302 & 488 & 2,325 & 4,988 \\
3 & 5,668 & 1,402 & 2,199 & 9,269 & 13,497 \\
4 & 333 & 83 & 113 & 529 & 2,931 \\
5 & 6,160 & 1,516 & 2,474 & 10,150 & 12,407 \\ \hline all & 17,028 & 4,257 & 7,750 & 29,035 & 21,955 \\ \hline \hline \end{tabular}
\end{table}
Table 1: VizWiz cluster (task) statistics after filtering out bad quality images (according to the procedure mentioned in subsection 3.3). WT stands for word types.
Dataset processingWe want to simulate a continual learning setting where we incrementally adapt the IC model to new sets of user-specific data. For this, we split VizWiz into non-overlapping clusters representing user-specific datasets. We follow the procedure for other continual learning datasets, where data is split according to classes/concepts, and each new class/concept represents a new task Del Chiaro et al. (2020). As the VizWiz dataset does not contain object annotations for its images, we resort to splitting the data according to the objects mentioned in the captions, using the procedure described below. The resulting clusters resemble the user-specific data we might expect to receive from different users in a real-world setup: Whereas one user might be more interested in captioning screenshots or images of IT-related concepts, another user might be interested in captioning images of containers of food and drinks, etc. Example NPs for each cluster can be found in Appendix A.
We follow the steps below:
1. We collect all nominal phrases (NPs) in the entire caption corpus. We use TextBlob2 for the extraction of the NPs. Footnote 2: [https://textblob.readthedocs.io/en/dev/](https://textblob.readthedocs.io/en/dev/)
2. From all the NPs, we choose so-called _keywords_, namely phrases that appear at least 15 times in the dataset.
3. Using GloVe Pennington et al. (2014) embeddings, we extract word embeddings for each keyword. In case a keyword is phrasal, we average between individual word embeddings.
4. We cluster the keyword embeddings in 5 clusters, using K-means clustering Hartigan and Wong (1979).
5. We iterate over all captions for each image, looking for relevant keywords, and assigning them to clusters. In case one image corresponds to more than one cluster according to its keywords, we favor the smaller cluster.
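A condensed sketch of this clustering procedure; the GloVe source (loaded here through gensim's downloader), the function names, and the handling of missing embeddings are assumptions, and the corpora required by TextBlob are expected to be installed.

```python
from collections import Counter

import gensim.downloader as api
import numpy as np
from sklearn.cluster import KMeans
from textblob import TextBlob

glove = api.load("glove-wiki-gigaword-300")   # assumed GloVe source


def keyword_clusters(all_captions, min_count=15, n_clusters=5):
    counts = Counter(p for c in all_captions for p in TextBlob(c).noun_phrases)
    keywords = [k for k, n in counts.items() if n >= min_count]
    kept, vectors = [], []
    for kw in keywords:
        words = [w for w in kw.split() if w in glove]
        if words:                                  # average embeddings of phrasal keywords
            kept.append(kw)
            vectors.append(np.mean([glove[w] for w in words], axis=0))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.stack(vectors))
    return dict(zip(kept, labels))


def assign_image(image_captions, kw2cluster, cluster_sizes):
    hits = {kw2cluster[p] for c in image_captions
            for p in TextBlob(c).noun_phrases if p in kw2cluster}
    # favor the smaller cluster when an image matches several clusters
    return min(hits, key=lambda c: cluster_sizes[c]) if hits else None
```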
VizWiz contains some images of bad quality: in some cases, the caption reads _'Quality issues are too severe to recognize visual content'_. In order to avoid the generation of these captions during inference, they can be removed from the training set Cayli et al. (2022). In our work, we exclude an image from training, if at least 3 out of the five captions in the image contain this caption; that means that more than 50% of the annotators could not describe the content of the image. If _Quality Issues_ are brought up only once or twice, we remove this caption and duplicate one or two of the other captions, so that, in the end, each image is annotated with five captions. We do not remove _Quality Issues_ images and captions from our test set. We exclude a total of 2,146 images.
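A small sketch of this filtering rule; the substring matching of the placeholder caption and the function name are assumptions.

```python
def filter_captions(captions, n_target=5):
    """Return cleaned captions for one image, or None if the image should be excluded."""
    placeholder = "quality issues are too severe"
    good = [c for c in captions if placeholder not in c.lower()]
    if len(captions) - len(good) >= 3:       # more than half unusable -> drop the image
        return None
    k = 0
    while len(good) < n_target:              # duplicate existing captions to keep five
        good.append(good[k])
        k += 1
    return good
```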
While we technically do not use the complete dataset provided, it is justified by the fact that we test our pipeline in a low-resource scenario. Table 1 includes statistics over our tasks, including word type counts.
## 4 Evaluation & Results
In this section, we present the evaluation metrics we used, our procedure, as well as the results from our core experiments.
### Evaluation metrics & splits
Since IC is a natural language generation task, results are evaluated using standard metrics for evaluating text generation tasks. These metrics measure similarity to the ground truths. The metrics most commonly used are BLEU Papineni et al. (
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{+ cluster 1 [3332]} & \multicolumn{4}{c|}{+ cluster 2 [1535]} & \multicolumn{4}{c|}{+ cluster 3 [5668]} & \multicolumn{4}{c}{+ cluster 4 [333]} & \multicolumn{4}{c}{+ cluster 5 [6160]} \\ DA & no & img & text & both & no & img & text & both & no & img & text & both & no & img & text & both & no & img & text & both \\ \hline
1 & 18.8 & 6.4 & 15.8 & 15.3 & 12.4 & 2.2 & 11.3 & 4.4 & 15.9 & 2.4 & 13.0 & 7.3 & 12.7 & 1.9 & 9.8 & 3.9 & 11.8 & 2.8 & 9.7 & 7.1 \\
2 & & & & & & 26.0 & 6.9 & 19.8 & 16.4 & 25.0 & 5.5 & 18.7 & 11.3 & 18.7 & 4.6 & 13.0 & 7.2 & 22.6 & 3.5 & 14.9 & 13.8 \\
3 & & & & & & & & 27.7 & 4.2 & 24.5 & 16.3 & 21.1 & 2.3 & 16.4 & 4.9 & 22.4 & 2.9 & 16.9 & 11.9 \\
4 & & & & & & & & & & & & 26.7 & 4.6 & 20.5 & 13.1 & 20.4 & 3.4 & 15.4 & 10.6 \\
5 & & & & & & & & & & & & & & & & 25.9 & 3.7 & 19.2 & 15.3 \\ \hline all & 18.8 & 6.4 & 15.8 & 15.3 & 16.4 & 3.4 & 14.6 & 7.4 & 23.6 & 3.6 & 19.9 & 12.2 & 18.4 & 2.4 & 14.2 & 5.0 & 21.2 & 3.3 & 16.2 & 12.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: CIDEr results on our experiments on VizWiz data clustered according to the procedure described in subsection 3.3. We start with the model resulting from the supervised pre-training step on MS COCO and continue to train this model incrementally on the VizWiz clusters (+cluster...). We include the amount of (original) training data in brackets. DA: Data augmentation, no: no DA, img: image DA, text: text DA, both: image and text DA. The numbers in the left column stand for clusters evaluated on. ’all’ refers to the micro avg.
2002), ROUGE Lin (2004), METEOR Banerjee and Lavie (2005), CIDEr Vedantam et al. (2015), and SPICE Anderson et al. (2016). For our hyperparameter tuning on the validation set, we use the BLEU metric. We report CIDEr scores in the main paper for brevity, scores for the other evaluation metrics can be found in Appendix B. We use the Pycocoevalcap3 library for evaluation. In order to evaluate the continual learning abilities of our IC model, we report scores per cluster, as well as micro-averages over the clusters trained so far.
Footnote 3: [https://github.com/salaniz/pycocoevalcap.git](https://github.com/salaniz/pycocoevalcap.git)
### Results
We present our results in Table 2. The use of our DA methods does not improve the results. Especially when img DA is involved, performance drops dramatically compared to the no DA baseline. This leads us to the conclusion that the DA operations we applied to the images were not suitable. Unexpectedly, we observe that txt DA does not improve results compared to the no DA baseline, which is in contrast to findings of previous work showing that caption augmentation is beneficial for low-resource IC Atliha and Sesok (2020). We analyze this in more detail in section 5.
## 5 Analysis
In this section, we take a closer look into the quality of the captions generated by our models. We focus on the no and txt models since they produce better results. We also conduct two ablation studies: one considers training without the use of the memory module, and the other one tests our method in a low-resource scenario.
### Caption quality
In order to gain a better insight into our results, in particular the observation that txt DA worsens results compared to the no DA baseline, we compare the generated captions based on their average length and the number of unique word types contained in the captions. One aspect that stands out immediately when comparing captions generated with txt DA vs no DA is variation. While we find that no captions and txt captions share a similar number of unique word types, their average length is different, with txt captions being more than 2 words shorter than no captions.
We include some examples of generated captions in Figure 3. While we see that the captions generated are not necessarily erroneous, captions generated with the models trained with txt DA are less informative than the gold captions and captions generated without DA. Automated evaluation metrics often penalize changes in the length of the output. Captions generated by the txt DA model tend to be more similar to the paraphrases generated by the PEGASUS paraphrasing model (which was used to generate data for the training of the txt DA model), which are shorter and less informative. Hence, this paraphrasing tool is not suitable for this
Figure 3: Generated captions without DA and with txt DA, compared with one of the gold captions.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & no & img & txt & both \\ \hline no. of types & 1,383 & 2,418 & 1,397 & 1,053 \\ \(\emptyset\) (median) & 10.0 & 10.0 & 8.0 & 10.0 \\ \(\emptyset\) (mean) & 10.229 & 10.464 & 7.949 & 9.894 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics over captions generated with our models. \(\emptyset\) : average caption length.
particular task. In the future, we plan to compare more paraphrasing tools for DA on IC tasks.
To confirm our qualitative observations in a quantitative manner, we carried out a small manual analysis. We randomly sampled 100 captions generated with the txt models and compared them to the gold captions. Our criterion was informativeness: we ranked each generated caption as _non-informative_, _partially informative_, or _very informative_. We find that 46 of them are very or partially informative, while for some of the rest, the lack of informativeness comes from the fact that the image quality is low (since seven of them contain severe quality issues).
### Ablation study: Training without episodic memory replay
In order to investigate the effect of the sparse episodic memory replay on the continual learning abilities of the model, we train models in the same settings as in our core experiments, except for the use of sparse episodic memory replay. Results for these experiments are shown in Table 4. We observe that, in general, there is an improvement in performance in almost all cases, both in models trained with no DA and in models trained with txt DA. The only exception is the model after training on cluster 4, a cluster that is significantly smaller than the others (approx. 1/3 the size of the next smallest cluster). This shows that, while the episodic memory module positively influences performance when at least 1000 samples are present, it is not as effective with very few samples.
### Ablation study: Training with parts of the dataset
In an interactive setup, we cannot assume large amounts of annotated data provided by the user, hence we evaluate our models after training on only 10%, 20%, and 50% of the data of each clus
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{+ cluster 1} & \multicolumn{4}{c}{+ cluster 2} & \multicolumn{4}{c}{+ cluster 3} & \multicolumn{4}{c}{+ cluster 4} & \multicolumn{4}{c}{+ cluster 5} \\ \hline DA & no & txt & no & txt & no & txt & no & txt & no & txt & no & txt & no & txt \\ \hline MEM & + & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\ \hline
1 & 27.1 & 27.1 & 20.9 & 20.8 & 16.5 & 15.6 & 14.3 & 9.7 & 22.8 & 22.9 & 17.7 & 16.3 & 19.3 & 20.8 & 13.1 & 15.2 & 17.6 & 19.0 & 13.3 & 13.5 \\
2 & & & & & & & 26.0 & 27.0 & 22.2 & 20.1 & 25.2 & 24.9 & 18.7 & 17.2 & 19.3 & 17.2 & 16.0 & 15.3 & 23.3 & 23.0 & 18.3 & 15.2 \\
3 & & & & & & & & & 32.4 & 31.3 & 28.1 & 24.1 & 24.2 & 23.8 & 17.9 & 18.4 & 25.1 & 24.2 & 18.1 & 19.1 \\
4 & & & & & & & & & & & & 25.3 & 23.7 & 17.5 & 20.9 & 18.5 & 18.9 & 13.5 & 12.2 \\
5 & & & & & & & & & & & & 27.1 & 25.6 & 19.9 & 19.0 \\ \hline all & 27.1 & 27.1 & 20.9 & 20.8 & **21.0** & 20.4 & **18.5** & 14.3 & **29.7** & 29.1 & **24.8** & 22.0 & 23.4 & **23.5** & 17.2 & **18.1** & **24.9** & 24.3 & **18.6** & 18.4 \\ \hline \end{tabular}
\end{table}
Table 4: CIDEr results on the validation set for no and txt augmentation with (+) and without (-) episodic memory replay. We mark in **bold** the cases in which episodic memory contributes to an improvement, and in **red** the cases in which it does not.
Figure 4: CIDEr results on the validation set for each task training with 10%, 20%, 50%, and 100% of the data.
ter. Training data points for each cluster are chosen randomly; for this reason, we present average scores over 3 trainings with the same settings. Our training takes place without memory since, in most cases, the amount of data is too small for the memory to be activated. The results for models trained on reduced amounts of data for each cluster are shown in Figure 4.
It seems that txt DA does not improve results even in a low-resource scenario: the curves for no and txt DA are similar for the larger clusters (1, 3, 5). For task 2, a slight improvement in performance can be observed when training with 50% of the data. This, in turn, leads to an additional observation, namely that almost all our no DA models deteriorate when trained with half of the data of each cluster. This might be attributed to the data distribution of the clusters with which we trained.
## 6 Conclusion
We have presented a pipeline for interactive IC, which combines simple methods for incremental model improvement. This framework allows incremental adaptation of a pre-trained IC model to new data that is entered by users. The user input is transformed into a larger data batch using various data augmentation methods (for image, text, and both modalities). We additionally adapted a continual learning method for IC, which prevents catastrophic forgetting after repeated updates. In order to simulate incremental user input, we split the relatively small, domain-specific VizWiz dataset into non-overlapping clusters based on nouns mentioned in the image captions. VizWiz is a good test bed for our pipeline, as it contains real-world images with varying quality.
We analyzed the effectiveness of DA in our experiments, and we noticed a lower performance of our models when trained with augmented data. The drop in performance resulting from the application of DA methods was evident in our low-resource experiments as well. We concluded that, especially for IC, img DA must be applied carefully. The same applies to txt DA: since brevity is penalized in this task, the DA outputs should be of similar length and descriptiveness as the gold captions. We confirmed that sparse memory replay does enable the models to retain knowledge learned from previous datasets while adapting to new data.
In the future, we plan to experiment with more elaborate joint DA methods for IC. Apart from evaluating the approach with respect to model performance using automated performance metrics, we intend to evaluate its usefulness and usability for end-users in a human study. Since prompting using large models is a popular paradigm recently, we intend to experiment with models like CLIP (Radford et al., 2021) as well, additionally assessing the trade-off between initial training cost and adaptation cost. Last but not least, applying active learning methods to select the best sample(s) for the episodic memory module can potentially increase the effectiveness of the continual learning method used in our pipeline.
## Limitations
Despite the promising results of our IML pipeline for image captioning, our work has some limitations. Firstly, the experiments were conducted on a domain-specific dataset, VizWiz, and may not generalize to other datasets or domains. Secondly, our approach may not be suitable for scenarios where user feedback is sparse or unreliable, as the effectiveness of IML heavily depends on the quality and quantity of the feedback. Thirdly, our use of episodic memory to retain knowledge from previously seen clusters may not scale well to smaller datasets and other methods may be required. Lastly, our approach does not address the challenge of bias in the data, which can lead to biased models.
## Ethical Statement
As of now, we do not see ethical concerns with the study presented in this paper. We used a dataset that is publicly available. The study is currently not applied to human subjects with personal data; in this case, the use of user feedback in the training process could potentially introduce biases if the feedback is not diverse or representative of the population. Lastly, our approach may be used to develop image captioning models that generate harmful or inappropriate content, such as captions that perpetuate harmful stereotypes or stigmatize certain groups of people.
## Acknowledgments
We thank the reviewers for their insightful comments and suggestions. The research was funded by the XAINES project (BMBF, 01IW20005) and by the No-IDLE project (BMBF, 01IW23002). |
2308.15816 | Improving Underwater Visual Tracking With a Large Scale Dataset and
Image Enhancement | This paper presents a new dataset and general tracker enhancement method for
Underwater Visual Object Tracking (UVOT). Despite its significance, underwater
tracking has remained unexplored due to data inaccessibility. It poses distinct
challenges; the underwater environment exhibits non-uniform lighting
conditions, low visibility, lack of sharpness, low contrast, camouflage, and
reflections from suspended particles. Performance of traditional tracking
methods designed primarily for terrestrial or open-air scenarios drops in such
conditions. We address the problem by proposing a novel underwater image
enhancement algorithm designed specifically to boost tracking quality. The
method has resulted in a significant performance improvement, of up to 5.0%
AUC, of state-of-the-art (SOTA) visual trackers. To develop robust and accurate
UVOT methods, large-scale datasets are required. To this end, we introduce a
large-scale UVOT benchmark dataset consisting of 400 video segments and 275,000
manually annotated frames enabling underwater training and evaluation of deep
trackers. The videos are labelled with several underwater-specific tracking
attributes including watercolor variation, target distractors, camouflage,
target relative size, and low visibility conditions. The UVOT400 dataset,
tracking results, and the code are publicly available on:
https://github.com/BasitAlawode/UWVOT400. | Basit Alawode, Fayaz Ali Dharejo, Mehnaz Ummar, Yuhang Guo, Arif Mahmood, Naoufel Werghi, Fahad Shahbaz Khan, Jiri Matas, Sajid Javed | 2023-08-30T07:41:26Z | http://arxiv.org/abs/2308.15816v2 | # Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement
###### Abstract
This paper presents a new dataset and general tracker enhancement method for Underwater Visual Object Tracking (UVOT). Despite its significance, underwater tracking has remained unexplored due to data inaccessibility. It poses distinct challenges; the underwater environment exhibits non-uniform lighting conditions, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles. Performance of traditional tracking methods designed primarily for terrestrial or open-air scenarios drops in such conditions. We address the problem by proposing a novel underwater image enhancement algorithm designed specifically to boost tracking quality. The method has resulted in a significant performance improvement, of up to 5.0% AUC, of state-of-the-art (SOTA) visual trackers. To develop robust and accurate UVOT methods, large-scale datasets are required. To this end, we introduce a large-scale UVOT benchmark dataset consisting of 400 video segments and 275,000 manually annotated frames enabling underwater training and evaluation of deep trackers. The videos are labelled with several underwater-specific tracking attributes including watercolor variation, target distractors, camouflage, target relative size, and low visibility conditions. The UVOT400 dataset, tracking results, and the code are publicly available on: [https://github.com/BasitAlawode/UMVOT400](https://github.com/BasitAlawode/UMVOT400).
Visual Object Tracking, Underwater Tracking, Dataset, Underwater Image Enhancement, Vision Transformer.
## 1 Introduction
Visual Object Tracking (VOT) is one of the fundamental and long-standing problems in computer vision [38]. The main aim of VOT is to estimate the location of the generic moving target object in a video sequence, given its position in the first frame [38]. It is quite challenging to learn a class-agnostic, robust-to-noise, and distractor-aware target appearance model in the presence of occlusion, lighting variations, target rotation, scale variations, and fast motion etc. [62, 22, 38].
VOT has numerous applications such as video surveillance, autonomous driving, robotics manipulation, and navigation, sports video analysis, and human activity recognition [74, 72, 41, 70]. In the past decade, the tracking community has considerably progressed and many end-to-end deep learning based trackers have been proposed [6, 8, 9, 15, 16, 18, 19, 83]. One of the main reasons behind this progress is the availability of a variety of large-scale open-air tracking datasets such as GOT-10K [35], LaSOT [22], and TrackingNet [62] which are used to train and evaluate new trackers.
Underwater Visual Object Tracking (UVOT) has largely remained an unexplored research area in the computer vision community despite its numerous applications such as search and rescue operations [58], homeland and maritime security [28], ocean exploration [10], sea life monitoring [1], fisheries management and control [69, 7], underwater waste cleaning [17], and underwater robotics intervention [66].
Performance of all tested SOTA visual trackers significantly degrades when employed in UnderWater (UW) scenes compared to their performance on open-air datasets (Fig. 1). This is likely because the underwater environment poses a unique set of challenges that deteriorate tracking performance. These challenges include low light and poor visibility conditions, light scattering by particles, absorption of light, non-uniform illumination conditions, target blurring, watercolor variations (such as bluish or greenish tints), flickering of caustic patterns [55, 28], a large number of similar distractors, and quick target visibility decay with increasing distance from the camera. In an experiment performed on 25 existing SOTA trackers, we observed significant performance degradation when open-air trained trackers are employed for underwater scenarios (Fig. 1). Almost all trackers have shown a significant performance gap between open-air and underwater tracking.
We empirically show that image enhancement methods improve the tracking performance by reducing the adverse effect of the UVOT specific challenges. For this purpose, we employ existing SOTA underwater image enhancement methods and demonstrate improvement in the UVOT performance. All of the SOTA trackers included in this paper have consistently obtained better tracking performance when applied to the enhanced images. In this work, we propose a novel underwater image enhancement algorithm for the purpose of the UVOT performance improvement.
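To make the evaluation protocol concrete, the sketch below shows the generic way an enhancement step can be chained in front of any tracker when scoring a sequence; the `enhance` callable, the `tracker.init`/`tracker.update` interface and the simple mean-IoU score are illustrative placeholders, not the UVOT400 toolkit API.

```python
import numpy as np

def iou(boxA, boxB):
    # Boxes are (x, y, w, h); returns intersection-over-union in [0, 1].
    xa, ya = max(boxA[0], boxB[0]), max(boxA[1], boxB[1])
    xb = min(boxA[0] + boxA[2], boxB[0] + boxB[2])
    yb = min(boxA[1] + boxA[3], boxB[1] + boxB[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    union = boxA[2] * boxA[3] + boxB[2] * boxB[3] - inter
    return inter / union if union > 0 else 0.0

def run_sequence(tracker, frames, gt_boxes, enhance=None):
    # Initialise on the first frame, track the rest, optionally enhancing each frame first.
    first = enhance(frames[0]) if enhance else frames[0]
    tracker.init(first, gt_boxes[0])
    overlaps = []
    for frame, gt in zip(frames[1:], gt_boxes[1:]):
        frame = enhance(frame) if enhance else frame
        pred = tracker.update(frame)
        overlaps.append(iou(pred, gt))
    return np.mean(overlaps)  # per-sequence overlap, a proxy for the AUC success curves
```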
The literature dominantly deals with open-air VOT. UVOT has been sparsely investigated, which is likely attributable to the lack of large-scale UVOT datasets and benchmarks, and to the difficulty of obtaining UW imagery. We discuss the challenges posed by the UW environment for VOT, present a new UVOT dataset, and then benchmark 25 existing SOTA trackers.
**The UVOT400 Dataset:** For real-world applications of UVOT, |
2310.11184 | Sparse Multi-Object Render-and-Compare | Reconstructing 3D shape and pose of static objects from a single image is an
essential task for various industries, including robotics, augmented reality,
and digital content creation. This can be done by directly predicting 3D shape
in various representations or by retrieving CAD models from a database and
predicting their alignments. Directly predicting 3D shapes often produces
unrealistic, overly smoothed or tessellated shapes. Retrieving CAD models
ensures realistic shapes but requires robust and accurate alignment. Learning
to directly predict CAD model poses from image features is challenging and
inaccurate. Works, such as ROCA, compute poses from predicted normalised object
coordinates which can be more accurate but are susceptible to systematic
failure. SPARC demonstrates that following a ''render-and-compare'' approach
where a network iteratively improves upon its own predictions achieves accurate
alignments. Nevertheless, it performs individual CAD alignment for every object
detected in an image. This approach is slow when applied to many objects as the
time complexity increases linearly with the number of objects and can not learn
inter-object relations. Introducing a new network architecture Multi-SPARC we
learn to perform CAD model alignments for multiple detected objects jointly.
Compared to other single-view methods we achieve state-of-the-art performance
on the challenging real-world dataset ScanNet. By improving the instance
alignment accuracy from 31.8% to 40.3% we perform similar to state-of-the-art
multi-view methods. | Florian Langer, Ignas Budvytis, Roberto Cipolla | 2023-10-17T12:01:32Z | http://arxiv.org/abs/2310.11184v1 | # Sparse Multi-Object Render-and-Compare
###### Abstract
Reconstructing 3D shape and pose of static objects from a single image is an essential task for various industries, including robotics, augmented reality, and digital content creation. This can be done by directly predicting 3D shape in various representations [C, C] or by retrieving CAD models from a database and predicting their alignments [C, C], C], C]. Directly predicting 3D shapes often produces unrealistic, overly smoothed or tessellated shapes [C, C], C]. Retrieving CAD models ensures realistic shapes but requires robust and accurate alignment. Learning to directly predict CAD model poses from image features is challenging and inaccurate [C], C]. Works, such as ROCA [C], compute poses from predicted normalised object coordinates which can be more accurate but are susceptible to systematic failure. SPARC demonstrates that following a "render-and-compare" approach where a network iteratively improves upon its own predictions achieves accurate alignments. Nevertheless, it performs individual CAD alignment for every object detected in an image. This approach is slow when applied to many objects as the time complexity increases linearly with the number of objects and can not learn inter-object relations. Introducing a new network architecture Multi-SPARC we learn to perform CAD model alignments for multiple detected objects jointly. Compared to other single-view methods we achieve state-of-the-art performance on the challenging real-world dataset ScanNet [D]. By improving the instance alignment accuracy from 31.8% to 40.3% we perform similar to state-of-the-art multi-view methods [C].
Figure 1: Recent work [] demonstrates that an iterative, render-and-compare approach is more accurate and robust than relying on normalised object coordinates. However, [] produces more accurate alignments, but predicts CAD model poses individually, which is slow and leads to worse predictions. In our method CAD model alignments are predicted jointly, which is faster and more accurate.
## 2 Related Work
Aligning CAD models to images is a form of 3D reconstruction. While there exist a large number of works that perform 3D reconstruction by directly predicting shapes in various representations [],
CAD model alignment works can be split along two meaningful axes: whether they are single-shot predictions or perform iterative render-and-compare, and whether they predict object poses individually or for multiple CAD models jointly.
**Single-shot alignments vs. iterative procedures**. Mask2CAD [] and Patch2CAD [] directly predict CAD model poses by simply regressing the 6-DoF pose with a convolutional network. While this approach is very simple and fast, it is not very accurate and performs poorly for unseen objects. Other works [] demonstrate more accurate alignments by establishing sparse 2D-3D correspondences between RGB images and rendered CAD models and use these constraints to find the pose that maximizes the silhouette overlap with an instance segmentation prediction. ROCA [] demonstrates a more robust method by leveraging predicted depth to lift dense 2D-3D correspondences into 3D and directly optimizing for the pose that minimizes the 3D correspondence error. In contrast to these works stand approaches that iteratively update a CAD model pose. These works include [] and [], which learn a comparison function between the original image and the rendered CAD model. Both works maximise the learned similarity function at test time using gradient descent, requiring 250 to 1000 update steps with run-times of 4 minutes and 36 seconds respectively. SPARC [] demonstrates that render-and-compare can be harnessed more efficiently by directly learning to predict pose updates, which proves to be a lot faster (2 seconds) and more robust to poor initialization. Our method works similarly to SPARC, but we demonstrate how to apply render-and-compare to multiple objects simultaneously.
**Single-object vs. multi-object.** Existing alignment methods [] all predict alignments for every CAD model individually. While single-shot methods are still fast, as they use the same encoder for making predictions for multiple CAD models, render-and-compare methods need to perform render-and-compare separately for every object, which is slow at test time as the run-time increases linearly with the number of objects in the scene. This can be very slow for scenes with many objects. Independent of the speed, all of these methods fail to model inter-object relations, which are valuable when attempting to predict accurate CAD alignments.
Methods like [] explicitly model inter-object relations, demonstrating that these can contain valuable information for the alignment. They model object relations with a graph where nodes represent objects and edges represent their relations with each other. In comparison, we allow our network to learn object relations while imposing less structure: a dense latent space in which the information associated with each object can attend, through attention, both to information regarding its own alignment and to the alignments of the other objects.
## 3 Method
In this section we describe the three key steps of our method: (i) 2D object detection, instance segmentation as well as surface normal and depth estimation (Sec. 3.1), (ii) sparse input generation (Sec. 3.2) and (iii) pose update predictions (Sec. 3.3) where we iteratively repeat steps (ii) and (iii). Sec. 3.4 explains the synthetic pre-training we used.
### Object Detection, Instance Segmentation, Normal and Depth Prediction
As a first step we perform 2D object detections by predicting a set of bounding boxes (BB) and object classes (see Fig. 2) using Mask-RCNN []. We use the same bounding boxes, object classes and CAD model retrievals as ROCA [], although any other method could
be employed as well. Additionally, we use instance segmentation predictions (S) from [] prompted with the detected bounding boxes. For estimating surface normals (N) and depth values (D) we follow the same training procedure as []. We employ a lightweight convolutional encoder-decoder architecture from []. The training losses are consistent with state-of-the-art works for surface normal estimation [] and for depth estimation []. We use ground truth surface normals provided by [] and ground truth depth from ScanNet [] (for more details see the Supp. Mat.). When training the surface normal and depth estimation network, we respect the train and test split used in our evaluation.
### Generating Sparse Inputs
Rather than processing full images we sample sparse image information as vectors through different image channels []. We sample the location of those vectors from two regions, inside the detected bounding boxes (blue points in Fig. 2) and from pixels onto which 3D CAD model points were reprojected (red points). The different input channels include their color values (RGB), surface normal (N) and depth estimates (D) as well as their instance segmentation mask value (S). We append to those vectors the corresponding pixel bearing \((P_{x},P_{y},P_{z})\) (to provide information on the location of the sampled values), a token \(\tau\) corresponding to the type of input (\(\tau=0\) for bounding box, \(\tau=1\) for reprojected points) and the ID of the detection. For a single detection all vectors are stacked to make up the light blue block of shape \((N_{bbox}+N_{CAD},C_{input})\) in Fig. 2. We encode the 3D CAD model information of shape \((N_{CAD},C_{input})\) (dark blue block) in a similar way by sampling 3D points and corresponding surface normals from the CAD model in the current pose \(\mathbf{R},\mathbf{T},\mathbf{S}\). When reprojecting those points into the image plane we can compute the locations of the corresponding pixel bearings and the values of their surface normal and depth. Values for the color channels (RGB) and instance segmentation (S) are filled with zeros. For the region channel we add \(\tau=2\) and also
include the detection ID. Together, both blocks of information make up all the information for a given detection which is encoded separately into the latent space. This information is sampled for all detections up to a maximum number of N\({}_{mul}\) detections. If there are fewer detections than N\({}_{mul}\) inputs are padded with zeros. If there are more detections, they are split up into multiple forward passes.

Figure 2: **Method**: Left side: For all 2D detections we sample the RGB values (RGB), surface normals (N), depth values (D) and instance segmentation mask values (S) from inside the detected bounding boxes and for pixel bearing \((P_{x},P_{y},P_{z})\) onto which a 3D CAD point is reprojected. CAD model information is encoded by reprojecting 3D points and surface normals into the image plane. Right side: Using Multi-SPARC-Net we encode information for each alignment separately into a latent space using cross-attention. Repeating blocks of separate cross-attention followed by self-attention layers three times we decode from each part of the latent space separately to predict pose updates \(\Delta\mathbf{R}\), \(\Delta\mathbf{T}\) and \(\Delta\mathbf{S}\) as well as a classification score \(\sigma\). Pose updates are used to iteratively refine CAD model poses and \(\sigma\) is used for choosing the best alignment from different rotation initialisations (see Fig. 4a).
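A minimal sketch of this input construction is given below (illustrative, not the authors' code); it assumes dense `rgb`, `normals`, `depth` and `seg` maps plus pre-sampled pixel coordinates, and it omits the third block of reprojected CAD geometry for brevity. The thirteen channels per vector match the \(C_{input}=13\) used later.

```python
import numpy as np

def sample_detection_inputs(rgb, normals, depth, seg, bbox_pix, reproj_pix, K_inv, det_id):
    # Build sparse per-pixel input vectors for one detection.
    def rows(pix, token):
        pix = np.asarray(pix, dtype=float)
        ones = np.ones((len(pix), 1))
        bearings = (K_inv @ np.concatenate([pix, ones], axis=1).T).T  # (P_x, P_y, P_z)
        y, x = pix[:, 1].astype(int), pix[:, 0].astype(int)
        return np.concatenate([
            rgb[y, x],                        # (N, 3) colour values
            normals[y, x],                    # (N, 3) surface normal estimates
            depth[y, x][:, None],             # (N, 1) depth estimates
            seg[y, x][:, None],               # (N, 1) instance mask values
            bearings,                         # (N, 3) pixel bearings
            np.full((len(pix), 1), token),    # region token tau
            np.full((len(pix), 1), det_id),   # detection ID
        ], axis=1)                            # 3+3+1+1+3+1+1 = 13 channels

    # tau = 0 for bounding-box samples, tau = 1 for pixels hit by reprojected CAD points;
    # the tau = 2 block of CAD geometry (with zeroed RGB/S channels) is omitted here.
    return np.concatenate([rows(bbox_pix, 0.0), rows(reproj_pix, 1.0)], axis=0)
```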
### Pose Update Predictions
This subsection provides details on the network architecture, pose parameterisation, loss function and iterative refinement procedure.
**Network Architecture.** Our network architecture is built on a Perceiver network [] with one small difference. Rather, than encoding all input information of the different detections jointly we found it beneficial to encode them separately using a shared cross-attention layer (\([N_{input},C_{input}],[N_{latent},C_{latent}]\rightarrow[N_{latent},C_{ latent}]\)) (see Fig. 2 right side). We concatenate all encodings and apply two layers of self-attention (\([N_{mul}\cdot N_{latent},C_{latent}]\rightarrow[N_{mul}\cdot N_{latent},C_{ latent}]\)) which allows for processing information relevant to the alignment and for sharing information between the different alignments. This block of per-object cross-attention followed by two layers of self attention is repeated three times. At the decoding stage we again decode from the relevant portion of the latent space for each detection separately. For this we reduce the \([N_{\text{Latent}},C_{\text{Latent}}]\) latent space for each object to an \([C_{\text{Latent}}]\) embedding by taking the mean over the first dimension. We map this to the desired number of output parameters \(N_{\text{out}}=11\) using an MLP. The same MLP is applied to the different portions of the latent space to produce pose updates for every detection.
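The following PyTorch sketch (a schematic re-implementation under stated simplifications, not the released code) illustrates the separate-encode, joint-self-attention, separate-decode pattern; it uses a single self-attention layer per block where the paper uses two, and omits residual connections and layer norms.

```python
import torch
import torch.nn as nn

class MultiObjectPerceiverSketch(nn.Module):
    # Encode each detection separately with a shared cross-attention, let information
    # flow between detections with joint self-attention, then decode per object.
    def __init__(self, c_in=13, n_latent=80, c_lat=256, n_out=11, n_heads=8, n_blocks=3):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(n_latent, c_lat))
        self.in_proj = nn.Linear(c_in, c_lat)
        self.cross = nn.ModuleList(nn.MultiheadAttention(c_lat, n_heads, batch_first=True)
                                   for _ in range(n_blocks))
        self.selfs = nn.ModuleList(nn.MultiheadAttention(c_lat, n_heads, batch_first=True)
                                   for _ in range(n_blocks))
        self.head = nn.Sequential(nn.Linear(c_lat, c_lat), nn.GELU(), nn.Linear(c_lat, n_out))

    def forward(self, x):                                   # x: (B, N_mul, N_input, C_in)
        B, M, N, _ = x.shape
        z = self.latent.unsqueeze(0).expand(B * M, -1, -1)  # one latent block per detection
        inp = self.in_proj(x).reshape(B * M, N, -1)
        for cross, self_attn in zip(self.cross, self.selfs):
            z, _ = cross(z, inp, inp)                       # per-object encoding
            joint = z.reshape(B, M * z.shape[1], -1)
            joint, _ = self_attn(joint, joint, joint)       # mixing across objects
            z = joint.reshape(B * M, -1, joint.shape[-1])
        per_obj = z.mean(dim=1).reshape(B, M, -1)           # reduce latents per object
        return self.head(per_obj)                           # (B, N_mul, 11): pose updates + score
```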
**Pose Parameterisation.** The outputs are the updates to the current pose \((\mathbf{T},\mathbf{R},\mathbf{S})\). They consist of a translation update \(\Delta\mathbf{T}\), a rotation update \(\Delta\mathbf{R}\) and a scale update \(\Delta\mathbf{S}\) as well as a classification score \(\sigma\) indicating whether the starting pose was already an accurate alignment or not. We parameterise \(\mathbf{T}\) with polar coordinates (\(d,\phi,\theta\)) where \(d\) is the distance from the camera center and \(\phi\) and \(\theta\) parameterise a vector on the unit sphere. The updated translation \(\mathbf{T}^{\prime}\) is given by \(\mathbf{T}^{\prime}=(d\cdot\Delta d,\phi+\Delta\phi,\theta+\Delta\theta)\). Rotation is parameterised using quarternions which are transformed to a rotation matrix before making the rotation update \(\mathbf{R}^{\prime}=\mathbf{R}\cdot\Delta\mathbf{R}\). Finally, \(\mathbf{S}\) is parameterised by three axis-aligned scaling parameters and \(\mathbf{S}^{\prime}=\mathbf{S}\cdot\Delta\mathbf{S}\). The updates for scale and the distance parameter \(d\) are multiplicative rather than additive. This is to ensure that the learned updates are decoupled from each other as much as possible. An additive scale update will produce different effects depending on whether the object is close and small or far away and large. In contrast, a multiplicative scale update will produce the same result. We ensure that the predicted updates are positive by applying a sigmoid function to the predicted values. Choosing polar coordinates was again motivated by the intuition that decoupled pose updates are easier to learn than coupled ones. While for euclidean coordinates a given \(X\) prediction will have a very different effect if the object is close and small or far and large, predicting updates for \(\phi\) and \(\theta\) will have the same effect regardless of the distance.
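A small numerical sketch of how such decoupled updates can be applied is shown below; the exact angle conventions for \((\phi,\theta)\) and the quaternion ordering are assumptions made for illustration, not taken from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def apply_pose_update(T, R, S, dT, dquat, dS):
    """Apply one predicted pose update (illustrative).

    T = (d, phi, theta): polar translation (distance plus direction angles)
    R: 3x3 rotation matrix, S: per-axis scales
    dT = (d_mult, d_phi, d_theta): multiplicative distance, additive angle updates
    dquat: rotation update quaternion (x, y, z, w), dS: multiplicative scale update
    """
    d, phi, theta = T
    T_new = (d * dT[0], phi + dT[1], theta + dT[2])     # distance scales, angles shift
    R_new = R @ Rotation.from_quat(dquat).as_matrix()   # right-multiply the rotation update
    S_new = np.asarray(S) * np.asarray(dS)              # decoupled multiplicative scaling
    return T_new, R_new, S_new

def translation_vector(T):
    # Convert the polar parameterisation back to a Euclidean camera-frame translation
    # (spherical-coordinate convention assumed).
    d, phi, theta = T
    return d * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
```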
**Loss function.** Our loss function is comprised of two components, one for learning the CAD model alignments and one for learning the pose classifications. For learning the alignments we introduce a loss function that unifies learning translation, rotation and scale, and does not require any hyper-parameter tuning for weighing the relative strengths of different components. Our loss is simply given by the L1 distance of \(N_{\text{loss}}\) points \(\mathbf{P}\) sampled from the CAD model in the ground truth pose \((\mathbf{T}_{\text{GT}},\mathbf{R}_{\text{GT}},\mathbf{S}_{\text{GT}})\) to the CAD model under the predicted pose \((\mathbf{T}^{\prime},\mathbf{R}^{\prime},\mathbf{S}^{\prime})\), \(L_{\text{align}}=\sum_{i=1}^{N_{\text{loss}}}|F^{\prime}(\mathbf{P}_{i})-F_{GT}(\mathbf{P}_{i})|\), where \(F^{\prime}\) and \(F_{GT}\) denote the affine transformations when applying \(\mathbf{S}^{\prime}\), \(\mathbf{R}^{\prime}\) and \(\mathbf{T}^{\prime}\) or \(\mathbf{S}_{GT}\), \(\mathbf{R}_{GT}\) and \(\mathbf{T}_{GT}\) respectively.
In general, poses are initialised from a large range of translations, rotations and scale to ensure that at test time the network is robust to poor detections. Consistent with previous work [], we find that it is difficult to learn rotation updates over the entire rotation space. We therefore constrain initialisations to be within an azimuthal angle of \(\pm 45^{\circ}\) of \(\mathbf{R}_{\mathrm{GT}}\). At test time we initialise from \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) azimuthal angle and use the predicted pose classification \(\sigma\) to identify the correct prediction. For learning \(\sigma\) we use a binary cross entropy loss. A given pose is labelled correct if its translation, rotation and scale are within 20 cm, \(20^{\circ}\) and 20% respectively, \(L_{\mathrm{classifier}}=L_{\mathrm{BCE}}(\sigma,\sigma_{\mathrm{GT}})\). Therefore the total loss is given by \(L_{\mathrm{total}}=L_{\mathrm{align}}+L_{\mathrm{classifier}}\). In order to balance the training of the pose classifier we sample separate training poses which are different from the ones used for learning the pose updates (see the Supp. Mat.).
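In code, the two loss terms can be sketched as follows (illustrative; the composition order scale-then-rotate-then-translate and the per-point mean instead of a sum are assumptions that only reparameterise or rescale the stated loss):

```python
import torch
import torch.nn.functional as F

def alignment_loss(P, R_pred, T_pred, S_pred, R_gt, T_gt, S_gt):
    # P: (N_loss, 3) points sampled from the CAD model; a pose acts as x -> R @ (S * x) + T.
    pred = (S_pred * P) @ R_pred.T + T_pred
    gt = (S_gt * P) @ R_gt.T + T_gt
    # L1 distance between the two transformed point clouds, averaged over points.
    return (pred - gt).abs().sum(dim=-1).mean()

def total_loss(P, pose_pred, pose_gt, sigma_logit, sigma_gt):
    # Point-based alignment term plus binary cross-entropy pose-classification term.
    l_align = alignment_loss(P, *pose_pred, *pose_gt)
    l_cls = F.binary_cross_entropy_with_logits(sigma_logit, sigma_gt)
    return l_align + l_cls
```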
**Iterative Refinements.** After a given prediction at train time the next initial poses will be the updated poses based on the networks predictions. This ensures that the network learns to predict pose updates for realistic poses that it is likely to encounter at test time. After repeating this 3 times a new batch of images is initialised with objects sampled in random poses. At test time pose updates are predicted for all objects in the image which are initialised from 4 different azimuthal angles rotated \(90^{\circ}\) with respect to each other (Fig. 2 shows just one such initialisation). For each initialisation three pose updates are predicted and in a fourth iteration their classification score \(\sigma\) is determined. For each detection the pose with the highest classification score is returned as the final prediction (see Fig. 3(a)).
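The test-time procedure can be summarised by the following sketch, where `refine_step` is an assumed callable wrapping one network forward pass together with the pose-update arithmetic (the real system batches all detections of an image jointly):

```python
def select_best_alignment(refine_step, image_inputs, init_poses, n_refine=3):
    # refine_step(image_inputs, pose) -> (updated_pose, score)
    candidates = []
    for pose in init_poses:                           # four azimuth initialisations
        for _ in range(n_refine):                     # three pose-update iterations
            pose, _ = refine_step(image_inputs, pose)
        _, score = refine_step(image_inputs, pose)    # final pass used only for its score
        candidates.append((score, pose))
    return max(candidates, key=lambda c: c[0])[1]     # keep the highest-scoring pose
```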
### Synthetic Pre-training
For the synthetic pre-training we sample random objects from 3D-Future [] in random poses and render them on-the-fly with PyTorch3D []. We use CAD models from 3D-Future as opposed to the CAD models from ShapeNet [] used for our main training and evaluation as many ShapeNet models contain holes or are poorly meshed leading to artifacts when rendering surface normals. For more details see the Supp. Mat.
## 4 Experimental Setup
This section provides a concise overview of the dataset employed in training and testing, along with an explanation of the evaluation metrics and the selected hyperparameters.
**ScanNet dataset.** Following the approach of [], we use the ScanNet25k image dataset [] for training and testing, which includes CAD model annotations provided by []. This dataset comprises 20,000 training images from 1,200 training scenes and 5,000 test images from 300 distinct test scenes. Our method is trained and tested on the top 9 categories with the highest number of CAD annotations covering over 2,500 unique shapes.
**Evaluation metrics.** For our main evaluation we follow the original evaluation protocol established by Scan2CAD [] which evaluates CAD model alignments on a per-scene basis. We convert predicted CAD model poses into ScanNet [] world coordinates and, similar to [], apply 3D non-maximum suppression to remove multiple detections of identical objects from different images. For the evaluation, a CAD model prediction is deemed correct if the object class prediction is correct, the translation error is less than 20 cm, the rotation error is less than 20\({}^{\circ}\), and the scale error is below 20%. We report the percentage of correct alignments for each class individually as well as the overall instance alignment accuracy for all predictions.
In addition to the per-scene alignments we evaluate per-image alignments. For this purpose we reproject CAD models in GT poses into the individual camera frames. Note that for each camera frame only GT CAD models whose center is reprojected into the camera view are considered. For every predicted CAD alignment we find the associated GT CAD model by computing the IoU of the 2D bounding boxes and finding that GT CAD model of the same category with maximum IoU. In order to avoid penalising for objects that are not visible due to occlusion we only consider GT objects for which at least 50% of pixel have the rendered depth value within 30 cm of the GT sensor depth value. Similar to the per-scene metric we evaluate the alignment accuracy by computing the percentage of predictions whose errors for rotation, translation and scale are within the same thresholds as above. Additionally we compute AP\({}^{\text{mesh}}\) introduced by []. It is defined as the mean area under the per-category precision-recall curves for F\({}^{\rho}\) at different thresholds. The F\({}^{\rho}\) score is the harmonic mean of the fraction of points sampled from the predicted aligned CAD model that are within \(\rho\) of a point sampled from the GT aligned CAD model and the fraction of points sampled from the GT CAD model within \(\rho\) of a point sampled from the predicted CAD model. We evaluate AP50, which considers a prediction to be correct if F\({}^{\rho}>0.5\), as well as AP mean which takes the average across the ten AP scores AP50, AP55,....,AP95 sampled in regular intervals.
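A sketch of the per-alignment correctness test implied by these thresholds is given below; the precise error definitions (for example how Scan2CAD handles symmetric objects) are assumptions of this illustration.

```python
import numpy as np

def alignment_is_correct(T_pred, R_pred, S_pred, T_gt, R_gt, S_gt,
                         t_thresh=0.20, r_thresh_deg=20.0, s_thresh=0.20):
    # Translation error in metres (threshold 20 cm).
    t_err = np.linalg.norm(np.asarray(T_pred) - np.asarray(T_gt))
    # Rotation error as the angle of the relative rotation (threshold 20 degrees).
    R_rel = np.asarray(R_pred) @ np.asarray(R_gt).T
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    # Scale error as the largest relative per-axis deviation (threshold 20%).
    s_err = np.max(np.abs(np.asarray(S_pred) / np.asarray(S_gt) - 1.0))
    return t_err < t_thresh and r_err < r_thresh_deg and s_err < s_thresh
```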
**Hyperparameters.** For our inputs we sample \(N_{bbox}=2000\) pixels inside the predicted bounding box which is uniformly extended by 10% and use \(N_{CAD}=500\) points from the CAD model. \(N_{input}=(N_{bbox}+2N_{CAD})\) and \(C_{input}=13\). We set the number of latents \(N_{latent}=80\) where each latent has \(C_{latent}=256\) channels. We choose \(N_{mul}=5\) which means that a maximum of 5 CAD models are processed jointly. If an image contains more than 5 detections the detections are split into multiple blocks. We show in the Supp. Mat. that we achieve similar results with larger numbers of \(N_{mul}\). We use batches of 20 images and use the Lamb optimisier [] with learning rate set to 0.001. We sample \(N_{loss}=1000\) points for computing the loss. Our model is pretrained on 10 M rendered images containing between 1 and 4 CAD models in random poses.
Figure 3: **Qualitative comparison.** Particularly for multiple objects close to each other our alignments are more accurate than existing methods (column 1 - 5). Due to the synthetic pre-training, our network can even work from very challenging viewpoints (column 6). Furthermore, our learned 3D classification score allows the network to identify potentially bad alignments (column 7 - 8). Our network struggles to correctly classify display orientations leading to poor performance on that class (column 9).
**Implementation Details.** All code is implemented in PyTorch. Pre-training our main model takes 6 days on a single TitanXp. Finetuning on ScanNet25k for 500 epochs takes 2 days.
## 5 Results
This section explains our qualitative and quantitative results. We first ablate major design choices in the network architecture and training procedure and subsequently compare our method to the state-of-the-art. If not stated otherwise numbers in the following refer to the overall instance alignment accuracy of all objects on ScanNet [].
**Separate Encoding and Decoding.** When performing multi-CAD model alignment with a transformer-based architecture, naively one would simply concatenate all inputs, marking information for different alignments with different tokens, and hoping that the network will learn to regress all pose updates jointly. The first two rows in Tab. 1 show results for the experiments where we perform joint decoding or joint encoding. For the former we reduce all latents \([N_{mul}\cdot N_{latent},C_{latent}]\rightarrow[C_{latent}]\) by taking the mean over the first dimension and then learning an MLP to map to \(N_{mul}\cdot N_{out}\) directly. For the latter we have one large cross attention that maps from all the concatenated inputs to all latents \(([N_{mul}\cdot N_{input},C_{input}],[N_{mul}\cdot N_{latent},C_{latent}] \rightarrow[N_{mul}\cdot N_{latent},C_{latent}])\). Comparing the instance alignment accuracy 27.9% and 31.9% to the alignment accuracy for the multi-object results without pre-training 36.7% we find that both separate encoding and separate decoding are crucial for good alignments, with separate decodings being even more important. The intuition behind this is that it is not easy for the network to learn to associate input information from different CAD models to the correct output values and encoding and decoding separately helps with this.
**Single vs. Multi-object and Pre-training vs. No Pre-training.** Our experiments show that performing CAD model alignments jointly leads to slightly more accurate alignments (36.7% vs. 34.6% without pre-training, 40.3% vs. 38.7% with pre-training). Reasons why learning joint-alignments does not help even more may include noise in the annotation data, making if difficult to learn exact relations, as well as a higher chance of overfitting to entire
scenes as opposed to single alignments. When comparing results with and without synthetic pre-training we find significant improvement of 4%. This indicates that even training on a different set of CAD models synthetically rendered in random poses provides useful training signals that transfer to real images. Inspecting Fig. 4c we find that the pre-trained model achieves both a lower train and test loss leading to a higher instance alignment accuracy on the test set.

[Table 1: ablation of design choices (joint vs. separate encoding and decoding, single- vs. multi-object alignment, synthetic pre-training, and sparser input configurations), reporting per-category and overall instance alignment accuracy on ScanNet together with run-times in ms.]
**Sparser and Faster.** Another advantage of performing alignments for multiple CAD models jointly as opposed to in sequence is that it is a lot faster. The times in Tab. 1 include the time for processing the input data (23 ms, for the main network architecture and inputs in row 4)) as well as a forward pass through the network (31 ms). These steps have to be repeated four times for the refinement procedure (3 refinement + 1 final classification score) from four different initialisations (see Fig. 4a) leading to a total time of \(4\times 4\times(23+31)=864\) ms. By processing very sparse inputs i.e. \(N_{bbox}=200\) and \(N_{CAD}=200\), reducing the number of latents \(N_{latent}=40\) and encoding input information jointly, we can reduce both the time for processing the inputs (16 ms) as well as the forward pass (14 ms) and almost halve the total run-time to 480 ms. If not initialised from four different rotations (as would be realistic for example in a video setting where the rough object rotation is known from previous frames) this approaches the speed of single-shot methods while being considerably more accurate. Interestingly, this network variant is more accurate than the one encoding the full inputs jointly in the second row. This may indicate that it is easier for the network to learn to separate information for multiple alignments when presented with fewer inputs. Row 8 shows results for even sparser inputs, resulting in further small gains in speed.
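For reference, the run-times quoted above decompose as
\[4\times 4\times(23+31)\,\text{ms}=864\,\text{ms}\qquad\text{and}\qquad 4\times 4\times(16+14)\,\text{ms}=480\,\text{ms},\]
i.e. four rotation initialisations, each with three refinement passes plus one scoring pass, times the per-pass cost of input processing and the network forward pass.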
**Learned classification score.** Rather than just predicting pose updates we also learn classification scores indicating whether a given alignment is accurate or not. We use these learned classification scores to select the best alignment from multiple rotation initialisations (see Fig. 4a) as well as to select from multiple predictions of the same object from different images in the Scan2CAD [] evaluation. We compare to the 2D detection confidence from ROCA [] and note a small improvement (40.3% compared to 38.8%). More importantly, plotting the mean accuracy of the predictions sorted by the confidence we find that our 3D classification score is significantly better calibrated (see Fig. 4b).
**Comparison to other methods - per-scene evaluation.** We compare our method to other
state-of-the-art CAD model alignment procedures [12, 13, 14, 15]. Quantitatively comparing against those methods we find that we improve significantly upon the instance alignment accuracy from 31.8% to 40.3% and the class mean accuracy from 24.9% to 30.3%. We also improve in most categories with the notable exception of displays. Here our learned classification score struggles to distinguish between front and back-facing displays which look very similar when only sparse pixels are provided (see Fig. 3 last column).

Figure 4: **Pose selection, calibration and loss functions.** a) We use the predicted classification score to select the final object predictions from 4 different rotation initialisations. b) The classification score is also used in the ScanNet evaluation to filter out duplicate predictions. Compared to the 2D confidence scores (green) from [] our 3D classification score (blue) is significantly better calibrated. c) Synthetic pre-training leads to lower losses during training and testing as well as a higher instance alignment accuracy on the test set.
**Comparison to other methods - per-image evaluation.** The advantages of our method compared to previous methods are even more pronounced on the per-image evaluation than they were on the per-scene evaluation (see Tab. 2). The class and instance alignment accuracy almost double compared to previous methods (28.1% vs. 16.1% and 31.3% vs. 18.4%). AP50 and APmean show even greater relative improvements, e.g. at \(\rho=0.5\) AP50 improves from 10.8% to 27.0% and APmean improves from 3.0% to 11.5%. The reason why the improvements of our method compared to the previous ones are even more pronounced on the per-image compared to the per-scene evaluation is that the per-scene evaluation requires only one very accurate prediction for each object from any frame, whereas the per-image evaluation has a high number of challenging viewpoints. Here both the multi-object predictions as well as the synthetic pre-training significantly increase the accuracy of the predictions.
## 6 Conclusion
We introduced a novel render-and-compare approach that jointly aligns multiple CAD models to objects in an image. This provides advantages for both speed and accuracy at test time, improving the run-time by a factor of up to 5 and improving the instance alignment accuracy on ScanNet [16] from 31.8% to 40.3%. We demonstrate that some of this improvement stems from pre-training our network on a large number of random synthetic scenes. The fact that those scenes contain objects different to the ones the network is tested on highlights the ability of our render-and-compare approach to generalise. Furthermore, we learn to predict not just pose updates but also classification scores that can be used for selecting a final pose from different candidates. In the future we would like to extend render-and-compare to multi-view scenarios as well as using larger foundational models in a render-and-compare setting to reconstruct 3D scenes.
| Method | AP50 (\(\rho=0.3\)) | APmean (\(\rho=0.3\)) | AP50 (\(\rho=0.5\)) | APmean (\(\rho=0.5\)) | AP50 (\(\rho=0.7\)) | APmean (\(\rho=0.7\)) | Class Acc. | Instance Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ROCA [14] | 1.8 | 0.4 | 10.8 | 3.0 | 20.3 | 7.1 | 16.1 | 18.4 |
| SPARC [14] | 2.4 | 0.5 | 9.8 | 3.0 | 19.1 | 7.0 | 15.9 | 17.4 |
| Ours | **11.6** | **3.4** | **27.0** | **11.5** | **36.4** | **18.7** | **28.1** | **31.3** |
Table 2: **Per-image alignment accuracy and APmesh score on ScanNet [16]. Both AP scores and alignment accuracies are reported in %. The \(\rho\) value controls the threshold for computing the F1 score in the AP calculation. Smaller \(\rho\) values require points sampled from the predicted aligned CAD model and the GT aligned CAD model to be closer together and therefore more accurate poses. Before computing the F1 score both CAD models are re-scaled isotropically such that the longest side of the 3D bounding box of the GT CAD model is equal to 10. Therefore for a typical object of maximum width and height equal to 1 m \(\rho=0.5\) requires points sampled from the predicted CAD model to be within 5 cm of the GT CAD model and vice versa.** |
2304.02701 | Fixing the Kawarabayashi-Thomas-Wollan Flat Wall | Two recent papers by Kawarabayashi, Thomas and Wollan, "A New Proof of the
Flat Wall Theorem" (arXiv:1207.6927) and "Quickly Excluding a Non-Planar Graph"
(arXiv:2010.12397) provide major improvements over Robertson and Seymour's
original proof of the structure theorem for finite graphs that exclude a given
graph. The first paper redefines the notion of a flat wall. Unfortunately, this
new notion is too strong. As a result, the new Flat Wall Theorem in that paper
is incorrect. A counterexample is given in Appendix A. A follow-on lemma in the
first paper, about the transitivity of flatness, is also incorrect, a fact that
was noticed by Dimitrios Thilikos et al in arXiv:2102.06463. However, that
error is derivative and not the main issue. This paper provides a weaker
definition of the notion of a flat wall, provides a correction to the proof of
the new Flat Wall Theorem and a new proof of flatness transitivity. The notion
of a tight rendition as presented here differs from Thilikos' definition but is
defined much more simply, and the notion of a proper cycle is introduced. The
notions of certificates and tilted walls used by Thilikos turn out to be
unnecessary and transitivity is preserved in its original simplicity and
generality. Most importantly, it looks like the new weaker definition of
flatness is all that is really necessary to carry through the proof of the
structure theorem in the second paper of Kawarabayashi, Thomas and Wollan. | Dan Arnon | 2023-04-05T18:53:37Z | http://arxiv.org/abs/2304.02701v1 | # Fixing the Kawarabayashi-Thomas-Wollan Flat Wall
###### Abstract
Recent papers by Kawarabayashi, Thomas and Wollan ([1, 2]) provide major improvements over Robertson and Seymour's original proof of the structure theorem for finite graphs that exclude a given graph ([4]). This structure theorem constitutes a central step in the proof of the Wagner Conjecture. The new papers provide a significant reduction of the size bounds in the theorem as well as providing a simpler, shorter and more accessible proof. The first paper [1] gives a new proof of the Flat Wall Theorem, a central stepping stone to the proof of the structure theorem itself in [2]. More than that, the paper redefines an important notion, that of a _flat wall_. Unfortunately, this new notion is too strong. As a result, the new Flat Wall Theorem (Theorem 5.2 in [1]) is incorrect. I give a counterexample in Appendix A. A follow-on lemma in the same paper (Lemma 6.1 in [1], about the transitivity of flatness) is also incorrect, a fact that was noticed by Dimitrios Thilikos et al in [5]. However, those authors appear to have missed the main issue, which is Theorem 5.2. Nevertheless, their notion of a _tight rendition_ is a crucial ingredient of the fix to the main problem as presented here.
This paper provides a weaker definition of the notion of a flat wall, provides a correction to the proof of the Flat Wall Theorem and a new proof of flatness transitivity. The notion of a tight rendition as presented here differs a little from [5] but is defined much more simply, and the notion of a proper cycle is introduced. The notions of certificates and tilted walls in [5] turn out to be unnecessary and transitivity is preserved in its original simplicity and generality. Most importantly, it looks like the new weaker definition of flatness is all that is really necessary to carry through the structure theorem in [2].
###### Contents
* 1 Preliminaries
* 1.1 Loops in \(\mathbb{S}^{2}\)
* 1.2 Societies and Renditions
* 1.2.1 Paintings and orientations
* 1.2.2 Societies and their renditions
* 1.3 Graph terminology vs. topological terminology
* 1.4 Tracks and proper cycles
* 2 Fixing a wall
* 2.1 Fixing the definition of flatness
* 2.2 Fixing Lemma 5.1 in [1]
* 2.3 Where the rain gets in: Theorems 5.2 and 6.1 of [1] revisited
* 2.3.1 Revisiting the proof of 5.2 (The Flat Wall Theorem)
* 2.3.2 Revisiting Lemma 6.1 (Subwalls of flat walls are flat)
* A Counterexample to the Flat Wall Theorem (5.2 in [1])
## 1 Preliminaries
This section is mostly a review of some basic concepts of Robertson and Seymour's graph minor theory, but it also includes an explanation of some terminology choices I made in this paper.
### Loops in \(\mathbb{S}^{2}\)
We start with some terminology and basic facts about subsets of the unit 2-sphere \(\mathbb{S}^{2}\) that are homeomorphic to the unit 1-sphere \(\mathbb{S}^{1}\). We refer to them as _loops_ though technically they are _simple_ loops since they do not have self-intersections.
A loop \(L\in\mathbb{S}^{2}\) divides the sphere into two closed regions \(\Delta^{0}_{L}\) and \(\Delta^{1}_{L}\), both homeomorphic to a closed disk with intersection \(\Delta^{0}_{L}\cap\Delta^{1}_{L}=L\).
Given two loops \(L\) and \(L^{\prime}\), we say that \(L\) and \(L^{\prime}\) are _non-crossing_ if \(L\subset\Delta^{0}_{L^{\prime}}\) or \(L\subset\Delta^{1}_{L^{\prime}}\). Let \(L\) and \(L^{\prime}\) be non-crossing. Without loss of generality, we can assume \(L\subset\Delta^{0}_{L^{\prime}}\). Since \(\Delta^{0}_{L^{\prime}}\) is simply connected, we have one region of \(L\) that is contained in \(\Delta^{0}_{L^{\prime}}\). Say \(\Delta^{0}_{L}\subseteq\Delta^{0}_{L^{\prime}}\).
The converse is also true. If \(\Delta^{i}_{L}\subset\Delta^{j}_{L^{\prime}}\) for some \(i,j\in\{0,1\}\) then \(L\subset\Delta^{j}_{L^{\prime}}\) and so \(L\) and \(L^{\prime}\) are non-crossing. Since \(\Delta^{i}_{L}\subset\Delta^{j}_{L^{\prime}}\) implies \(\Delta^{1-j}_{L^{\prime}}\subset\Delta^{1-i}_{L}\), the non-crossing relationship is symmetric.
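As a concrete illustration (this example is mine and is only meant to fix intuition), take the two horizontal circles
\[L=\{x\in\mathbb{S}^{2}\,:\,x_{3}=\tfrac{1}{2}\},\qquad L^{\prime}=\{x\in\mathbb{S}^{2}\,:\,x_{3}=-\tfrac{1}{2}\}.\]
With \(\Delta^{0}_{L}=\{x_{3}\geq\tfrac{1}{2}\}\) and \(\Delta^{0}_{L^{\prime}}=\{x_{3}\geq-\tfrac{1}{2}\}\) we have \(L\subset\Delta^{0}_{L^{\prime}}\), so the two loops are non-crossing, and indeed \(\Delta^{0}_{L}\subseteq\Delta^{0}_{L^{\prime}}\) while \(\Delta^{1}_{L^{\prime}}\subseteq\Delta^{1}_{L}\).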
**Lemma 1**.: _Let \(L\) and \(L^{\prime}\) be non-crossing and let \(i,i^{\prime}\in\{0,1\}\). Then either:_
\[\Delta^{i}_{L}\subseteq\Delta^{i^{\prime}}_{L^{\prime}}\quad\text{or}\quad\Delta^{i^{\prime}}_{L^{\prime}}\subseteq\Delta^{i}_{L}\quad\text{or}\quad\Delta^{i}_{L}\cap\Delta^{i^{\prime}}_{L^{\prime}}\subseteq L\cap L^{\prime}\quad\text{or}\quad\Delta^{i}_{L}\cup\Delta^{i^{\prime}}_{L^{\prime}}=\mathbb{S}^{2}\]
Proof.: Since \(L\) and \(L^{\prime}\) are non-crossing, we have some \(j,k\in\{0,1\}\) such that \(\Delta^{j}_{L}\subseteq\Delta^{k}_{L^{\prime}}\). It follows that \(\Delta^{1-k}_{L^{\prime}}\subseteq\Delta^{1-j}_{L}\).
If \(j=i\) and \(k=i^{\prime}\) we are done due to the first inclusion, and if \(j\neq i\) and \(k\neq i^{\prime}\) we are done due to the second inclusion. If \(j=i\) and \(k\neq i^{\prime}\), then
\[\Delta^{i}_{L}\cap\Delta^{i^{\prime}}_{L^{\prime}}=\Delta^{j}_{L}\cap\Delta^{1-k}_{L^{\prime}}\subseteq\Delta^{k}_{L^{\prime}}\cap\Delta^{1-k}_{L^{\prime}}=L^{\prime}\]
\[\Delta^{i}_{L}\cap\Delta^{i^{\prime}}_{L^{\prime}}=\Delta^{j}_{L}\cap\Delta^{1-k}_{L^{\prime}}\subseteq\Delta^{j}_{L}\cap\Delta^{1-j}_{L}=L\]
so that \(\Delta^{i}_{L}\cap\Delta^{i^{\prime}}_{L^{\prime}}\subseteq L\cap L^{\prime}\).
Finally if \(j\neq i\) and \(k=i^{\prime}\) then
\[\Delta^{i}_{L}\cup\Delta^{i^{\prime}}_{L^{\prime}}=\Delta^{1-j}_{L}\cup\Delta^ {k}_{L^{\prime}}\supseteq\Delta^{1-k}_{L^{\prime}}\cup\Delta^{k}_{L^{\prime}}= \mathbb{S}^{2}\]
### Societies and Renditions
The definitions of "painting" and "rendition" below are borrowed from [1] with a few corrections and superficial changes that make these notions easier to work with.
#### 1.2.1 Paintings and orientations
Let \(\mathbb{S}^{2}\) be the unit 2-sphere in \(\mathbb{R}^{3}\). A _bounded painting_ in \(\mathbb{S}^{2}\) is a quadruple \((\mathcal{N},\bar{\mathcal{C}},\bar{\star},\tau)\) such that
* \(\mathcal{N}\subset\mathbb{S}^{2}\) is a finite set of points, called the _nodes_ of the painting.
* \(\bar{\mathcal{C}}\) is a finite family of subsets of \(\mathbb{S}^{2}\) each homeomorphic to a closed disk and \(\bar{\star}\in\bar{\mathcal{C}}\) is a distinguished disk called the _external disk_.
* Let \(U=\bigcup\limits_{u\in\bar{\mathcal{C}}}u\). Then \(\mathcal{N}\subset\mathrm{bd}(U)\) and the sets \(\mathcal{C}\coloneqq\{u\setminus\mathcal{N}|u\in\bar{\mathcal{C}}\}\) are the connected components of \(U\setminus\mathcal{N}\). The members of \(\mathcal{C}\) are called the _cells_ of the painting and the distinguished cell \(\star:=\bar{\star}\setminus\mathcal{N}\) is called the _external_ cell. We call all the other cells _internal_. For each cell \(c=u\setminus\mathcal{N}\), define \(\tilde{c}=u\cap\mathcal{N}\).
* For every internal cell \(c\), \(|\tilde{c}|\leq 3\). As a result if \(n_{1},n_{2}\) are distinct nodes in \(\tilde{c}\) then at least one of the two open segments of \(\mathrm{bd}(c)\) created by \(n_{1},n_{2}\) is \(\mathcal{N}\)-free. \(\tau\) is a function (called the tie-breaker) that associates each such triple \(c,n_{1},n_{2}\) to an \(\mathcal{N}\)-free open segment \(\tau(c,n_{1},n_{2})\) of the boundary of \(c\).
* The function \(\tau\) is unoriented. For all internal \(c\) and distinct \(n_{1},n_{2}\in\tilde{c}\), \[\tau(c,n_{1},n_{2})=\tau(c,n_{2},n_{1})\]
It follows immediately from the definition that the disks in \(\bar{\mathcal{C}}\) are mutually almost disjoint and only touch at a finite number of nodes along their boundaries. Pick an orientation of \(\Delta\coloneqq\overline{\mathbb{S}^{2}\setminus\star}\). The orientation establishes a notion of right and left when traversing a path in \(\Delta\).
For a loop \(\gamma\) in \(\Delta\), define the _clockwise_ orientation of \(\gamma\) to be the direction of travel that keeps the interior region of \(\gamma\) on the right. In particular the boundaries \(\mathrm{bd}(c)\) of cells \(c\in C(\Gamma)\) have a distinguished clockwise orientation. These orientations induce a circular order on the sets \(\pi(\tilde{c})\).
**Definition 1**.: _Let \(\Delta\subset\mathbb{S}^{2}\) be an oriented disk and let \(L\subset\Delta\) be a loop. The loop \(L\) divides \(\mathbb{S}^{2}\) into two closed regions, both homeomorphic to a disk, which we denoted by \(\Delta^{0}_{L}\) and \(\Delta^{1}_{L}\). \(\Delta\) is simply connected and therefore exactly one of these region, denoted \(\Delta^{\text{in}}_{L}\), is a subset of \(\Delta\). The other region is denoted \(\Delta^{\text{out}}_{L}\)._
If we endow the loop \(L\) with a clockwise orientation, then the interior of \(\Delta^{\text{in}}_{L}\) will be on the right as we traverse \(L\). If a loop \(L\) comes with a pre-determined arbitrary orientation, then one of its two regions will be on the right in the direction of travel. We denote that region by \(\Delta_{\overrightarrow{L}}\), and its complementary region by \(\Delta_{\overleftarrow{L}}\). It follows that if \(L\) is oriented, then its orientation is clockwise if and only if \(\Delta_{\overrightarrow{L}}=\Delta^{\text{in}}_{L}\).
Given an oriented simple loop \(L\subset\Delta\) and two distinct points \(x,y\in L\), the segment \(L[x,y]\) is defined to be the segment of \(L\) that one would traverse when traveling in \(L\) from \(x\) to \(y\) in the direction given by the orientation. It follows that \(L=L[x,y]\cup L[y,x]\).
**Lemma 2**.: _Let \(\Delta\subset\mathbb{S}^{2}\) be an oriented disk, and let \(L_{0},L_{1}\subset\Delta\) be two non-crossing loops. let \(k\in\{2,3\}\) and let \(n_{1},\dots,n_{k}\in L_{0}\) be distinct points listed in \(L_{0}\)-clockwise order. Let \(X\) be a union of some of the open segments \(L_{0}(n_{1},n_{2}),\dots,L_{0}(n_{k},n_{1})\). Assume that \(\Delta^{\text{in}}_{L_{0}}\cap\Delta^{\text{in}}_{L_{1}}=\{n_{1},\dots,n_{k} \}\cup X\) (see for example Figures 1 and 2.) For \(1\leq m\leq k\) and \(p=1+(m\mod k)\) such that \(L_{0}(n_{m},n_{p})\not\subseteq X\), define \(Z_{m}\) to be the loop_
\[Z_{m}=L_{0}[n_{m},n_{p}]L_{1}[n_{p},n_{m}]\]
_Then_
* _each_ \(Z_{m}\) _is a simple loop that is non-crossing relative to_ \(L_{0}\) _and_ \(L_{1}\)_._
* _some_ \(\Delta^{\text{in}}_{Z_{m}}\) _contains both_ \(\Delta^{\text{in}}_{L_{0}}\) _and_ \(\Delta^{\text{in}}_{L_{1}}\)_._
Proof.: Throughout the proof we use \(m\) to range over \(\{1,\dots,k\}\) and \(i\) to range over \(\{0,1\}\). We also use the notation \(p\coloneqq 1+(m\mod k)\) for the modular successor of \(m\). \(Z_{m}\)
is the union of two simple paths. The intersection of these paths satisfies
\[L_{0}[n_{m},n_{p}]\cap L_{1}[n_{p},n_{m}]\subseteq L_{0}[n_{m},n_{p}]\cap(\{n_{1},\ldots,n_{k}\}\cup X)=\{n_{m},n_{p}\}\]
proving that \(Z_{m}\) is a simple loop.
The loop \(Z_{m}\) is composed of a segment of \(L_{i}\) (which resides in \(\Delta^{\rm out}_{L_{i}}\) by definition), and a segment of \(L_{1-i}\) which resides in \(\Delta^{\rm out}_{L_{i}}\) by assumption. Therefore \(Z_{m}\subset\Delta^{\rm out}_{L_{i}}\), proving that \(Z_{m}\) and \(L_{i}\) are non-crossing.
We prove the second claim through the following steps:
1. For all \(m\) and \(i\), neither \(\Delta^{\rm in}_{Z_{m}}\cup\Delta^{\rm in}_{L_{i}}=\mathbb{S}^{2}\) nor \(\Delta^{\rm in}_{Z_{m}}\subseteq\Delta^{\rm in}_{L_{i}}\) hold. The relation \(\Delta^{\rm in}_{Z_{m}}\cup\Delta^{\rm in}_{L_{i}}=\mathbb{S}^{2}\) is not possible since by definition \(\Delta^{\rm in}_{Z_{m}}\cup\Delta^{\rm in}_{L_{i}}\subseteq\Delta\subsetneq \mathbb{S}^{2}\). Assume that \(\Delta^{\rm in}_{Z_{m}}\subseteq\Delta^{\rm in}_{L_{i}}\). This implies \[\Delta^{\rm in}_{Z_{m}}\cap\Delta^{\rm in}_{L_{1-i}}\subseteq\Delta^{\rm in} _{L_{i}}\cap\Delta^{\rm in}_{L_{1-i}}=\{n_{1},\ldots,n_{k}\}\cup X\] We know that by definition \(\Delta^{\rm in}_{Z_{m}}\cap\Delta^{\rm in}_{L_{0}}\) contains the open segment \(L_{0}(n_{m},n_{p})\) which is not in \(X\), so the case \(i=1\) is not possible. \(\Delta^{\rm in}_{Z_{m}}\cap\Delta^{\rm in}_{L_{1}}\) contains the open segment \(L_{1}(n_{p},n_{m})\). This implies \(L_{1}[n_{p},n_{m}]\subset\bar{X}\) so it is a segment of \(L_{0}\). The equality \(L_{1}(n_{p},n_{m})=L_{0}(n_{m},n_{p})\) is not possible, because by definition \(L_{0}(n_{m},n_{p})\not\subseteq X\). The equality \(L_{1}(n_{p},n_{m})=L_{0}(n_{p},n_{m})\) is not possible because the interior of both \(\Delta^{\rm in}_{L_{0}}\) and \(\Delta^{\rm in}_{L_{1}}\) is found on the right of the segment as we travel from \(n_{p}\) to \(n_{m}\), implying that the intersection of the two disks has an interior, contrary to our assumption.
2. There are \(m\) and \(i\) such that \(\Delta^{\rm in}_{Z_{m}}\supseteq\Delta^{\rm in}_{L_{i}}\). Assume this is not the case. By Lemma 1, there are four possible relations between \(\Delta^{\rm in}_{Z_{m}}\) and \(\Delta^{\rm in}_{L_{i}}\). We have already eliminated two of them. By assumption, the relation \(\Delta^{\rm in}_{L_{i}}\subseteq\Delta^{\rm in}_{Z_{m}}\) does not occur either. As a result, the last relation, \(\Delta^{\rm in}_{Z_{m}}\cap\Delta^{\rm in}_{L_{i}}=Z_{m}\cap L_{i}\) must be true
for all possible values of \(m\) and \(i\). Let \[R=\Delta_{L_{0}}^{\text{in}}\cup\Delta_{L_{1}}^{\text{in}}\cup\bigcup\{\Delta_{Z_ {m}}^{\text{in}}\}\] The boundary \(\text{bd}(R)\) is a subset of \(L_{0}\cup L_{1}\). We will show that \(\text{bd}(R)=\emptyset\). Recall that \(m,p\) always indicate consecutive indices in the clockwise circular order on \(L_{0}\). Let \(x\in L_{0}\cup L_{1}\setminus\{n_{1},\ldots,n_{k}\}\). Suppose that \(x\in L_{0}(n_{m},n_{p})\). If \(L_{0}(n_{m},n_{p})\subseteq X\) then it is a common boundary segment of \(\Delta_{L_{0}}^{\text{in}}\) and \(\Delta_{L_{1}}^{\text{in}}\). Since \(\Delta_{L_{0}}^{\text{in}}\cap\Delta_{L_{1}}^{\text{in}}\) is nowhere dense by assumption, these disks do not share interior points and therefore their interiors are on opposite sides of \(L_{0}(n_{m},n_{p})\). It follows that \(x\) is not a boundary point of \(R\) in that case. If \(L_{0}(n_{m},n_{p})\not\subseteq X\) then it is a common boundary segment of \(\Delta_{L_{0}}^{\text{in}}\) and \(\Delta_{Z_{m}}^{\text{in}}\) and by a similar argument \(x\) is not a boundary point of \(R\). Suppose that \(x\in L_{1}(n_{p},n_{m})\). If \(L_{0}(n_{m},n_{p})\not\subseteq X\) then \(L_{1}(n_{p},n_{m})\) is a common boundary segment of \(\Delta_{L_{1}}^{\text{in}}\) and \(\Delta_{Z_{m}}^{\text{in}}\) and by the same argument as before, \(x\) is not a boundary point of \(R\). If \(L_{0}(n_{m},n_{p})\subseteq X\) then it is a segment of \(L_{1}\) as well, and it must be equal to either \(L_{1}(n_{p},n_{m})\) or \(L_{1}(n_{m},n_{p})\). In the former case \(x\in L_{0}(n_{m},n_{p})\) and we have already shown that \(x\) is not a boundary point of \(R\). In the latter case, when \(L_{0}(n_{m},n_{p})\) is traversed from \(n_{m}\) to \(n_{p}\), the interior of \(\Delta_{L_{0}}^{\text{in}}\) is on the right by definition, and since \(L_{0}(n_{m},n_{p})=L_{1}(n_{m},n_{p})\) the interior of \(\Delta_{L_{1}}^{\text{in}}\) is on the right as well, contradicting our assumption that these two disks do not share interior points. It follows that \(\text{bd}(R)\subseteq\{n_{1},\ldots,n_{k}\}\) and is therefore discrete. Since \(R\) is a closed set, the boundary must be the empty set. It follows that \(R=\mathbb{S}^{2}\), contradicting the fact that \(R\subseteq\Delta\).
3. If \(\Delta_{Z_{m}}^{\text{in}}\) contains \(\Delta_{L_{i}}^{\text{in}}\) then it contains \(\Delta_{L_{1-i}}^{\text{in}}\). Assume \(i=0\) and traverse the loop \(Z_{m}\) starting with the segment \(L_{0}[n_{m},n_{p}]\) in the direction from \(n_{m}\) to \(n_{p}\). We encounter the interior of \(\Delta_{L_{0}}^{\text{in}}\) on the right since we are traversing the segment in the clockwise \(L_{0}\)-direction by definition. Since \(\Delta_{Z_{m}}^{\text{in}}\) contains \(\Delta_{L_{0}}^{\text{in}}\), we must encounter the interior of \(\Delta_{Z_{m}}^{\text{in}}\) on the right as well. In other words, we are traversing \(Z_{m}\) in its own clockwise direction. As we continue to traverse \(Z_{m}\) in the same direction, we traverse \(L_{1}[n_{p},n_{m}]\) from \(n_{p}\) to \(n_{m}\). The interior of \(Z_{m}\) is still on the right, and so is the interior of \(L_{1}\), since we are traversing \(L_{1}[n_{p},n_{m}]\) in the \(L_{1}\)-clockwise order by definition. As a result \(\Delta_{Z_{m}}^{\text{in}}\) and \(\Delta_{L_{1}}^{\text{in}}\) share an interior. As we have seen, this implies that \(\Delta_{Z_{m}}^{\text{in}}\supseteq\Delta_{L_{1}}^{\text{in}}\). A similar argument works when \(i=1\); we just need to start traversing \(Z_{m}\) along \(L_{1}[n_{p},n_{m}]\) from \(n_{p}\) to \(n_{m}\). This concludes the proof.
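A minimal instance of the lemma (my example, not from [1]) may help to visualise the construction: take \(k=2\) and \(X=\emptyset\), so that \(\Delta^{\text{in}}_{L_{0}}\) and \(\Delta^{\text{in}}_{L_{1}}\) are two closed disks meeting only at the two points \(n_{1},n_{2}\). Then
\[Z_{1}=L_{0}[n_{1},n_{2}]\,L_{1}[n_{2},n_{1}],\qquad Z_{2}=L_{0}[n_{2},n_{1}]\,L_{1}[n_{1},n_{2}],\]
and exactly one of the two loops traces the outer boundary of \(\Delta^{\text{in}}_{L_{0}}\cup\Delta^{\text{in}}_{L_{1}}\); its region \(\Delta^{\text{in}}\) contains both \(\Delta^{\text{in}}_{L_{0}}\) and \(\Delta^{\text{in}}_{L_{1}}\), as the second claim asserts, while the other loop bounds the pocket enclosed between the two disks.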
#### 1.2.2 Societies and their renditions
Let \(G\) be a graph and \(C\subseteq V(G)\) a set of vertices of \(G\) which we endow with a circular order. The pair \((G,C)\) is called a _society_. Societies arise naturally when we try to draw graphs on two dimensional surfaces with a connected boundary (i.e. surfaces from which the interior of a closed disk has been removed), because the boundary induces a natural circular order on the vertices that are drawn along it.
Let \((G,C)\) be a society and \(\Delta\subset\mathbb{S}^{2}\) be a closed oriented disk. A \(C\)-rendition of \(G\) in \(\Delta\) is a triple \((\Gamma,\sigma,\pi)\) such that
* \(\Gamma\) is a bounded painting in \(\mathbb{S}^{2}\) with \(\tilde{\star}(\Gamma)=\overline{\mathbb{S}^{2}\setminus\Delta}\). As we saw, the orientation of \(\Delta\) induces a circular order on each set \(\tilde{c}\) for \(c\in\mathcal{C}(\Gamma)\).
* \(\pi:\mathcal{N}\hookrightarrow V(G)\) is a 1:1 association of nodes to vertices of \(G\), and \(\pi(\tilde{\star})=C\) as cyclically ordered sets.
* \(\sigma\) is a function assigning a subgraph \(\sigma(c)\subseteq G\), called the _flap_ of \(c\), to each cell \(c\in\mathcal{C}\), such that
* \(G=\bigcup_{c\in\mathcal{C}}\sigma(c)\cup\pi(\mathcal{N})\)
* for any cell \(c\), \(\sigma(c)\cap\pi(\mathcal{N})=\pi(\tilde{c})\), and \(\sigma(\star)=\pi(\tilde{\star})=C\).
* if \(c_{1}\neq c_{2}\) then \(\sigma(c_{1})\cap\sigma(c_{2})=\pi(\tilde{c_{1}}\cap\tilde{c_{2}})\).
A society \((G,C)\) is called _rural_ if there is a \(C\)-rendition of \(G\) in a disk. The notion of a rural society is originally due to Robertson and Seymour in [3], but the definition here (taken from Section 2.1 of [2]) is clearer.
### Graph terminology vs. topological terminology
Like tree decompositions, the power of renditions comes from the interplay they engender between the structure of the underlying graph and the geometry of the rendition. To minimize confusion, I try to use separate terminologies for graph terms and the rendition-related topological terms.
* A subset of a disk homeomorphic to the unit circle \(\mathbb{S}^{1}\) will be referred to as a _loop_ (the usual term is _simple loop_, but all the loops here will be simple). A cycle in a graph will be referred to as a _cycle_.
* Loops in an oriented disk will be _oriented_ and have _(clockwise) orientations_. Directed cycles in a graph will be _directed_ and have _(clockwise) directions_.
* Graphs have _vertices_ and renditions have _nodes_. When there is no risk of confusion, I will use the term _node_ to refer to a graph vertex that is associated to a node through the \(\pi\)-mapping, and often ignore \(\pi\) completely and just assume that \(\mathcal{N}\subseteq V(G)\) and \(\pi\) is the identity.
* If \(H\) is a subgraph of \(G\), the nodes of \(H\) are denoted \(N(H)\). So \[N(H)=V(H)\cap\pi(\mathcal{N})=V(H)\cap\mathcal{N}\] If \(X\) is a subset of \(\Delta\), the nodes of \(X\) are denoted \(N(X)\). So \(N(X)=X\cap\mathcal{N}\).
* A subset of a disk \(\Delta\) homeomorphic to the closed unit interval is usually referred to as a path in \(\Delta\), but here will be referred to as a _segment_. The term _path_ will be reserved to paths in a graph. The interior of a segment in \(\Delta\) will be referred to as an _open_ segment.
* By abuse of notation, we will say that an edge (or vertex) of \(G\) resides in a cell if it belongs to the flap of that cell.
### Tracks and proper cycles
Throughout this subsection, \((G,C)\) is a rural society and \(\rho=(\Gamma,\sigma,\pi)\) is a \(C\)-rendition of \(G\) in an oriented disk \(\Delta\subset\mathbb{S}^{2}\), and \(\mathcal{N}\), \(\mathcal{C}\) and \(\tau\) refer to \(\mathcal{N}(\Gamma)\), \(\mathcal{C}(\Gamma)\) and \(\tau(\Gamma)\). Most of the definitions here are taken from [1] and are rephrased for completeness. I tweaked the notion of _track_ slightly to make it easier to work with.
**Definition 2**.: _A path \(P\) in \(G\) is called grounded (relative to \(\rho\)) if both ends of \(P\) are nodes of \(\rho\). A cycle \(D\subseteq G\) is grounded if there are two edges of \(D\) that reside in different cells of \(\rho\). A directed grounded path \(Q\) is called atomic if \(|E(Q)|\geq 1\) and none of its internal vertices are nodes of \(\rho\). An atomic path \(Q\) is called trivial if it contains no internal vertices, i.e. \(|E(Q)|=1\)._
**Lemma 3**.: _An atomic path must reside entirely inside \(\sigma(c)\) for some internal cell \(c\)._
Proof.: Let \(Q\) be an atomic path and let \(v\) be an internal vertex of \(Q\). Then \(v\) is not a node and \(v\in V(\sigma(c))\) for a unique cell \(c\). There are two edges of \(Q\) that end in \(v\) and both must be in \(\sigma(c)\). Therefore every adjacent pair of \(Q\)-edges share a cell and therefore all the edges of \(Q\) share a cell. Since \(Q\) has at least one edge, the shared cell must be internal.
**Definition 3**.: _Let \(Q\) be an atomic path. We call the shared cell of the edges of \(Q\) the home of \(Q\) or \(h(Q)\)._
**Definition 4**.: _It is not hard to see that a directed grounded path \(P\) with \(k\) nodes (\(k\geq 1\)) can be written uniquely as a concatenation of \(k-1\) atomic paths (in the case \(k=1\) this empty concatenation should be interpreted as the node that constitutes the whole path). These atomic paths are called the factors of \(P\). If \(m,n\) are consecutive nodes of \(P\) in the given direction then the factor of \(P\) connecting \(m\) to \(n\) is the directed subpath \(P[m,n]\)._
_A directed grounded cycle \(D\) with \(k\) nodes (\(k\geq 2\)) can be written uniquely (up to rotation) as a circular concatenation of \(k\) atomic paths. If \(m,n\) are consecutive nodes of \(D\) in the given direction then the factor of \(D\) connecting \(m\) to \(n\) is the subpath \(D[m,n]\) that proceeds from \(m\) to \(n\) in the given direction._
_We call these representations the atomic decomposition of the directed path (or cycle). The atomic components of the decomposition are called the factors of the path (or cycle)._
The homes of consecutive atomic paths in an atomic decomposition do not have to be distinct. A cell can appear as a home either once or twice in an atomic decomposition, and any pair of factors with the same home must be (cyclically) adjacent. It is clear from the definition of groundedness that if a directed grounded cycle \(D\) has an atomic decomposition of length two, \(D=Q_{1}Q_{2}\), then \(h(Q_{1})\neq h(Q_{2})\).
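To make the atomic decomposition concrete, here is a small illustrative sketch; it is not part of the formal development, and the representation of a path as its vertex sequence, together with the helper name `atomic_factors`, is an assumption made only for this illustration. The sketch factors a directed grounded path into its atomic factors by cutting the path at every node.

```python
def atomic_factors(path, nodes):
    """Split a directed grounded path into its atomic factors.

    `path` is the vertex sequence of the path; its first and last vertices
    must be nodes of the rendition.  `nodes` is the set of rendition nodes.
    A factor is a maximal subpath with at least one edge whose internal
    vertices are not nodes."""
    assert len(path) >= 2 and path[0] in nodes and path[-1] in nodes
    factors, start = [], 0
    for i in range(1, len(path)):
        if path[i] in nodes:              # cut the path at every node
            factors.append(path[start:i + 1])
            start = i
    return factors

# A grounded path n0-a-b-n1-c-n2 with nodes n0, n1, n2 has two factors:
# [['n0', 'a', 'b', 'n1'], ['n1', 'c', 'n2']]
print(atomic_factors(["n0", "a", "b", "n1", "c", "n2"], {"n0", "n1", "n2"}))
```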
**Definition 5**.: _Given an atomic path \(Q\) with ends \(n_{1}\to n_{2}\), define \(s_{Q}=\tau(h(Q),n_{1},n_{2})\). Then \(s_{Q}\) is an \(\mathcal{N}\)-free open segment of the boundary of \(h(Q)\) that connects the nodes at the two ends of \(Q\)._
_For a simple grounded path \(P\), choose a direction for \(P\) and let \(n_{0},\ldots n_{k-1}\) be the nodes of \(P\), written in order, where \(k\geq 1\). Then the track of \(P\) is the set_
\[\operatorname{tr}(P)=(\bigcup_{i=1}^{k-1}s_{P[n_{i-1},n_{i}]})\cup\{n_{0}, \ldots,n_{k-1}\}\]
_For a simple grounded cycle \(D\), choose a direction of \(D\) and let \(n_{0},\ldots n_{k-1}\) be the nodes of \(D\), written in circular order, where \(k\geq 2\). Then the track of \(D\) is the set_
\[\operatorname{tr}(D)=(\bigcup_{i=1}^{k}s_{D[n_{i-1},n_{(i\mod k)}]})\cup\{n_{0},\ldots,n_{k-1}\}\]
_Because \(\tau\) is unoriented, the definition of track is independent of the choice of direction of the path (or cycle). The track of a cycle is also independent of the choice of the starting node \(n_{0}\). In the case of a grounded path \(P\), the track of \(P\) is a segment in \(\Delta\) connecting the ends of \(P\). In the case of a grounded cycle \(D\), the track of \(D\) is a loop in \(\Delta\). In either case the track passes through all the nodes of \(P\) (or \(D\)), and no other nodes._
_While the track does not depend on the chosen direction of the path or cycle, each direction of the path or cycle induces an orientation on the track, and vice versa. The track of a grounded cycle \(D\) has a clockwise orientation induced by the orientation of \(\Delta\). This orientation, in turn, induces a direction on \(D\) itself. We call that direction the clockwise direction of \(D\)._
_For a cycle \(D\), \(\operatorname{tr}(D)\) is a loop in \(\Delta\). We write \(\Delta_{D}^{\text{in}}\coloneqq\Delta_{\operatorname{tr}(D)}^{\text{in}}\) and \(\Delta_{D}^{\text{out}}\coloneqq\Delta_{\operatorname{tr}(D)}^{\text{out}}\)._
**Lemma 4**.: _Let \(P\) and \(R\) be two grounded paths (or cycles) in \(G\). Then \(P\) and \(R\) intersect if and only if their tracks intersect. If the intersection \(P\cap R\) is a path, then the intersection of their tracks is a segment or a loop._
Proof.: Choose directions for \(P\) and \(R\). Suppose \(P\) and \(R\) intersect and let \(v\in V(P)\cap V(R)\). If \(v\) is a node then it is on the intersection of the tracks. Otherwise \(v\) belongs to a unique cell \(c\). Let \(Q_{P}\) and \(Q_{R}\) be the unique atomic factors of \(P\) and \(R\) respectively with \(v\in V(Q_{P})\cap V(Q_{R})\). Then \(h(Q_{P})=h(Q_{R})=c\) and therefore \(P\) and \(R\) have at least two nodes each in the boundary of \(c\). Since \(|\tilde{c}|\leq 3\) it follows that at least one of these nodes is common to both and belongs to the intersection of the tracks. Conversely assume that the tracks of \(P\) and \(R\) share a node. This node is by definition a vertex of both, so the paths intersect.
Suppose \(X=P\cap R\) is a path. By the preceding argument, \(X\) contains at least one node. If \(\tilde{X}\) is the longest grounded subpath of \(X\), it is not hard to see that the intersection of the tracks of \(P\) and \(R\) is usually the track of \(\tilde{X}\), except in the case where \(P\) and \(R\) are both cycles and share the same track.
Finally we need a few more definitions. When consecutive factors of a directed cycle \(D\) share the same home \(c\), their tracks also share \(c\). In such a case \(\tilde{c}\) must have three nodes, and all of them are nodes of \(D\), and there are two consecutive segments of \(\operatorname{tr}(D)\) along the boundary of \(c\). There are other, more complex ways for a cell of degree \(3\) to interact with \(\operatorname{tr}(D)\). If \(|\tilde{c}|=3\) and \(\tilde{c}\subseteq N(D)\) then the track of \(D\) may have two, one or zero segments along the boundary of \(c\). We are interested in cycles \(D\) where the interaction of \(\operatorname{tr}(D)\) with the homes of its factors is particularly simple. These _proper_ cycles are of particular importance for the arguments presented in this paper.
**Definition 6**.: _Let \(c\) be an internal cell of \(\rho\). The degree of \(c\) is the number of nodes on the boundary of \(c\). If \(D\) is a grounded cycle, then \(c\) is internal relative to \(D\) if \(c\subseteq\Delta_{D}^{\text{in}}\), and otherwise it is external relative to \(D\). If \(c\) is external relative to \(D\) and at least one of the factors of \(D\) has \(c\) as its home, then we say that \(c\) is a border cell of \(D\)._
**Definition 7**.: _A cycle \(D\) is called proper if each border cell \(c\) of \(D\) has degree \(3\), and exactly two of the nodes of \(c\) are nodes of \(D\). It follows immediately that the third node is in the interior of \(\Delta_{D}^{\text{out}}\)._
## 2 Fixing a wall
### Fixing the definition of flatness
In [1], the definition of the term _flat wall_ is too restrictive. It ignores the possibility that the induced graph \(G[A\cap B]\) of the torso \(A\cap B\) of the separation \((A,B)\) may possess crossing edges that can prevent the typical wall from being flat. This is exactly how the counterexample to the version of the Flat Wall Theorem in [1] is constructed. See Appendix A for a description of the counterexample.
The following weaker definition seems to be sufficient for the purpose of [1, 2]. We start with a definition that will help us reason about pegs.
**Definition 8**.: _Let \(W_{e}\) be an elementary wall. Let \(D\) be the boundary of \(W_{e}\). A subpath \(P\) of \(D\) is called a peg interval of \(W_{e}\) if \(|V(P)|>2\); both ends of \(P\) are degree \(3\) vertices of \(W_{e}\); and all of the internal vertices of \(P\) are pegs, i.e. degree \(2\) vertices of \(W_{e}\). If \(W\) is a wall with boundary \(D\) that is obtained by an edge subdivision of an elementary wall \(W_{e}\), then a subpath \(P\) of \(W\) is a peg interval of \(W\) if it is a subdivision of a peg interval of \(W_{e}\). Notice that this definition does not depend on the choice of \(W_{e}\)._
**Definition 9**.: _Let \(G\) be a graph and \(W\subset G\) a wall with boundary \(D\). We say that \(W\) is flat in \(G\) if there is a separation \((A,B)\) of \(G\) and a vertex set \(\Omega\subseteq A\cap B\) such that_
1. \(V(W)\subset V(B)\)__
2. \(A\cap B\subset V(D)\)__
3. \(\Omega\) _intersects the interior of each peg interval of_ \(W\)_._
4. _Endow_ \(\Omega\) _with a circular order induced from_ \(D\)_. Then_ \((G[B],\Omega)\) _is a rural society, as defined in Section_ 2.1 _of_ _[_2_]__._
This definition is weaker than the overly strong definition in [1] in two respects. It does not require \(\Omega=A\cap B\), and it does not require that there be a choice of an elementary wall for \(W\) such that every peg of it is in \(\Omega\). The only requirement is for one peg in every peg interval to be present in \(\Omega\). An elementary wall has peg intervals that contain two or even three distinct pegs. Notice however that the _corners_ of \(W_{e}\) reside in distinct peg intervals, so \(W_{e}\) can be chosen such that every corner of \(W\) is in \(\Omega\).
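As a sanity check on the notion of a peg interval used in Definitions 8 and 9, the following throwaway sketch lists the peg intervals along a wall boundary; the encoding of the boundary as a cyclic vertex list with a degree map, and the function name, are assumptions made only for this illustration.

```python
def peg_intervals(boundary, degree):
    """List the peg intervals along a wall boundary cycle.

    `boundary` is the cyclic sequence of vertices along the boundary cycle D,
    and `degree[v]` is the degree of v in the wall W.  A peg interval is a
    subpath of D whose two ends have degree 3 and whose internal vertices
    (at least one) all have degree 2."""
    n = len(boundary)
    threes = [i for i, v in enumerate(boundary) if degree[v] == 3]
    if not threes:
        return []
    intervals = []
    for a, b in zip(threes, threes[1:] + [threes[0] + n]):
        if b - a >= 2:  # at least one degree-2 vertex strictly between the ends
            intervals.append([boundary[i % n] for i in range(a, b + 1)])
    return intervals
```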
### Fixing Lemma 5.1 in [1]
Lemma 5.1 is a technical lemma, used in [1] to prove the Flat Wall Theorem 5.2. The original lemma needs to be restated due to the problematic flat wall definition, and in addition it needs to be proved more carefully because of subtleties that went unnoticed in the original proof.
We start with defining tight renditions and proving their basic properties.
**Definition 10**.: _Let \(\rho\) be a rendition of a society in a disk. The degree of \(\rho\) is the sum of the degrees of all the cells of \(\rho\)._
**Definition 11**.: _Let \(\rho\) be a rendition of a society in a disk, and let \(c\) be a cell of \(\rho\). We call \(c\) empty if the flap \(\sigma(c)\) is an edgeless graph. Clearly the number of non-empty cells in \(\rho\) is bounded by \(|E(G)|\)._
Since we have a global bound on the number of non-empty cells of a \(C\)-rendition of \(G\) in a disk, we can define the following.
**Definition 12**.: _Let \((G,C)\) be a rural society. A maximal \(C\)-rendition of \(G\) in a disk is a \(C\)-rendition of \(G\) in a disk with the maximal possible number of non-empty cells. A maximal rendition is tight if it has a minimum degree among all maximal renditions._
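The extremal choice in Definition 12 is lexicographic: among all \(C\)-renditions of \(G\) in a disk, first maximize the number of non-empty cells, then minimize the total degree. A minimal sketch of the comparison key, assuming each cell is summarized by its number of nodes and the number of edges in its flap (an encoding chosen only for this illustration):

```python
def rendition_key(cells):
    """Lexicographic key used to select a tight rendition.

    `cells` is an iterable of (node_count, flap_edge_count) pairs, one per
    cell of the rendition.  Smaller keys are better: more non-empty cells
    first, then a smaller total degree."""
    cells = list(cells)
    nonempty = sum(1 for _, edges in cells if edges > 0)
    degree = sum(node_count for node_count, _ in cells)
    return (-nonempty, degree)

# Among candidate renditions, each summarized by its list of cells,
# a tight one minimizes this key:  tight = min(candidates, key=rendition_key)
```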
**Lemma 5**.: _Let \((G,C)\) be a rural society, and let \(\rho=(\Gamma,\sigma,\pi)\) be a tight \(C\)-rendition of \(G\) in a disk. Then \(\rho\) has the following properties:_
1. _If_ \(Q\) _is a trivial atomic path, it is home alone, i.e._ \(\sigma(h(Q))=Q\)_. In particular the cell_ \(h(Q)\) _has degree two._
2. _Let_ \(c\) _be a cell of_ \(\rho\)_. If_ \(c\) _is non-empty and the nodes of_ \(c\) _are not isolated in_ \(G\)_, then the nodes of_ \(c\) _belong to a single connected component of_ \(\sigma(c)\)_._
3. _Let_ \(c\) _be a cell of_ \(\rho\)_. If_ \(c\) _has degree_ \(3\) _and the nodes of_ \(c\) _belong to a single connected component of_ \(\sigma(c)\) _then it is not possible for one of the nodes of_ \(c\) _to separate the other two in_ \(\sigma(c)\)_._
Proof.: To prove claim 1, assume that \(Q\subsetneq\sigma(h(Q))\). Create a new \(C\)-rendition \(\rho^{\prime}=(\Gamma^{\prime},\sigma^{\prime},\pi^{\prime})\) of \(G\), starting with \(\rho^{\prime}\coloneqq\rho\) and then modifying it as follows. Create a new empty cell \(c\) right next to \(h(Q)\), along the track of \(Q\) and with the nodes of \(Q\) serving as the only nodes of \(c\). Let \(e\) be the lone edge of \(Q\). Redefine \(\sigma^{\prime}(h(Q))=\sigma(h(Q))\setminus e\) and set \(\sigma^{\prime}(c)=Q\). Neither \(h(Q)\) nor \(c\) is empty in the modified rendition, so it has one more non-empty cell than \(\rho\), contradicting its maximality.
To prove claim 2, first suppose that one of the nodes of \(c\), say \(m\), is isolated in \(\sigma(c)\). Since \(m\) is not isolated in \(G\), it must be a node of some additional cell \(c^{\prime}\). We can extract \(m\) from \(c\) by trimming the boundary of \(c\) around \(m\) and re-defining \(\sigma^{\prime}(c)=\sigma(c)\setminus m\). This change does not make \(c\) empty because \(\sigma(c)\) has an edge, and \(m\) is not an end of any edge of \(\sigma(c)\), so the new rendition is still maximal. We did, however, reduce the degree of the rendition, violating the tightness of \(\rho\). So this case is not possible, and so the nodes of \(c\) are not isolated in \(\sigma(c)\).
Suppose \(\sigma(c)=M\sqcup P\) where \(M\) and \(P\) are disjoint and \(c\) has nodes \(m\in V(M)\), \(p\in V(P)\). Since \(m\) and \(p\) are not isolated in \(\sigma(c)\), both \(M\) and \(P\) have edges, and we can replace the cell \(c\) with a pair of disjoint cells \(c_{M}\) and \(c_{P}\), with \(\tilde{c}_{M}=N(M)\), \(\sigma(c_{M})=M\), \(\tilde{c}_{P}=N(P)\), and \(\sigma(c_{P})=P\). The resulting rendition violates the maximality of \(\rho\), since neither \(c_{M}\) nor \(c_{P}\) is empty.
To prove claim 3, let \(\tilde{c}=\{m,n,p\}\), and assume that \(n\) separates nodes \(m\) and \(p\) in \(\sigma(c)\). Then there is a separation \((M,P)\) of \(\sigma(c)\) with \(m\in V(M)\), \(p\in V(P)\) and \(M\cap P=\{n\}\). Modify \(\rho\) by replacing the cell \(c\) by two cells \(c_{m}\) and \(c_{p}\) of degree \(2\) each, such that \(\tilde{c}_{m}=\{m,n\}\) and \(\tilde{c}_{p}=\{n,p\}\). Define \(\sigma(c_{m})=M\) and \(\sigma(c_{p})=P\). Since \(m\) and \(p\) are both connected to \(n\) in \(\sigma(c)\), both \(c_{m}\) and \(c_{p}\) are non-empty and the maximality of \(\rho\) is violated.
Finally, here is the replacement lemma for Lemma 5.1 of [1].
**Lemma 5.1'**.: _Let \((G,C)\) be a rural society with \(|C|\geq 4\). Let \(W\subset G\) be a subgraph and \(D\subset W\) be a directed cycle. Assume the following:_
1. \(W^{\prime}=W\setminus V(D)\) _is connected_
2. _There are four simple paths_ \(P_{1},\ldots,P_{4}\) _from_ \(C\) _to_ \(W^{\prime}\) _that are vertex-disjoint (with the possible exception of their_ \(W^{\prime}\) _ends) and for each_ \(1\leq i\leq 4\)_, the intersection_ \(P_{i}\cap D\) _is a non-empty path. Let_ \(x_{i},y_{i}\in V(G)\) _be the_ \(C\)_-end and_ \(W^{\prime}\)_-end of_ \(P_{i}\)_, respectively._
_Then \(D\) is grounded relative to any \(C\)-rendition of \(G\) in a disk, and one can choose a specific \(C\)-rendition \(\rho\) of \(G\) in an oriented disk \(\Delta\) and a proper \(\rho\)-grounded cycle \(E\subset G\) with the following properties:_
1. _the given direction of_ \(D\) _agrees with its induced clockwise direction._
2. \(N(E)\subseteq N(D)\)_, and the circular orders of_ \(N(E)\) _induced by the clockwise directions of_ \(E\) _and_ \(D\) _agree._
3. _For any clockwise consecutive nodes_ \(m\to n\) _of_ \(E\)_, if_ \(m\to n\) _are also clockwise consecutive in_ \(D\)_, then_ \(E[m,n]=D[m,n]\)_._
4. \(\Delta_{E}^{\text{in}}\supseteq\Delta_{D}^{\text{in}}\)__
5. _Let_ \((A,B)\) _be the following separation of_ \(G\)_:_ * \(A=N(\Delta_{E}^{\text{out}})\cup(\bigcup\limits_{c\subseteq\Delta_{E}^{\text {out}}}\sigma(c))\)__ * \(B=(V(D)\cap V(E))\cup(\bigcup\limits_{c\subseteq\Delta_{E}^{\text{in}}}\sigma( c))\)__ _Let_ \(P\subset V(D)\setminus\mathcal{N}(\rho)\) _be a set with the following properties:_ * _If_ \(c\) _is a border cell of both_ \(D\) _and_ \(E\) _then_ \(|P\cap\sigma(c)|\leq 1\)_._ * _For any other cell_ \(c\)_,_ \(P\cap\sigma(c)=\emptyset\)_._ _Then_ \(V(W)\subseteq V(B)\)_, and if we define_ \(\Omega=N(E)\cup P\)_, then_ \(\Omega\subseteq A\cap B\subseteq V(D)\) _and the society_ \((G[B],\Omega)\) _is rural where the circular order on_ \(\Omega\) _is induced by the clockwise direction of_ \(D\)_._
Proof.: We proceed through a series of steps. We will choose \(E\) and \(\rho\) after completing a few necessary steps.
1. \(D\) is grounded relative to any \(C\)-rendition \(\rho^{\prime}\) of \(G\) in a disk. Suppose that \(D\) is not \(\rho^{\prime}\)-grounded. Then all the edges of \(D\) share a common cell \(c\) of \(\rho^{\prime}\). Each path \(P_{i}\) meets a first vertex \(d_{i}\) of \(D\) as it proceeds from \(C\) to \(W^{\prime}\). The vertices \(d_{i}\) must be nodes of \(c\), and they must be distinct because they are not at the \(W^{\prime}\) end of their respective paths. Therefore \(|\tilde{c}|>3\) which is impossible since \(c\neq\star\).
2. In any \(C\)-rendition \(\rho^{\prime}\) of \(G\) in a disk, \(W^{\prime}\) contains a node. This is proved using the same argument as the previous case. If \(W^{\prime}\) does not contain a node, then it follows from its connectedness that \(W^{\prime}\) must reside inside a single cell \(c\) and the paths \(P_{i}\) must each meet its boundary at a node. Since we assumed that these nodes
are not in \(W^{\prime}\), they must be distinct, so \(|\tilde{c}|>3\). The last edge of each path \(P_{i}\) must be in \(\sigma(c)\) so \(c\neq\star\), a contradiction.
3. Choosing \(\rho\) and \(E\). Let \(\Delta\) be a disk. Look at the set of all pairs \((\rho,E)\) such that
* \(\rho\) is a tight \(C\)-rendition of \(G\) in \(\Delta\), with \(\Delta\) oriented such that the induced clockwise direction of \(D\) agrees with its given direction.
* \(E\) is a \(\rho\)-grounded cycle that meets the required properties 2, 3 and 4 but is not necessarily proper (for property 2, notice that the clockwise direction of \(D\) is well defined because \(D\) is automatically \(\rho\)-grounded, as we have already established).

This set is not empty because a tight \(\rho\) exists (since \((G,C)\) is rural), and for such \(\rho\) the pair \((\rho,D)\) meets the criteria. Among all possible choices of \((\rho,E)\) in the set, choose one where the graph \(B\) in the separation \((A,B)\) is maximal. To keep notation simple we will refer to our chosen values simply as \(\rho\) and \(E\). Requirement 1 is satisfied by fiat because of the way we oriented \(\Delta\).
4. \(V(W^{\prime})\subseteq V(B)\). We showed that \(W^{\prime}\) contains a node of \(\rho\). Choose a node \(n\) such that \(\pi(n)\in V(W^{\prime})\). For \(i=1,\ldots,4\), let \(R_{i}\) be a path in \(W^{\prime}\) connecting \(y_{i}\) to \(n\), and define \(\tilde{P}_{i}=P_{i}R_{i}\). The path \(\tilde{P}_{i}\) is grounded and intersects \(D\) in a non-empty path, and therefore its track intersects the track of \(D\) in a segment. The \(D\)-nodes in the intersection all appear on \(\tilde{P}_{i}\) before it reaches \(W^{\prime}\) and therefore they are all in \(P_{i}\). Let \(d_{i}\) be the first node of \(D\) on \(P_{i}\). If we assume that the sequence \(x_{1},\ldots,x_{4}\) is ordered in \(C\)-order, then the tracks \(\operatorname{tr}(P_{1}[x_{1},d_{1}])\) and \(\operatorname{tr}(P_{3}[d_{3},x_{3}])\) divide \(\Delta_{D}^{\operatorname{out}}\) into two regions. If \(n\) is in one of these regions, then one of the tracks \(\operatorname{tr}(\tilde{P}_{2})\) and \(\operatorname{tr}(\tilde{P}_{4})\) is unable to reach \(n\) without intersecting either \(\operatorname{tr}(P_{1}[x_{1},d_{1}])\), or \(\operatorname{tr}(P_{3}[d_{3},x_{3}])\), or the interiors of both tracks \(\operatorname{tr}(D[d_{1},d_{3}])\) and \(\operatorname{tr}(D[d_{3},d_{1}])\). For convenience, assume that this track is \(\operatorname{tr}(\tilde{P}_{2})\). If the first case occurs then there is a node \(n^{\prime}\) in \(\operatorname{tr}(P_{1}[x_{1},d_{1}])\) that also belongs to \(\operatorname{tr}(\tilde{P}_{2})\). \(\pi(n^{\prime})\not\in V(W^{\prime})\) because \(\pi(n^{\prime})\) occurs on \(P_{1}\) before it reaches \(D\). Therefore it cannot occur in \(R_{2}\) and therefore it must occur in \(P_{2}\). But \(P_{1}\) and \(P_{2}\) do not intersect outside of \(W^{\prime}\), so this case is impossible. The second case is disposed of in exactly the same way. In the third case, \(\operatorname{tr}(\tilde{P}_{2})\) must intersect \(\operatorname{tr}(D)\) in at least two disjoint segments. This implies that the intersection \(P_{2}\cap D\) is disconnected, which we assumed was not the case. So we established that \(n\in\Delta_{D}^{\operatorname{in}}\). Since \(n\not\in V(D)\) by definition, it must be in the interior of \(\Delta_{D}^{\operatorname{in}}\) and therefore it must belong to \(\sigma(c)\) for some cell \(c\subseteq\Delta_{D}^{\operatorname{in}}\subseteq\Delta_{E}^{\operatorname{in}}\). Therefore \(n\in V(B)\). The same holds for all the nodes of \(W^{\prime}\). If \(v\) is a non-node vertex of \(W^{\prime}\), then \(v\) must belong to a unique cell \(c\). Since \(W^{\prime}\) is connected and not limited to a single cell, there must be a node of \(W^{\prime}\) in \(c\). This node must be in \(\Delta_{E}^{\operatorname{in}}\) and it cannot be on its boundary, because it is not a node of \(D\) by definition, and therefore not a node of \(E\) since \(N(E)\subseteq N(D)\). Therefore it is in the interior of \(\Delta_{E}^{\operatorname{in}}\) and therefore \(c\) is an interior cell of \(\Delta_{E}^{\operatorname{in}}\). It follows that \(v\in V(B)\) and therefore \(V(W^{\prime})\subseteq V(B)\).
5. \(V(D)\subseteq V(B)\), and therefore \(V(W)\subseteq V(B)\). By assumption, \(\Delta_{D}^{\operatorname{in}}\subseteq\Delta_{E}^{\operatorname{in}}\). It follows immediately that all the nodes of \(D\) are in \(V(B)\).
Let \(v\in V(D)\) be a vertex that is not a node. Then \(v\) is an interior vertex of some factor \(Q_{D}\) of \(D\). If \(h(Q_{D})\subseteq\Delta_{E}^{\rm in}\) then \(v\in V(B)\). So we can assume that \(h(Q_{D})\) is exterior to \(E\), and therefore exterior to \(D\) as well. Since this cell contains a factor of \(D\), it must be a border cell of \(D\). The track of \(Q_{D}\) separates the interior of \(h(Q_{D})\) (which is in \(\Delta_{E}^{\rm out}\) by assumption), from the interior of \(\Delta_{D}^{\rm in}\) (because it is border cell and \({\rm tr}(Q_{D})\) is part of \({\rm tr}(D)\) by definition). But \(\Delta_{D}^{\rm in}\subseteq\Delta_{E}^{\rm in}\), so \({\rm tr}(Q_{D})\) separates \(\Delta_{E}^{\rm in}\) from \(\Delta_{E}^{\rm out}\). Therefore \({\rm tr}(Q_{D})\) is part of \({\rm tr}(E)\), and its ends, which are consecutive nodes in \(D\), must therefore be consecutive in \(E\) as well. As a result, by property 3 of \(E\), \(Q_{D}\) is a factor of \(E\) as well. Therefore \(v\in(V(D)\cap V(E))\subset V(B)\) and we are done.
6. \(\Omega\subseteq A\cap B\subseteq V(D)\). We start with the left inclusion. By definition the set \(P\) is confined to the intersections \(V(D)\cap\sigma(c)\) in cells \(c\) that are border cells of both \(D\) and \(E\), where by assumption \(D\) and \(E\) coincide. Therefore \(P\subseteq V(D)\cap V(E)\subseteq B\). Since any \(E\)-border cell \(c\) is by definition a subset of \(\Delta_{E}^{\rm out}\), we also have \(P\subseteq A\), and therefore \(P\subseteq A\cap B\). In addition, \[N(E)\subseteq{\rm tr}(E)\subseteq\Delta_{E}^{\rm out}\ \ \mbox{ and therefore }\ \ N(E)\subseteq N(\Delta_{E}^{\rm out})\subseteq A\] \[N(E)\subseteq N(D)\cap V(E)\subseteq V(D)\cap V(E)\subseteq B\] And therefore \(\Omega=N(E)\cup P\subseteq A\cap B\). To show the right inclusion, first observe that \[N(\Delta_{E}^{\rm out})\cap N(\Delta_{E}^{\rm in})=N(\Delta_{E}^{\rm out}\cap\Delta_{E}^{\rm in})=N({\rm tr}(E))=N(E)\subseteq N(D)\subseteq V(D)\] and then break \(A\) and \(B\) into their constituents, and show the inclusion of the resulting intersections: \[N(\Delta_{E}^{\rm out})\cap(\bigcup_{c\subseteq\Delta_{E}^{\rm in}}\sigma(c))\subseteq N(\Delta_{E}^{\rm out})\cap N(\Delta_{E}^{\rm in})\subseteq V(D)\] \[(\bigcup_{c\subseteq\Delta_{E}^{\rm out}}\sigma(c))\cap(\bigcup_{c\subseteq\Delta_{E}^{\rm in}}\sigma(c))\subseteq N(\Delta_{E}^{\rm out})\cap N(\Delta_{E}^{\rm in})\subseteq V(D)\] \[A\cap(V(D)\cap V(E))\subseteq V(D)\]
7. \(E\) is proper. If \(E\) is not proper, then there is a border cell \(c\) of \(E\) with one of the two following properties:
* \(|\tilde{c}|=2\)
* \(|\tilde{c}|=3\) and \(\tilde{c}\subseteq N(E)\)

If the first case occurs, let \(\rho^{\prime}\) be a modification of \(\rho\) where the modified tie-breaker function \(\tau^{\prime}\) chooses the other segment of \({\rm bd}(c)\) as the preferred segment. With this change the pair \((\rho^{\prime},E)\) is still an eligible pair in Step 3 but with a strictly larger graph \(B\), contrary to the choice of \(\rho\) and \(E\). Therefore this case does not occur. In the second case, let the nodes of \(c\) be \(m,n,p\) listed in clockwise \(E\)-order as they appear on \({\rm tr}(E)\). Notice that since \(c\) is external to \(\Delta_{E}^{\rm in}\), the clockwise \({\rm bd}(c)\)-order of the three nodes is the opposite order. As a border cell, \(\sigma(c)\) contains one or two factors of \(E\), so it
contains at least one edge. None of the nodes of \(c\) are isolated in \(G\) since they all belong to \(V(E)\). Therefore by Lemma 5(2) the nodes of \(c\) belong to a single connected component of \(\sigma(c)\). Apply Lemma 2 to \(\operatorname{tr}(E)\) and \(\operatorname{bd}(c)\). If \(\operatorname{tr}(E)\) contains two segments along \(\operatorname{bd}(c)\), we may assume, by rotating the names of the nodes, that these are \(\operatorname{bd}(c)[n,m]\) and \(\operatorname{bd}(c)[p,n]\). Lemma 2 guarantees that the loop \[L_{1}=\operatorname{bd}(c)[m,p]\operatorname{tr}(E)[p,m]\] has an interior disk that contains both \(c\) and \(\Delta_{E}^{\operatorname{in}}\). If \(\operatorname{tr}(E)\) contains only one segment along \(\operatorname{bd}(c)\), we can assume, by rotating node names, that this segment is \(\operatorname{bd}(c)[n,m]\). Lemma 2 guarantees that one of the two loops \[L_{0} =\operatorname{bd}(c)[p,n]\operatorname{tr}(E)[n,p]\] \[L_{1} =\operatorname{bd}(c)[m,p]\operatorname{tr}(E)[p,m]\] has an interior disk that contains both \(c\) and \(\Delta_{E}^{\operatorname{in}}\). If we are lucky, the desired loop is \(L_{1}\). If not, we can rotate the node names one more time, making \(L_{1}\) the desired loop while the segment of \(E\) along \(\operatorname{bd}(c)\) becomes \(\operatorname{bd}(c)[p,n]\). With these naming conventions, in all cases the segment(s) of \(\operatorname{tr}(E)\) along \(\operatorname{bd}(c)\) would be either \(\operatorname{bd}(c)[n,m]\), \(\operatorname{bd}(c)[p,n]\) or both. The respective factor(s) of \(E\) in \(c\) are \(E[m,n]\), \(E[n,p]\) or both. By Lemma 5(3), there is a path \(R\) in \(\sigma(c)\) between the nodes \(m\) and \(p\) that avoids \(n\). Let \(E^{\prime}\) be the modification of \(E\) created by replacing the path \(E[m,p]\) with \(R\). This removes the potential factors \(E[m,n]\) and \(E[n,p]\) from \(E^{\prime}\) because \(E[m,p]=E[m,n]E[n,p]\), due to the node ordering. As a result \(E^{\prime}\) has no self intersections and is therefore a simple cycle. By construction, \(N(E^{\prime})\subseteq N(E)\subseteq N(D)\). \(E^{\prime}\) is grounded since \(R\) has at least one edge, which is in \(c\), and the path \(E^{\prime}[p,m]=E[p,m]\) has at least one edge, which is not in \(c\). By construction, \(\operatorname{tr}(E^{\prime})=L_{1}\) and therefore the clockwise order on \(E^{\prime}\) agrees with the clockwise order on \(E\), and therefore with the clockwise order on \(D\). If \(s\to t\) is a consecutive pair of nodes in \(E^{\prime}\) which is also a consecutive pair in \(D\) then that pair is not \(m\to p\) and therefore \(E^{\prime}[s,t]=E[s,t]=D[s,t]\). Finally, by the property of \(L_{1}\), \(\Delta_{E}^{\operatorname{in}}\subsetneq\Delta_{L_{1}}^{\operatorname{in}}=\Delta_{E^{\prime}}^{\operatorname{in}}\), and \(B\) becomes strictly larger by gaining the path \(R\) without losing the node \(n\). It follows that \(E^{\prime}\) meets all the criteria of the lemma, but it violates the maximality of \(B\), a contradiction. Therefore \(E\) must be proper.
8. \((G[B],\Omega)\) is a rural society. We follow the same recipe as in Lemma 5.1 in [1]. We remove all the cells from \(\rho\) that are neither interior to \(\Delta_{E}^{\operatorname{in}}\) nor border cells of \(E\). We remove all the nodes that do not belong to \(\Delta_{E}^{\operatorname{in}}\) or abut a border cell. For each border cell \(c\) of \(E\), it follows from the propriety of \(E\) that \(|\tilde{c}|=3\) and exactly two of the nodes of \(c\), \(\alpha_{c}\) and \(\beta_{c}\), are in \(V(E)\). Denote its third node \(z_{c}\). Redraw each border cell \(c\) as in Figure 3 by first drawing a bisecting line \(\ell_{c}^{1}\) through \(c\) from \(\alpha_{c}\) to \(\beta_{c}\). This line carves \(c\) into two regions, one of which is disjoint from \(z_{c}\) and is denoted
\(C\) in the figure, with the other region denoted \(B\). Let the redrawn cell \(\hat{c}\) be the region \(C\). If \(c\) is a border cell of \(D\) and \(\sigma(c)\cap P=\{p\}\), draw a new node on the interior of \(\ell_{c}^{1}\) and identify it with the vertex \(p\). After modifying all the border cells, remove all the nodes \(z_{c}\) from the drawing.
Draw a line \(\ell_{c}^{2}\) through region \(B\) of \(c\) from \(\alpha_{c}\) to \(\beta_{c}\) as in Figure 3. If the interior of \(\ell_{c}^{1}\) has a new node \(p\), make sure that \(\ell_{c}^{2}\) passes through that node. Otherwise \(\ell_{c}^{2}\) must be internally disjoint from \(\ell_{c}^{1}\). By propriety, \(\operatorname{tr}(E)\) has a single segment in each border cell \(c\). Replace the segment of \(\operatorname{tr}(E)\) in \(c\) with \(\ell_{c}^{2}\). The resulting curve is a simple loop. Let \(\Delta^{\prime}\) be the closed interior of this loop. Then \(\Delta^{\prime}\) contains all the interior cells and modified border cells of \(E\), and the nodes on its boundary are exactly the points of \(\Omega\), in the clockwise direction of \(E\).
As a final step we redefine the flaps of each border cell \(\hat{c}\) by defining \(\sigma^{\prime}(\hat{c})=\sigma(c)\cap G[B]\). We leave the flaps of interior cells intact. Notice that the propriety of \(E\) implies that \(z_{c}\) is not a vertex of \(\sigma^{\prime}(\hat{c})\).
Altogether, this construction creates a rendition \(\rho^{\prime}\) on the disk \(\Delta^{\prime}\) of the rural society \((\bigcup\limits_{c\subseteq\Delta^{\prime}}\sigma^{\prime}(c),\ \Omega)\). We just have to show that
\[\bigcup\limits_{c\subseteq\Delta^{\prime}}\sigma^{\prime}(c)=G[B]\]
The inclusion \(\bigcup\limits_{c\subseteq\Delta^{\prime}}\sigma^{\prime}(c)\subseteq G[B]\) is obvious. Conversely,
\[\bigcup\limits_{c\subseteq\Delta^{\text{in}}_{E}}\sigma(c)\subseteq\bigcup\limits_{c\subseteq\Delta^{\prime}}\sigma^{\prime}(c)\]
and every vertex \(v\in V(E)\) is either in \(\sigma(c)\) for an internal cell \(c\), or it is in \(\sigma^{\prime}(\hat{c})\) for a border cell \(c\) of \(\rho\), since we know that \(v\neq z_{c}\). Therefore
\[V(D)\cap V(E)\subseteq V(E)\subseteq\bigcup\limits_{c\subseteq\Delta^{\prime }}\sigma^{\prime}(c)\]
and so \(B\subseteq\bigcup\limits_{c\subseteq\Delta^{\prime}}\sigma^{\prime}(c)\).
To complete the proof we just need to verify that all the edges of \(G[B]\) are accounted for.
Let \(e\in G[B]\). Then there is a unique cell \(c\) in \(\rho\) such that \(e\in\sigma(c)\). If \(c\subset\Delta^{\text{in}}_{E}\) then \(e\in E(B)\) and we are done. If \(c\) is a border cell of \(\operatorname{tr}(E)\) then \(e\in\sigma^{\prime}(\hat{c})\) by definition. We are left with the case where \(c\) is an external cell which is not a border. In this case there is no factor of \(E\) with a home in \(c\), and therefore the ends of \(e\) must be in \(\operatorname{tr}(E)\), which means that the ends of \(e\) are nodes. Therefore \(e\) is itself a trivial atomic path, and by Lemma 5(1) its home \(c\) satisfies \(\sigma(c)=e\) and has degree two. Let \(s\) and \(t\) be the nodes of \(c\).
Apply Lemma 2 to \(\operatorname{tr}(E)\) and \(\operatorname{bd}(c)\).
The lemma guarantees that one of the two loops
\[L_{0} =\operatorname{bd}(c)[s,t]\operatorname{tr}(E)[t,s]\] \[L_{1} =\operatorname{bd}(c)[t,s]\operatorname{tr}(E)[s,t]\]
has an interior disk that contains both \(c\) and \(\Delta_{E}^{\rm in}\). By flipping the names \(s\) and \(t\) if necessary, we can guarantee that the desired loop is \(L_{1}\). Create a cycle \(E^{\prime}\) by replacing the path \(E[t,s]\) with the edge \(e\). Create a \(C\)-rendition \(\rho^{\prime}\) of \(G\) by modifying the tie-breaking function \(\tau\), if necessary, such that \(\tau^{\prime}(c)\) chooses the boundary segment of \(\operatorname{bd}(c)\) that is used by \(L_{1}\), namely \(\operatorname{bd}(c)[t,s]\). With this modification, we have \(\operatorname{tr}_{\rho^{\prime}}(E^{\prime})=L_{1}\). Using the same arguments we used in the proof of propriety of \(E\) we can conclude that \(E^{\prime}\) is a simple cycle which is \(\rho^{\prime}\)-grounded, with the same clockwise direction as the direction inherited from the clockwise direction of \(D\), with \(N(E^{\prime})\subset N(D)\), and with the same factors as \(D\) for nodes that are consecutive in both \(D\) and \(E^{\prime}\). As before, \(\Delta_{E}^{\rm in}\subsetneq\Delta_{L_{1}}^{\rm in}=\Delta_{E^{\prime}}^{\rm in}\), and \(B\) grows by adding the edge \(e\). It follows that the pair \((\rho^{\prime},E^{\prime})\) violates the maximality of \((\rho,E)\), and this case cannot occur. This concludes the proof.
### Where the rain gets in: Theorems 5.2 and 6.1 of [1] revisited
We are now ready to fix the two main results in [1], the Flat Wall Theorem (5.2) and the hereditary property of flat walls (Theorem 6.1). We need a technical lemma that does the bulk of the work for both. The lemma relies on 5.1' to show that under mild assumptions, a wall \(W\) in a rural society \((G,C)\) is flat (as in Definition 9.)
The main challenge in the proof of the lemma is to show that under the right circumstances, any maximal choice of the set \(P\) of peg choices in 5.1' yields a circular order \(\Omega\) that contains a peg choice from each peg interval of \(W\).
**Lemma 6**.: _Let \((G,C)\) be a rural society, \(W\subseteq G\) a wall of height \(r\geq 3\), and \(D\subset W\) the boundary of \(W\). Assume that each peg interval \(I\) of \(D\) has a simple path \(R_{I}\) from \(C\) to an interior vertex of \(I\), such that \(R_{I}\) does not intersect \(V(W)\) except at its \(I\) terminus. Let \(I_{1},\ldots,I_{4}\) be the peg intervals of the corner bricks of \(W\). Assume that \(R_{I_{1}},\ldots,R_{I_{4}}\) are vertex disjoint. Then \(W\) is flat._
Figure 3: Carving a border cell \(c\)
Proof.: The peg intervals of \(W\) occur along \(D\). Figure 4 shows all the possible types of \(W\) border bricks that carry a peg interval along their boundaries. The dashed lines are possible edges of \(G\) outside of \(W\) that connect to its peg intervals, while \(W\)-edges are shown as solid lines. The peg intervals themselves are highlighted, and their ends are marked by \(\alpha\) and \(\beta\), so that each depicted peg interval, as shown, is \(D[\alpha,\beta]\) in the clockwise direction. In the interior of each peg interval \(D[\alpha,\beta]\) the terminus of \(R_{I}\) is marked as \(m=m_{\alpha\beta}\). While \(m\) has degree \(3\) in \(G\) it only has degree \(2\) in \(W\).
The peg intervals of reflected bricks (e.g. bottom bricks and right side bricks) are \(D[\beta,\alpha]\) in the clockwise direction. The following analysis applies to the bricks as depicted. To analyze the reflected bricks, \(\alpha\) and \(\beta\) need to be interchanged. Notice that the boundary \(D\) also passes along recessed side bricks, that are not depicted. While these are border bricks, they do not possess peg intervals, since in the elementary wall they do not have degree \(2\) vertices.
We start by constructing _pegging paths_\(S_{I}^{\alpha}\) and \(S_{I}^{\beta}\) for each peg interval (see Figure 5 for pegging paths.) For each border brick \(B\) of \(W\) with peg interval \(I\), we construct \(S_{I}^{\alpha}\) and \(S_{I}^{\beta}\) along the boundary of \(B\), both starting at the terminus \(m\) of \(R_{I}\), continuing towards the vertex \(\alpha\) or \(\beta\), respectively, and ending at the vertex \(\omega\). Notice that \(\omega\in V(W)\setminus V(D)\).
Figure 4: Border bricks with peg intervals

Apply Lemma 5.1' to \((G,C)\), \(W\), \(D\) and the concatenated paths \(P_{1}=R_{I_{1}}S_{I_{1}}^{\alpha},\ldots,P_{4}=R_{I_{4}}S_{I_{4}}^{\alpha}\). It is not hard to see that all the conditions of the lemma are met. In particular \(|C|\geq 4\) because \(R_{I_{1}},\ldots,R_{I_{4}}\) are vertex disjoint. We can conclude that there is a rendition \(\rho\) that makes \(D\) grounded, and a proper grounded cycle \(E\) that meet the conclusions of Lemma 5.1'. Using the notation of 5.1', the separation \((A,B)\) and the circular order \(\Omega\) almost prove that \(W\) is flat. The only thing left to prove is that the set \(P\) of peg choices can be chosen such that \(\Omega\) contains a peg choice from each peg interval. We will show that this condition holds for all maximal choices of \(P\).
Figure 5: Border bricks with their pegging paths

The proof of Lemma 5.1' establishes that there is a node \(n\) of \(W\setminus V(D)\) in the interior of \(\Delta_{D}^{\text{in}}\). For each peg interval define \(T_{I}\) to be a path in \(W\setminus V(D)\) leading from \(\omega\) to \(n\). Define
\[P_{I}^{\alpha} =R_{I}S_{I}^{\alpha}T_{I}\] \[P_{I}^{\beta} =R_{I}S_{I}^{\beta}T_{I}\]
It is easy to check that
\[I =(P_{I}^{\alpha}\cup P_{I}^{\beta})\cap D\] \[\alpha \in V(P_{I}^{\alpha})\setminus V(P_{I}^{\beta})\] \[\beta \in V(P_{I}^{\beta})\setminus V(P_{I}^{\alpha})\]
We need to show that \(\Omega\) contains a peg choice for every peg interval in \(D\). In other words, \(\Omega\) must intersect the interior of each peg interval. Suppose that is not the case, and let \(I=D[\alpha,\beta]\) be a peg interval whose interior does not intersect \(\Omega\). Without loss of generality we assume that this interval is depicted in Figure 4.
1. \(\alpha\) and \(\beta\) are consecutive nodes of \(E\). Since \(P_{I}^{\alpha}\) is grounded and leads from \(\Delta_{E}^{\mathrm{out}}\) to \(\Delta_{E}^{\mathrm{in}}\), there must be a node in the intersection of \(\mathrm{tr}(E)\) and \(\mathrm{tr}(P_{I}^{\alpha})\). By our assumption, this node cannot be internal to \(I\), and since \(N(E)\cap N(P_{I}^{\alpha})\subseteq N(I)\setminus\{\beta\}\), it follows that the node must be \(\alpha\). We repeat the same argument with \(P_{I}^{\beta}\) to conclude that both \(\alpha\) and \(\beta\) are nodes of \(E\). If there is another node \(\zeta\) of \(E\) between \(\alpha\) and \(\beta\), then \(\zeta\) is a node of \(D\) (since \(N(E)\subseteq N(D)\)) and since the orders on \(N(E)\) induced by the clockwise directions of \(E\) and \(D\) are identical, \(\zeta\) is between \(\alpha\) and \(\beta\) in \(D\) as well. In other words, \(\zeta\in N(E)\) is an internal node of the peg interval \(I\), contrary to our assumption. As consecutive nodes, \(\alpha\) and \(\beta\) are the ends of an \(E\)-factor \(E[\alpha,\beta]\) that resides in some cell \(c=h(E[\alpha,\beta])\).
2. If the cell \(c\) has 3 nodes \(\{\alpha,\beta,\gamma\}\) then \(\gamma\not\in V(I)\). Suppose that \(\gamma\in V(I)\). As a node of \(D\), \(\gamma\in\mathrm{tr}(D)\) and since \(\Delta_{D}^{\mathrm{in}}\subseteq\Delta_{E}^{\mathrm{in}}\) it follows that \(\gamma\in\Delta_{E}^{\mathrm{in}}\). Recall that \(R_{I}\) is a path that connects \(C\) to \(m_{\alpha\beta}\). Let \(J\) be the sub-interval of \(I\) connecting \(m_{\alpha\beta}\) to \(\gamma\) (so \(J\) is either \(D[m_{\alpha\beta},\gamma]\) or \(D[\gamma,m_{\alpha\beta}]\), depending on the order of \(m_{\alpha\beta}\) and \(\gamma\) in \(D\)). The concatenation \(R^{\prime}=R_{I}\cdot J\) is a grounded path leading from \(\Delta_{E}^{\mathrm{out}}\) to \(\Delta_{E}^{\mathrm{in}}\). As such, the track of \(R^{\prime}\) must intersect the track of \(E\). But neither \(R_{I}\) nor \(J\) contains a node of \(E\). Recall that \(V(R_{I})\cap N(E)\subseteq V(R_{I})\cap V(W)=\{m_{\alpha\beta}\}\subseteq V(J)\), and \(J\) is wholly contained in the interior of \(I\) which contains no \(E\) nodes, by assumption.
3. \(I\cap\sigma(c)=\{\alpha,\beta\}\) We showed that \(I\) does not contain nodes of \(c\) except at its ends. Therefore the interior of \(I\) is either entirely inside \(\sigma(c)\) or entirely outside. Assume that \(I\subseteq\sigma(c)\). Then \(I\) is a factor of \(D\) and \(\alpha\) and \(\beta\) are consecutive nodes of \(D\), and by our assumptions on \(E\) we know that \(I=D[\alpha,\beta]=E[\alpha,\beta]\). Suppose that \(c\) is a border cell of \(E\). Since \(\sigma(c)\) contains at least one internal vertex (because \(m_{\alpha\beta}\in I\)) it follows from the maximality of \(P\) that there is a vertex \(p\in P\cap\sigma(c)\). Since \(E\) is proper, the third node \(\gamma\) of \(c\) is in the interior of \(\Delta_{E}^{\mathrm{out}}\) and therefore not in \(V(D)\) and therefore \(D\) has a single factor in \(c\). It follows that \(p\in P\cap V(I)\) contrary to our assumption that \(\Omega\) does not contain a peg choice for \(I\). The cell \(c\) cannot be exterior either because it contains a factor of \(E\). Therefore \(c\subset\Delta_{E}^{\mathrm{in}}\).
The path \(R_{I}\) goes from \(C\) in the exterior of \(c\) to the vertex \(m_{\alpha\beta}\in V(I)\subset V(\sigma(c))\). Therefore \(V(R_{I})\) must contain a node \(\gamma\) of \(c\). Since \(c\) is interior to \(E\), the grounded subpath of \(R_{I}\) leading from \(C\) to \(\gamma\) must pass through a node of \(E\). Since \(R_{I}\) is disjoint from \(D\) except at \(m_{\alpha\beta}\), it follows that this \(E\) node must be \(m_{\alpha\beta}\), contrary to our assumption that there are no \(E\) nodes in the interior of \(I\). We conclude that the interior of \(I\) is entirely outside \(\sigma(c)\).
4. If \(Q_{D}\) is a factor of \(I\), then \(h(Q_{D})\subset\Delta_{E}^{\rm in}\). Assume that \(h(Q_{D})\subset\Delta_{E}^{\rm out}\). Then it must be exterior to \(D\) as well, and as the home of a factor of \(D\) it must be a border cell of \(D\), with \({\rm tr}(Q_{D})\) separating the interior of \(h(Q_{D})\) (which is in \(\Delta_{E}^{\rm out}\) by assumption), from the interior of \(\Delta_{D}^{\rm in}\) (because it is border cell and \({\rm tr}(Q_{D})\) is part of \({\rm tr}(D)\) by definition). But \(\Delta_{D}^{\rm in}\subseteq\Delta_{E}^{\rm in}\), so \({\rm tr}(Q_{D})\) separates \(\Delta_{E}^{\rm in}\) from \(\Delta_{E}^{\rm out}\). Therefore \({\rm tr}(Q_{D})\) is part of \({\rm tr}(E)\), and its ends, which are consecutive nodes in \(D\), must therefore be consecutive in \(E\) as well. As a result, by property 3 of \(E\), \(Q_{D}\) is a factor of \(E\) as well. Since the clockwise directions on \(N(D)\) and \(N(E)\) agree, this factor must be a subpath of \(E[\alpha,\beta]\), which implies that it equals \(E[\alpha,\beta]\) and therefore \(Q_{D}=D[\alpha,\beta]=I\) and so \(h(I)=h(Q_{D})=h(E[\alpha,\beta])=c\), contrary to our assumption that \(I\) is disjoint from the interior of \(c\).
5. Reach a contradiction and conclude that there is a peg choice for \(I\) in \(\Omega\). Let \(Q_{D}\) be a factor of \(D[\alpha,\beta]\) that contains \(m_{\alpha\beta}\) as a vertex (\(Q_{D}\) may not be unique because we cannot exclude the possibility that \(m_{\alpha\beta}\) is a node). The path \(R_{I}\) must intersect a node \(n\) of \(h(Q_{D})\). Since \(h(Q_{D})\subseteq\Delta_{E}^{\rm in}\), we have \(n\in\Delta_{E}^{\rm in}\) and therefore the grounded subpath of \(R_{I}\) from \(C\) to \(n\) must contain a node of \(E\). The only possible candidate for such a node is \(m_{\alpha\beta}\), contrary to our assumption that there are no nodes of \(E\) in the interior of \(I\).
#### 2.3.1 Revisiting the proof of 5.2 (The Flat Wall Theorem)
In the final part of the proof of 5.2, one obtains a rural society \((H_{i},C)\) where \(H_{i}\) contains a wall \(W_{i}\) that contains the vertices of \(C\) as corners, and a subwall \(W\subset W_{i}\), with boundary \(D\), such that \(W\) is far from the boundary of \(W_{i}\). The original proof of 5.2 then proceeds by appealing to Lemma 5.1, proving that \(W\) is flat in \(H_{i}\) (and ultimately in \(G\)), using the notion of flatness defined in [1].
To fix the proof, we use Lemma 6 instead. All we have to do is construct the paths \(R_{I}\). This is not hard to do, but requires slightly different constructions for peg intervals \(I\) belonging to different types of border bricks \(B\) of \(W\). The construction works because \(W\) is contained entirely within the interior of \(W_{i}\) and does not intersect its boundary.
If \(B\) is a bulging right side brick or a top right corner brick of \(W\) (either bulging or recessed), construct \(R_{I}\) by first drawing a horizontal rightward path from the top right corner of \(B\) (which is in the interior of \(I\)) to the first vertex \(v\) on the boundary of \(W_{i}\), and then continue up the right boundary of \(W_{i}\) to the top right corner of \(W_{i}\) which is in \(C\) by assumption.
The same construction holds for bulging left side bricks and the bottom left corner brick of \(W\) by rotating the picture 180 degrees; it works for the bottom right corner brick by reflecting the picture across the horizontal axis, and for the top left corner brick by reflecting it across the vertical axis.
The remaining case is when \(B\) is a top (or bottom) brick of \(W\). Start \(R_{I}\) at the upward \(W_{i}\)-edge emanating from the middle of \(I\), and continue in a vertical, right-bulging square-wave pattern until you hit the boundary of \(W_{i}\) at a vertex \(v\), and then continue right along the boundary of \(W_{i}\) until you reach a corner, which is in \(C\) by assumption.
It is not hard to check that these constructions give the desired paths and that the four corner brick paths are mutually disjoint as required. Lemma 6 now implies that \(W\) is flat in \(H_{i}\).
#### 2.3.2 Revisiting Lemma 6.1 (Subwalls of flat walls are flat)
Lemma 6.1 in [1] attempts to prove that a subwall of a flat wall \(W\) is also flat, at least when the boundary of the subwall is disjoint from the boundary of \(W\). According to [5], this assertion is not true in full generality, and that paper proposes a way to add _certificates of flatness_ to make the statement of 6.1 true after some necessary modifications. With the new definition of flatness proposed here, we show that 6.1 is true in general, without any restrictions on the boundary of the subwall.
**Lemma 7**.: _Let \(W\) be a flat wall in a graph \(G\), and let \(W^{\prime}\) be a subwall of \(W\) of height at least 3. Then \(W^{\prime}\) is flat in \(G\)._
Proof.: Let \(D\) and \(D^{\prime}\) be the boundaries of \(W\) and \(W^{\prime}\), respectively. \(W\) being flat means that there is a separation \((A,B)\) of \(G\) and a vertex set \(\Omega\subseteq A\cap B\) such that
* \(V(W)\subseteq V(B)\)
* \(A\cap B\subseteq V(D)\)
* \(\Omega\) contains an internal vertex from each peg interval of \(W\).
* Endow \(\Omega\) with a circular order induced from \(D\). Then \((G[B],\Omega)\) is a rural society.
Apply Lemma 5.1' to the society \((G[B],\Omega)\), the subgraph \(W^{\prime}\) and its cycle \(D^{\prime}\). Finding the required paths \(P_{1},\ldots,P_{4}\) is trivial. Then there is an \(\Omega\)-rendition \(\rho\) of \(G[B]\) and a \(\rho\)-proper cycle \(E\) in \(G[B]\) such that
* \(N(E)\subseteq N(D^{\prime})\)
* \(\Delta_{E}^{\rm in}\supseteq\Delta_{D^{\prime}}^{\rm in}\)
* The sets
* \(A^{\prime}=N(\Delta_{E}^{\rm out})\cup\bigcup_{c\subseteq\Delta_{E}^{\rm out }}\sigma(c)\)
* \(B^{\prime}=(V(D^{\prime})\cap V(E))\cup\bigcup_{c\subseteq\Delta_{E}^{\rm in }}\sigma(c)\) form a separation of \(G[B]\) such that \(V(W^{\prime})\subseteq V(B^{\prime})\) and for any choice \(P\) of internal vertices of shared factors of \(D^{\prime}\) and \(E\), \(N(E)\cup P\subseteq A^{\prime}\cap B^{\prime}\subseteq V(D^{\prime})\) and \(((G[B])[B^{\prime}],N(E)\cup P)\) is rural.
It is easy to see that \((G[B])[B^{\prime}]=G[B^{\prime}]\) and so \((G[B^{\prime}],N(E)\cup P)\) is rural. Look at the separation \((\bar{A},\bar{B})=(A\cup A^{\prime},B\cap B^{\prime})\). It is obvious that \((\bar{A},\bar{B})\) is a separation of \(G\). We will show that together with \(\Omega^{\prime}=N(E)\cup P\) it provides evidence for the flatness of \(W^{\prime}\).
* \(V(W^{\prime})\subseteq V(\bar{B})\) This is easy since we already know that \(V(W^{\prime})\subseteq V(W)\subseteq V(B)\) and \(V(W^{\prime})\subseteq V(B^{\prime})\).
* Choose \(P\) to be maximal. Then \(\Omega^{\prime}\) intersects the interior of each peg interval of \(W^{\prime}\). This will follow directly from Lemma 6, once we show that all the required paths \(R_{I}\) from \(\Omega\) to the peg intervals of \(W^{\prime}\) exist. This is not hard to do. Here is one recipe: Let \(I^{\prime}\) be a peg interval of \(W^{\prime}\) along a border brick \(B^{\prime}\) of \(W^{\prime}\). If \(m\in V(I^{\prime})\cap\Omega\) is an interior vertex of \(I^{\prime}\) then define \(R_{I^{\prime}}=\{m\}\). Otherwise assume that \(\Omega\) does not intersect the interior of \(I^{\prime}\). This also implies that \(B^{\prime}\) is not a border brick of \(W\). If \(B^{\prime}\) is a bulging right side brick or a top right corner brick of \(W^{\prime}\), construct \(R_{I^{\prime}}\) by drawing a horizontal rightward path from the top right corner of \(B^{\prime}\) (which is in the interior of \(I^{\prime}\)) to the first vertex \(v\) on the boundary of \(W\). The vertex \(v\) is a member of a unique \(W\)-brick \(B\) which is either a bulging right side brick or a bulging top right corner brick of \(W\), and is an end vertex of its \(W\)-peg interval \(I\). By assumption \(\Omega\) contains an interior point \(p\) of \(I\). Continue the path from \(v\) to \(p\) along \(I\), thus completing \(R_{I^{\prime}}\). The same construction holds when "right" is replaced by "left" or "top" is replaced by "bottom". The remaining case is when \(B^{\prime}\) is a top (or bottom) brick of \(W^{\prime}\). By our assumption on \(V(I^{\prime})\cap\Omega\) we can assume that \(B^{\prime}\) is an interior brick of \(W\). Start \(R_{I^{\prime}}\) at the upward \(W\)-edge emanating from the middle of \(I^{\prime}\), and continue in a vertical, right-bulging square-wave pattern until you hit the boundary of \(W\) at a vertex \(v\). The vertex \(v\) is a member of exactly two top bricks of \(W\). Let \(B\) be the left brick and let \(I\) be its peg interval. Then \(v\) is an end vertex of \(I\). By assumption \(\Omega\) contains a vertex \(p\) in the interior of \(I\). Continue the path from \(v\) to \(p\) along \(I\) to complete \(R_{I^{\prime}}\). It is not hard to check that these constructions give the desired paths and that the four corner brick paths are mutually disjoint as required.
## Appendix A A Counterexample to the Flat Wall Theorem (5.2 in [1])
The Flat Wall Theorem as stated in [1] says:
**Theorem 5.2**.: _Let \(r,t\geq 1\) be integers, let \(r\) be even, let \(R=49152t^{24}(40t^{2}+r)\), let \(G\) be a graph, and let \(W\) be an \(R\)-wall in \(G\). Then either \(G\) has a model of a \(K_{t}\) minor grasped by \(W\), or there exist a set \(A\subseteq V(G)\) of size at most \(12288t^{24}\) and an \(r\)-subwall \(W^{\prime}\) of \(W\) such that \(V(W^{\prime})\cap A=\emptyset\) and \(W^{\prime}\) is a flat wall in \(G\setminus A\)._
This theorem fails because of a definition of flatness that is too strict1. Flatness is defined in [1] as follows:
Footnote 1: Robertson and Seymour use a much looser definition.
**Definition**.: _Let \(G\) be a graph, and let \(W\) be a wall in \(G\) with an outer cycle \(D\). Let us assume that there exists a separation \((A,B)\) such that \(A\cap B\subseteq V(D)\), \(V(W)\subseteq B\), and there is a choice of pegs of \(W\) such that every peg belongs to \(A\). If some \(A\cap B\)-reduction of \(G[B]\) can be drawn in a disk
_with the vertices of \(A\cap B\) drawn on the boundary of the disk in the order determined by \(D\), then we say that the wall \(W\) is flat in \(G\)._
The _choice of pegs_ requirement in the definition simply means that for every top or bottom brick in \(W\), at least one degree-2 vertex of \(D\) along that brick must be in \(A\); for every right or left brick at least two degree-2 vertices of \(D\) along the brick must be in \(A\); two such vertices are in \(A\) for recessed corner bricks; and finally three such vertices are in \(A\) for bulging corner bricks. See Figure 4 for reference.
In light of Lemmas 1.3 and 1.4 in [1], the definition can be rephrased as follows (compare with Definition 9).
**Definition**.: _Let \(G\) be a graph and \(W\subset G\) a wall with boundary \(D\). We say that \(W\) is flat in \(G\) if there is a separation \((A,B)\) of \(G\) such that_
1. \(V(W)\subset V(B)\)
2. \(A\cap B\subset V(D)\)
3. \(A\cap B\) _contains a choice of pegs of_ \(W\).
4. _Endow_ \(A\cap B\) _with a circular order induced from_ \(D\)_. Then_ \((G[B],A\cap B)\) _is a rural society._
The root cause of the failure of Theorem 5.2 is that \(G[B]\) can include arbitrary arrangements of edges between vertices of \(D\) that can prevent the society \((G[B],A\cap B)\) from being rural as required.
The counterexample to 5.2 is essentially a very large wall with some additional vertices and edges added to each brick, including a pair of crossing edges with ends along the horizontal bottom edge of the brick. We call these bricks _full bricks_ (see Figure 5(a)). We will refer to a graph built by layering full bricks in an \(R\)-wall-like configuration as an \(R\)-_counterwall_. We must take care when we layer these bricks - when a brick \(A\) is layered over the top left of brick \(B\), their shared horizontal path is determined by \(A\) and not by \(B\). If \(A\) is layered on over the top right of \(B\), then the shared horizontal path is just a single edge.
Write \(G_{R}\) for an \(R\)-counterwall built out of full bricks. The _wall of \(G_{R}\)_ is the wall obtained from \(G_{R}\) by removing all the diagonal edges of type \(\omega\alpha\), \(\omega\beta\), \(\omega\gamma\) and \(\omega\delta\) and all the curved edges of type \(\alpha\gamma\) and \(\beta\delta\). A _sub-counterwall_ of \(G_{R}\) is a union of full bricks of \(G_{R}\) that forms a counterwall.
**Claim 1**.: _The graph \(G_{R}\) does not possess a model of a \(K_{6}\) minor._
Figure 6: A full brick and two types of reduced bricks
Proof.: To prove the claim we introduce _reduced bricks_ (see Figure 6), and we consider _mixed counterwalls_ built with a mix of full and reduced bricks. The layering rules for full bricks apply to reduced bricks as well. We prove that mixed counterwalls do not possess a model of a \(K_{6}\) minor. The proof proceeds by induction on the number of full bricks in the mixed counterwall.
The base case is easy. If a mixed counterwall \(G\) does not contain any full bricks, then one can check that \(G\) is planar by inspecting the two types of reduced bricks. For the other cases, suppose that there is a model of a \(K_{6}\) minor in \(G\). Denote its six branch sets by \(B_{1},\ldots,B_{6}\) and assume that all of them are trees. Denote its fifteen model edges by \(e_{12},e_{13},\ldots,e_{56}\).
Notice that a branch set can be a singleton \(B_{i}=\{v\}\) only if \(v\) is a vertex of degree at least 5. This excludes the degree 4 vertices of type \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\). The induction proceeds by choosing a full brick in \(G\) and considering the following cases.
1. One of the two edges \(\omega\beta\) and \(\omega\gamma\) is not used in the model - neither in some \(B_{i}\) nor as a model edge. In this case the full brick can be replaced with a reduced brick of type I or II, respectively. The \(K_{6}\) model lifts trivially to the new mixed counterwall, contrary to the induction hypothesis.
2. Two neighboring vertices \(v\), \(v^{\prime}\) among \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) belong to the same branch set \(B_{i}\). At least one of \(v\) and \(v^{\prime}\) is in \(\{\beta,\gamma\}\). Let's say it is \(v\). Now examine the role of the vertex \(\omega\). If \(\omega\) is not used in the model then we are obviously in Case 1 since \(\omega v\) is not used in the model either. If \(\omega\) belongs to \(B_{j}\) with \(i\neq j\), then at most one of \(\omega v\) and \(\omega v^{\prime}\) plays the role of \(e_{ij}\). If it happens to be \(\omega v\), we can modify the model by replacing \(\omega v\) with \(\omega v^{\prime}\) in that role, and we are back to Case 1, since \(\omega v\) is no longer used in the model. So we may assume that \(\omega\) belongs to \(B_{i}\) as well. Look at the edge \(\omega v\). If \(\omega v\) is not used in \(B_{i}\) then we are back to Case 1, so assume that \(\omega v\) is an edge of \(B_{i}\). Removing \(\omega v\) creates a disjoint union of trees \(B_{i}\setminus\omega v=B_{i}^{\omega}\sqcup B_{i}^{v}\) with \(\omega\in V(B_{i}^{\omega})\) and \(v\in V(B_{i}^{v})\). We can go back to Case 1 by removing \(\omega v\) from \(B_{i}\) and replacing it with \(vv^{\prime}\) (if \(v^{\prime}\in V(B_{i}^{\omega})\)) or with \(\omega v^{\prime}\) (if \(v^{\prime}\in V(B_{i}^{v})\)).
3. None of the above. We can assume that both \(\omega\beta\) and \(\omega\gamma\) are used in the model; that \(\beta\) and \(\gamma\) belong to two different branch sets \(B_{i}\) and \(B_{j}\); and that \(\omega\) belongs to some branch set \(B_{k}\). Without loss of generality we can assume \(i\neq k\). It follows that \(\beta\) does not share a branch set with either \(\alpha\), \(\gamma\) or \(\omega\). Since \(B_{i}\) cannot be a singleton, it must be the case that \(\delta\) is a vertex of \(B_{i}\). Since \(\omega\beta\) is used in the model by assumption, we have \(e_{ik}=\omega\beta\). We can change the model by replacing \(\omega\beta\) with \(\omega\delta\) in the role of \(e_{ik}\), and we are back in Case 1.
**Claim 2**.: _Let \(R>r>5\) be integers. Let \(G\) be an \(R\)-counterwall, \(G^{\prime}\subset G\) an \(r\)-sub-counterwall of \(G\) that is disjoint from the top horizontal path of \(G\), and \(X\subseteq V(G)\) a vertex set such that \(G^{\prime}\) is \(X\)-free (\(V(G^{\prime})\cap X=\emptyset\).) Assume that there are three \(X\)-free, horizontally consecutive bricks in \(G\) layered completely on top of \(G^{\prime}\) (their bottom paths are subpaths of the boundary of \(G^{\prime}\).) Then the wall of \(G^{\prime}\) is not flat in \(G\setminus X\) according to the definition of flatness in [1]._
Proof.: Let \(W^{\prime}\) be the wall of \(G^{\prime}\) and let \(D^{\prime}\) be its boundary cycle. Figure 7 depicts a section of the top row of bricks of \(G^{\prime}\) with the rightmost two of the three \(X\)-free bricks above shown with dashed lines.
The section of \(D^{\prime}\) in the figure is the middle horizontal line and is marked with "\(D^{\prime}\)", the dashed horizontal path above that is marked with "\(T\)", and an example of a choice of pegs in the depicted section of \(D^{\prime}\) is marked with enlarged black circles. The argument below is independent of any particular choice of pegs.
Assume that \(W^{\prime}\) is flat in \(G\setminus X\). Then there is a separation \((A,B)\) of \(G\setminus X\) such that:
* \(V(W^{\prime})\subseteq V(B)\)
* \(A\cap B\subseteq V(D^{\prime})\)
* \(A\cap B\) contains a choice of pegs for \(W^{\prime}\).
* When endowed with the circular order induced from \(D^{\prime}\), the society \((G[B],A\cap B)\) is rural.
The vertices along the path \(T\) cannot belong to \(A\cap B\) because \(T\) is disjoint from \(D^{\prime}\). Therefore, since \(T\subset G\setminus X\), each vertex along \(T\) must belong to \(A\setminus B\) or to \(B\setminus A\). Since \((A,B)\) is a separation and \(T\) is connected, either all the vertices of \(T\) belong to \(A\setminus B\) or all of them belong to \(B\setminus A\).
1. All the vertices of \(T\) belong to \(A\setminus B\), and therefore \(\omega\) belongs to \(A\setminus B\). Since the vertices \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) in the top right brick belong to \(B\) by assumption², the edges \(\omega\alpha,\ldots,\omega\delta\) force these vertices to belong to \(A\cap B\), which is impossible since the society \((G[B],A\cap B)\) is assumed to be rural and yet it has a cross \(\alpha\gamma,\beta\delta\).
Footnote 2: labels not depicted in Figure 7, see Figure 6a.
2. All the vertices of \(T\) belong to \(B\setminus A\). In this case as well there is a cross in \((G[B],A\cap B)\) as illustrated in Figure 8³. Notice that a similar cross exists for any choice of pegs.
Footnote 3: Technically, it is a cross only if the depicted peg choices are the rightmost choices in each brick.
**Claim 3**.: _The Flat Wall Theorem is not correct as stated in Theorem 5.2 of [1]._
Proof.: Let \(r,t\) be integers with \(r\) even, \(t\geq 6\) and \(r>4+36864t^{24}\). Let \(R=49152t^{24}(40t^{2}+r)\). Let \(G\) be an \(R\)-sub-counterwall of \(G_{R+1}\) that is disjoint from the top horizontal path of \(G_{R+1}\). Let \(W\) be the wall of \(G\). According to 5.2, either \(G_{R+1}\) has a model of a \(K_{t}\) minor, or there exist a set
Figure 7: Section of top row of sub-counterwall \(G^{\prime}\), with two \(X\)-free bricks above
\(A\subseteq V(G_{R+1})\) of size at most \(12288t^{24}\) and an \(r\)-subwall \(W^{\prime}\) of \(W\) such that \(V(W^{\prime})\cap A=\emptyset\) and \(W^{\prime}\) is a flat wall in \(G_{R+1}\setminus A\).
Since \(t\geq 6\), \(G_{R+1}\) does not have a model of a \(K_{t}\) minor, as we have shown, and therefore according to 5.2, \(A\) and \(W^{\prime}\) exist as specified above. Since \(W^{\prime}\) is a subwall of the wall of \(G\), \(W^{\prime}\) is the wall of a sub-counterwall \(G^{\prime}\) of \(G\). The top row of \(G^{\prime}\) contains at least \(4+36864t^{24}=1+3(1+12288t^{24})\) bricks. By our assumption there is a row of bricks in \(G_{R+1}\) directly above that row, with a consecutive series of at least \(3(1+12288t^{24})\) bricks that are layered completely on top of \(G^{\prime}\). By dividing that series into \(\geq 1+12288t^{24}\) blocks of 3 consecutive bricks, we can conclude by the pigeonhole principle that there are 3 consecutive \(A\)-free bricks layered completely on top of \(G^{\prime}\), and therefore, applying Claim 2 with \(X=A\), \(W^{\prime}\) is not flat in \(G_{R+1}\setminus A\), contrary to the claim of 5.2.
|
2307.14433 | ProtoASNet: Dynamic Prototypes for Inherently Interpretable and
Uncertainty-Aware Aortic Stenosis Classification in Echocardiography | Aortic stenosis (AS) is a common heart valve disease that requires accurate
and timely diagnosis for appropriate treatment. Most current automatic AS
severity detection methods rely on black-box models with a low level of
trustworthiness, which hinders clinical adoption. To address this issue, we
propose ProtoASNet, a prototypical network that directly detects AS from B-mode
echocardiography videos, while making interpretable predictions based on the
similarity between the input and learned spatio-temporal prototypes. This
approach provides supporting evidence that is clinically relevant, as the
prototypes typically highlight markers such as calcification and restricted
movement of aortic valve leaflets. Moreover, ProtoASNet utilizes abstention
loss to estimate aleatoric uncertainty by defining a set of prototypes that
capture ambiguity and insufficient information in the observed data. This
provides a reliable system that can detect and explain when it may fail. We
evaluate ProtoASNet on a private dataset and the publicly available TMED-2
dataset, where it outperforms existing state-of-the-art methods with an
accuracy of 80.0% and 79.7%, respectively. Furthermore, ProtoASNet provides
interpretability and an uncertainty measure for each prediction, which can
improve transparency and facilitate the interactive usage of deep networks to
aid clinical decision-making. Our source code is available at:
https://github.com/hooman007/ProtoASNet. | Hooman Vaseli, Ang Nan Gu, S. Neda Ahmadi Amiri, Michael Y. Tsang, Andrea Fung, Nima Kondori, Armin Saadat, Purang Abolmaesumi, Teresa S. M. Tsang | 2023-07-26T18:06:25Z | http://arxiv.org/abs/2307.14433v1 | ProtoASNet: Dynamic Prototypes for Inherently Interpretable and Uncertainty-Aware Aortic Stenosis Classification in Echocardiography
###### Abstract
Aortic stenosis (AS) is a common heart valve disease that requires accurate and timely diagnosis for appropriate treatment. Most current automatic AS severity detection methods rely on black-box models with a low level of trustworthiness, which hinders clinical adoption. To address this issue, we propose ProtoASNet, a prototypical network that directly detects AS from B-mode echocardiography videos, while making interpretable predictions based on the similarity between the input and learned spatio-temporal prototypes. This approach provides supporting evidence that is clinically relevant, as the prototypes typically highlight markers such as calcification and restricted movement of aortic valve leaflets. Moreover, ProtoASNet utilizes abstention loss to estimate aleatoric uncertainty by defining a set of prototypes that capture ambiguity and insufficient information in the observed data. This provides a reliable system that can detect and explain when it may fail. We evaluate ProtoASNet on a private dataset and the publicly available TMED-2 dataset, where it outperforms existing state-of-the-art methods with an accuracy of 80.0% and 79.7%, respectively. Furthermore, ProtoASNet provides interpretability and an uncertainty measure for each prediction, which can improve transparency and facilitate the interactive usage of deep networks to aid clinical decision-making. Our source code is available at: [https://github.com/hooman007/ProtoASNet](https://github.com/hooman007/ProtoASNet).
Keywords:Aleatoric Uncertainty Aortic Stenosis Echocardiography Explainable AI Prototypical Networks
## 1 Introduction
Aortic stenosis (AS) is a common heart valve disease characterized by the calcification of the aortic valve (AV) and the restriction of its movement. It affects
5% of individuals aged 65 or older [2] and can progress rapidly from mild or moderate to severe, reducing life expectancy to 2 to 3 years [20]. Echocardiography (echo) is the primary diagnostic modality for AS. This technique measures Doppler-derived clinical markers [16] and captures valve motion from the parasternal long (PLAX) and short axis (PSAX) cross-section views. However, obtaining and interpreting Doppler measurements requires specialized training and is subject to significant inter-observer variability [14, 15].
To alleviate this issue, deep neural network (DNN) models have been proposed for automatic assessment of AS directly from two-dimensional B-mode echo, a modality more commonly used in point-of-care settings. Huang et al. [9, 10] proposed a multitask model to classify the severity of AS using echo images. Ginsberg et al. [6] proposed an ordinal regression-based method that predicts the severity of AS and provides an estimate of aleatoric uncertainty due to uncertainty in training labels. However, these works utilized black-box DNNs, which could not provide an explanation of their prediction process.
Explainable AI (XAI) methods can provide explanations of a DNN's decision making process and can generally be categorized into two classes. Post-hoc XAI methods explain the decisions of trained black-box DNNs. For example, gradient-based saliency maps [18, 19] show where a model pays attention to, but these methods do not necessarily explain why one class is chosen over another [17], and at times result in misleading explanations [1]. Ante-hoc XAI methods are explicitly designed to be explainable. For instance, prototype-based models [4, 8, 11, 12, 22, 23], which the contributions of our paper fall under, analyze a given input based on its similarity to learned discriminative features (or "prototypes") for each class. Both the learned prototypes and salient image patches of the input can be visualized for users to validate the model's decision making.
There are two limitations to applying current prototype-based methods to the task of classifying AS severity from echo time series. First, prototypes should be spatio-temporal instead of only spatial, since AS assessment requires attention to small anatomical regions in echo (such as the AV) at a particular phase of the heart rhythm (mid-systole). Second, user variability in cardiac view acquisition and poor image quality can complicate AV visualization in standard PLAX and PSAX views. The insufficient information in such cases can lead to more than one plausible diagnosis. Therefore, a robust solution should avoid direct prediction and notify the user. These issues have been largely unaddressed in previous work.
We propose ProtoASNet (Fig. 1), a prototype-based model for classifying AS severity from echo time series. ProtoASNet discovers dynamic prototypes that describe shape- and movement-based phenomena relevant to AS severity, outperforming existing models that only utilize image-based prototypes. Additionally, our model can detect ambiguous decision-making scenarios based on similarity with less informative samples in the training set. This similarity is expressed as a measure of aleatoric uncertainty. To the best of our knowledge, the only prior work for dynamic prototypes published to-date is [7]. ProtoASNet is the first work to use dynamic prototypes in medical imaging and the first to incorporate aleatoric uncertainty estimation with prototype-based networks.
## 2 Methods
### Background: Prototype-Based Models
Prototype-based models explicitly make their decisions using similarities to cases in the training set. These models generally consist of three key components structured as \(h(g(f(x)))\). Firstly, \(f(.)\) is a feature encoder such as a ConvNet that maps images \(x\in\mathbb{R}^{H_{o}\times W_{o}\times 3}\) to \(f(x)\in\mathbb{R}^{H\times W\times D}\), where \(H\), \(W\), and \(D\) correspond to the height, width, and feature depth of the ConvNet's intermediate layer, respectively. Secondly, \(g(.)\in\mathbb{R}^{H\times W\times D}\rightarrow\mathbb{R}^{P}\) is a prototype pooling function that computes the similarity of encoded features \(f(x)\) to \(P\) prototype vectors. There are \(K\) learnable prototypes defined for each of \(C\) classes, denoted as \(p_{k}^{c}\). Finally, \(h(.)\in\mathbb{R}^{P}\rightarrow\mathbb{R}^{C}\) is a fully-connected layer that learns to weigh the input-prototype similarities against each other to produce a prediction score for each class. To ensure that the prototypes \(p_{k}^{c}\) reflect those of true examples in the training distribution, they are projected ("pushed") towards the embeddings of the closest training examples of class \(c\).
\[p_{k}^{c}\leftarrow\underset{z\in\mathcal{Z}_{c}}{\arg\min}\|z-p_{k}^{c}\|_{2},\text{where }\mathcal{Z}_{c}=\{z:z\in f_{p_{k}^{c}}(x_{i})\ s.t.\ y_{i}\in c\} \tag{1}\]
Such models are inherently interpretable since they are enforced to first search for similar cases in the training set and then to compute how these similarities contribute to the classification. As a result, they offer a powerful approach for identifying and classifying similar patterns in data.
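To make the \(h(g(f(x)))\) structure concrete, the following minimal PyTorch sketch outlines a prototype-based classifier of this kind. It is our own illustration, not the authors' released code: the module name `PrototypeClassifier`, the simple global pooling, the default dimensions, and the use of the shifted cosine similarity later given in Eq. (3) are all assumptions made for brevity.

```python
# Minimal sketch of the h(g(f(x))) pipeline; names, shapes and the crude pooling are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    def __init__(self, encoder, feat_dim=512, num_classes=3, protos_per_class=10):
        super().__init__()
        self.encoder = encoder                                    # f(.): image -> (B, D, H, W) features
        P = num_classes * protos_per_class
        self.prototypes = nn.Parameter(torch.randn(P, feat_dim))  # the p_k^c vectors
        self.proto_class = torch.arange(num_classes).repeat_interleave(protos_per_class)
        self.last_layer = nn.Linear(P, num_classes, bias=False)   # h(.): similarities -> class scores

    def forward(self, x):
        feat = self.encoder(x)                                    # (B, D, H, W)
        pooled = feat.flatten(2).mean(-1)                         # crude pooling to one (B, D) vector
        # g(.): similarity of the pooled feature to every prototype, shifted to [0, 1]
        sim = 0.5 * (1.0 + F.cosine_similarity(pooled.unsqueeze(1),
                                                self.prototypes.unsqueeze(0), dim=-1))
        return self.last_layer(sim), sim                          # class logits and per-prototype evidence

    @torch.no_grad()
    def push(self, feats_by_class):
        """Eq. (1): project each prototype onto its nearest training feature of the same class.
        feats_by_class: dict {class_index: (N_c, D) tensor of candidate features}."""
        for k, c in enumerate(self.proto_class.tolist()):
            z = feats_by_class[c]                                 # (N_c, D)
            d = torch.cdist(self.prototypes[k : k + 1], z)        # (1, N_c) L2 distances
            self.prototypes[k] = z[d.argmin()]
```

The `push` method mirrors the projection step in Eq. (1); keeping it separate from the training loss reflects the usual practice of performing the projection only periodically.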
Figure 1: **(A)** An overview of our proposed ProtoASNet architecture. ProtoASNet extracts spatio-temporal feature vectors \(f_{p_{k}^{c}}(x)\) from the video, which are compared with learned prototypes. Similarity values between features and prototypes are aggregated to produce a score for class membership and aleatoric uncertainty. **(B)** Prototypes representing aleatoric uncertainty (blue) can capture regions of the data distribution with inherent ambiguity (intersection between green and yellow regions). In practice, this region consists of videos with poor visual quality.
### ProtoASNet
#### 2.2.1 Feature Extraction.
The overall structure of ProtoASNet is shown in Fig. 1. The feature extraction layer consists of a convolutional backbone, in our case the first three blocks of a pre-trained R(2+1)D-18 [21] model, followed by two branches of feature and region of interest (ROI) modules made up of two and three convolutional layers respectively. In both modules, the convolutional layers have ReLU activation function, except the last layers which have linear activations. Given an input video \(x\in\mathbb{R}^{H_{o}\times W_{o}\times T_{o}\times 3}\) with \(T_{o}\) frames, the first branch learns a feature \(F(x)\in\mathbb{R}^{H\times W\times T\times D}\), where each \(D\)-dimensional vector in \(F(x)\) corresponds to a specific spatio-temporal region in the video. The second branch generates \(P\) regions of interest, \(M_{p_{k}^{c}}(x)\in\mathbb{R}^{H\times W\times T}\), that specify which regions of \(F(x)\) are relevant for comparing with each prototype \(p_{k}^{c}\).
The features from different spatio-temporal regions must be pooled before being compared to prototypes. As in [12], we perform a weighted average pooling with the learned regions of interest as follows:
\[f_{p_{k}^{c}}(x)=\frac{1}{HWT}\sum_{H,W,T}|M_{p_{k}^{c}}(x)|\circ F(x), \tag{2}\]
where \(|.|\) is the absolute value and \(\circ\) is the Hadamard product.
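A short sketch of the weighted average pooling in Eq. (2) is given below; the tensor names (`features` for \(F(x)\), `rois` for \(M_{p_{k}^{c}}(x)\)) and the example shapes are assumptions for illustration only.

```python
# Sketch of the ROI-weighted pooling of Eq. (2); shapes and names are illustrative assumptions.
import torch

def roi_weighted_pool(features: torch.Tensor, rois: torch.Tensor) -> torch.Tensor:
    """features: (B, D, H, W, T); rois: (B, P, H, W, T). Returns f_p(x) of shape (B, P, D)."""
    weights = rois.abs().unsqueeze(2)                  # (B, P, 1, H, W, T), the |M| term
    feats = features.unsqueeze(1)                      # (B, 1, D, H, W, T)
    return (weights * feats).mean(dim=(-3, -2, -1))    # average over H, W and T

# Example shapes: a batch of 2 clips, D = 256 features, 30 prototypes, a 7x7x4 feature grid.
f_p = roi_weighted_pool(torch.randn(2, 256, 7, 7, 4), torch.randn(2, 30, 7, 7, 4))
print(f_p.shape)   # torch.Size([2, 30, 256])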
#### 2.2.2 Prototype Pooling.
The similarity score of a feature vector \(f_{p_{k}^{c}}\) and prototype \(p_{k}^{c}\) is calculated using cosine similarity, which is then shifted to \([0,1]\):
\[g(x,p_{k}^{c})=\frac{1}{2}(1+\frac{<f_{p_{k}^{c}}(x),p_{k}^{c}>}{\|f_{p_{k}^{c }}(x)\|_{2}\|p_{k}^{c}\|_{2}}). \tag{3}\]
#### 2.2.3 Prototypes for Aleatoric Uncertainty Estimation.
In Fig. 1, trainable uncertainty prototypes (denoted \(p_{k}^{u}\)) are added to capture regions in the data distribution that are inherently ambiguous (Fig. 1.B). We use similarity between \(f_{p_{k}^{u}}(x)\) and \(p_{k}^{u}\) to quantify aleatoric uncertainty, denoted \(\alpha\in[0,1]\). We use an "abstention loss" (Eq. (6)) method inspired by [5] to learn \(\alpha\) and thereby \(p_{k}^{u}\). In this loss, \(\alpha\) is used to interpolate between the ground truth and prediction, pushing the model to "abstain" from its own answer at a penalty.
\[\hat{y} =\sigma(h(g(x,p_{k}^{c}))),\quad\alpha=\sigma(h(g(x,p_{k}^{u}))); \tag{4}\] \[\hat{y}^{\prime} =(1-\alpha)\hat{y}+\alpha y;\] (5) \[\mathcal{L}_{abs} =CrsEnt(\hat{y}^{\prime},y)-\lambda_{abs}\log(1-\alpha), \tag{6}\]
where \(\sigma\) denotes Softmax normalization in the output of \(h(.)\), \(y\) and \(\hat{y}\) are the ground truth and the predicted probabilities, respectively, and \(\lambda_{abs}\) is a regularization constant.
When projecting \(p_{k}^{u}\) to the nearest extracted feature from training examples, we relax the requirement in Eq. (1) allowing the uncertainty prototypes to be pushed to data with the ground truth of any AS severity class.
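The abstention mechanism of Eqs. (4)-(6) can be sketched as follows. This is one plausible reading, in which the \(C\) class scores and the single uncertainty score are normalised with a joint softmax; the function and tensor names are illustrative assumptions rather than the authors' code.

```python
# Hedged sketch of the abstention loss in Eqs. (4)-(6), assuming a joint softmax over the
# class scores and the uncertainty score.
import torch
import torch.nn.functional as F

def abstention_loss(class_scores, uncert_score, y_onehot, lambda_abs=0.3, eps=1e-8):
    """class_scores: (B, C); uncert_score: (B, 1); y_onehot: (B, C) one-hot labels."""
    probs = F.softmax(torch.cat([class_scores, uncert_score], dim=1), dim=1)
    y_hat, alpha = probs[:, :-1], probs[:, -1:]               # Eq. (4): predictions and alpha
    y_mixed = (1.0 - alpha) * y_hat + alpha * y_onehot        # Eq. (5): abstain toward the label
    ce = -(y_onehot * torch.log(y_mixed + eps)).sum(dim=1)    # cross-entropy on the mixture
    penalty = -torch.log(1.0 - alpha.squeeze(1) + eps)        # Eq. (6): price of abstaining
    return (ce + lambda_abs * penalty).mean()
```

Because the penalty grows as \(\alpha\to 1\), the model only abstains when doing so reduces the cross-entropy term by more than the abstention cost, which is what lets \(\alpha\) act as an aleatoric uncertainty estimate.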
#### 2.0.2 Class-Wise Similarity Score.
The fully connected (FC) layer \(h(.)\) is a dense mapping from prototype similarity scores to prediction logits. Its weights, \(w_{h}\), are initialized to be 1 between class \(c\) and the corresponding prototypes and 0 otherwise to enforce the process to resemble positive reasoning. \(h(.)\) produces a score for membership in each class and for \(\alpha\).
#### 2.0.3 Loss Function.
As in previous prototype-based methods [4, 12], the following losses are introduced to improve performance: 1) Clustering and separation losses (Eq. (7)), which encourage clustering based on class, where \(\mathcal{P}_{y}\) denotes the set of prototypes belonging to class \(y\). Due to lack of ground truth uncertainties, these losses are only measured on \(p_{k}^{c}\), not \(p_{k}^{u}\); 2) Orthogonality loss (Eq. (8)), which encourages prototypes to be more diverse; 3) Transformation loss \(\mathcal{L}_{trns}\) (described in [12]), which regularizes the consistency of the predicted occurrence regions under random affine transformations; 4) Finally, \(\mathcal{L}_{norm}\) (described in [4]) regularizes \(w_{h}\) to be close to its initialization and penalizes relying on similarity to one class to influence the logits of other classes. Eq. (9) describes the overall loss function where \(\lambda\) represent regularization coefficients for each loss term. The network is trained end-to-end. We conduct a "push" stage (see Eq. (1)) every 5 epochs to ensure that the learned prototypes are consistent with the embeddings from real examples.
\[\mathcal{L}_{clst} =-\max_{p_{k}^{c}\in\mathcal{P}_{y}}g(x,p_{k}^{c}),\quad \mathcal{L}_{sep}=\max_{p_{k}^{c}\notin\mathcal{P}_{y}}g(x,p_{k}^{c}); \tag{7}\] \[\mathcal{L}_{orth} =\sum_{i>j}\frac{<p_{i},p_{j}>}{\|p_{i}\|_{2}\|p_{j}\|_{2}}; \tag{8}\] \[\mathcal{L} =\mathcal{L}_{abs}+\lambda_{clst}\mathcal{L}_{clst}+\lambda_{sep}\mathcal{L}_{sep}+\lambda_{orth}\mathcal{L}_{orth}+\lambda_{trns}\mathcal{L}_{trns}+\lambda_{norm}\mathcal{L}_{norm}. \tag{9}\]
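The clustering, separation and orthogonality terms of Eqs. (7)-(8) can be written compactly as in the sketch below; `sim` denotes a (B, P) matrix of \(g(x,p_{k}^{c})\) values for the class prototypes only, `proto_class` their class indices and `y` the batch labels, all assumed names for illustration.

```python
# Sketch of the auxiliary losses in Eqs. (7)-(8); tensor names are assumptions.
import torch
import torch.nn.functional as F

def cluster_and_separation(sim, proto_class, y):
    same_class = proto_class.unsqueeze(0) == y.unsqueeze(1)                      # (B, P) mask
    l_clst = -sim.masked_fill(~same_class, float("-inf")).max(dim=1).values.mean()
    l_sep = sim.masked_fill(same_class, float("-inf")).max(dim=1).values.mean()
    return l_clst, l_sep                                                          # Eq. (7)

def orthogonality(prototypes):
    p = F.normalize(prototypes, dim=1)             # (P, D) unit vectors
    gram = p @ p.t()                               # pairwise cosine similarities
    return torch.triu(gram, diagonal=1).sum()      # sum over distinct pairs, Eq. (8)
```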
## 3 Experiments and Results
### Datasets
We conducted experiments on a private AS dataset and the public TMED-2 dataset [10]. The private dataset was extracted from an echo study database of a tertiary care hospital with institutional review ethics board approval. Videos were acquired with Philips iE33, Vivid i, and Vivid E9 ultrasound machines. For each study, the AS severity was classified using clinically standard Doppler echo guidelines [3] by a level III echocardiographer, keeping only cases with concordant Doppler measurements. PLAX and PSAX view cines were extracted from each study using a view-detection algorithm [13], and subsequently screened by a level III echocardiographer to remove misclassified cines. For each cine, the echo beam area was isolated and image annotations were removed. The dataset consists of 5055 PLAX and 4062 PSAX view cines, with a total of 2572 studies. These studies were divided into training, validation, and test sets, ensuring patient exclusivity and following an 80-10-10 ratio. We performed randomized augmentations including resized cropping and rotation.
The TMED-2 dataset [10] consists of 599 fully labeled echo studies containing 17270 images in total. Each study consists of 2D echo images with clinician-annotated view labels (PLAX/PSAX/Other) and Doppler-derived study-level AS severity labels (no AS/early AS/significant AS). Though the dataset includes an unlabeled portion, we trained on the labeled set only. We performed data augmentation similar to the private dataset without time-domain operations.
### Implementation Details
To better compare the results with TMED-2 dataset, we adopted their labeling scheme of no AS (normal), early AS (mild), and significant AS (moderate and severe) in our private dataset. We split longer cines into 32-frame clips which are approximately one heart cycle long. In both layers of the feature module, we used \(D\) convolutional filters, while the three layers in the ROI module had \(D\), \(\frac{D}{2}\), and \(P\) convolutional filters, preventing an abrupt reduction of channels to the relatively low value of \(P\). In both modules, we used kernel size of 1\(\times\)1\(\times\)1. We set \(D=256\) and \(K=10\) for AS class and aleatoric uncertainty prototypes. Derived from the hyperparameter selection of ProtoPNet [4], we assigned the values of 0.8, 0.08, and \(10^{-4}\) to \(\lambda_{clst}\), \(\lambda_{sep}\), and \(\lambda_{norm}\) respectively. Through a search across five values of 0.1, 0.3, 0.5, 0.9, and 1.0, we found the optimal \(\lambda_{abs}\) to be 0.3 based on the mean F1 score of the validation set. Additionally, we found \(\lambda_{orth}\) and \(\lambda_{trns}\) to be empirically better as \(10^{-2}\) and \(10^{-3}\) respectively. We implemented our framework in PyTorch and trained the model end-to-end on one 16 GB NVIDIA Tesla V100 GPU.
### Evaluations on Private Dataset
#### 3.3.1 Quantitative Assessment.
In Table 1, we report the performance of ProtoASNet in AS severity classification against the black-box baselines for image (Huang et al. [9]), video (Ginsberg et al. [6]), as well as other prototypical methods, i.e. ProtoPNet [4] and XProtoNet [12]. In particular, for ProtoASNet, ProtoPNet [4], and XProtoNet [12], we conduct both image-based and video-based experiments with ResNet-18 and R(2+1)D-18 backbones respectively. We apply softmax to normalize the ProtoASNet output scores, including \(\alpha\), to obtain class probabilities that account for the presence of aleatoric uncertainty. We aggregate model predictions by averaging their probabilities from the image- (or clip-) level to obtain cine- and study-level predictions. We believe the uncertainty probabilities reduce the effect of less informative datapoints on final aggregated results. Additionally, the video-based models perform better than the image-based ones because the learnt prototypes can also capture AV motion which is an indicator of AS severity. These two factors may explain why our proposed method, ProtoASNet, outperforms all other methods for study-level classification.
#### 3.3.2 Qualitative Assessment.
The interpretable reasoning process of ProtoASNet for a video example is shown in Fig. 2. We observe that ProtoASNet places significant importance on prototypes corresponding to thickened AV leaflets due to
calcification, which is a characteristic of both early and significant AS. Additionally, prototypes mostly capture the part of the heart cycle that aligns with the opening of the AV, providing a clinical indication of how well the valve opens up to be able to pump blood to the rest of the body. This makes ProtoASNet's reasoning process interpretable for the user. Note how the uncertainty prototypes, which focus on AV regions where the valve leaflets are not visible, contribute to the uncertainty measure, resulting in the case being flagged as uncertain.
#### 4.2.2 Ablation Study.
We assessed the effect of removing distinct components of our design: uncertainty prototypes (\(\mathcal{L}_{abs},p_{k}^{u}\)), clustering and separation (\(\mathcal{L}_{clst},\mathcal{L}_{sep}\)), and the _push_ mechanism. As shown in Table 2, keeping all the aforementioned components results in superior performance in terms of bACC and bMAE. We evaluated whether the model is capable of detecting its own misclassification using the value of \(\alpha\) (or entropy of the class predictions in the case without \(\mathcal{L}_{abs},p_{k}^{u}\)). This is measured by the AUROC of detecting (\(y\neq\hat{y}\)). Learning \(p_{k}^{u}\) may benefit accuracy by mitigating the overfitting of \(p_{k}^{c}\) to poor-quality videos. Furthermore, \(\alpha\) seems to be a stronger indicator for misclassification than entropy. Moreover, we measured prototype quality using diversity and sparsity [8], normalized by the total number of prototypes. Ideally, each prediction can be explained by a low number of prototypes (low \(s_{spars}\)) but different predictions are explained with different prototypes (high Diversity). When \(\mathcal{L}_{clst}\) and \(\mathcal{L}_{sep}\) are removed, the prototypes are less constrained, which contributes to stronger misclassification detection and more diversity, but reduces accuracy and causes explanations to be less sparse. Finally, the _push_ mechanism improves performance, countering the intuition of an interpretability-performance trade-off.
Cine-level: N=973; study-level: N=258.

| Method | bACC\(\uparrow\) (cine) | F1\(\uparrow\) (cine) | bMAE\(\downarrow\) (cine) | bACC\(\uparrow\) (study) | F1\(\uparrow\) (study) | bMAE\(\downarrow\) (study) |
|---|---|---|---|---|---|---|
| Huang et al. [10] | 70.2(1.5) | 0.70(.02) | 0.33(.02) | 74.7(1.6) | 0.75(.02) | 0.28(.02) |
| ProtoPNet [4] | 67.8(3.7) | 0.66(.05) | 0.36(.05) | 70.9(4.7) | 0.69(.07) | 0.32(.05) |
| XProtoNet [12] | 69.2(1.3) | 0.69(.01) | 0.34(.01) | 73.8(0.8) | 0.74(.01) | 0.29(.01) |
| ProtoASNet (Image)* | 70.1(1.6) | 0.70(.02) | 0.33(.02) | 73.9(3.5) | 0.74(.04) | 0.29(.04) |
| Ginsberg et al. [6] | **76.0(1.4)** | **0.76(.01)** | **0.26(.01)** | 78.3(1.6) | 0.78(.01) | 0.24(.02) |
| XProtoNet (Video)* | 74.1(1.1) | 0.74(.01) | 0.29(.01) | 77.2(1.4) | 0.77(.01) | 0.25(.02) |
| ProtoASNet | 75.4(0.9) | 0.75(.01) | 0.27(.01) | **80.0(1.1)** | **0.80(.01)** | **0.22(.01)** |

\* Feature extraction modified to the corresponding input type.

Table 1: Quantitative results on the test set of our private dataset in terms of balanced accuracy (bACC), mean F1 score, and balanced mean absolute error (bMAE). bMAE is the average of the MAE of each class, assuming labels of \(0,1,2\) for no AS, early AS and significant AS respectively. Study-level results were calculated by averaging the prediction probabilities over all cines of each study. Results are shown as "mean(std)" calculated across five repetitions for each experiment. Best results are in bold.
### Evaluation on TMED-2, a Public Dataset
We also applied our method to TMED-2, a public image-based dataset for AS diagnosis. Consistent with [10], images were fed to a WideResNet-based prototype model with two output branches. The view classifier branch used average-pooling of patches followed by a fully connected layer. However, the AS diagnosis branch used the prototype setup outlined in Methods. A diagram of the overall architecture is available in the supplementary material. We trained the model end-to-end with images from all views. During inference, images with high entropy in the predicted view and high aleatoric uncertainty for AS classification were discarded. Then, probabilities for PLAX and PSAX were used for weighted averaging to determine the study-level prediction. Addition of the prototypical layer and thresholding on predicted uncertainty achieves 79.7% accuracy for AS severity, outperforming the existing black-box method [10] at 74.6%.
## 4 Conclusion
We introduce ProtoASNet, an interpretable method for classifying AS severity using B-mode echo that outperforms existing black-box methods. ProtoASNet identifies clinically relevant spatio-temporal prototypes that can be visualized to improve algorithmic transparency. In addition, we introduce prototypes for estimating aleatoric uncertainty, which help flag difficult-to-diagnose scenarios, such as videos with poor visual quality. Future work will investigate methods
Figure 2: Visualization of the ProtoASNet decision-making process for a test cine video showing significant AS but poor valve leaflet visualization. We visualize most similar video parts by overlaying the upsampled model-generated ROI, \(M_{p_{k}^{c}}(x_{test})\), on the test cine video. Likewise, we visualize prototypes by finding the training clip each prototype is drawn from, \(x_{p}\), and overlaying \(M_{p_{k}^{c}}(x_{p})\). ProtoASNet explains which spatio-temporal parts of the test echo are most similar to the prototypes and how accumulation of these supporting evidence results in the prediction probabilities. More visualizations of our model’s performance are included in the supplementary material in video format.
to optimize the number of prototypes, or explore out-of-distribution detection using prototype-based methods.
#### Acknowledgements.
This work was supported in part by the Canadian Institutes of Health Research (CIHR) and in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
|
2302.11395 | Using infinite server queues with partial information for occupancy
prediction | Motivated by demand prediction for the custodial prison population in England
and Wales, this paper describes an approach to the study of service systems
using infinite server queues, where the system has non-empty initial state and
the elapsed time of individuals initially present is not known. By separating
the population into initial content and new arrivals, we can apply several
techniques either separately or jointly to those sub-populations, to enable
both short-term queue length predictions and longer-term considerations such as
managing congestion and analysing the impact of potential interventions. The
focus in the paper is the transient behaviour of the $M_t/G/\infty$ queue with
a non-homogeneous Poisson arrival process and our analysis considers various
possible simplifications, including approximation. We illustrate the approach
in that domain using publicly available data in a Bayesian framework to perform
model inference. | Nikki Sonenberg, Victoria Volodina, Peter G. Challenor, Jim Q. Smith | 2023-02-22T14:23:03Z | http://arxiv.org/abs/2302.11395v1 | # Using infinite server queues with partial information for occupancy prediction
###### Abstract
Motivated by demand prediction for the custodial prison population in England and Wales, this paper describes an approach to the study of service systems using infinite server queues, where the system has non-empty initial state and the elapsed time of individuals initially present is not known. By separating the population into initial content and new arrivals, we can apply several techniques either separately or jointly to those sub-populations, to enable both short-term queue length predictions and longer-term considerations such as managing congestion and analysing the impact of potential interventions. The focus in the paper is the transient behaviour of the \(M_{t}/G/\infty\) queue with a non-homogeneous Poisson arrival process and our analysis considers various possible simplifications, including approximation. We illustrate the approach in that domain using publicly available data in a Bayesian framework to perform model inference.
Keywords: Infinite server queues; Non-stationary arrivals; Decision support; Parameter uncertainty
## 1 Introduction
This work is motivated by the problem of predicting short and longer-term implications of policy changes on the custodial elements of the prison system in England and Wales. The model described here was developed following consultation with the Ministry of Justice (MoJ) to add to their methods of forecasting the prison population, to help analyse the implications of changing external factors accounting for the prison population such as government guidelines and sentencing policies, and to consider the uncertainty involved in the model and its predictions.
The nature of the prison system is such that arrivals can't be turned away, hence infinite server queueing models are directly applicable as they support the assumption of no queueing delay for service. While such models have been widely used in modelling service systems, including call centres (Ibrahim et al., 2016), hospital staffing (Pender, 2016), software reliability (Huang and Huang, 2008) and insurance claims (Cheung et al., 2019), the assumptions relevant to our scenario lead us to consider some less well-known results from the queueing systems literature and discuss how they can be useful in this setting.
In this domain, matters of capacity management and overcrowding at individual
institutions have to be handled by adjustments involving medium term system-wide considerations such as sentencing patterns, parole guidelines and the use of community service arrangements. This intent to support policy makers leads to a focus on the _strategic_ level of decision support (Grieco et al., 2021; Hulshof et al., 2012), and to some considerations on the development of models expressed in terms that are interpretable by policy makers and can enable 'what if?' studies, including quantification of the impact of prospective policy change (Bravo et al., 2019; Dong and Perry, 2020; Kegel et al., 2017; Petris et al., 2009; Tuominen et al., 2022).
The value of decision support tools that can analyse the impact of interventions linked to policy change has been demonstrated in care pathways in health services (Demir and Southern, 2017). While modelling flows through a prison system has not been widely studied, there has been some work using queueing models to study the relationship between policy changes and prison occupancy. For example, Usta and Wein (2015) used a queueing network model of the jail system and a simulation approach to study the optimal mix of pretrial release and forms of sentencing to minimise the amount of recidivism, subject to constraints on the available prison occupancy; and Master et al. (2019) used a \(M/M/c/c\) queueing system to assess performance of alternative pretrial release policies, and of sentences with a split of custodial and supervised outcomes.
Using public data, we study a single phase of a prisoner's journey through the prison system, as it is well out of scope to model the full system. Our analysis is both informed by, and limited by, the availability of service system data: the size of the prison population is collected and recorded on a monthly basis, but the time served and the remaining length of stay (service times) of those individuals are not. Indeed for custodial sentence admissions, even within offence types sentence lengths differ, and there is a difference between a court imposed sentence length and the actual service time, so the model cannot assume remaining service times for those individuals present at a given time, even if their formal sentence lengths are known. This contrasts with, for example, work on bed demand in an intensive care unit that considered existing patients as well as arrivals, but could use knowledge of the length of stay of existing patients (Pagel et al., 2017).
Queueing model analyses typically assume the system begins in an empty state (Li et al., 2019) and then, assuming a Poisson arrival process, the well-known results of Eick et al. (1993) mean that the queue length exhibits a Poisson distribution with a mean derivable from properties of the service time distribution. The scenario motivating the work in this paper involves the less well-studied situation where there are \(n>0\) individuals initially present and where the elapsed time at \(t=0\) of each individual is not known (Goldberg and Whitt, 2008; Korhan Aras et al., 2017; Weber, 2005). In this case, the departure process is no longer Poisson, but the queue dynamics can be analysed as a combination involving those already in service at \(t=0\) and those who subsequently arrive, and the departure process is the superposition of a binomial and a Poisson process (Weber, 2005). This separation into initial content and new input allows these sub-populations to be analysed jointly or separately.
As pointed out by Pagel et al. (2017), in contrast to much research involving applications of queueing systems that rely on steady state distributions, when considering the use of models for informing short term consequences associated with operational decision making, results involving transient distributions of queueing systems become relevant.
As highlighted in a recent review of major healthcare applications of infinite server queuing models (Worthington et al., 2020), time-inhomogeneous infinite server models
have been used both for predictive modelling (e.g., ward capacity planning (Bekker and de Bruin, 2010)) and for investigating the impact of policy changes (e.g., the introduction of specialised treatment centres for specified illnesses (Utley et al., 2008)). Our ambition to support the analysis of both short and longer-term policy change means we also consider time-varying arrivals.
Another key choice in formulating a queueing theory model is the assumptions regarding service time. The available data suggest a heavy tailed service time (Ministry of Justice, 2023b). Hence, we are led to the \(M_{t}/G/\infty\) model as it is well recognised that there is an effect on the performance of the queue of the tail of the distribution as it becomes less exponential (Goldberg and Whitt, 2008).
To employ our model as a prediction tool, we must also consider the uncertainty involved as we are considering behaviour beyond the sample of the original data set used to estimate the model parameters (Gans et al., 2003). Incorporating parameter uncertainty as part of model inference is an important step to avoid overconfidence in our results (Aktekin and Soyer, 2011). We consider parameter uncertainty involved in the prediction of the system size using estimates from aggregate historical data. Techniques typically rely on summary observable data such as queue lengths, visit counts and response times, but perform inference in different ways (Spinner et al., 2015). A common approach is by Jongbloed and Koole (2001) using a Poisson mixture model and a recent survey on forecasting of the arrival process is by Ibrahim et al. (2016). We employ a Bayesian framework to perform model inference and prediction (Aktekin and Soyer, 2011; Xie et al., 2014). The Bayesian framework allows us to incorporate expert knowledge about the system into prior distributions of model parameters, which is particularly important since we have limited historical data available to us (O'Hagan, 1998).
In summary, we study the transient behaviour of the time-varying infinite server queue, \(M_{t}/G/\infty\), fed by a non-homogeneous Poisson arrival process whose occupancy is observed at discrete points in time, but the time in service to that point is not known. The contributions of this paper are: (i) the novel synthesis of results from several authors about transient and stationary behaviour of the \(M_{t}/G/\infty\) queue; and (ii) application of the approach, using Bayesian inference, to the real-world domain of prison occupancy - a domain that has not been well-studied in the literature.
The structure of this paper is as follows: in Section 2 we describe the motivating application; Section 3 outlines relevant results from the literature for analysing an observed \(M_{t}/G/\infty\) queue; Section 4 presents a Bayesian framework for the estimation of the model parameters using domain data to illustrate the use of the model to both short-term and longer-term predictions, and includes a discussion of the mathematical assumptions of the model. In Section 5 we provide concluding remarks, with comments comparing the presented queueing theoretic approach with time-series based estimation methods. The Appendices contain further information about the prison system (Appendix A), use of the theory (Appendix B), and some examples of how analysis with our model compares with using a time-series based ARIMA model (Appendix C).
## 2 Motivating application: Prison occupancy
The study in this paper was produced following consultation with the Ministry of Justice (MoJ). We briefly describe the custodial elements of the prison system, that is, those that require accommodation, with more details in Appendix A. Data on the prison population is collected and managed by MoJ and the HM Prison and Probation
Service (HMPPS). Statistics are regularly released as well as projections of the prison population (Ministry of Justice, 2018).
Attributes of the application domain that guided our modelling choices include: a large system of multiclass arrivals with a high offered load, no abandonments, input parameters that are subject to change and operation over a long time scale. Further, the domain data suggests the use of a non-exponential service time distribution and that a stationarity assumption is reasonable over short time frames, which allowed us to take advantage that several questions of interest have more tractable solutions under the assumptions of a stationary model.
Factors such as the conviction rate (the proportion of those arriving to court that are convicted and sentenced) and the custody rate (the proportion of those sentenced that are given custodial sentences) influence the sentenced population. Hence the size and composition of the prison population is subject to policy and legislation changes, for example, changes in Home Office (government department) resources that can affect charge rates and modifications to the sentencing guidelines (MoJ, 2019). Patterns in the published data illustrate how policy and legislative changes have had subsequent impacts on prison occupancy. Hence, from a policy maker's perspective, being able to adjust model parameters to allow, for example, for a more serious mix of offence groups coming before the courts reflects the importance of reviewing model parameters over time.
Of course, describing the dynamics of the prison system is beyond the scope of this paper, but from a modelling perspective it requires only some reasonable assumptions to treat the flow of prisoners as an \(M_{t}/G/\infty\) queue (Schwarz et al., 2016). Figure C1 displays our model of a simplified prisoner journey with the prison population divided into three main holding phases (displayed percentages are as of June 2019) (MoJ, 2019): (i) on remand (11%), (ii) sentenced prisoners (79%) and (iii) on recall (9%). Prisoners within the licence phase are in the community. A broader view of the prison system and its constraints would include, for example, the number of offenders on probation, staffing resources required for supervision, demands on the courts and parole hearing frequencies (Crowhurst and Harwich, 2016; Ministry of Justice, 2016, 2018).
## 3 An observed \(M_{t}/G/\infty\) queue
Motivated by obtaining a prediction of the population given partial information, we describe results applicable to an infinite server system with Poisson arrivals and general service times observed at time \(\tau>0\), under the assumption that we have no information on when the \(n\) individuals present at this time each began their service, namely an _observed \(M_{t}/G/\infty\) queue_. We define the infinite server queue in Section 3.1 and then present results for the conditional distribution for the observed queue in Section 3.2. As the \(n\) individuals initially observed at time \(\tau\) complete service, this sub-population will go to zero as \(t\rightarrow\infty\). Arrivals after time \(\tau\) will occur according to the Poisson dynamics described in Section 3.1. These results are used in our empirical study in Section 4.
### The \(M_{t}/G/\infty\) queue
The \(M_{t}/G/\infty\) queue is a service system in which individuals arrive according to a non-homogeneous Poisson process with rate function \(\lambda(t)\), for \(-\infty<t<\infty\) and where
the service times are independent and identically distributed (i.i.d.) and independent of the arrival process (Eick et al., 1993). Let \(S\) be the service time and denote by \(G\) its cumulative distribution function (cdf) and \(g\) its density. Assume \(E[S]<\infty\), \(\lambda(t)\) is bounded and integrable and define the associated random variable \(S_{e}\), the stationary excess of the service time, with cdf \(G_{e}(t)=P(S_{e}\leq t)\) for \(t\geq 0\),
\[G_{e}(t)=\frac{1}{E[S]}\int_{0}^{t}G^{c}(u)du, \tag{1}\]
where \(G^{c}(u)=1-G(u)\).
Let \(Q(t)\) represent the number of busy servers at time \(t\) and let \(m(t)=E[Q(t)]\).
**Theorem 3.1**.: _(Eick et al., 1993, Theorem 1) For an \(M_{t}/G/\infty\) queue that was initially empty at \(t=-\infty\), for each \(t\), \(Q(t)\) has a Poisson distribution with mean_
\[m(t)=E\left[\int_{t-S}^{t}\lambda(u)du\right]=E\left[\lambda(t-S_{e})\right]E [S]. \tag{2}\]
_The departure process is a Poisson process with time dependent rate function \(\lambda^{-}(t)\), where_
\[\lambda^{-}(t)=E\left[\lambda(t-S)\right]. \tag{3}\]
_For each \(t\), \(Q(t)\) is independent of the departure process in the interval \((-\infty,t]\)._
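As an illustrative numerical check of Theorem 3.1 (not part of the original analysis), the sketch below compares the formula for \(m(t)\) in Eq. (2) with a direct simulation of an \(M_{t}/G/\infty\) queue. The sinusoidal rate function, the Pareto service parameters and the warm-up horizon are all arbitrary assumptions chosen for illustration.

```python
# Numerical check of Eq. (2), m(t) = int lam(u) G^c(t - u) du, with assumed parameter values.
import numpy as np
from scipy import integrate

rng = np.random.default_rng(0)
theta, alpha = 10.0, 2.5
lam = lambda t: 5.0 + 2.0 * np.sin(t / 20.0)            # bounded, integrable rate function
Gc = lambda u: (1.0 + u / theta) ** (-alpha)             # Pareto survival function

t_obs, t0 = 200.0, -1000.0                               # long warm-up approximates "empty at -infinity"
m_formula = integrate.quad(lambda u: lam(u) * Gc(t_obs - u), t0, t_obs, limit=200)[0]

counts = []
for _ in range(500):                                     # simulate by thinning a rate-7 Poisson process
    n = rng.poisson(7.0 * (t_obs - t0))
    arrivals = rng.uniform(t0, t_obs, n)
    keep = rng.uniform(0.0, 7.0, n) < lam(arrivals)
    services = theta * (rng.uniform(size=keep.sum()) ** (-1.0 / alpha) - 1.0)  # inverse-cdf Pareto draws
    counts.append(np.sum(arrivals[keep] + services > t_obs))
print(m_formula, np.mean(counts))                        # the two numbers should be close
```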
The impact of the service time beyond its mean can be seen from \(E[S_{e}]=\frac{1}{2}E[S](c_{s}^{2}+1)\) where \(c_{s}^{2}=Var(S)/E[S]^{2}\) is the squared coefficient of variation (SCV).
For \(\lambda(t)=\lambda\), the approach to the steady state is given by
\[m(t)=\lambda G_{e}(t)E[S], \tag{4}\]
where the steady state has the insensitivity property, as \(m(\infty)=\lambda E[S]\). Similarly, the transient behaviour of a stationary model that has been terminated, that is, for \(t<0\), \(\lambda(t)=\lambda\) and zero otherwise, is \(m(t)=\lambda G_{e}^{c}(t)E[S]\).
In an empty system, the approach to steady state is seen from Equation (4). For a non-empty system observed at time \(\tau\), we note a result by Mandjes and Uraniewski (2011), who analysed the approach to steady state by identifying a function that behaves as the difference between the transient and stationary distributions in the limit, which is of relevance in the following Section 3.2.
**Example 3.2**.: We use the Pareto distribution as it exhibits the heavy-tailed, non-exponential survival times observed in the prison domain, and we draw upon this later. For \(G\sim Pa(\theta,\alpha)\) with shift parameter \(\theta>0\) and shape parameter \(\alpha>0\), for \(x\geq 0\), \(G^{c}(x)=\theta^{\alpha}(x+\theta)^{-\alpha}\). The high variability of \(G\) is indicated by the fact that the tail decays as a power instead of exponentially. For \(\alpha>1\), \(E[S]=\theta(\alpha-1)^{-1}\); if \(\alpha\leq 1\) then \(S\) has infinite mean and \(G^{c}(x)\) is not directly integrable. For \(\alpha>2\), \(Var[S]=\theta^{2}\alpha(\alpha-1)^{-2}(\alpha-2)^{-1}\); for \(1<\alpha\leq 2\) the variance is infinite, and for \(\alpha\leq 1\) it is undefined. For \(\alpha>1\), \(G_{e}^{c}(x)=\theta^{\alpha-1}(x+\theta)^{1-\alpha}\). If \(\alpha>2\) then the SCV is \(c_{s}^{2}=\alpha(\alpha-2)^{-1}\), so that \(E[S_{e}]=\theta(\alpha-2)^{-1}\).
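The closed forms in Example 3.2 are easy to verify numerically; the short sketch below does so for the arbitrary, illustrative values \(\theta=10\), \(\alpha=2.5\).

```python
# Quick check of the Pareto closed forms in Example 3.2 (parameter values are illustrative).
import numpy as np
from scipy import integrate

theta, alpha = 10.0, 2.5
Gc = lambda x: theta**alpha * (x + theta) ** (-alpha)                 # survival function of S
ES = theta / (alpha - 1.0)                                            # E[S]
Ge_c = lambda t: theta ** (alpha - 1) * (t + theta) ** (1 - alpha)    # claimed form of G_e^c

t = 5.0
Ge_numeric = integrate.quad(Gc, 0.0, t)[0] / ES                       # Eq. (1)
print(Ge_numeric, 1.0 - Ge_c(t))                                      # the two values should agree

scv = alpha / (alpha - 2.0)                                           # squared coefficient of variation
print(0.5 * ES * (scv + 1.0), theta / (alpha - 2.0))                  # both equal E[S_e]
```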
### Conditional distribution
Denote by \(\tau\) the observation time, by \(\tau+\delta\) the prediction time, and define the past arrival intensity \(\hat{\lambda}=\{\lambda(t):0\leq t\leq\tau\}\). For a stochastic process \(f(t)\) cut at an observation time \(\tau\), define the past process by \(\hat{f}(t)=f(t)\) if \(t<\tau\) and \(\hat{f}(t)=0\) if \(t\geq\tau\), and the future process by \(\check{f}(t)=0\) if \(t<\tau\) and \(\check{f}(t)=f(t)\) if \(t\geq\tau\). The past process can be thought of as an \(M_{t}/G/\infty\) process in which arrivals are terminated at time \(\tau\), so we are able to draw on results by Goldberg and Whitt (2008) who, motivated by a model of a two-stage item inspection process, studied the distribution of the last departure time from a queue of a terminating arrival process.
For the analysis of the observed queue, we separate the contributions of the subpopulations of those observed at time \(\tau\) and those arriving later. It is useful to consider the regions describing arrival and service pairs as depicted in Figure 2, where for each individual arriving to the system, a point is placed at \((u,v)\), with the \(u\)-axis recording arrival times and \(v\)-axis recording service times. For example, Region (1) corresponds to individuals arriving and departing by \(\tau\), and the region \(A_{(2\cup 3)}=\{(u,v)\mid\ u\leq\tau,u+v\geq\tau\}\) corresponds to individuals arriving on or before time \(\tau\) and still in service at time \(\tau\). Denote by \(N(A)\) the number of arrival-service pairs in a region \(A\). As per Theorem 3.1, the Poisson arrivals and general service times generate a Poisson process on the plane with the intensity of a point occurring at \((u,v)\) being \(\lambda(u)g(v)\). As independent splitting of Poisson processes results in Poisson processes, the numbers of points in two disjoint regions are independent Poisson random variables. For example, consider \(A_{(3)}=\{(u,v)\mid\ u\leq\tau,u+v\geq\tau+\delta\}\), corresponding to individuals arriving by \(\tau\) and present at \(\tau+\delta\). Then the proportion of individuals present at time \(\tau\) whose remaining service time from that point is at least \(\delta\) is clearly \(N(A_{(3)})/N(A_{(2\cup 3)})\). This geometric depiction is captured in the following result.
**Theorem 3.3**.: _(Goldberg and Whitt, 2008, Theorem 2.1) Conditional on there being \(n\) individuals in the system at time \(\tau\), the remaining service times are i.i.d., each distributed as a random variable \(X_{\tau}\) with ccdf_
\[G_{\tau}^{c}(x)=P\left(X_{\tau}>x\right)=\frac{1}{\nu_{\tau}}\int_{-\infty}^{ \tau}\lambda(u)G^{c}(\tau+x-u)du. \tag{5}\]
_where mean \(\nu_{\tau}\) is given by_
\[\nu_{\tau}=\int_{-\infty}^{\tau}\lambda(u)G^{c}(\tau-u)du=\int_{0}^{\infty} \lambda(\tau-u)G^{c}(u)du. \tag{6}\]
It directly follows that if \(\lambda(t)=\lambda\) for \(t\geq 0\), and \(\lambda(t)=0\) for \(t<0\), then \(\nu_{\tau}=\lambda E[S]G_{e}(\tau)\) and
\[G_{\tau}^{c}(x)=\frac{G_{e}(\tau+x)-G_{e}(x)}{G_{e}(\tau)}. \tag{7}\]
The above result provides an expression for the remaining service time distribution in Equation (5), which is required to calculate the conditional distribution in the following result.
**Theorem 3.4**.: _(Weber, 2005, Theorem 6) The random variable \(Q(\tau+\delta\mid\tau)\) with
\(Q(\tau)=n\) can be expressed as_
\[Q(\tau+\delta|\tau)=Bi\left(n,p_{\tau}(\delta)\right)+Po\left(\check{m}(\tau+ \delta)\right), \tag{8}\]
_where_
\[\check{m}(\tau+\delta)= E[\check{\lambda}(\tau+\delta-S_{e})]E[S]=\int_{0}^{\delta} \lambda(\tau+\delta-u)G^{c}(u)du, \tag{9}\]
_where \(p_{\tau}(\delta)=G^{c}_{\tau}(\delta)\) is given in Equation (5), and \(Po(\cdot)\) and \(Bi(\cdot,\cdot)\) are the Poisson and Binomial random variables, respectively._
The process \(Q(\tau+\delta)\) can be written as \(Q(\tau+\delta)=\hat{Q}(\tau+\delta)+\check{Q}(\tau+\delta)\), where \(\hat{Q}(t)\) has arrival rate \(\hat{\lambda}(t)\) and \(\check{Q}(t)\) has arrival rate \(\check{\lambda}(t)\). As constructed above, \(Q(\tau+\delta\mid\tau)\) will be the number of points in regions \((3\cup 5)\) of Figure C2. \(N(A_{(5)})\) has a Poisson distribution with mean given in Theorem 3.1 and the result for \(\check{Q}(\tau+\delta|\tau)\) is given by Equation (9). The distribution of \(N(A_{(3)})\) is binomial with \(n\) trials and parameter \(p_{\tau}(\delta)\) given in Equation (5).
Depending on the size of \(\delta\) relative to \(\tau\), either existing or new arrivals can dominate the prediction, and the calculations can reveal the contribution of each component to future demand requirements. For some quantities of interest, the two sub-processes \(Q(t)=(\hat{Q}(t),\check{Q}(t))\) from the expression in Equation (8) can be studied separately. The results extend easily to multiple classes \(i\), with \(\{Q_{i}(\tau)=n_{i}\}\), as each class is treated independently.
**Proposition 3.5**.: _The conditional distribution is_
\[P\left[Q(\tau+\delta)=y\mid Q(\tau)=n\right]=\sum_{k=0}^{\min\left\{n,y \right\}}P\left[Bi\left(n,p_{\tau}(\delta)\right)=k\right]P[Po\left(\check{m} (\tau+\delta)\right)=y-k]\,, \tag{10}\]
_with mean \(E\left[Q(\tau+\delta)\mid Q(\tau)=n\right]=np_{\tau}(\delta)+\check{m}(\tau+\delta)\), and variance \(Var\left[Q(\tau+\delta)\mid Q(\tau)=n\right]=np_{\tau}(\delta)(1-p_{\tau}(\delta))+\check{m}(\tau+\delta)\)._
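The conditional distribution in Proposition 3.5 is straightforward to evaluate numerically, since it is a convolution of a binomial and a Poisson pmf. The short sketch below assumes that \(p_{\tau}(\delta)\) and \(\check{m}(\tau+\delta)\) have already been computed (for instance by quadrature as above); the numerical values are illustrative only.

```python
# A sketch of Proposition 3.5: the conditional distribution of Q(tau+delta)
# given Q(tau)=n as a Binomial-Poisson convolution; inputs are illustrative.
import numpy as np
from scipy.stats import binom, poisson

n, p_tau_delta, m_check = 20, 0.4, 6.5

def cond_pmf(y):
    # Equation (10): sum over the number k of the n originals still present
    k = np.arange(0, min(n, y) + 1)
    return float(np.sum(binom.pmf(k, n, p_tau_delta) * poisson.pmf(y - k, m_check)))

mean = n * p_tau_delta + m_check
var = n * p_tau_delta * (1 - p_tau_delta) + m_check
print(cond_pmf(14), mean, var)
```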
The departure process of the observed \(M_{t}/G/\infty\) queue at time \(\tau+\delta\) will be the superposition of a Poisson process with intensity \(\lambda^{-}(\tau+\delta)=\int_{0}^{\delta}\lambda(\tau+\delta-u)g(u)du\) and a binomial process with parameters \(n\) and \(1-p_{\tau}(\delta)\) (Weber, 2005). A consequence is that the Poisson-in-Poisson-out feature of the unobserved \(M_{t}/G/\infty\) system does not hold.
Under a high load, the binomial process will be the superposition of several point processes and will be approximately Poisson: for large \(n\),
\[P[\tilde{Q}(\tau+\delta)=y|\tilde{Q}(\tau)=n]=P[Po(np_{\tau}(\delta)+\check{m }(\tau+\delta))=y]. \tag{11}\]
This amounts to treating the system as having begun with an initial population drawn from a Poisson distribution with mean \(n\); we employ this approximation in Section 4.
#### 3.2.1 Last departure time
The behaviour of the decay of the initial population \(\hat{Q}(t)\) can be described by results of Goldberg and Whitt (2008). Suppose \(n\) individuals are observed at time \(\tau\) and consider the time of the last departure. Define \(M_{n}=\max\{X_{\tau}^{1},\ldots,X_{\tau}^{n}\}\), where each \(X_{\tau}^{i}\) is distributed according to Equation (5), then \(P\left[M_{n}\leq x\right]=G_{\tau}(x)^{n}\). Now consider the case where the arrival process is terminated at time \(\gamma\), but where an observation is not made. Following Goldberg and Whitt (2008), let \(D\) be the last departure time and define \(T=(D-\gamma)^{+}\), the remaining time after \(\gamma\) until the last departure. To determine the distribution of \(T\), let \(N=N_{\gamma}\) be a random variable with a Poisson distribution having mean \(\nu_{\gamma}\), then \(T\stackrel{{ d}}{{=}}M_{N}\).
For any random variable \(Y\) with continuous cdf, let its quantile function \(q_{Y}\equiv q_{Y}(x)\) be such that \(P[Y\leq q_{Y}(x)]=x\). Write \(f(x)\sim g(x)\) as \(x\rightarrow\infty\) when \(f(x)/g(x)\to 1\) as \(x\rightarrow\infty\).
**Theorem 3.6**.: _(Goldberg and Whitt, 2008, Theorem 2.2) For any \(x>0\),_
\[P\left[T\leq x\right]=e^{-\nu_{\gamma}G_{\gamma}^{c}(x)}. \tag{12}\]
_Then as \(x\rightarrow\infty\), \(P[T>x]\sim\nu_{\gamma}G_{\gamma}^{c}(x),\) and for \(e^{-\nu_{\gamma}}<x<1\),_
\[q_{T}(x)=q_{X_{\gamma}}\left(1-\frac{\log(1/x)}{\nu_{\gamma}}\right). \tag{13}\]
_The moments can be calculated by \(E[T^{k}]=\int_{0}^{\infty}kx^{k-1}P[T>x]dx\)._
**Example 3.7**.: For \(G\sim Pa(\theta,\alpha)\), and with \(\lambda(t)=\lambda\) for \(t\in(-\infty,\tau]\),
\[q_{T}(x)=q_{X_{\tau}}\left(1+\frac{(\alpha-1)\log(x)}{\lambda\theta}\right), \tag{14}\]
with
\[q_{X_{\tau}}(x)=\left(\frac{\theta^{\alpha-1}}{1-x}\right)^{\frac{1}{\alpha-1 }}-\theta. \tag{15}\]
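A minimal sketch of Example 3.7, computing last-departure-time quantiles from Equations (14) and (15), follows; the parameter values are illustrative and the formula is only valid for \(e^{-\nu_{\gamma}}<x<1\).

```python
# A sketch of Example 3.7 for Pareto(theta, alpha) service times and a constant
# arrival rate; the parameter values are illustrative assumptions.
import numpy as np

lam_rate, theta, alpha = 5.0, 2.0, 3.0

def q_X(x):                              # Equation (15): quantile of X_tau
    return (theta ** (alpha - 1) / (1 - x)) ** (1 / (alpha - 1)) - theta

def q_T(x):                              # Equation (14): quantile of the last departure time T
    return q_X(1 + (alpha - 1) * np.log(x) / (lam_rate * theta))

print([(p, round(q_T(p), 2)) for p in (0.5, 0.9, 0.99)])
```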
In Figure 10, for fixed \(E[S]=3\), we illustrate the effect of varying \((\lambda,c_{s}^{2})\); increasing the SCV is visible in the quantiles of the last departure time \(T\). These results can be employed in the application domain to study the decline of a prison population with no new arrivals.
## 4 Empirical study and simulations
The statistical inference of infinite server queues has been studied given various types of incomplete data: (i) queue length data (Goldenshluger and Koops, 2019; Pickands, 1997); (ii) unmatched arrival and service times (Blanghaps et al., 2013); (iii) busy period data (Hall and Park, 2004); and (iv) arrival and departure counts (Li et al., 2019).
In this section, we present a method to generate predictions that account for parameter uncertainty using Bayesian inference, based on queue length data from March 2015 to March 2019 for the Adult/Male sentenced population (MoJ, 2023b, Table 1.2b: Sentenced to immediate custody). We refer to the MoJ documentation for details on counting processes, but we interpret this as the sentenced population and only consider data from 2015 onwards due to changes in the reporting processes.
We demonstrate the estimation approach for the model parameters using simulated monthly counts generated from the published quarterly data, as the published data alone are not sufficient to perform Bayesian inference about the system and model parameters. To generate monthly counts we: (i) divide the quarterly counts to obtain a monthly mean count; (ii) fit a smoothing spline to produce monthly predictions; (iii) add noise equal to the residual standard error from a linear model fit.
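A rough sketch of this construction is given below; the quarterly totals, spline settings, and noise scale are illustrative stand-ins rather than the exact choices made with the MoJ data.

```python
# A rough sketch of the monthly-count construction described above, assuming a
# vector of hypothetical quarterly totals.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
quarterly = np.array([3000., 2950., 2890., 2860., 2800., 2750.])

monthly_mean = np.repeat(quarterly / 3.0, 3)           # (i) quarterly -> monthly mean counts
t = np.arange(len(monthly_mean))
spline = UnivariateSpline(t, monthly_mean, s=len(t))   # (ii) smoothing spline for monthly predictions
lin_coef = np.polyfit(t, monthly_mean, 1)              # linear model fit for the residual SE
resid_se = np.std(monthly_mean - np.polyval(lin_coef, t), ddof=2)
monthly = spline(t) + rng.normal(0.0, resid_se, size=len(t))  # (iii) add noise
print(np.round(monthly, 1))
```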
### 4.1 Bayesian analysis of the sentenced population
To specify the model, we adopt a linear arrival rate of the form \(\lambda(t)=\beta_{0}+\beta_{1}t\) and assume that the service time distribution for each offence group is Pareto, and can be adequately estimated from historical data (over a much longer time interval than the prediction lead time). Using public data (MoJ, 2023a, July 2018: June 2019) for the mean sentence length for each offence group, we specify that the service time is 50% of the sentence length for this demonstration.
For a large system, define the mean function \(M(\tau+\delta)=E[\tilde{Q}(\tau+\delta)]\) using Equation (11). With the Pareto\((\theta,\alpha)\) distribution as in Example 3.2, we have
\[\nu_{\tau} =\frac{(\beta_{0}+\beta_{1}\tau)\theta}{\alpha-1}-\frac{\beta_{1 }\theta^{2}}{(\alpha-1)(\alpha-2)}, \tag{16}\] \[p_{\tau}(\delta) =\theta^{\alpha-1}(\delta+\theta)^{1-\alpha}\Bigg{[}1+\frac{ \beta_{1}\delta}{(\beta_{0}+\beta_{1}\tau)(2-\alpha)+\beta_{1}\theta}\Bigg{]},\] (17) \[\check{m}(\tau+\delta) =\frac{(\beta_{0}+\beta_{1}\tau)}{1-\alpha}[\theta^{\alpha}( \delta+\theta)^{1-\alpha}-\theta]\] \[+\frac{\beta_{1}}{(1-\alpha)(2-\alpha)}[\theta^{\alpha}(\delta+ \theta)^{2-\alpha}-\theta^{2}-\delta\theta(2-\alpha)]. \tag{18}\]
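These expressions translate directly into code; the following helper functions are a transcription of Equations (16)-(18) and are reused in the prediction sketch given later.

```python
# A direct transcription of Equations (16)-(18) for the linear arrival rate
# lambda(t) = beta0 + beta1*t and Pareto(theta, alpha) service times.
def nu_tau(tau, beta0, beta1, theta, alpha):
    return ((beta0 + beta1 * tau) * theta / (alpha - 1)
            - beta1 * theta ** 2 / ((alpha - 1) * (alpha - 2)))

def p_tau(delta, tau, beta0, beta1, theta, alpha):
    return (theta ** (alpha - 1) * (delta + theta) ** (1 - alpha)
            * (1 + beta1 * delta
               / ((beta0 + beta1 * tau) * (2 - alpha) + beta1 * theta)))

def m_check(delta, tau, beta0, beta1, theta, alpha):
    t1 = (beta0 + beta1 * tau) / (1 - alpha) * (
        theta ** alpha * (delta + theta) ** (1 - alpha) - theta)
    t2 = beta1 / ((1 - alpha) * (2 - alpha)) * (
        theta ** alpha * (delta + theta) ** (2 - alpha)
        - theta ** 2 - delta * theta * (2 - alpha))
    return t1 + t2
```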
We specify prior distributions for the arrival rate function and service time distribution parameters \(\{\beta_{0},\beta_{1},\alpha,\theta\}\), where arrival data are used to specify informative priors for the arrival rate function. We define a Bayesian model for the monthly numbers of arrivals with non-informative priors. Denote by \(Q_{a}(\tau+\delta)-Q_{a}(\tau)\) the number of individuals that arrive in \([\tau,\tau+\delta)\), which is Poisson distributed with mean \(\int_{\tau}^{\tau+\delta}(\beta_{0}+\beta_{1}u)du\). We extract the posterior samples of \(\beta_{0}\) and \(\beta_{1}\). Since both posterior densities are bell-shaped, we define normal priors for \(\beta_{0}\) and \(\beta_{1}\) centred on the mean of the posterior samples and with standard deviation equal to the scaled posterior sample standard deviation; the scaling ensures that the parameter space is fully explored during Markov chain Monte Carlo (MCMC). We specify a uniform prior \(\alpha\sim\text{Uniform}[2.5,10]\), where the lower bound avoids infinite variance and the upper bound ensures good convergence and mixing of the Markov chains; in practice, we observe that increasing the upper bound of \(\alpha\) leads to non-identifiability. We use RStan to perform full Bayesian statistical inference by adopting the Hamiltonian Monte Carlo (HMC) and no-U-turn sampler (NUTS) (Carpenter et al., 2017).
We demonstrate long-term (\(k\)-step ahead prediction) and short-term (one-step ahead prediction) forecasts. The long-term forecasts are useful in informing long-term planning decisions in relation to the prison system capacity, whereas short-term forecasts could provide the insight needed for the day-to-day prison system operation. To generate predictions about the system behaviour, we perform the following steps (a short code sketch of the resulting loop follows the list):
1. Sample \((\beta_{0}^{(k)},\beta_{1}^{(k)},\alpha^{(k)},\theta^{(k)})\) from the posterior distribution \([\beta_{0},\beta_{1},\alpha,\theta|\mathbf{n}^{s}]\) using RStan, where \(\mathbf{n}^{s}=(n_{1},\ldots,n_{\tau})\) with \(n_{i},i=1,\ldots,\tau\) being the number of individuals present in the system at time \(i\).
2. For each \(k=1,\ldots,N\) compute \(M^{(k)}(\tau+\delta)\) and sample \(\tilde{Q}^{(k)}(\tau+\delta)\) from \(Po(M^{(k)}(\tau+\delta))\).
3. Compute sample mean \(\mu_{\tilde{Q}}(\tau+\delta)=\frac{1}{N}\sum_{k=1}^{N}\tilde{Q}^{(k)}(\tau+\delta)\), which is our prediction.
4. Compute sample standard deviation, \(\sigma_{\tilde{Q}}(\tau+\delta)\quad=\)\(\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\tilde{Q}^{(k)}(\tau+\delta)-\mu_{ \tilde{Q}}(\tau+\delta)\right)^{2}}\), which represents uncertainty about our prediction.
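Assuming the posterior draws in step 1 have already been extracted from the fitted Stan model, the remaining steps reduce to the following loop; the helper functions `p_tau` and `m_check` are those transcribed from Equations (17) and (18) above.

```python
# A sketch of steps 2-4 of the prediction procedure, assuming posterior draws
# of (beta0, beta1, alpha, theta) are available as an (N, 4) array.
import numpy as np

def predict(draws, n_tau, tau, delta, seed=0):
    """draws: array of shape (N, 4) with columns (beta0, beta1, alpha, theta)."""
    rng = np.random.default_rng(seed)
    samples = np.empty(len(draws))
    for k, (beta0, beta1, alpha, theta) in enumerate(draws):
        mean_k = (n_tau * p_tau(delta, tau, beta0, beta1, theta, alpha)
                  + m_check(delta, tau, beta0, beta1, theta, alpha))   # mean in Equation (11)
        samples[k] = rng.poisson(mean_k)                               # step 2
    return samples.mean(), samples.std(ddof=1)                         # steps 3 and 4
```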
### 4.2 Predictions and simulation results
**Example 4.1**.: We demonstrate the Bayesian model specification to predict the number of prisoners in the Theft offence group where \(E[S]=5.22\) months. From the arrival count data, we obtain the posterior sample for \(\beta_{0}\) with mean 1376.5 and standard deviation 9.7. Similarly, the mean and standard deviation of posterior sample for \(\beta_{1}\) are -11.5 and 0.3 respectively. We then specify the priors: \(\beta_{0}\sim\text{Normal}(1376.5,10\times 9.7)\), \(\beta_{1}\sim\text{Normal}(-11.5,10\times 0.3)\) and \(\alpha\sim\text{Uniform}[2.5,10]\), with \(\theta=5.22(\alpha-1)\). We set two Markov chains with 10,000 iterations for each chain (including warmup).
In Figure C3, we produce the density plots of marginal posterior distributions for the model parameters. We observe that the posterior sample density of \(\beta_{0}\) is bell-shaped and centred around 677.4, with a posterior sample standard deviation of 17.76. Similarly, \(\beta_{1}\) is centred around -3.77 with posterior standard deviation 0.54. We observe that under our model specification the number of arrivals in the Theft offence group gradually decreases over time. The distributions of posterior samples for \(\theta\) and \(\alpha\) are less informative, since we included the mean service time in our prior specification.
In Figure C4 we illustrate long- and short-term predictions from August 2019 to March 2020. We note that the short-term projections are more computationally expensive, as each time a new observation is obtained we update the posterior distribution by re-running the Stan programme. The left panel plot in Figure C4 shows the long-term forecast (for 8 months). The number of offenders in the Theft offence category is expected to decrease over time. Our predictions slightly underestimate the true values; however, the true values lie within the two-standard-deviation prediction interval. We observe that uncertainty about our projections increases with time. The right panel plot in Figure C4 shows the short-term forecasts. The predictions are closer to the true values, with slightly lower uncertainty. To assess the performance of the proposed model, we use the Root Mean Square Error (RMSE): \(RMSE=\sqrt{q^{-1}\sum_{\delta=1}^{q}(n_{\tau+\delta}-\mu_{\tilde{Q}}(\tau+ \delta))^{2}}\), where a lower value indicates better model performance. The RMSE for long-term predictions is 53.08, whereas for short-term predictions it is 20.9.
**Example 4.2**.: Adopting a similar approach as in Example 4.1, we consider the number of offenders for two further offence groups: Sexual offences and Public order offences. The predictions for Public order offences are illustrated in Figure C5. For Sexual offences the RMSE for the long-term forecast is 7.16, whereas for the short-term forecast it is 5.10. For Public order offences, the RMSE for the long-term forecast is 2.97, whereas for the short-term forecast it is 1.89.
In the next example we reuse the posterior samples of model parameters derived in Examples 4.1 and 4.2 to demonstrate how the model can provide insight into policy modifications.
**Example 4.3**.: A change point can be studied, representing a switch from one kind of service to another at time \(\tau\) (that does not involve an observation); the model is then an \(M_{t}/G^{o},G^{\nu}/\infty\) queue (Korhan Aras et al., 2017) (using the superscript \(o\) for old and \(\nu\) for new). The distribution of \(Q(\tau+\delta)\) is Poisson with mean \(\hat{m}(\tau)G_{e}^{o,c}(\delta)+\check{m}(\tau+\delta)\).
In Figures C6, C7 and C8 we simulate the effect of increasing and decreasing the mean service time on the offence group population for theft offences, sexual offences and public order offences, respectively, and where the solid lines are the predictors and the dashed lines represent two standard deviation prediction intervals. We remark that to produce these results we reuse samples from the posterior distributions of \(\beta_{0},\beta_{1}\) and \(\alpha\). The new values of \(\theta\) are given by \(\theta=E[S^{\nu}](\alpha-1)\).
### 4.3 Discussion
In the previous section we illustrated how to use the Bayesian model for short and longer-term predictions, and to provide insight into the implications of possible policy modifications. In Appendix B, drawing on other theoretical results, we briefly describe how modification of sentencing and custody rates can be seen as a form of intervention to enable congestion event recovery. Empirical demonstration of the value of such analysis is beyond the scope of the paper, as this would require data not available to us.
With regard to the predictions in Section 4.2, we note that time-series forecasting methods such as ARIMA models (Shumway et al., 2000) can predict the future prison population by describing the autocorrelation in the data. A direct comparison, using the data from Section 4.1, is presented in Appendix C, and in Section 5 we provide some comments on the different approaches to forecasting.
We now briefly discuss the impact of the assumptions of the mathematical model described in Section 3. It is assumed that the time served by individuals at the observation point \(t=0\) is unknown, but if the elapsed service times \(\{y_{i}:i=1,\ldots,n\}\) of the observed population \(\hat{Q}(t)\) are recorded, it can be more effective to use this information to make predictions, while carrying a greater cost (Duffield and Whitt, 1997). In contrast to Equation (5), the conditional remaining service time ccdf for elapsed service time \(x\) is \(H_{x}^{c}(t)=G^{c}(t+x)/G^{c}(x)\).
As the conditional remaining service times are no longer identically distributed, the distribution of \(\hat{Q}(t)\) becomes more complicated and the resulting process is a Markov process (Korhan Aras et al., 2017), although the mean and variance of the number remaining at time \(t\) have simple forms: \(E[\hat{Q}(t)]=\sum_{i=1}^{n}H_{y_{i}}^{c}(t),\ Var[\hat{Q}(t)]=\sum_{i=1}^{n}H_{y_{i}}^{c}(t)H_{y_{i}}(t)\).
Further, as noted by Whitt (1999), the importance of conditioning upon the time served at an observation point increases as the service-time distribution differs more from an exponential distribution. If \(G\) is highly variable, then the elapsed holding
time can greatly help in future prediction and a long elapsed holding time tends to imply a very long remaining holding time. Let \(Y(\theta,\alpha)\) denote a random variable with the Pareto service time distribution \(G\) as defined in Example 3.2 and let \(Y_{x}(\theta,\alpha)\) denote the remaining service time given elapsed service time \(x\). By the result for \(H_{x}^{c}(t)\) above, Duffield and Whitt (1997, Theorem 8) showed that \(H_{x}^{c}(t)=G^{c}(t(\frac{x}{\theta}+1)^{-1})\), which implies \(Y_{x}(\theta,\alpha)\stackrel{{ d}}{{=}}\left(1+\frac{x}{\theta} \right)Y(\theta,\alpha)\). Hence \(E[Y_{x}(\theta,\alpha)]=\left(1+\frac{x}{\theta}\right)E[Y(\theta,\alpha)]\); that is, for large \(x\) the mean remaining service time is approximately proportional to the elapsed service time \(x\).
## 5 Concluding remarks
This work was motivated by the problem of predicting the population of housed inmates within the prison system in England and Wales, where the size of the prison population is recorded on a regular basis, with attention both to short-term predictions to enable local resource planning, and longer-term projections that can provide insight to policy makers regarding the impact of potential policy variations.
We studied the transient behaviour of the time-varying infinite server queue, \(M_{t}/G/\infty\), fed by a non-homogeneous Poisson arrival process whose occupancy is observed at discrete points in time, but the time in service to that point is not known. Drawing on this analysis, and using publicly available data, we built a model that could be used as a decision support tool for custodial elements of the prison system. We illustrated the use of such a model for population prediction and for analysing the implications of changing external factors influencing the prison population such as government guidelines and sentencing policies. The proposed model produces predictions together with uncertainty bands and aligns with the current guidelines on informed decision making in the UK government (Treasury, 2015).
It is beyond our scope to compare the queueing theory approach to generating predictions with multiple forecasting methods, but we note recent work (Liu et al., 2021) arguing that, for time-series methods to support general scenario analysis, components known as 'features' must be extracted from the time-series data, which in turn can be used to generate alternative scenarios (Kegel et al., 2017; Tuominen et al., 2022). In the queueing theory approach, model parameters have a direct interpretation for the application domain, which straightforwardly enables the study of a variety of public policy initiatives by setting different input values to the model. We demonstrated this in Example 4.3 for changes in the distribution of sentence lengths. As a further example, changes under consideration to the sentencing and release of serious and dangerous sexual and violent offenders (MoJ, 2019), could be studied by changes in the arrival rate (e.g., the custody rate, conviction rate), studies we were unable to undertake, as they require data not available to us. In contrast, time-series methods are not so amenable to what-if style scenario analysis as they offer the possibility of forecasting future observations, but with limited interpretability of the fitted model (Petris et al., 2009).
The contributions of this paper are: (i) the novel synthesis of results from several authors about transient and stationary behaviour of the \(M_{t}/G/\infty\) queue to enable construction of a model suited to short and longer-term predictions, and to supporting considerations of parameter uncertainty; and (ii) illustration of the approach to potential policy changes in the real-world domain of prison occupancy.
Reflecting the data available for model building, we focused on the situation where the system has non-empty initial state and where the elapsed time of each individual
in the system is not known. The dynamics of the queue are a combination of those already in service at some time and those who subsequently arrive, and separation into initial content and new input allowed these sub-populations to be analysed jointly and separately. We drew on results for the transient and stationary distributions of these queueing systems from several authors to enable an analytic approach. Then, using a Bayesian approach with public historical data that allows the inclusion of expert knowledge, we considered parameter uncertainty involved in the prediction of future arrivals and presented a model that maintains interpretability for the domain application. Incorporating other sources of uncertainty into our process (Whitt, 2002), including model and process uncertainty, and quantifying the contribution from each within our application is a topic of future research.
Further, we note that restoration of the departure process as approximately Poisson also allows the approach to be extended to a network of processes, referred to as an \((M_{t}/G/\infty)^{N}/M\) model, in which queue length distribution models have time-dependent product form and would be appropriate for models of many service systems (Massey and Whitt, 1993), including further development of a model specific to the prison domain.
The approach is potentially applicable to other service systems, but the queueing model properties of interest will vary according to the application context, and the choice of what approach to take will also depend on the available data. Data-driven development of a system model is an increasingly popular approach, as discussed in Mandelbaum et al. (2019). However, the infrastructure to collect and manage data at multiple phases and timescales is not yet comprehensive, so parameter inference remains a challenging problem because of limited data availability about successive system states, and for model building, methods that can take advantage of available but incomplete data are essential. We acknowledge that some of our articulated modelling assumptions may not apply in domains where fine-grained staffing implications are of interest, for example, hospitals and call centres.
The availability of future information (via predictive algorithms, machine learning, or local observation) and the resultant novel queueing analysis have policy design implications, described as "a broader shift from being reactive to proactive" as future information becomes part of the policy maker's toolkit (Spencer et al., 2014; Walton and Xu, 2021).
## Acknowledgements
We would like to thank our colleagues from the Ministry of Justice for helpful discussions in the development of this work, and the journal reviewers for constructive comments that helped to improve the presentation. The work was conducted as part of the _Managing Uncertainty in Government Modelling_ project supported by The Alan Turing Institute. This work was supported by the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research.
|
2310.01397 | Posterior Uncertainty Estimation via a Monte Carlo Procedure Specialized
for Data Assimilation | Through the Bayesian lens of data assimilation, uncertainty on model
parameters is traditionally quantified through the posterior covariance matrix.
However, in modern settings involving high-dimensional and computationally
expensive forward models, posterior covariance knowledge must be relaxed to
deterministic or stochastic approximations. In the carbon flux inversion
literature, Chevallier et al. proposed a stochastic method capable of
approximating posterior variances of linear functionals of the model parameters
that is particularly well-suited for large-scale Earth-system data assimilation
tasks. This note formalizes this algorithm and clarifies its properties. We
provide a formal statement of the algorithm, demonstrate why it converges to
the desired posterior variance quantity of interest, and provide additional
uncertainty quantification allowing incorporation of the Monte Carlo sampling
uncertainty into the method's Bayesian credible intervals. The methodology is
demonstrated using toy simulations and a realistic carbon flux inversion
observing system simulation experiment. | Michael Stanley, Mikael Kuusela, Brendan Byrne, Junjie Liu | 2023-10-02T17:55:39Z | http://arxiv.org/abs/2310.01397v2 | # Posterior Uncertainty Estimation via a Monte Carlo Procedure
###### Abstract
Through the Bayesian lens of data assimilation, uncertainty on model parameters is traditionally quantified through the posterior covariance matrix. However, in modern settings involving high-dimensional and computationally expensive forward models, posterior covariance knowledge must be relaxed to deterministic or stochastic approximations. In the carbon flux inversion literature, Chevallier et al. [6] proposed a stochastic method capable of approximating posterior variances of linear functionals of the model parameters that is particularly well-suited for large-scale Earth-system data assimilation tasks. This note formalizes this algorithm and clarifies its properties. We provide a formal statement of the algorithm, demonstrate why it converges to the desired posterior variance quantity of interest, and provide additional uncertainty quantification allowing incorporation of the Monte Carlo sampling uncertainty into the method's Bayesian credible intervals. The methodology is demonstrated using toy simulations and a realistic carbon flux inversion observing system simulation experiment.
_Keywords--_ Bayesian inference; data assimilation; uncertainty quantification; carbon flux inversion; uncertainty on uncertainty; observing system simulation experiment
## 1 Introduction
Uncertainty quantification (UQ) for data assimilation (DA) tasks is often non-trivial, but scientifically paramount to their understanding and interpretation. Since DA broadly describes methods combining observations with a computational model of a physical system, a Bayesian framework is often sensible for inference on the model parameters, as the posterior distribution quantifies knowledge resulting from this combination. As such, Bayesian statistical models are regularly used as the UQ framework. For example, Bayesian procedures play a central role in the general idea of optimal estimation [24], the broad field of DA [18], and the more specific field of carbon flux estimation [9, 20]. Inference for DA tasks using this statistical framework is typically challenging due to high-dimensional settings (e.g., high-resolution spatiotemporal grids) and the computer model's implicit numerical definition of the physical system of interest, often requiring supercomputers and long compute times. Prior and observation error distributions are often assumed to be Gaussian, yielding a Gaussian posterior distribution under a linear forward model. Although a Gaussian posterior can be exactly characterized by its mean vector and covariance matrix, the high-dimensionality makes dealing directly with the posterior covariance matrix intractable and the implicit computationally demanding forward model makes infeasible standard traditional Bayesian computational techniques, such as Markov Chain Monte Carlo (MCMC). The implicit posterior necessitates the development of computational methods that implicitly access it.
CO\({}_{2}\) flux inversion is a representative example of a high-dimensional DA task to which Bayesian modeling is applied and used to compute estimated flux fields [8, 10, 12]. In this problem, estimates of net surface-atmosphere CO\({}_{2}\) fluxes are inferred from atmospheric CO\({}_{2}\) measurements, with fluxes and atmospheric measurements being related by a chemical transport model (the computational forward model). However, the relatively sparse atmospheric CO\({}_{2}\) observations underconstrain surface fluxes of CO\({}_{2}\), and regularization with prior information is the Bayesian approach to making the problem well-posed. These analyses have historically assimilated measurements of atmospheric CO\({}_{2}\) from a global network of flask and in situ measurements [10], but more recent work [3, 8, 9, 16] has shifted to assimilating space-based column-averaged dry-air mole fractions, denoted \(\mathrm{X}_{\mathrm{CO_{2}}}\), as observations availability has expanded since 2009. In these analyses, the prior and error distributions are typically assumed to be Gaussian and the forward model can be reasonably assumed linear in the net surface-atmosphere fluxes.
When the number of model parameters is low and a forward model run is inexpensive, it is possible to explicitly construct the posterior covariance matrix. Successful examples of this approach date back at least to Vasco et al. [25] in seismic tomography, where inversion is performed on 12,496 model parameters. However, more contemporary problems typically have orders of magnitude more parameters and substantially more expensive forward models, requiring other approaches to access posterior covariance matrix information. Once the discretization of the computational model is set, the dimensionality problem can be handled
either by defining an approximate statistical model on a lower dimensional problem, or by working in some subspace of the full-dimensional problem. A recent example of the first strategy is seen in Zammit-Mangion et al. [27] in the WOMBAT inversion system which lowers the dimension of the statistical model via an intelligently chosen set of basis functions, facilitating MCMC. Alternatively, Petra et al. [23] propose with Stochastic Newton MCMC (SN-MCMC) the possibility for MCMC in the full parameter space by using a low-rank approximation to the posterior covariance within the proposal distribution of a Metropolis-Hastings algorithm. Although WOMBAT and SN-MCMC are both MCMC-based, WOMBAT assumes a linear forward model, while SN-MCMC does not, allowing it to characterize non-Gaussian posteriors. Staying with a linear forward model assumption, other approaches leverage low-rank posterior covariance approximations. Flath et al. [11] develop a low-rank algorithm for approximating the posterior covariance by computing the leading eigenvalues and eigenvectors of a prior-conditioned Hessian matrix of the associated objective function (i.e., the log posterior). In a similar spirit, Kalmikov and Heimbach [17] provide a derivative-based algorithm to compute leading Hessian eigenvalues and eigenvectors and extend the uncertainty quantification to quantities of interest in global ocean state estimation. The algorithms in both Flath et al. [11] and Kalmikov and Heimbach [17] rely upon the Lanczos method [19] for matrix-free computation of the low-rank approximation. Alternatively, Bousserez and Henze [1] more recently proposed a low-rank approximation algorithm dependent upon the randomized SVD algorithm [13]. All of the aforementioned methods can be grouped by their reliance upon some low-dimensional deterministic approximation.
In contrast, stochastic approximations of the posterior distribution rely neither upon pre-inversion dimension reductions nor low-rank matrix approximations, but rather generate ensembles of inversions using random generators. In carbon flux inversion, Chevallier et al. [6] developed such a method to estimate the posterior variance of _functionals_ of the flux field (i.e., maps from the flux field to the reals). The method uses the forward model, specified prior, and known observation error distributions in a particularly efficient manner. Broadly, the algorithm creates an ensemble of prior means and observation errors, sampling according to their respective distributions. For each ensemble member, it finds the maximum a posteriori (MAP) estimator, to which the functional is applied. Finally, it finds the empirical variance across the ensemble members to estimate the posterior variance of the functional. This method is well-suited for carbon flux estimation and DA UQ more generally for a few key reasons. First, each ensemble member is computationally independent, making the method parallelizable and hence offering a substantial computational benefit compared to sequential methods, such as MCMC. Second, although in general prior misspecification biases the posterior, the prior mean does not need to be correctly specified in order for the procedure to produce an unbiased estimator of the posterior variance. Third, the ensemble of inversions can flexibly produce UQ estimates for arbitrary functionals post hoc, as opposed to requiring the specification of a functional ahead of the analysis. Finally, since this method is more generally a Monte Carlo (MC) method for a Gaussian statistical model, the method's sampling uncertainty can be analytically characterized and accounted for in the final UQ estimate. The ability to easily characterize this uncertainty of the uncertainty stands in contrast to the difficulty in characterizing deterministic error of the aforementioned low-dimensional approaches.
Although Chevallier et al. [6] appear to have been the first to develop this method, which was later applied in Liu et al. [20], we are unaware of a formal statement or analysis of this algorithm. These previous works also do not quantify the algorithm's MC uncertainty. As such, the primary contributions of this paper are a rigorous formal statement of the algorithm, an analysis showing the convergence of its output to the true posterior quantity of interest, and uncertainty quantification of the algorithm itself so that the algorithm's sampling uncertainty can be accounted for in the final inference.
The rest of this paper is structured as follows. In Section 2, we fully describe the algorithm, present mathematical results proving its correctness, and derive deflation and inflation factors to apply to the estimated posterior uncertainty to quantify the MC uncertainty. Proofs of the mathematical results can be found in Appendix A. In Section 3, we provide two experimental demonstrations: the first is a low-dimensional problem in which we explicitly know the linear forward model, and the second is a carbon flux observing system simulation experiment (OSSE) to which we apply this method to compute global monthly flux credible intervals along with their MC uncertainty. Finally, we provide some concluding remarks in Section 4. For reference, all mathematical notation in order of appearance is collected in Table 1.
\begin{table}
\begin{tabular}{l l|l l} \hline \(\mathbf{c}\in\mathbb{R}^{m}\) & Scaling factors & \(\tilde{\mathbf{y}}\in\mathbb{R}^{n}\) & \(\mathrm{X}_{CO_{2}}\) observations \\ \(\mathbf{\mu}\in\mathbb{R}^{m}\) & Control fluxes & \(f\) & Forward model \\ \(\mathbf{\epsilon}\in\mathbb{R}^{n}\) & Observation noise & \(\mathbf{R}\in\mathbb{R}^{n\times n}\) & Observation noise covariance \\ \(\mathbf{A}\in\mathbb{R}^{n\times m}\) & Linear forward model & \(\mathbf{a}\circ\mathbf{b}\) & Element-wise multiplication of \(\mathbf{a}\) and \(\mathbf{b}\) \\ \(\mathbf{z}\in\mathbb{R}^{n}\) & Non-biospheric \(\mathrm{X}_{CO_{2}}\) component & \(\mathbf{y}\in\mathbb{R}^{n}\) & Biospheric component of \(\mathrm{X}_{CO_{2}}\) \\ \(\mathbf{c}^{b}\in\mathbb{R}^{m}\) & Prior scaling factor expectation & \(\mathbf{B}\in\mathbb{R}^{m\times m}\) & Prior scaling factor covariance \\ \(\pi(\mathbf{c}\mid\mathbf{y})\) & Posterior scaling factor density & \(b\in\mathbb{R}_{+}\) & Prior variance parameter \\ \(\mathbf{I}_{m}\in\mathbb{R}^{m\times m}\) & \(m\times m\) identity matrix & \(\mathbf{\theta}=\mathbf{c}\circ\mathbf{\mu}\) & Flux vector \\ \(\mathbf{A}_{\mathbf{\mu}}\in\mathbb{R}^{n\times m}\) & Forward model with control flux \(\mathbf{\mu}\) & \(\mathbf{\Sigma}\in\mathbb{R}^{m\times m}\) & Posterior scaling factor covariance \\ \(\mathbf{\alpha}\in\mathbb{R}^{m}\) & Posterior scaling factor expectation & \(\mathbf{\delta}\in\mathbb{R}^{m}\) & Posterior flux expectation \\ \(\mathbf{\Gamma}\in\mathbb{R}^{m\times m}\) & Posterior flux covariance & \(\mathbf{c}_{MAP}^{k}\in\mathbb{R}^{m}\) & \(k\)th MC sample \\ \(\mathbf{\Sigma}_{\mathbf{c}_{MAP}^{k}}\in\mathbb{R}^{m\times m}\) & MC MAP estimator covariance & \(\varphi(\mathbf{c})=\mathbf{h}^{\top}\mathbf{c}\) & Functional of interest \\ \(\overline{\varphi}\in\mathbb{R}\) & Mean MC functional value & \(\hat{\sigma}_{\varphi}^{2}\in\mathbb{R}_{+}\) & Empirical functional variance \\ \(\chi_{M-1}^{2}\) & Chi-squared distribution with \(M-1\) dof & \(\chi_{M-1,\alpha/2}^{2}\) & Chi-squared \((\alpha/2)\)-quantile \\ \(\alpha\in(0,1)\) & Frequentist confidence level & \(\gamma\in(0,1)\) & Bayesian credible interval level \\ \(L\) & Deflation factor for MC variance & \(R\) & Inflation factor for MC variance \\ \hline \end{tabular}
\end{table}
Table 1: Mathematical symbols and notation used herein (in order of appearance).
## 2 Monte Carlo Method Exposition, Analysis, and Uncertainty Quantification
### 2.1 The Bayesian 4D-Var Setup
Following along with the mathematical setup of Henze et al. [15], the prior and posterior distributions are defined in a scaling factor space and hence the prior and posterior distributions on the physical quantity of interest are obtained by multiplying the respective scaling factor by a control quantity. In carbon flux estimation, the control quantity is a control flux, typically an ansatz \(\mathrm{CO_{2}}\) flux between the Earth's surface and the atmosphere. Note that if the prior distribution mean in scaling factor space is unity, then the control flux is also the prior mean in the physical quantity of interest space. Mathematically, fix the scaling factor vector \(\mathbf{c}\in\mathbb{R}^{m}\) and let \(\tilde{\mathbf{y}}\in\mathbb{R}^{n}\) be the observation vector and \(\boldsymbol{\mu}\in\mathbb{R}^{m}\) the control physical quantity. The following model generally describes the relationship between the scaling factors and the observations:
\[\tilde{\mathbf{y}}=f(\mathbf{c};\boldsymbol{\mu})+\boldsymbol{\epsilon}, \quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{R}), \tag{1}\]
where \(\mathbf{R}\in\mathbb{R}^{n\times n}\) is the observation covariance matrix. The observation vector \(\tilde{\mathbf{y}}\) is a sequence of \(\mathrm{X}_{CO_{2}}\) observations produced from a remote sensing satellite, e.g., GOSAT or OCO-2 [21]. This model expression is a composition of an atmospheric transport model mapping scaling factors to atmospheric \(\mathrm{CO_{2}}\) concentrations composed with a remote sensing observation operator mapping \(\mathrm{CO_{2}}\) concentrations to \(\mathrm{X}_{CO_{3}}\) scalar values. The atmospheric transport model is known to be affine due to the physics of \(\mathrm{CO_{2}}\) atmospheric transport. In reality, the true mapping from atmospheric \(\mathrm{CO_{2}}\) concentrations to \(\mathrm{X}_{CO_{2}}\) is non-linear, but in line with [20], we use an affine form involving the known GOSAT averaging kernel. As such, the affine composed function \(f\) is of the form \(f(\mathbf{c};\boldsymbol{\mu})=\mathbf{A}(\mathbf{c}\circ\boldsymbol{\mu})+ \mathbf{z}\), where \(\mathbf{A}\in\mathbb{R}^{n\times m}\) is the linear forward model matrix, \(\mathbf{c}\circ\boldsymbol{\mu}\) denotes the component-wise scaling of \(\boldsymbol{\mu}\) by the scaling factors \(\mathbf{c}\), and \(\mathbf{z}\) is comprised of the non-biospheric \(\mathrm{CO_{2}}\) contribution to the observations along with the prior mean of the \(\mathrm{X}_{CO_{2}}\) retrieval algorithm. As such, we define \(\mathbf{y}:=\tilde{\mathbf{y}}-\mathbf{z}\), giving the linear model
\[\mathbf{y}=\mathbf{A}(\mathbf{c}\circ\boldsymbol{\mu})+\boldsymbol{\epsilon}, \quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{R}), \tag{2}\]
on which the following analysis is performed. For ease of reference, we herein refer to the linear map \(\mathbf{A}\) as the forward model. We emphasize that the matrix \(\mathbf{A}\) is not explicitly available to us, but is implicitly defined by the atmospheric transport model (and the satellite observation operator, which is usually explicitly available).
To regularize the problem and provide uncertainty quantification on the estimated scaling factors and fluxes, \(\mathbf{c}\) in Equation (2) is given a Gaussian prior distribution, yielding the following Bayesian generative model:
\[\mathbf{c} \sim\mathcal{N}(\mathbf{c}^{b},\mathbf{B}), \tag{3}\] \[\mathbf{y} \mid\mathbf{c} \sim\mathcal{N}(\mathbf{A}(\mathbf{c}\circ\boldsymbol{\mu}), \mathbf{R}), \tag{4}\]
where \(\mathbf{c}^{b}\in\mathbb{R}^{m}\) is the scaling factor prior mean and \(\mathbf{B}\) is the prior covariance matrix.
Finding the posterior mean (or, equivalently, the posterior mode) defined by Equations (3) and (4) characterizes a common DA problem of interest. A typical DA approach in carbon flux estimation is four dimensional variational data assimilation (4D-Var) [9; 20], a method optimizing carbon fluxes simultaneously over all time steps. 4D-Var can be regarded as a least-squares optimization with an \(\ell_{2}\) regularizer (ridge regression), or, equivalently, as maximum a posteriori (MAP) estimation in the Bayesian paradigm. This connection means that the 4D-Var optimization is connected to the posterior resulting from the prior and likelihood in Equations (3) and (4). From the Bayesian perspective, the 4D-Var cost function \(F(\mathbf{c})\) is the negative log-posterior density of the scaling factors given the observations,
\[\begin{split} F(\mathbf{c})&=-\log(\pi(\mathbf{c} \mid\mathbf{y}))=\frac{1}{2}\left(\mathbf{c}-\mathbf{c}^{b}\right)^{\top} \mathbf{B}^{-1}\left(\mathbf{c}-\mathbf{c}^{b}\right)\\ &+\frac{1}{2}\left(\mathbf{y}-\mathbf{A}(\mathbf{c}\circ \boldsymbol{\mu})\right)^{\top}\mathbf{R}^{-1}\left(\mathbf{y}-\mathbf{A}( \mathbf{c}\circ\boldsymbol{\mu})\right)+C,\end{split} \tag{5}\]
where \(C\in\mathbb{R}\) is a normalizing constant for the posterior distribution and \(\pi(\mathbf{c}\mid\mathbf{y})\) denotes the posterior density. Thus, finding the MAP estimator, i.e., the \(\mathbf{c}\) that maximizes the posterior density, is equivalent to finding the vector \(\mathbf{c}\) that minimizes 4D-Var cost function.
In this study (Sect. 3.2), the prior covariance \(\mathbf{B}\) is parameterized with a single real value, \(\mathbf{B}:=b^{2}\mathbf{I}_{m}\), where \(b\in\mathbb{R}\). This is in line with several published studies [9; 20], and implies that all prior spatio-temporal indices are statistically independent. Similarly, the noise covariance \(\mathbf{R}\) is assumed to be a diagonal matrix where each diagonal element is simply the variance of the corresponding \(\mathrm{X}_{CO_{2}}\) observation. As such, each diagonal element depends on the uncertainty of its corresponding \(\mathrm{X}_{CO_{2}}\) retrieval and the observations are assumed statistically independent given the scaling factors.
If the forward model \(\mathbf{A}\) is known explicitly, the posterior mean and covariance of \(\mathbf{c}\) are analytically tractable. We can then find the posterior uncertainty of the physical quantity \(\boldsymbol{\theta}=\mathbf{c}\circ\boldsymbol{\mu}\) given the observations. We rewrite Equation (4) using the short-hand notation for the \(\circ\) operation (see Appendix A), i.e., \(\mathbf{A}(\mathbf{c}\circ\boldsymbol{\mu})=\mathbf{A}_{\boldsymbol{\mu}} \mathbf{c}\):
\[\mathbf{y}\mid\mathbf{c}\sim\mathcal{N}(\mathbf{A}_{\boldsymbol{\mu}}\mathbf{c },\mathbf{R}). \tag{6}\]
Hence, \(\mathbf{c}\mid\mathbf{y}\sim\mathcal{N}(\boldsymbol{\alpha},\mathbf{\Sigma})\), where by the posterior mean and covariance Equations 4.3 and 4.7 in [24] we have:
\[\mathbf{\Sigma} =\left(\frac{1}{b^{2}}\mathbf{I}_{m}+\mathbf{A}_{\boldsymbol{\mu}} ^{\intercal}\mathbf{R}^{-1}\mathbf{A}_{\boldsymbol{\mu}}\right)^{-1}=\left( \left(\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{A}\right)\circ\boldsymbol{\mu} \boldsymbol{\mu}^{\intercal}+\frac{1}{b^{2}}\mathbf{I}_{m}\right)^{-1}, \tag{7}\] \[\boldsymbol{\alpha} =\mathbf{\Sigma}\left(\mathbf{A}_{\boldsymbol{\mu}}^{\top} \mathbf{R}^{-1}\mathbf{y}+\frac{1}{b^{2}}\mathbf{c}^{b}\right)=\mathbf{\Sigma} \left(\left(\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{y}\right)\circ\boldsymbol{ \mu}+\frac{1}{b^{2}}\mathbf{c}^{b}\right), \tag{8}\]
where Equation (7) follows from Corollary A.3.1 and Equation (8) follows from Lemma A.4. Note, \(\mathbf{\alpha}\) is also the MAP estimator of \(\mathbf{c}\). Furthermore, the posterior distribution for the physical quantity, \(\mathbf{\theta}=\mathbf{c}\circ\mathbf{\mu}\), is \(\mathbf{\theta}\mid\mathbf{y}\sim\mathcal{N}(\mathbf{\delta},\mathbf{\Gamma})\), where
\[\mathbf{\delta} =\mathbb{E}[\mathbf{\theta}\mid\mathbf{y}]=\mathbb{E}[\mathbf{c} \circ\mathbf{\mu}\mid\mathbf{y}]=\mathbb{E}[\ \mathbf{c}\mid\mathbf{y}]\circ\mathbf{\mu}=\mathbf{\alpha}\circ\mathbf{\mu}, \tag{9}\] \[\mathbf{\Gamma} =\mathrm{Cov}[\mathbf{\theta}\mid\mathbf{y}]=\mathrm{Cov}[\mathbf{c} \circ\mathbf{\mu}\mid\mathbf{y}]=\mathrm{Cov}[\mathbf{c}\mid\mathbf{y}]\circ\mathbf{ \mu}\mathbf{\mu}^{\top}=\mathbf{\Sigma}\circ\mathbf{\mu}\mathbf{\mu}^{\top}, \tag{10}\]
where we have used Lemmas A.1 and A.2 from Appendix A.
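When the forward model is an explicit matrix, Equations (7)-(10) can be evaluated directly. The following toy numpy sketch illustrates this; the dimensions, values, and variable names are illustrative assumptions and do not correspond to the flux inversion system.

```python
# A toy numpy sketch of Equations (7)-(10) for a small, explicitly known forward model.
import numpy as np

rng = np.random.default_rng(0)
n, m, b = 50, 8, 0.5
A = rng.normal(size=(n, m))                      # explicit toy forward model
mu = rng.uniform(0.5, 1.5, size=m)               # control fluxes
R = np.diag(rng.uniform(0.1, 0.3, size=n))       # observation noise covariance
c_b = np.ones(m)                                 # prior mean of scaling factors

A_mu = A * mu                                    # A_mu = A diag(mu)
Rinv = np.linalg.inv(R)
Sigma = np.linalg.inv(A_mu.T @ Rinv @ A_mu + np.eye(m) / b**2)        # Equation (7)

c_true = rng.normal(1.0, b, size=m)              # a draw from the prior
y = A_mu @ c_true + rng.multivariate_normal(np.zeros(n), R)           # synthetic observations
alpha = Sigma @ (A_mu.T @ Rinv @ y + c_b / b**2) # Equation (8): posterior mean / MAP
delta = alpha * mu                               # Equation (9): posterior flux mean
Gamma = Sigma * np.outer(mu, mu)                 # Equation (10): posterior flux covariance
```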
In practice, the forward model \(\mathbf{\Lambda}\) is only known implicitly via a computer simulator, making direct use of Equations (7)-(10) intractable. Instead, the 4D-Var cost function in Equation (5) is minimized using the L-BFGS-B algorithm [2] with the cost function gradient computed numerically using the adjoint method [15]. After a handful of iterations, L-BFGS-B finds a reasonable approximation of the posterior mean / mode \(\mathbf{\alpha}\), which yields a point estimator of \(\mathbf{\theta}\) using Equation (9). We now describe a procedure that provides an approach for uncertainty quantification of \(\mathbf{\theta}\) despite the intractability of the posterior covariance \(\mathbf{\Sigma}\).
### 2.2 The Monte Carlo Procedure
To execute the Monte Carlo procedure introduced in [6], we generate \(M\) ensemble members. For each \(k=1,2,\ldots,M\), we sample a new prior mean \(\mathbf{c}_{k}\) and new observation \(\mathbf{y}_{k}\) as follows:
\[\mathbf{c}_{k}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(\mathbf{1},b^{2}\mathbf{I}_{m}), \tag{11}\] \[\mathbf{y}_{k}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(\mathbf{A}\mathbf{\mu},\mathbf{R}), \tag{12}\]
where \(b^{2}\) is the prior variance parameter mentioned in Section 2.1. Notice that \(\mathbf{A}\mathbf{\mu}\) is known after a single forward model run, and hence Equation (12) is more illuminatingly seen as sampling Gaussian noise for each Monte Carlo sample: letting \(\mathbf{\epsilon}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{R})\) for \(k=1,\ldots,M\), each Monte Carlo iteration involves sampling a pair \((\mathbf{c}_{k},\mathbf{y}_{k})\in\mathbb{R}^{m}\times\mathbb{R}^{n}\), where \(\mathbf{y}_{k}:=\mathbf{A}\mathbf{\mu}+\mathbf{\epsilon}_{k}\).
The MAP estimator from Equation (8) corresponding to prior mean \(\mathbf{c}_{k}\) and observation \(\mathbf{y}_{k}\) is analytically tractable for each ensemble member when \(\mathbf{A}\) is explicitly known:
\[\mathbf{c}_{MAP}^{k}=\mathbf{\Sigma}\left(\left(\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{y}_{k}\right)\circ\mathbf{\mu}+\frac{1}{b^{2}}\mathbf{c}_{k}\right). \tag{13}\]
Similarly, the covariance matrix of this MAP estimator, henceforth denoted as \(\mathbf{\Sigma}_{\mathbf{c}_{MAP}^{k}}\), is
\[\mathbf{\Sigma}_{\mathbf{c}_{MAP}^{k}} =\mathbf{\Sigma}\,\mathrm{Cov}\left[\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{y}_{k}\circ\mathbf{\mu}+\frac{1}{b^{2}}\mathbf{c}_{k}\right]\mathbf{\Sigma}^{\top} \tag{14}\] \[=\mathbf{\Sigma}\left(\mathrm{Cov}\left[\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{y}_{k}\circ\mathbf{\mu}\right]+\frac{1}{b^{2}}\mathbf{I}_{m}\right)\mathbf{\Sigma}\] \[=\mathbf{\Sigma}\left(\mathrm{Cov}\left[\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{y}_{k}\right]\circ\mathbf{\mu}\mathbf{\mu}^{\top}+\frac{1}{b^{2}}\mathbf{I}_{m}\right)\mathbf{\Sigma}\] \[=\mathbf{\Sigma}\left(\left(\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{A}\right)\circ\mathbf{\mu}\mathbf{\mu}^{\top}+\frac{1}{b^{2}}\mathbf{I}_{m}\right)\mathbf{\Sigma}=\mathbf{\Sigma},\]
since \(\mathbf{\Sigma}^{-1}=\left(\left(\mathbf{A}^{\top}\mathbf{R}^{-1}\mathbf{A}\right)\circ\mathbf{\mu}\mathbf{\mu}^{\top}+\frac{1}{b^{2}}\mathbf{I}_{m}\right)\). This shows that the covariance matrix of the Monte Carlo ensemble of scaling factor MAP estimators is equal to the desired posterior covariance \(\mathbf{\Sigma}\). Note, the \(\circ\) operation step on the third line of derivation (14) follows from Lemma A.2 in Appendix A. To the best of our knowledge, proof of this equality has not appeared in previous literature on this method.
This covariance equality also exists in the physical quantity space, e.g., carbon flux space. The estimator of the physical quantity corresponding to \(\mathbf{c}_{MAP}^{k}\) is
\[\mathbf{\theta}_{k}=\mathbf{c}_{MAP}^{k}\circ\mathbf{\mu}. \tag{15}\]
Using the result from Lemma A.2 in Appendix A, the covariance matrix \(\mathrm{Cov}[\mathbf{\theta}_{k}]\) of this estimator is
\[\mathrm{Cov}[\mathbf{\theta}_{k}]=\mathrm{Cov}[\mathbf{c}_{MAP}^{k}\circ\mathbf{\mu}]= \mathrm{Cov}[\mathbf{c}_{MAP}^{k}]\circ\mathbf{\mu}\mathbf{\mu}^{\top}=\mathbf{\Sigma} \circ\mathbf{\mu}\mathbf{\mu}^{\top}=\mathbf{\Gamma}. \tag{16}\]
Hence, the covariance matrix of the Monte Carlo physical quantity estimator is equal to the posterior covariance matrix of that physical quantity.
However, for most DA tasks the forward model is not explicitly available, so each ensemble member MAP estimator \(\mathbf{c}_{MAP}^{k}\) must be obtained with an iterative optimization algorithm minimizing (5) with \(\mathbf{c}_{k}\) and \(\mathbf{y}_{k}\) as the prior mean and observation vectors. Once these ensemble members are obtained, we could in principle estimate the posterior scaling factor covariance matrix \(\mathbf{\Sigma}\) with the empirical covariance estimator \(\widehat{\mathbf{\Sigma}}\) based on the Monte Carlo ensemble as follows,
\[\mathbf{\widehat{\Sigma}}=\frac{1}{M-1}\sum_{k=1}^{M}\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)^{\top}, \tag{17}\]
where \(\overline{\mathbf{c}}=\frac{1}{M}\sum_{k=1}^{M}\mathbf{c}_{MAP}^{k}\). To translate Equation (17) to the physical quantity space, we simply plug the empirical covariance estimator into Equation (16), i.e.,
\[\mathbf{\hat{\Gamma}}=\widehat{\mathrm{Cov}[\mathbf{\theta}_{k}]}=\mathbf{\widehat{\Sigma}} \circ\mathbf{\mu}\mathbf{\mu}^{\top}. \tag{18}\]
In practice, DA scenarios like carbon flux inversion are typically high dimensional, making direct interaction with these covariance matrices difficult. Indeed, accurate estimation of \(\mathbf{\Sigma}\) using Equation (17) would require an enormously large Monte Carlo ensemble and would require storing and working with an \(m\times m\) matrix, where \(m\sim 10^{5}\) or larger. Fortunately, we often care about the variance of one-dimensional summaries of \(\mathbf{\theta}\), such as the posterior flux variance for a specific region during some time period, as opposed to the full posterior covariance matrix. For instance, we might wish to estimate North American fluxes over some month. For the remaining presentation of these ideas, we only discuss the posterior scaling factor vector, \(\mathbf{c}\mid\mathbf{y}\), but as we have shown above, obtaining the posterior flux is achieved by a simple component-wise scaling with the control flux \(\mathbf{\mu}\).
Obtaining quantities of the above type is mathematically implemented using a linear functional of the underlying high-dimensional parameter. That is, we wish to characterize the posterior of \(\varphi(\mathbf{c})=\mathbf{h}^{\top}\mathbf{c}\), where \(\mathbf{h}\in\mathbb{R}^{m}\) contains the weights necessary to aggregate the desired scaling factors. Hence, building off Equations (8) and (7), we obtain the posterior distribution for the functional of interest:
\[\varphi(\mathbf{c})\mid\mathbf{y}\sim\mathcal{N}(\mathbf{h}^{\top}\mathbf{\alpha}, \mathbf{h}^{\top}\mathbf{\Sigma}\mathbf{h}). \tag{19}\]
We wish to obtain the posterior variance of this functional. Define \(\sigma_{\varphi}^{2}=\text{Var}(\varphi(\mathbf{c})\mid\mathbf{y})=\mathbf{h}^{\top}\mathbf{\Sigma}\mathbf{h}\). We could inefficiently estimate this using \(\hat{\sigma}_{\varphi}^{2}=\mathbf{h}^{\top}\widehat{\mathbf{\Sigma}}\mathbf{h}\), but we wish to avoid working directly with the full empirical covariance matrix. The following algebraic steps provide a better alternative:
\[\hat{\sigma}_{\varphi}^{2} =\mathbf{h}^{\top}\left(\frac{1}{M-1}\sum_{k=1}^{M}\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)^{\top}\right)\mathbf{h} \tag{20}\] \[=\frac{1}{M-1}\sum_{k=1}^{M}\mathbf{h}^{\top}\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)^{\top}\mathbf{h}\] (21) \[=\frac{1}{M-1}\sum_{k=1}^{M}\left[\mathbf{h}^{\top}\left(\mathbf{c}_{MAP}^{k}-\overline{\mathbf{c}}\right)\right]^{2}\] (22) \[=\frac{1}{M-1}\sum_{k=1}^{M}\left(\varphi_{k}-\overline{\varphi}\right)^{2}, \tag{23}\]
where \(\varphi_{k}=\mathbf{h}^{\top}\mathbf{c}_{MAP}^{k}\) and \(\overline{\varphi}=\mathbf{h}^{\top}\overline{\mathbf{c}}=\frac{1}{M}\sum_{k=1}^{M}\varphi_{k}\). The above algebra shows that the posterior variance of the functional can be computed using the functionals of the Monte Carlo samples without having to form the full empirical covariance matrix. See Algorithm 1 for a succinct exposition of the above procedure. Note, the control flux can be built into the definition of \(\mathbf{h}\) so that the functional has the desired units. Notice also that the functional does not need to be specified when creating the Monte Carlo ensemble. As long as the ensemble \(\{\mathbf{c}_{MAP}^{k}\}_{k=1}^{M}\) is stored and made available to the end users, they may evaluate post-hoc the uncertainty of any functional that is of interest in their specific use-case.
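The following self-contained sketch illustrates the procedure on a toy problem where the forward model is an explicit matrix, so each ensemble member is available in closed form via Equation (13); in a real inversion each member would instead be obtained by a 4D-Var minimization, and all values here are illustrative.

```python
# A sketch of the Monte Carlo procedure on a toy problem with an explicit forward model.
import numpy as np

rng = np.random.default_rng(1)
n, m, b, M = 60, 10, 0.5, 500
A = rng.normal(size=(n, m))                      # explicit toy forward model
mu = rng.uniform(0.5, 1.5, size=m)               # control fluxes
R_diag = rng.uniform(0.1, 0.3, size=n)           # diagonal observation-error variances
A_mu = A * mu                                    # A_mu = A diag(mu)
Sigma = np.linalg.inv(A_mu.T @ (A_mu / R_diag[:, None]) + np.eye(m) / b**2)  # Equation (7)

h = mu.copy()                                    # functional: total flux (weights = control fluxes)
phis = np.empty(M)
for k in range(M):
    c_k = rng.normal(1.0, b, size=m)                                  # Equation (11)
    y_k = A_mu @ np.ones(m) + rng.normal(0.0, np.sqrt(R_diag))        # Equation (12)
    c_map_k = Sigma @ (A_mu.T @ (y_k / R_diag) + c_k / b**2)          # Equation (13)
    phis[k] = h @ c_map_k                                             # functional of ensemble member
sigma2_hat = phis.var(ddof=1)                    # Equation (23): empirical posterior variance
print(sigma2_hat, h @ Sigma @ h)                 # compare with the exact h^T Sigma h
```

The printed comparison against the exact value \(\mathbf{h}^{\top}\mathbf{\Sigma}\mathbf{h}\) illustrates the covariance equality derived above.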
### 2.3 Quantifying the Monte Carlo Uncertainty
Although Section 2.2 establishes the equality of the Monte Carlo MAP estimator ensemble member covariance to the posterior covariance (and therefore the equality of the Monte Carlo ensemble member functional variance to the posterior functional variance), we have not yet established that the empirical covariance matrix (and functional variance) converges in probability to the true posterior covariance matrix (and functional variance). There are consistency results showing the empirical covariance matrix converging in probability to the true covariance matrix (see, for instance, Chapter 6 in [26]). However, since this application is primarily concerned with linear functionals of the form \(\varphi(\mathbf{c})=\mathbf{h}^{\top}\mathbf{c}\) as described in Section 2.2, we can appeal directly to the consistency of the sample variance as shown, for example, in Chapter 5 of Casella and Berger (2002) [5].
Additionally, using the above algorithm, we would like to know either the uncertainty of the variance estimate given the number of Monte Carlo samples, or the number of samples required to obtain a particular level of Monte Carlo uncertainty on the variance. In essence, we would like to quantify the uncertainty of our uncertainty. To do so, we take a frequentist approach and construct confidence intervals for \(\sigma_{\varphi}^{2}\) based on \(\hat{\sigma}_{\varphi}^{2}\). The confidence intervals can be constructed by recognizing that the ratio of the Monte Carlo functional empirical posterior variance to the true functional posterior variance scaled by \((M-1)\) follows a \(\chi_{M-1}^{2}\) distribution.
Since each sampled \(\mathbf{c}_{MAP}^{k}\) is a linear function of other Gaussian samples (see Equation (13)), \(\mathbf{c}_{MAP}^{k}\) is also Gaussian, and thus the random variables \(\varphi_{k}=\mathbf{h}^{\top}\mathbf{c}_{MAP}^{k}\) (\(k=1,\ldots,M\)) are sampled independently and identically from a Gaussian distribution with some mean and variance \(\sigma_{\varphi}^{2}=\mathbf{h}^{\top}\mathbf{\Sigma}\mathbf{h}\). By Theorem 5.3.1 of Casella and Berger (2002) [5], we have the following distributional result,
\[\frac{(M-1)\hat{\sigma}_{\varphi}^{2}}{\sigma_{\varphi}^{2}}\sim\chi_{M-1}^{2}. \tag{24}\]
Thus, for \(\alpha\in(0,1)\), the distribution in Equation (24) enables creating a \(1-\alpha\) confidence interval for the true posterior variance, \(\sigma_{\varphi}^{2}\), as a function of the empirical posterior variance, \(\hat{\sigma}_{\varphi}^{2}\). Using the exact distribution in Equation (24), we can create either one- or two-sided confidence intervals. Focusing on the two-sided case, we have,
\[\mathbb{P}\left\{\chi_{M-1,\alpha/2}^{2}\leq\frac{(M-1)\hat{\sigma}_{\varphi}^{ 2}}{\sigma_{\varphi}^{2}}\leq\chi_{M-1,1-\alpha/2}^{2}\right\}=1-\alpha, \tag{25}\]
where \(\chi^{2}_{M-1,\alpha/2}\) is the \(\alpha/2\)-quantile of a chi-squared distribution with \(M-1\) degrees of freedom. Hence, with some algebraic manipulation we arrive at the confidence interval of the posterior variance,
\[\mathbb{P}\left\{\frac{(M-1)\hat{\sigma}_{\varphi}^{2}}{\chi^{2}_{M-1,1-\alpha/2 }}\leq\sigma_{\varphi}^{2}\leq\frac{(M-1)\hat{\sigma}_{\varphi}^{2}}{\chi^{2}_ {M-1,\alpha/2}}\right\}=1-\alpha. \tag{26}\]
Since in practice we would like to characterize uncertainty in the same units as the flux estimate, we can provide an analogous confidence interval for the posterior standard deviation by taking square roots of all the terms within the probability statement in Equation (26), giving
\[\mathbb{P}\left\{\hat{\sigma}_{\varphi}\sqrt{\frac{M-1}{\chi^{2}_{M-1,1-\alpha/ 2}}}\leq\sigma_{\varphi}\leq\hat{\sigma}_{\varphi}\sqrt{\frac{M-1}{\chi^{2}_{ M-1,\alpha/2}}}\right\}=1-\alpha. \tag{27}\]
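A short sketch of the interval in Equation (27), assuming SciPy's chi-squared quantile function (the function and variable names below are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def posterior_sd_confidence_interval(sigma_hat, M, alpha=0.05):
    """(1 - alpha) confidence interval for the true posterior standard
    deviation sigma_phi, given the Monte Carlo estimate sigma_hat computed
    from M ensemble members (Equation 27)."""
    lower = sigma_hat * np.sqrt((M - 1) / chi2.ppf(1.0 - alpha / 2.0, df=M - 1))
    upper = sigma_hat * np.sqrt((M - 1) / chi2.ppf(alpha / 2.0, df=M - 1))
    return lower, upper
```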
Equation (27) facilitates the computation of a \((1-\alpha)\times 100\%\) frequentist interval estimator of the Bayesian credible interval for the functional of interest \(\varphi\). For each endpoint of the true Bayesian credible interval, we find a confidence interval such that the probability that both endpoint confidence intervals simultaneously cover the true credible interval endpoints is \(1-\alpha\). Let \(\gamma\in(0,1)\) and \(\varphi_{MAP}=\mathbf{h}^{\top}\mathbf{c}_{MAP}\) be the functional MAP estimator as described in Section 2.1. Because the posterior is Gaussian and \(\varphi\) is a linear functional, the posterior distribution of \(\varphi\) is a one-dimensional Gaussian. Hence, the Bayesian \((1-\gamma)\times 100\%\) credible interval is computed as follows,
\[\left[\underline{\varphi}^{*},\overline{\varphi}^{*}\right]=\left[\varphi_{ MAP}-z_{1-\gamma/2}\cdot\sigma_{\varphi},\varphi_{MAP}+z_{1-\gamma/2} \cdot\sigma_{\varphi}\right], \tag{28}\]
where \(z_{1-\gamma/2}\) is the \(1-\gamma/2\) quantile of a standard Gaussian distribution. Equation (27) allows us to construct the aforementioned endpoint confidence intervals as follows. For readability, define \(L^{2}:=\frac{M-1}{\chi^{2}_{M-1,1-\alpha/2}}\) and
\(R^{2}:=\frac{M-1}{\chi^{2}_{M-1,\alpha/2}}\). Thus, we have the following,
\[1-\alpha =\mathbb{P}\left\{z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\leq z_{1-\gamma/2}\sigma_{\varphi}\leq z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\right\}\] \[=\mathbb{P}\big{\{}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\leq-z_{1-\gamma/2}\sigma_{\varphi}\leq-z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\text{ \ and }\] \[\qquad z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\leq z_{1-\gamma/2}\sigma_{\varphi}\leq z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\big{\}}\] \[=\mathbb{P}\big{\{}\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\leq\varphi_{MAP}-z_{1-\gamma/2}\sigma_{\varphi}\leq\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\text{ \ and }\] \[\qquad\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\leq\varphi_{MAP}+z_{1-\gamma/2}\sigma_{\varphi}\leq\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\big{\}}\] \[=\mathbb{P}\big{\{}\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\leq\underline{\varphi}^{*}\leq\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\text{ \ and }\] \[\qquad\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\leq\overline{\varphi}^{*}\leq\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\big{\}}.\]
More concisely, defining
\[\underline{I}:=\left[\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{ \varphi}R,\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\right], \tag{29}\] \[\overline{I}:=\left[\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{ \varphi}L,\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\right], \tag{30}\]
it follows that
\[\mathbb{P}\left\{\underline{\varphi}^{*}\in\underline{I}\text{ \ and \ }\overline{ \varphi}^{*}\in\overline{I}\right\}=1-\alpha. \tag{31}\]
The intervals \(\underline{I}\) and \(\overline{I}\) quantify uncertainty on uncertainty, and provide a rigorous probabilistic characterization of the Monte Carlo procedure's uncertainty.
In practice, the original Bayesian credible interval in Equation (28) can be modified to account for the Monte Carlo uncertainty. To obtain an _upper bound_ on the Bayesian credible interval, we apply an _inflation_ factor (defined above as \(R\)), and thus obtain the interval \(\left[\underline{\varphi}_{u},\overline{\varphi}_{u}\right]=\left[\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}R,\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}R\right]\), such that \(\mathbb{P}\left\{\left[\underline{\varphi}_{u},\overline{\varphi}_{u}\right]\supset\left[\underline{\varphi}^{*},\overline{\varphi}^{*}\right]\right\}=1-\alpha/2\). This probability is \((1-\alpha/2)\) instead of \((1-\alpha)\) since the probability in Equation (25) evaluated with only the lower bound yields a probability of \((1-\alpha/2)\). Following the same steps as above, we obtain lower and upper bounds on the lower and upper endpoints of credible interval (28), respectively, holding with probability exactly \((1-\alpha/2)\). Similarly, to obtain a _lower bound_ on the Bayesian credible interval, we apply a _deflation_ factor (defined above as \(L\)) and thus obtain the interval \(\left[\underline{\varphi}_{l},\overline{\varphi}_{l}\right]=\left[\varphi_{MAP}-z_{1-\gamma/2}\hat{\sigma}_{\varphi}L,\varphi_{MAP}+z_{1-\gamma/2}\hat{\sigma}_{\varphi}L\right]\) such that \(\mathbb{P}\left\{\left[\underline{\varphi}_{l},\overline{\varphi}_{l}\right]\subset\left[\underline{\varphi}^{*},\overline{\varphi}^{*}\right]\right\}=1-\alpha/2\), holding with equality by the same logic as that used for the inflation factor.
Observing that the aforementioned inflation and deflation factors monotonically asymptote to one as the number of Monte Carlo samples \(M\) gets large, the effect of the Monte Carlo procedure wanes as the number of samples grows. As shown in Table 2, the deflation factor monotonically approaches one from below as \(M\) gets large while the inflation factor monotonically approaches one from above as \(M\) gets large. As is characteristic of DA methods, each Monte Carlo iteration requires a non-trivial amount of computation, which practically restricts the number of Monte Carlo samples that can be obtained. As such, the inflated interval protects against underestimating the uncertainty, while the deflated interval provides a lower bound or "best-case" scenario for the uncertainty of the Bayesian procedure.
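The factors and the adjusted intervals described above can be computed directly; the following sketch assumes SciPy, and the function and variable names are ours:

```python
import numpy as np
from scipy.stats import chi2, norm

def inflation_deflation_factors(M, alpha=0.05):
    """Deflation (L) and inflation (R) factors for M Monte Carlo samples."""
    L = np.sqrt((M - 1) / chi2.ppf(1.0 - alpha / 2.0, df=M - 1))
    R = np.sqrt((M - 1) / chi2.ppf(alpha / 2.0, df=M - 1))
    return L, R

def adjusted_credible_interval(phi_map, sigma_hat, M, gamma=0.05, alpha=0.05,
                               inflate=True):
    """(1 - gamma) credible interval for the functional, widened
    (inflate=True) or narrowed (inflate=False) to account for the
    Monte Carlo uncertainty of the estimated posterior standard deviation."""
    L, R = inflation_deflation_factors(M, alpha)
    z = norm.ppf(1.0 - gamma / 2.0)
    half_width = z * sigma_hat * (R if inflate else L)
    return phi_map - half_width, phi_map + half_width

# For M = 100 and alpha = 0.05 this gives L ~ 0.88 and R ~ 1.16,
# consistent with the values listed in Table 2.
```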
## 3 Numerical Examples
### Low-Dimensional Example
We construct a two-dimensional toy example to demonstrate numerically that this MC procedure computes a consistent estimate of the posterior covariance, and that the estimate is numerically close to the analytical value in practice. We define a linear forward model with the following matrix:
\[\mathbf{A}=\begin{bmatrix}1-\epsilon&\epsilon\\ \epsilon&1-\epsilon\end{bmatrix}, \tag{32}\]
\begin{table}
\begin{tabular}{l l l} \hline \# Monte Carlo samples, \(M\) & Deflation: \(L=\sqrt{\frac{M-1}{\chi_{M-1,1-\alpha/2}^{2}}}\) & Inflation: \(R=\sqrt{\frac{M-1}{\chi_{M-1,\alpha/2}^{2}}}\) \\ \hline
10 & 0.6987 & 1.7549 \\
100 & 0.8785 & 1.1607 \\
1,000 & 0.9580 & 1.0458 \\
10,000 & 0.9863 & 1.0141 \\
100,000 & 0.9956 & 1.0044 \\
1,000,000 & 0.9986 & 1.0014 \\ \hline \end{tabular}
\end{table}
Table 2: Inflation and deflation factors for Monte Carlo (MC) estimated posterior standard deviation with \(\alpha=0.05\). When \(M=100\), by inflating the MC estimated posterior standard deviation by a factor of 1.1607 (inflating by 16.07%), the extra uncertainty resulting from the MC procedure is accounted for with 97.5% confidence. Similarly, when \(M=100\), deflating the MC estimated posterior standard deviation by a factor of 0.8785 provides a lower bound on the true underlying Bayesian uncertainty with 97.5% confidence. When considered simultaneously, the inflation and deflation factors bracket the true uncertainty with 95% confidence.
where \(\epsilon>0\). Let \(\mathbf{\theta}\in\mathbb{R}^{2}\) be the true state of some physical quantity and \(\mathbf{\mu}\in\mathbb{R}^{2}\) be the control state. We use the values in Table 3 to demonstrate the agreement between the analytical equations for the Bayesian procedure and the Monte Carlo procedure in addition to showing their agreement with the empirical covariance computed from the Monte Carlo ensemble members. Using the Table 3 parameters and Equation (7), we obtain the following posterior covariance matrix:
\[\mathbf{\Sigma}=\begin{bmatrix}2.10838562&-0.0867085\\ -0.0867085&0.8693668\end{bmatrix}. \tag{33}\]
For the analytical covariance of the MAP estimator, we obtain the following matrix using Equation (14):
\[\mathbf{\Sigma}_{\mathbf{c}_{MAP}^{k}}=\begin{bmatrix}2.10838562&-0.0867085\\ -0.0867085&0.8693668\end{bmatrix}. \tag{34}\]
Indeed, these matrices are expected to be the same. Using simulated ensemble members, we obtain the following empirical covariance matrix using Equation (17):
\[\widehat{\mathbf{\Sigma}}=\begin{bmatrix}2.09697751&-0.08748562\\ -0.08748562&0.87053267\end{bmatrix}. \tag{35}\]
This empirical covariance matrix is numerically very close to the analytical matrices. Using Equation (6.12) from [26], we can compute an upper bound on the relative error between the empirical covariance matrix and the true posterior covariance matrix with respect to the \(\ell_{2}\) operator norm that holds with at least some desired probability. Concretely, with \(n=2\) and \(M=10^{6}\), this deviation bound implies that the relative error is at most \(0.01784\) (\(1.784\%\)) with probability at least \(95\%\). This tight error bound matches the above numerical closeness.
All of the above quantities exist in the scaling factor space. For the physical quantity space covariance matrix, we obtain the following from Equations (10) and (16):
\[\mathbf{\Gamma}=\begin{bmatrix}0.52709641&-0.04335425\\ -0.04335425&0.8693668\end{bmatrix} \tag{36}\]
\[\text{Cov}[\mathbf{\theta}_{k}]=\begin{bmatrix}0.52709641&-0.04335425\\ -0.04335425&0.8693668\end{bmatrix} \tag{37}\]
\[\frac{1}{M-1}\sum_{k=1}^{M}(\theta_{k}-\overline{\theta})(\theta_{k}- \overline{\theta})^{\top}=\begin{bmatrix}0.52424438&-0.04374281\\ -0.04374281&0.87053267\end{bmatrix} \tag{38}\]
Again, we see that the agreement between the analytical forms and the empirical covariance matrix is very close.
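As a cross-check, the following sketch reproduces the analytical matrices above, assuming the standard linear-Gaussian posterior covariance \(\mathbf{\Sigma}=\left(\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{A}_{\boldsymbol{\mu}}/\sigma^{2}+\mathbf{B}^{-1}\right)^{-1}\) and taking the control state as \((0.5,\,1)^{\top}\) for illustration; both choices are assumptions on our part rather than quotations of Equations (7) and (10):

```python
import numpy as np

# Toy-example inputs (cf. Table 3); the control state below is an
# illustrative assumption.
eps, sigma2, b2 = 0.05, 1.0, 4.0
A = np.array([[1 - eps, eps], [eps, 1 - eps]])
mu = np.array([0.5, 1.0])

A_mu = A * mu                     # i-th row equals [A_ij * mu_j]_j
precision = A_mu.T @ A_mu / sigma2 + np.eye(2) / b2
Sigma = np.linalg.inv(precision)  # scaling-factor-space posterior covariance
Gamma = Sigma * np.outer(mu, mu)  # flux-space covariance, Sigma o mu mu^T

print(Sigma)   # matches Equation (33) to the printed precision
print(Gamma)   # matches Equation (36)
```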
This toy example demonstrates the correctness of the Monte Carlo procedure in terms of the covariance matrix. However, as seen in Algorithm 1, we are actually interested in scenarios involving a scalar functional of high-dimensional parameters, and hence a variance rather than a full covariance matrix. Such an example is considered in the following section.
### Carbon Flux Inversion OSSE
We show here an example of this Monte Carlo procedure being used to compute posterior uncertainties for global carbon fluxes. We follow the flux inversion setup used by Byrne et al. [4] (see Section 2.3 of that study). This setup uses the GEOS-Chem Adjoint model [15] to estimate scaling factors on a \(4^{\circ}\times 5^{\circ}\) surface grid from January 2010 up to and including August 2010. For each spatial point, there is one scaling factor parameter for each month, totaling \(m=72\times 46\times 8=26,496\) scaling factor parameters. This model is linear in terms of realistic fluxes (e.g., not including abnormally large negative fluxes), and hence amenable to this uncertainty quantification procedure. The OSSE defines ground-truth fluxes from the Joint UK Land Environment Simulator (JULES) [7, 14] and uses Net Ecosystem Exchange (NEE) fluxes from NOAA's CarbonTracker version CT2016 ([22], with updates documented at [https://www.esrl.noaa.gov/gmd/ccgg/carbontracker/](https://www.esrl.noaa.gov/gmd/ccgg/carbontracker/)) as the control fluxes. The satellite \(X_{CO_{2}}\) observations for the assimilation are generated from the JULES fluxes by running a forward GEOS-Chem simulation and sampling the model with the GOSAT observational coverage and observation operator [21].
The prior uncertainty, as described in Equation (3), is set to \(b=1.5\) (where \(\mathbf{B}:=b^{2}\mathbf{I}_{M}\)). To perform the Monte Carlo procedure, we draw \(M=60\) ensemble members, as described in Sec. 2.2. The \(X_{CO_{2}}\) observation uncertainty \(\mathbf{\Sigma}\) (a diagonal matrix, as the observations are assumed to be independent) comes directly from the GOSAT data product and varies between observations. For each ensemble member \(k\), the output of the
\begin{table}
\begin{tabular}{l l l} \hline \hline Parameter & Value & Description \\ \hline \hline \(\theta\) & \(\begin{bmatrix}1&2\\ 1&1\end{bmatrix}^{\top}\) & True state of the system \\ \(\mathbf{\mu}\) & \(\begin{bmatrix}1&1\\ 1&1\end{bmatrix}^{\top}\) & Control state \\ \(n\) & \(2\) & The number of observations in each ensemble member (also the dimension of \(\mathbf{y}\) in Equation (4)) \\ \(M\) & \(10^{6}\) & The number of MC ensemble members \\ \(b^{2}\) & \(4\) & The prior variance for each element in the scaling factor vector \\ \(\epsilon\) & \(0.05\) & Parameter of the matrix \(\mathbf{A}\) defined in Equation (32) \\ \(\sigma^{2}\) & \(1\) & Observation error variance \\ \hline \hline \end{tabular}
\end{table}
Table 3: Parameter settings for the low-dimensional example.
GEOS-Chem Adjoint optimization provides monthly scaling factor MAP estimators \(\mathbf{c}_{MAP}^{k}\) according to the ensemble member inputs as described by Equation (13). Each ensemble MAP estimator is then multiplied by the control flux to obtain a MAP estimator in flux space.
The functionals of interest \(\varphi\) are monthly global fluxes. The flux values on the 3-hour \(4^{\circ}\times 5^{\circ}\) spatial-temporal grid are mapped to a global monthly flux using a weighted average with weights proportional to the surface area of each grid cell and uniform time weighting. The global flux posterior variance is computed for each month by finding the empirical variance of the Monte Carlo global flux members, as shown in Equation (23). To get a sense of how the DA is reducing prior uncertainty, for each month, we compute a % uncertainty reduction as follows:
\[\%\;\text{Uncertainty\;Reduction}=1-\frac{\sigma_{posterior}}{\sigma_{prior}}. \tag{39}\]
Since we do not precisely know the posterior standard deviation in Equation (39), we consider the reduction both in terms of the raw Monte Carlo point estimate of the posterior standard deviation and its inflated version (i.e., \(R\) as defined in Section 2.3).
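A one-line sketch of this reduction, optionally applying the inflation factor \(R\) (the function name is ours):

```python
def uncertainty_reduction(sigma_prior, sigma_hat_posterior, inflation=1.0):
    """Equation (39); set inflation to the factor R of Section 2.3 to
    account for Monte Carlo uncertainty in the estimated posterior
    standard deviation."""
    return 1.0 - (inflation * sigma_hat_posterior) / sigma_prior
```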
The left side of Figure 1 shows the timeseries of global mean functionals and their credible intervals. The posterior flux is shown to have reduced error against the true flux, especially during the boreal summer months.
Similarly, the Monte Carlo posterior uncertainty estimate shows considerable reduction relative to the prior. The uncertainty estimates with inflated endpoints increase the posterior uncertainty by 22% while the deflated endpoints decrease the posterior uncertainty by 15%, resulting in credible interval endpoint bounds that capture the true credible interval endpoints with 95% probability. The right side of Figure 1 further emphasizes the prior to posterior uncertainty reduction that we mathematically expect. However, we notice that the inflated uncertainty is only reduced during boreal summer months. In January, February, March, and April the inflated Monte Carlo estimated posterior uncertainty is actually larger than the prior uncertainty. There is a logical explanation for this: since most of the landmass generating NEE fluxes is in the Northern Hemisphere and GOSAT requires sunlight to measure \(X_{CO_{2}}\), the satellite observations impose much weaker constraints on the fluxes during boreal winter. Furthermore, since the prior uncertainty is defined as a percentage, the prior is more concentrated during the boreal winter months when the absolute magnitude of the CarbonTracker fluxes is smaller. As a result of these two effects, the actual posterior uncertainty during the winter months is only slightly smaller than the prior uncertainty. Since we are obtaining a noisy Monte Carlo estimate of this uncertainty from using 60 ensemble members, the inflated value accounting for the Monte Carlo uncertainty of the posterior uncertainty is slightly larger than the prior uncertainty.
Figure 1: **(Left)** Estimated posterior \((1-\gamma)\times 100\%=95\%\) credible intervals around the monthly global flux functionals show markedly improved uncertainty over the prior during boreal summer months. The three interval types shown are the unchanged MC estimated intervals (red), the inflated MC estimated intervals (gray), and the deflated MC estimated intervals (orange). As described in Section 2.3, for each month, the true upper and lower credible interval endpoints are contained within the inflated and deflated endpoints with probability \(1-\alpha=0.95\). Note, \(\hat{\sigma}\) is a shorthand notation for the empirical functional standard deviation as defined in Step (3b) of Algorithm 1, \(\overline{\mu}\) is the globally averaged control flux to which the prior uncertainty, \(b\), is applied for each month, and \(R\) and \(L\) are the inflation and deflation factors, respectively, as defined in Table 2. We observe that with even as few as \(M=60\) ensemble members, at the monthly/global scale, the magnitude of the Monte Carlo sampling uncertainty is small in comparison to the posterior uncertainty. **(Right)** Percent reduction in uncertainty from prior to posterior for the monthly global fluxes is most significant during the boreal summer. The light blue curve shows the percent reduction estimated with the unchanged MC estimated posterior standard deviation, while the magenta curve shows the percent reduction estimated with the inflated MC estimated posterior standard deviation. The true reduction is larger than the reduction shown by the latter curve with 97.5% confidence.
## 4 Conclusion
For Bayesian uncertainty quantification in which the forward model is only available as a simulator, the carbon flux estimation community has proposed a useful Monte Carlo method to compute posterior uncertainties. This method is especially well-suited to DA tasks since it is parallelizable, works with computationally intensive physical simulators, and allows for flexible post-hoc uncertainty quantification on any desired functional of the model parameters. In this note, we analytically established the mathematical correctness of this procedure in the case of Gaussian prior and error distributions and provided additional uncertainty quantification to account for the Monte Carlo sampling variability in the final estimated credible interval. We also provided two numerical examples. In the first, we demonstrated the agreement between the analytical equations and empirical results for an explicitly known linear forward model. In the second, we showed that this procedure applies to a large-scale DA problem in the form of a carbon flux inversion OSSE, and reasoned that the uncertainty quantification results are mathematically and practically sensible. Future investigations of this method could be based on an exploration of how many ensemble members must be sampled before the Monte Carlo uncertainty is sufficiently less than the posterior uncertainty.
## Acknowledgments
This work was supported by NSF grant DMS-2053804, JPL RSA No. 1670375 and 1689177 and a grant from the C3.AI Digital Transformation Institute. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (grant no. 80NM0018D004). Liu and Byrne would like to acknowledge the funding support from NASA Orbiting Carbon Observatory Science Team program (17-OCO2-17-0013). Finally, we would like to thank Anna Harper for providing the JULES fluxes used in this study's OSSE, the STAMPS research group at Carnegie Mellon University for supporting this work, and the Uncertainty Quantification group at the Jet Propulsion Laboratory for facilitating this collaboration. We would like to acknowledge high-performance computing support from Cheyenne (doi:10.5065/D6RX99HX) provided by NCAR's Computational and Information Systems Laboratory, sponsored by the National Science Foundation.
## Appendix A Supporting Algebraic Results
There are a few key properties of the element-wise multiplication operation that must be stated in order to support the derivation of the equations presented in this paper.
For the following, let \(\mathbf{x}\in\mathbb{R}^{m}\) be a random vector such that \(\mathbb{E}[\mathbf{x}]=\boldsymbol{\mu}\) and \(\mathrm{Cov}[\mathbf{x}]=\boldsymbol{\Sigma}\). Additionally, suppose \(\mathbf{a}\in\mathbb{R}^{m}\) and \(\mathbf{A}\in\mathbb{R}^{n\times m}\).
**Lemma A.1**.: \(\mathbb{E}[\mathbf{x}\circ\mathbf{a}]=\mathbb{E}[\mathbf{x}]\circ\mathbf{a}\,.\)__
Proof.: By definition, we have
\[\mathbb{E}[\mathbf{x}\circ\mathbf{a}]=[\mathbb{E}[x_{i}a_{i}]]_{i}=[a_{i} \mathbb{E}[x_{i}]]_{i}=\mathbb{E}[\mathbf{x}]\circ\mathbf{a}\,. \tag{40}\]
**Lemma A.2**.: \(\mathrm{Cov}[\mathbf{x}\circ\mathbf{a}]=\mathrm{Cov}[\mathbf{x}]\circ\mathbf{ a}\mathbf{a}^{\top}\)__
Proof.: There are two terms that need to be computed: (1) \(\mathrm{Var}[x_{i}a_{i}]\) and (2) \(\mathrm{Cov}[x_{i}a_{i},x_{j}a_{j}]\). (1) is straightforward by properties of variance, namely, \(\mathrm{Var}[x_{i}a_{i}]=a_{i}^{2}\mathrm{Var}[x_{i}]\). (2) simply requires the definition of covariance, i.e.,
\[\mathrm{Cov}[x_{i}a_{i},x_{j}a_{j}]=\mathbb{E}\big{[}\big{(}x_{i}a_{i}- \mathbb{E}[x_{i}a_{i}]\big{)}\big{(}x_{j}a_{j}-\mathbb{E}[x_{j}a_{j}]\big{)} \big{]}=a_{i}a_{j}\mathrm{Cov}[x_{i},x_{j}]. \tag{41}\]
Hence, it follows that \(\mathrm{Cov}[\mathbf{x}\circ\mathbf{a}]=\mathrm{Cov}[\mathbf{x}]\circ\mathbf{ a}\mathbf{a}^{\top}\).
**Lemma A.3**.: _Let \(\boldsymbol{\mu}\in\mathbb{R}^{m}\) and let \(\boldsymbol{A}_{\boldsymbol{\mu}}\) be such that the \(i\)th row of \(\mathbf{A}_{\boldsymbol{\mu}}\) is equal to \([A_{ij}\mu_{j}]_{j}\). Then the following equation holds:_
\[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{A}_{\boldsymbol{\mu}}=\mathbf{A} ^{\top}\mathbf{A}\circ\boldsymbol{\mu}\boldsymbol{\mu}^{\top}. \tag{42}\]
Proof.: To prove Equation (42), first note that \(\boldsymbol{\mu}\boldsymbol{\mu}^{\top}=\left[\mu_{i}\mu_{j}\right]_{ij}\). Let \(i,j\in[m]\). By definition, we have
\[\left[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{A}_{\boldsymbol{\mu}}\right]_{ij}=\sum_{l=1}^{n}A_{li}\mu_{i}A_{lj}\mu_{j}=\mu_{i}\mu_{j}\sum_{l=1}^{n}A_{li}A_{lj}.\]
Hence, we can see that
\[\left[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{A}_{\boldsymbol{\mu}}\right]_{ij}=\left[\sum_{l=1}^{n}A_{li}A_{lj}\right]_{ij}\cdot\left[\mu_{i}\mu_{j}\right]_{ij}\]
and we have the desired result.
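A quick numerical spot-check of this identity (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.normal(size=(n, m))
mu = rng.normal(size=m)

A_mu = A * mu  # row i equals [A_ij * mu_j]_j
lhs = A_mu.T @ A_mu
rhs = (A.T @ A) * np.outer(mu, mu)
assert np.allclose(lhs, rhs)  # Lemma A.3
```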
**Corollary A.3.1**.: _Let \(\mathbf{M}\in\mathbb{R}^{n\times n}\) be a positive-definite matrix and let \(\mathbf{A}_{\boldsymbol{\mu}}\) be defined as in Lemma A.3. It follows that,_
\[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{M}\mathbf{A}_{\boldsymbol{\mu}}= \mathbf{A}^{\top}\mathbf{M}\mathbf{A}\circ\boldsymbol{\mu}\boldsymbol{\mu}^{ \top}. \tag{43}\]
Proof.: Since \(\mathbf{M}\) is positive-definite, it has a lower-triangular Cholesky decomposition, \(\mathbf{M}=\mathbf{L}\mathbf{L}^{\top}\). For all \(i\in[n]\) and \(j\in[m]\), we have the following equivalence:
\[\left[\mathbf{L}^{\top}\mathbf{A}_{\boldsymbol{\mu}}\right]_{ij}=\sum_{l=1}^{n} A_{lj}L_{li}\mu_{j}=\left[\left(\mathbf{L}^{\top}\mathbf{A}\right)_{\boldsymbol{\mu}} \right]_{ij}. \tag{44}\]
Therefore, \(\mathbf{L}^{\top}\mathbf{A}_{\boldsymbol{\mu}}=\left(\mathbf{L}^{\top}\mathbf{ A}\right)_{\boldsymbol{\mu}}\). Thus, Lemma A.3 implies,
\[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{M}\mathbf{A}_{\boldsymbol{\mu}}= \left(\mathbf{L}^{\top}\mathbf{A}_{\boldsymbol{\mu}}\right)^{\top}\left( \mathbf{L}^{\top}\mathbf{A}_{\boldsymbol{\mu}}\right)=\left(\mathbf{L}^{\top} \mathbf{A}\right)_{\boldsymbol{\mu}}^{\top}\left(\mathbf{L}^{\top}\mathbf{A} \right)_{\boldsymbol{\mu}}=\left(\mathbf{L}^{\top}\mathbf{A}\right)^{\top} \left(\mathbf{L}^{\top}\mathbf{A}\right)\circ\boldsymbol{\mu}\boldsymbol{\mu}^ {\top}, \tag{45}\]
and the result follows since \(\left(\mathbf{L}^{\top}\mathbf{A}\right)^{\top}\left(\mathbf{L}^{\top}\mathbf{ A}\right)=\mathbf{A}^{\top}\mathbf{M}\mathbf{A}\).
**Lemma A.4**.: _Let \(\mathbf{A}_{\boldsymbol{\mu}}\in\mathbb{R}^{n\times m}\) be defined as in the above, \(\mathbf{M}\in\mathbb{R}^{n\times n}\) and \(\mathbf{y}\in\mathbb{R}^{n}\). Then,_
\[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{M}\mathbf{y}=\mathbf{A}^{\top} \mathbf{M}\mathbf{y}\circ\boldsymbol{\mu}. \tag{46}\]
Proof.: It is sufficient to show the case when \(\mathbf{M}=\mathbf{I}_{n}\), as otherwise we can simply define a new vector \(\tilde{\mathbf{y}}=\mathbf{M}\mathbf{y}\). By matrix multiplication, for all \(j\in[m]\),
\[\left[\mathbf{A}_{\boldsymbol{\mu}}^{\top}\mathbf{y}\right]_{j}=\sum_{i=1}^{n} A_{ij}y_{i}\mu_{j}=\left[\mathbf{A}^{\top}\mathbf{y}\circ\boldsymbol{\mu} \right]_{j}. \tag{47}\]
**Lemma A.5**.: _Let \(\mathbf{A}_{\boldsymbol{\mu}}\in\mathbb{R}^{n\times m}\) be defined as in the above. Then,_
\[\mathbf{A}(\mathbf{a}\circ\boldsymbol{\mu})=\mathbf{A}_{\boldsymbol{\mu}} \mathbf{a}. \tag{48}\]
Proof.: This property follows simply from the definition of \(\mathbf{A}_{\boldsymbol{\mu}}\) and matrix multiplication.
**Corollary A.5.1**.: _Define \(\mathbf{A}\in\mathbb{R}^{n\times m}\) as above, let \(\alpha,\beta\in\mathbb{R}\), and let \(\mathbf{a},\mathbf{b},\boldsymbol{\mu}\in\mathbb{R}^{m}\). Then \(\mathbf{A}(\mathbf{a}\circ\boldsymbol{\mu})\) is linear in \(\mathbf{a}\)._
Proof.: By Lemma A.5, we have,
\[\mathbf{A}\left((\alpha\mathbf{a}+\beta\mathbf{b})\circ\boldsymbol{\mu} \right)=\mathbf{A}_{\boldsymbol{\mu}}\left(\alpha\mathbf{a}+\beta\mathbf{b} \right)=\alpha\mathbf{A}(\mathbf{a}\circ\boldsymbol{\mu})+\beta\mathbf{A}( \mathbf{b}\circ\boldsymbol{\mu}). \tag{49}\]
|
2305.05347 | On the onset delays of solar energetic electrons and protons: Evidence
for a common accelerator | The processes responsible for the acceleration of solar energetic particles
(SEPs) are still not well understood, including whether SEP electrons and
protons are accelerated by common or separate processes. Using a numerical
particle transport model that includes both pitch-angle and perpendicular
spatial diffusion, we simulate, amongst other quantities, the onset delay for
MeV electrons and protons and compare the results to observations of SEPs from
widely-separated spacecraft. Such observations have previously been
interpreted, in a simple scenario assuming no perpendicular diffusion, as
evidence for different electron and proton sources. We show that, by assuming a
common particle source together with perpendicular diffusion, we are able to
simultaneously reproduce the onset delays for both electrons and protons. We
argue that this points towards a common accelerator for these particles.
Moreover, a relatively broad particle source is required in the model to
correctly describe the observations. This is suggestive of diffusive shock
acceleration occurring at large shock structures playing a significant role in
the acceleration of these SEPs. | R. D. Strauss, N. Dresing, I. G. Richardson, J. P. van den Berg, P. J. Steyn | 2023-05-09T11:20:02Z | http://arxiv.org/abs/2305.05347v1 | # On the Onset Delays of Solar Energetic Electrons and Protons: Evidence for a Common Accelerator
###### Abstract
The processes responsible for the acceleration of solar energetic particles (SEPs) are still not well understood, including whether SEP electrons and protons are accelerated by common or separate processes. Using a numerical particle transport model that includes both pitch-angle and perpendicular spatial diffusion, we simulate, amongst other quantities, the onset delay for MeV electrons and protons and compare the results to observations of SEPs from widely-separated spacecraft. Such observations have previously been interpreted, in a simple scenario assuming no perpendicular diffusion, as evidence for different electron and proton sources. We show that, by assuming a common particle source together with perpendicular diffusion, we are able to simultaneously reproduce the onset delays for both electrons and protons. We argue that this points towards a common accelerator for these particles. Moreover, a relatively broad
particle source is required in the model to correctly describe the observations. This is suggestive of diffusive shock acceleration occurring at large shock structures playing a significant role in the acceleration of these SEPs.
cosmic rays -- diffusion -- Sun: heliosphere -- solar wind -- turbulence
## 1 Introduction
The potential acceleration mechanisms for solar energetic particles (SEPs) are still heavily debated. The most likely acceleration regions are magnetic reconnection (or a closely related mechanism) in solar flares and shock acceleration (either shock drift acceleration or diffusive shock acceleration) related to shocks driven by coronal mass ejections (CMEs). SEP observations at Earth have led to the (potentially oversimplified) classification of events into impulsive (sometimes also referred to as electron-rich), that are associated with short-duration solar flares, and gradual SEP events, that are associated with CMEs and long-duration flares (e.g. Reames, 2013), though some studies have suggested that there is a continuum of SEP event properties (e.g. Cane et al., 2010). It is therefore not clear whether SEP electrons and protons are accelerated by the same acceleration mechanism during the same transient solar events.
Previous work has shown a clear empirical relationship between MeV electron and proton measurements. For instance, the Posner (2007) Relativistic Electron Alert System for Exploration (REleASE) algorithm uses an empirically established relationship between relativistic electron intensities and subsequent proton intensities to predict future proton levels. More recently Dresing et al. (2022) found a similar dependence of electron and proton intensities on shock parameters suggesting a common shock-related accelerator for both species. Additionally, Richardson et al. (2014) found clear linear (in logarithmic space) relationships between onset and peak delays at the observing spacecraft
relative to the onset of the related type III radio emission for different levels of magnetic connection between the spacecraft and the associated flare location. However, these linear relationships from Richardson et al. (2014) are different for electrons and protons and cannot be explained by simple ballistic motion between a single expanding source and the observer, as also discussed by Kollhoff et al. (2021) in relation to the widespread SEP event on November 29, 2020. If particle transport is neglected, the different electron and proton dependencies can be explained by different sources for these SEP electrons and protons, expanding in longitude at different rates.
In this work we examine whether SEP electron and proton measurements at Earth can in fact be explained by a common acceleration source when interplanetary transport (i.e. particle scattering) is also considered. The focus is on simulating the particle onset delays as presented by Richardson et al. (2014). The modelling approach applied here includes perpendicular (cross-field) diffusion. There is increasing evidence that perpendicular diffusion is an essential transport process for SEPs. Kouloumvakos et al. (2022), for instance, show that spacecraft that are magnetically unconnected to a CME-driven shock can still observe a significant SEP increase. The low particle anisotropy levels associated with such poorly connected observers (e.g. Dresing et al., 2014) are also consistent with significant levels of cross-field diffusion: As the perpendicular diffusion process is generally much slower than parallel transport, the initial, highly anisotropic, beam of SEPs predominantly propagates along the mean field without being scattered significantly away from it. However, perpendicular diffusion becomes increasingly effective at later times during the nearly-isotropic decay phase of the SEP events. These nearly-isotropic particle distributions, being scattered effectively perpendicular to the mean field later in the SEP event, are usually associated with low levels of particle anisotropy. Additionally, modelling by e.g. Droge et al. (2016) has shown that significant levels of perpendicular diffusion are needed to reproduce many observed SEP events.
## 2 The Numerical Transport Model
In this work we simulate the transport of SEPs through the turbulent interplanetary medium by solving the following focused transport equation for the distribution function, \(f\),
\[\frac{\partial f}{\partial t}+\nabla\cdot\left(\mu v\hat{b}f\right)+ \frac{\partial}{\partial\mu}\left(\frac{1-\mu^{2}}{2L}vf\right) = \frac{\partial}{\partial\mu}\left(D_{\mu\mu}\frac{\partial f}{ \partial\mu}\right) \tag{1}\] \[+ \nabla\cdot\left(\mathbf{D}_{\perp}^{(x)}\cdot\nabla f\right)\]
using the approach outlined by Strauss and Fichtner (2015). Here, \(\mu\) is the particle pitch-angle cosine with respect to the mean magnetic field, directed in the \(\hat{b}\) direction (see e.g. van den Berg et al., 2020). This model was previously applied to near-relativistic electron transport by Strauss et al. (2017) and
Figure 1: The left panel shows the assumed slab turbulence spectrum at Earth, as a function of the parallel wavenumber, while the shaded regions indicate where electrons (red shading) and protons (blue shading) in the assumed energy ranges will resonate. The right panel shows the resulting electron and proton parallel and perpendicular mean-free-paths as used in this study. Also shown is the magnetic focusing length.
Strauss et al. (2020). In these previous applications, the focus was on calculating the pitch-angle diffusion coefficient, \(D_{\mu\mu}\), and perpendicular diffusion coefficient, \(D_{\perp}\), from first principles using observed solar wind turbulence values. We continue with that approach here, but apply the model to both SEP electron and proton transport. Although we have tried, as far as possible, to constrain all the turbulence quantities from measurements, we accept that there are still some uncertainties related to these quantities and the associated diffusion coefficients calculated from them. The left panel of Fig. 1 shows the assumed slab turbulence spectrum, as a function of the parallel wavenumber \(k_{||}\), at Earth. The red and blue shaded regions indicate where electrons and protons in the assumed energy ranges will, respectively, resonate. Here we use the energy ranges of Richardson et al. (2014), namely 0.7 - 4 MeV for electrons and 14 - 24 MeV for protons. For this illustrative example we assume that the particles will resonate at \(k_{||}\sim r_{L}^{-1}\), where \(r_{L}\) is the particles' Larmor radius (Strauss et al., 2020). This calculation illustrates that protons and electrons, in the assumed energy ranges, will resonate in the inertial range of the slab turbulence, with resonant wavenumbers very close to each other. As usual, a Parker (1958) heliospheric magnetic field (HMF) geometry is assumed with an associated focusing length, \(L\). This value, along with the resulting parallel and perpendicular mean-free-paths for electrons and protons, using the calculations of Strauss et al. (2017), are shown, as a function of radial distance, in the right panel of Fig. 1. The perpendicular diffusion coefficient is discussed in more detail in the next section. We do not include drift effects as these are generally considered to be negligible for low energy (i.e. \(\sim\) MeV) particles (Engelbrecht et al., 2017; van den Berg et al., 2021).
As an inner boundary condition (\(r_{0}=0.05\) AU) to the model, the following function is specified
\[f(r=r_{0},\phi,t)=\frac{C}{t}\exp\left[-\frac{\tau_{a}}{t}-\frac{t}{\tau_{e}} \right]\exp\left[-\frac{(\phi-\phi_{0})^{2}}{2\sigma^{2}}\right]. \tag{2}\]
The time dependence of this function is determined by the so-called acceleration (\(\tau_{a}=1\) hr) and escape (\(\tau_{e}=1\) hr) timescales, while a Gaussian source (in terms of longitude) is specified where the broadness, \(\sigma\), can be varied. This function quantifies the assumed accelerated SEP distribution released into the interplanetary medium from the particle accelerator. With the uncertainty surrounding the acceleration process and the fact that these timescales cannot be directly measured, these values are subject to change in future simulations. While \(\tau_{e}\) seemingly has only a small effect on the simulated intensities, the calculated peak delay (discussed later) is sensitive to the choice of \(\tau_{a}\). Work is under way to further constrain these model inputs.
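For reference, a small sketch of this injection profile (Equation (2)); the parameter values and the normalisation constant \(C\) are placeholders:

```python
import numpy as np

def injection_profile(t, phi, sigma, tau_a=1.0, tau_e=1.0, phi0=0.0, C=1.0):
    """Injected SEP distribution at the inner boundary (Equation 2): the
    assumed time profile times a Gaussian longitude profile.
    t in hours (t > 0); phi, phi0 and sigma in radians; C is an arbitrary
    normalisation constant."""
    time_part = (C / t) * np.exp(-tau_a / t - t / tau_e)
    space_part = np.exp(-((phi - phi0) ** 2) / (2.0 * sigma ** 2))
    return time_part * space_part
```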
### Efficiency of perpendicular diffusion
As with previous work we assume that perpendicular diffusion is governed by the so-called field line random walk (FLRW) process (Jokipii, 1966), leading to a perpendicular diffusion coefficient of the form
\[D_{\perp}=a|\mu|v\kappa_{FL}, \tag{3}\]
where \(\kappa_{FL}\) is the so-called FLRW diffusion coefficient which describes the diffusion of the magnetic field lines and depends on the underlying turbulence quantities (see also Strauss et al., 2016). Note that \(D_{\perp}\) is proportional to particle speed and that, in general, faster particles diffuse faster perpendicular to the mean HMF. The factor \(a\) is introduced to account for the fact that particles have a finite gyro-radius and cannot perfectly follow diffusing field lines (e.g. Shalchi, 2009). The efficiency of perpendicular diffusion in this model therefore strongly depends on the seemingly _ad hoc_ factor \(a\) which can thus be interpreted as _the probability that particles are tied to fluctuating magnetic field lines_(e.g. Qin & Shalchi, 2014). While some progress has been made to derive \(a\) from first
principles (Shalchi, 2015, 2020), in previous work, Strauss et al. (2017) treated \(a\) as a free parameter and obtained a good comparison between SEP measurements, for 100 keV electrons, and modelling results when \(a=0.2\). Although this value is based on a comparison with a single particle species at a single energy, we adopt \(a=0.2\) for all simulations performed in this work.
## 3 Analytical Estimates and Model Results
### Analytical Estimates
In this section we start by estimating the onset delay as a function of magnetic connection using simplistic analytical arguments. For an observer perfectly magnetically connected to the SEP source, the SEPs simply have to propagate along the mean HMF. Assuming this happens in a ballistic
Figure 2: The left and middle panels are analytical estimates of the onset delay, for a SEP point source, as a function of magnetic connectivity for protons and electrons, respectively. The right panel shows the relationship between the electron and proton onset delays which is independent of \(\lambda_{\perp}\). Observations are taken from Richardson et al. (2014), while analytical estimates are shown for different assumptions of \(\lambda_{\perp}\).
(scatter free) fashion we can estimate the onset delay as \(\tau_{0}=\Delta s_{||}/v\) with \(\Delta s_{||}=1.2\) AU the distance the SEPs would cover along a nominal spiral magnetic field line to 1 AU and \(v\) the particle speed. For 1.7 MeV electrons this is \(v_{e}\sim 7\) AU/hr and for 18 MeV protons, \(v_{p}\sim 1.4\) AU/hr. If we now assume that particles propagate diffusively across HMF lines to reach magnetically unconnected observers, and that this process takes an additional time \(\tau_{d}\), the onset delay becomes \(\tau=\tau_{0}+\tau_{d}\). We can estimate \(\tau_{d}\) by assuming diffusive motion perpendicular to the mean HMF is due to perpendicular diffusion. For an isotropic distribution (implying significant scattering), the diffusive propagation time can be evaluated as \(\tau_{d}\sim\left(\Delta s_{\perp}\right)^{2}/\left(6\kappa_{\perp}\right)\)(see e.g. Strauss et al., 2011). The isotropic form of the perpendicular diffusion coefficient is
\[\kappa_{\perp}=\frac{1}{2}\int_{-1}^{+1}D_{\perp}(\mu)d\mu=\frac{av}{2}\kappa_ {FL}, \tag{4}\]
and, in terms of the perpendicular mean-free-path, \(\lambda_{\perp}=3\kappa_{\perp}/v=(3/2)a\kappa_{FL}\). Note that \(\lambda_{\perp}\) for the FLRW has no speed (energy) dependence and is determined completely by the underlying magnetic turbulence. If we assume the distance covered perpendicular to the field, \(\Delta s_{\perp}\), is only directed in the azimuthal direction (i.e. neglecting the non-radial geometry of the HMF), we can estimate \(\Delta s_{\perp}\sim\Delta s_{||}\Delta\phi\) where \(\Delta\phi\) is the azimuthal angle away from the best magnetic connection to the source. The resulting onset delay then becomes
\[\tau=\frac{\Delta s_{||}}{v}\left[1+\frac{\Delta s_{||}}{2\lambda_{\perp}} \left(\Delta\phi\right)^{2}\right]. \tag{5}\]
These estimates are shown for electrons and protons in Fig. 2 for different values of \(\lambda_{\perp}\). The left and middle panels show the calculated onset delay as a function of magnetic connectivity for protons (left) and electrons (middle). The right panel shows the resulting relationship between the electron and proton onset delays, \(\tau_{p}=(v_{e}/v_{p})\tau_{e}\) that only depends on the ratio of the particles' speeds in this
approach.
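A compact sketch of the estimate in Equation (5) and of the implied \(\tau_{p}=(v_{e}/v_{p})\tau_{e}\) scaling; the speed of light in AU/hr and the value of \(\lambda_{\perp}\) used below are illustrative assumptions:

```python
import numpy as np

C_AU_PER_HR = 7.2                        # speed of light in AU/hr (approximate)

def speed_au_per_hr(kinetic_energy_mev, rest_mass_mev):
    """Relativistic particle speed for a given kinetic energy."""
    gamma = 1.0 + kinetic_energy_mev / rest_mass_mev
    return np.sqrt(1.0 - 1.0 / gamma ** 2) * C_AU_PER_HR

def onset_delay_estimate(delta_phi_deg, v, s_par=1.2, lam_perp=0.01):
    """Equation (5): ballistic travel along the field plus diffusive
    transport across it. s_par and lam_perp in AU, v in AU/hr; the
    lam_perp value here is purely illustrative."""
    dphi = np.deg2rad(delta_phi_deg)
    return (s_par / v) * (1.0 + s_par * dphi ** 2 / (2.0 * lam_perp))

v_e = speed_au_per_hr(1.7, 0.511)        # ~7 AU/hr for 1.7 MeV electrons
v_p = speed_au_per_hr(18.0, 938.3)       # ~1.4 AU/hr for 18 MeV protons
tau_e = onset_delay_estimate(30.0, v_e)
tau_p = onset_delay_estimate(30.0, v_p)  # equals (v_e / v_p) * tau_e
```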
In this simplified derivation we assume ballistic motion parallel to the field, but diffusive motion of an isotropic distribution perpendicular to the mean HMF. The effect of pitch-angle scattering and potentially anisotropic distributions are therefore not included consistently and must be included in a modelling approach to obtain a correct estimate of the onset delay. This will be done in the following section where a more realistic Parkerian geometry, that is not simply a radially directed magnetic field, is also included. However, even with this very simplified approach we note that a combination of motion along the mean HMF and diffusion perpendicular to the mean field is sufficient to explain the observed dependence of the onset delay on magnetic connection. Additionally, by assuming a FLRW-type perpendicular diffusion coefficient (which is itself proportional to particle speed) we can explain, to first order, the linear relationship between the observed electron and proton onset delays. A more detailed calculation is performed in the next section.
### On the Geometry of Perpendicular Diffusion
Assuming a narrow source with \(\sigma=5^{\circ}\), Fig. 3 shows contour plots of the normalized intensity of 14 MeV protons (left panel) and 0.7 MeV electrons (right panel) 5 hrs after the initial particle release. The orbit of Earth (circular at a radius of 1 AU) is shown, along with the HMF line connected to the center of the SEP injection, as thick white lines. Also included, as thin dotted white lines, are line elements perpendicular to the HMF at different radial positions. Perpendicular diffusion acts to transport particles along these line elements. It is important to keep in mind that perpendicular diffusion does not imply motion purely in the longitudinal direction, but owing to the Parker HMF geometry (the HMF spiral angle is \(\sim 45^{\circ}\) at Earth's orbit), particles are also transported in the radial direction. For a fixed source this usually implies more efficient particle transport towards the west of the best-connected HMF line, as these particles are transported _away_ from the Sun, leading to asymmetrical distributions in terms of longitude. In this context, see also the recent simulations of Laitinen et al. (2023).
Figure 3: Contour plots of the SEP proton (left) and electron (right) intensity 5 hrs after particle injection.
### Longitudinal Dependence of SEP Intensity
Richardson et al. (2014) examined the longitudinal dependence of the intensity of multi-spacecraft SEP proton and electron events in the energy ranges of 14 - 24 MeV and 0.7 - 4 MeV, respectively. This was done by fitting Gaussian distributions to the peak intensities at the different observing spacecraft (STEREO A/B and SOHO). The histograms in Fig. 4 show these results: The top (blue) distributions are for protons and the bottom (red) distributions are for electrons. The left panels show the connection angle (i.e. the angular offset between the peak of the Gaussian function and the nominal Parker HMF line connected to the parent active region) with positive values indicating a shift towards the west (i.e. direction of solar rotation). Westward shifts are also inferred in other
studies of multi-spacecraft events e.g., Lario et al. (2014), Cohen et al. (2017), and Bruno & Richardson (2021). The right panels show the width of the Gaussian function in terms of the full-width half maximum (FWHM). All the distributions show inter-event variation which is most likely due to different interplanetary conditions during each event and the difference in the accelerator of each event, e.g. larger or smaller CMEs. While we accept that each SEP event will be different, we aim in this work to rather reproduce the _average_ characteristics of these events. These averages are indicated above the histograms in Fig. 4 as solid circles with an associated error bar.
The only remaining free parameter in our modelling approach is the size of the particle source, \(\sigma\). We now simulate SEP intensities for electrons with energies of 0.7 and 4 MeV, and protons with
Figure 4: The histograms show the observed distributions of the connection angle (left panels) and full-width half maximum (FWHM; right panels) of 14 – 24 MeV protons (top panels; in blue) and 0.7 – 4 MeV electrons (bottom panels; in red) from Fig. 24 of Richardson et al. (2014). Observational averages are indicated by the filled circles, while the open green symbols show the results from the best-fit numerical model.
energies of 14 and 25 MeV, the limits of the energy ranges considered by Richardson et al. (2014), calculate the maximum omni-directional intensity as a function of longitude, and, from this, calculate both the resulting connection angle and FWHM. The size of the assumed source is then adjusted, in increments of \(5^{\circ}\), until a good comparison is obtained with the average values of Richardson et al. (2014). A best fit is obtained by assuming \(\sigma=25^{\circ}\). These modelling results are included in Fig. 4 as the green symbols with the associated error bar indicating the deviation for the different energies (i.e. the upper and lower boundaries of the energy channels). Interestingly, both the model and observations indicate a systematic offset of the distribution towards the west (a positive connection angle of \(\sim 15^{\circ}\)). In the modelling approach this can be explained by the geometry of the perpendicular diffusion process, as discussed in Sec. 3.2, leading to more effective (perpendicular) transport in this direction. For a best fit value of \(\sigma=25^{\circ}\), the left panel of Fig. 5 shows the maximum omni
Figure 5: The left panel shows the maximum omni-directional intensity as a function of the magnetic connection angle for best-fit model assuming a source width of \(\sigma=25^{\circ}\) indicated by the black dashed curve. The right panel shows the resulting intensities for different assumptions of the source size (indicated in the legend).
directional intensity as a function of connection angle for the different particle species of different energies, while the black dashed line shows the assumed source size. It is clear that perpendicular diffusion broadens this initial source, while shifting the peak of the distribution towards the west. The right panel shows the resulting distribution for different source sizes.
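The connection angle and FWHM can be extracted with a standard Gaussian fit to the peak intensities as a function of longitude; a sketch using scipy.optimize.curve_fit with synthetic placeholder data (the tool choice is ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(phi, amplitude, phi_peak, sigma):
    return amplitude * np.exp(-(phi - phi_peak) ** 2 / (2.0 * sigma ** 2))

# phi_deg: longitudes relative to the nominal magnetic footpoint (degrees)
# peak_intensity: maximum omni-directional intensity at each longitude
phi_deg = np.linspace(-90.0, 90.0, 37)
peak_intensity = gaussian(phi_deg, 1.0, 15.0, 35.0)        # placeholder data

popt, _ = curve_fit(gaussian, phi_deg, peak_intensity, p0=[1.0, 0.0, 30.0])
connection_angle = popt[1]                                  # positive = westward offset
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])      # FWHM from the fitted sigma
```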
In this section we adjusted the remaining free parameter in our model, the size of the SEP source, until the measured distribution of the omni-directional intensity could be reproduced. However, if just the particle intensity is considered, this could lead to a degeneracy in the modelling approach: Different combinations of particle sources and levels of perpendicular diffusion can reproduce the same distribution at Earth's orbit. Additional observables are needed to uniquely determine all transport parameters. This was done in Strauss et al. (2017) and we used their value of \(a=0.2\) to fix the level of perpendicular diffusion. However, as we will show in the next section, the model set-up with \(a=0.2\) and \(\sigma=25^{\circ}\), does also lead to a decent comparison with the onset delays of MeV electrons and protons. We are therefore confident that the model gives a fair representation of the observations.
### Electron and Proton Onset Delays
In order to model particle onset delays, a relative background needs to be chosen in the model. We follow the previous approach of Strauss et al. (2017) of calculating the maximum omni-directional intensity, for all longitudes, at 1 AU, and then defining the background as \(1/1000^{th}\) of the peak intensity. The onset delay is then simply calculated as the time from the start of the simulation until this background level is crossed at each longitude. The effects of different background levels are illustrated in Sec. 3.5.
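A minimal sketch of this onset-delay definition, applied to one simulated intensity-time profile (array names are placeholders):

```python
import numpy as np

def onset_delay_hours(time_hr, intensity, global_peak, rel_background=1e-3):
    """Time after injection at which the intensity first exceeds the
    background level, defined as rel_background times the maximum
    intensity over all longitudes. Returns NaN if the level is never crossed."""
    background = rel_background * global_peak
    above = np.nonzero(intensity > background)[0]
    return time_hr[above[0]] if above.size else np.nan
```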
Using the best-fit model of the previous section, Fig. 6 shows the resulting onset delays. The left panel, once again, shows the maximum omni-directional intensity as a function of longitude (different lines correspond to electrons and protons of different energies as indicated in the legend). The middle panel shows the measurements from Richardson et al. (2014) as the data points (blue again corresponding to protons and red to electrons), while the lines are model calculations. If the modelled intensity, at a given longitude, did not cross the background levels, the onset delay is not defined and no model values are shown. As expected, the minimum onset delay is seen at the best magnetic connection between the observer and the source, where the first arriving (usually very anisotropic) particles reach the observer in an almost scatter-free fashion. The onset delay then increases with increasingly poor magnetic connection due to the relatively slow (perpendicular) diffusion process
Figure 6: The left panel shows the calculated maximum omni-directional intensity as a function of longitude (i.e. magnetic connection), the middle panel the onset delay in minutes, and the right panel the peak delay in hours. The horizontal dashed line in the left panel indicates 50% of the peak value. Simulations are performed for electrons and protons of different energies as indicated on the legend. All measurements are taken from Richardson et al. (2014).
Figure 8: The calculated proton and electron onset delays (the different symbols) compared to the measurements of Richardson et al. (2014). The left panel shows the model results for different source sizes, while the right panel also assumes a lower background level in the model.
Figure 7: The same as shown in Fig. 6, but model results are now shown for different source sizes.
that transports the particles across magnetic field lines. Although the measurements have significant event-to-event variations, the general trend and ball-park values are consistent with the model results.
The right panel of Fig. 6 shows the calculated and observed peak delay as a function of longitude. This is calculated as the time that elapsed between particle injection (i.e. the start of the simulation) and the occurrence of the maximum omni-directional intensity at each longitude. Although there is, once again, a fair comparison between model results and observations, the model is unable to reproduce the very long peak delays observed for electrons. This is addressed in more detail in Sec. 3.6. Fig. 7 is similar to Fig. 6 but shows the effect of a changing source size on the model calculations. Generally, a larger source leads to shorter onset and peak delays.
The results shown in Figs. 6 and 7 are now combined and presented in Fig. 8. Here we show, in the left panel, the calculated proton onset delay as a function of the calculated electron onset delay. Calculations, shown here as the different symbols, are performed for different source sizes and again compared to measurements from Richardson et al. (2014) (grey data points and green fit to the data). The model results compare well with the measurements and confirm the linear relationship between the electron and proton onset delays. Due to the east-west asymmetry present in the model, the modelled relationship is not completely linear, but exhibits a closed loop. In the right panel of Fig. 8, the model calculation is repeated, but for a different level of the assumed model background as discussed in the next section. While the best-fit model compares well with the measurements, we argue, again, that the variation of e.g. the background level and accelerator sizes from event-to-event can contribute to explaining the large inter-event variation observed for the onset delays.
### Caveats in the Simulation of the Onset Delay
The modelled onset delay is dependent on the assumed background level in the model. The same is true for the observations (see also Laitinen et al., 2015; Zhao et al., 2019). When the (modelled) differential intensity increases slowly with time, as is the case for observers magnetically disconnected from the source, a lower background will result in a shorter onset delay. This is illustrated in Fig. 9 where the background level is changed from \(1/10000^{th}\) of the peak intensity to \(1/10^{th}\) of the peak intensity. The default case is again that of \(1/1000^{th}\) of the peak intensity. It is interesting to note that these different onset delay calculations might also help to explain the observed scatter (inter-event-variations) in the observations: In the model, the same peak intensity is always obtained and the background level is adjusted to simulate the effect of changing background levels whereas for observations of different events, the background of the instrument may remain relatively constant, but the magnitudes (i.e. peak intensity) of the events change. The observed pre-event background
Figure 9: Onset delays for protons, similar to Fig. 6, but simulations are now performed for different background levels in the model.
from preceding events, if present, may also vary from event to event. In each case, the _relative background_, i.e. the peak intensity to background ratio, changes from event to event and can lead to the variation presented in Fig. 9.
### Electron and Proton Peak Delays
As the last part of our work we investigate the modelled relationship between the peak and onset delays. Fig. 10 shows these simulation results for protons (blue in left panel) and electrons (red in right panel) for different particle energies (different lines). These calculations are again compared to the observations of Richardson et al. (2014). For protons, the model reproduces the observed linear dependence very well, although these curves are non-linear: Due to the small asymmetry introduced by the Parker HMF, the onset time is slightly shorter at western longitudes and slightly longer at
Figure 10: The modelled peak delay as a function of the onset delay for protons (left panel) and electrons (right panel). The measurements are again taken from Richardson et al. (2014). Note that, when no model values are shown, the model intensity did not cross the background level at that longitude.
eastern longitudes. On these figures this manifests as the "figure of eight" shape. These results suggest that both the onset delay and peak delay are at a minimum at the best magnetic connection between the source and observer while both increase for increasingly poor connectivity. For electrons, however, the calculated peak delays are much shorter than the values derived from observations. At present it is not clear why this is the case, although we speculate on possible reasons for this discrepancy in the next section.
## 4 Discussion
Using transport coefficients calculated from first principles, we are able to simultaneously reproduce the observed longitudinal intensity distribution of 14 - 24 MeV protons and 0.7 - 4 MeV electrons. This is done by assuming an _average_ source size of \(\sigma=25^{\circ}\). Of course, the measurements show a lot of inter-event variation which can be explained by different interplanetary transport conditions, varying source sizes and, of course, varying levels of peak intensity relative to the background.
When perpendicular diffusion is included in the model, we note a systematic shift of the peak intensity to the west of the best magnetic connection due to the geometry of the Parker HMF. Interestingly, a similar shift is seen both in the measurements and in previous simulation work by other authors, e.g. He & Wan (2015).
Our modelling approach can simultaneously reproduce SEP electron and proton onset delays by assuming the same particle source. When interplanetary transport is included, we do not need to invoke different particle sources expanding at different rates to explain the Richardson et al. (2014) observations. The shortest onset delays are noted at the best magnetic connection, while the delay
increases away from best connection due to the slow perpendicular transport process that transports particles across field lines. In fact, we are also able to reproduce the linear relationship observed between electron and proton onset delays by assuming the FLRW process of perpendicular diffusion, where the diffusion coefficient scales linearly with the particle speed. Electrons, being much more mobile than protons at these energies, therefore move faster both along and perpendicular to the mean field.
Our results therefore point towards MeV electrons and protons having a common accelerator or source. The transport model itself cannot distinguish between a shock-related or flare-related process. However, it is clear that the model requires both an extended source (\(\sigma\sim 25^{\circ}\)) and some level of perpendicular diffusion to explain the observed broadness of the SEP events. Such an extended source could point towards a CME-related source. On the other hand, Dresing et al. (2016) find no evidence of interplanetary shock acceleration for \(\sim 100\) keV electrons, while a comparison with remote sensing observations seems to suggest a flare association for these low energy electrons (e.g. Dresing et al., 2021). In previous modelling work we found that 100 keV electron observations can be reproduced, on average, by assuming a compact source with \(\sigma\sim 5^{\circ}\). This likely points to a compact flaring source being the main accelerator of electrons at these low energies. Our present and previous simulation results are therefore consistent with the findings of Dresing et al. (2022): Low energy (\(\sim\)100s of keV) electrons are most likely related to acceleration in compact flaring regions, while \(\sim\)MeV electrons are most likely produced in more extended CME-driven shock regions. Additionally, our results suggest that \(\sim\)MeV electrons and \(\sim 10\) MeV protons are produced at the same source.
One possible deficiency of our present modelling approach is the inability to reproduce the observed peak delays for electrons. The reason for this discrepancy is presently not known. While this could be
due to an incorrect level of pitch-angle diffusion included in the model (more diffusion generally leads to longer peak delays), this is unlikely as the amount of pitch-angle scattering needed to increase the peak delay by an order of magnitude would also increase the onset delay by a similar amount. This would be inconsistent with the observations. These longer peak delays are more likely related to an extended injection of SEP electrons into the interplanetary medium due to either particle trapping or particle re-acceleration at the expanding CME shock; an effect not captured by our current injection profile. This discrepancy continues to be a topic of further investigation.
This work is based on the research supported in part by the National Research Foundation of South Africa (NRF grant numbers SRUG220322419 and RA170929263913). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. The responsibility of the contents of this work is with the authors. N.D. is grateful for support by the Academy of Finland (SHOCKSEE, grant No. 346902). IGR acknowledges support from NASA programs NNH19ZDA001N-LWS, NNH19ZDA001N-HSR, and from the STEREO mission. Figures prepared with Matplotlib (Hunter, 2007) and certain calculations done with NumPy (Harris et al., 2020).
|
2302.01186 | The Power of Preconditioning in Overparameterized Low-Rank Matrix
Sensing | We propose $\textsf{ScaledGD($\lambda$)}$, a preconditioned gradient descent
method to tackle the low-rank matrix sensing problem when the true rank is
unknown, and when the matrix is possibly ill-conditioned. Using
overparametrized factor representations, $\textsf{ScaledGD($\lambda$)}$ starts
from a small random initialization, and proceeds by gradient descent with a
specific form of damped preconditioning to combat bad curvatures induced by
overparameterization and ill-conditioning. At the expense of light
computational overhead incurred by preconditioners,
$\textsf{ScaledGD($\lambda$)}$ is remarkably robust to ill-conditioning
compared to vanilla gradient descent ($\textsf{GD}$) even with
overprameterization. Specifically, we show that, under the Gaussian design,
$\textsf{ScaledGD($\lambda$)}$ converges to the true low-rank matrix at a
constant linear rate after a small number of iterations that scales only
logarithmically with respect to the condition number and the problem dimension.
This significantly improves over the convergence rate of vanilla $\textsf{GD}$
which suffers from a polynomial dependency on the condition number. Our work
provides evidence on the power of preconditioning in accelerating the
convergence without hurting generalization in overparameterized learning. | Xingyu Xu, Yandi Shen, Yuejie Chi, Cong Ma | 2023-02-02T16:13:27Z | http://arxiv.org/abs/2302.01186v3 | # The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
###### Abstract
We propose \(\mathsf{ScaledGD}(\lambda)\), a preconditioned gradient descent method to tackle the low-rank matrix sensing problem when the true rank is unknown, and when the matrix is possibly ill-conditioned. Using overparametrized factor representations, \(\mathsf{ScaledGD}(\lambda)\) starts from a small random initialization, and proceeds by gradient descent with a specific form of _damped_ preconditioning to combat bad curvatures induced by overparameterization and ill-conditioning. At the expense of light computational overhead incurred by preconditioners, \(\mathsf{ScaledGD}(\lambda)\) is remarkably robust to ill-conditioning compared to vanilla gradient descent (GD) even with overparameterization. Specifically, we show that, under the Gaussian design, \(\mathsf{ScaledGD}(\lambda)\) converges to the true low-rank matrix at a constant linear rate after a small number of iterations that scales only _logarithmically_ with respect to the condition number and the problem dimension. This significantly improves over the convergence rate of vanilla GD which suffers from a polynomial dependency on the condition number. Our work provides evidence on the power of preconditioning in accelerating the convergence without hurting generalization in overparameterized learning.
**Keywords:** low-rank matrix sensing, overparameterization, preconditioned gradient descent method, random initialization, ill-conditioning
## 1 Introduction
Low-rank matrix recovery plays an essential role in modern machine learning and signal processing. To fix ideas, let us consider estimating a rank-\(r_{\star}\) positive semidefinite matrix \(M_{\star}\in\mathbb{R}^{n\times n}\) based on a few linear measurements \(y\coloneqq\mathcal{A}(M_{\star})\), where \(\mathcal{A}:\mathbb{R}^{n\times n}\to\mathbb{R}^{m}\) models the measurement process. Significant research efforts have been devoted to tackling low-rank matrix recovery in a statistically and computationally efficient manner in recent years. Perhaps the most well-known method is convex relaxation (Candes and Plan, 2011; Davenport and Romberg, 2016; Recht et al., 2010), which seeks the matrix with lowest nuclear norm to fit the observed measurements:
\[\min_{M\succeq 0}\quad\|M\|_{*}\qquad\text{s.t.}\quad y=\mathcal{A}(M).\]
While statistically optimal, convex relaxation is prohibitive in terms of both computation and memory as it directly operates in the ambient matrix domain, i.e., \(\mathbb{R}^{n\times n}\). To address this challenge, nonconvex approaches based on low-rank factorization have been proposed (Burer and Monteiro, 2005):
\[\min_{X\in\mathbb{R}^{n\times r}}\quad\frac{1}{4}\big{\|}\mathcal{A}(XX^{\top })-y\big{\|}_{2}^{2}, \tag{1}\]
where \(r\) is a user-specified rank parameter. Despite nonconvexity, when the rank is correctly specified, i.e., when \(r=r_{\star}\), the problem (1) admits computationally efficient solvers (Chi et al., 2019), e.g., gradient descent (GD) with spectral initialization or with small random initialization. However, two main challenges remain when applying the factorization-based nonconvex approach (1) in practice.
* **Unknown rank**. First, the true rank \(r_{\star}\) is often unknown, which makes it infeasible to set \(r=r_{\star}\). One necessarily needs to consider an overparametrized setting in which \(r\) is set conservatively, i.e., one sets \(r\geq r_{\star}\) or even \(r=n\).
* **Poor conditioning**. Second, the ground truth matrix \(M_{\star}\) may well be ill-conditioned, which is commonly encountered in practice. Existing approaches such as gradient descent are still computationally expensive in such settings as the number of iterations necessary for convergence increases with the condition number.
In light of these two challenges, the main goal of this work is to address the following question: _Can one develop an efficient method for solving ill-conditioned matrix recovery in the overparametrized setting?_
### Our contributions: a preview
The main contribution of the current paper is to answer the question affirmatively by developing a _preconditioned_ gradient descent method (\(\mathsf{ScaledGD}(\lambda)\)) that converges to the (possibly ill-conditioned) low-rank matrix in a fast and global manner, even with overparametrized rank \(r\geq r_{\star}\).
**Theorem 1** (Informal).: _Under overparameterization \(r\geq r_{\star}\) and mild statistical assumptions, \(\mathsf{ScaledGD}(\lambda)\)--when starting from a sufficiently small random initialization--achieves a relative \(\varepsilon\)-accuracy, i.e., \(\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\leq\varepsilon\|M_{\star}\|\), with no more than an order of_
\[\log\kappa\cdot\log(\kappa n)+\log(1/\varepsilon)\]
_iterations, where \(\kappa\) is the condition number of the problem._
The above theorem suggests that from a small random initialization, \(\mathsf{ScaledGD}(\lambda)\) converges at a constant linear rate--independent of the condition number--after a small logarithmic number of iterations. Overall, the iteration complexity is nearly independent of the condition number and the problem dimension, making it extremely suitable for solving large-scale and ill-conditioned problems. See Table 1 for a summary of comparisons with prior art.
Our algorithm \(\mathsf{ScaledGD}(\lambda)\) is closely related to scaled gradient descent (\(\mathsf{ScaledGD}\)) (Tong et al., 2021), a recently proposed preconditioned gradient descent method that achieves a \(\kappa\)-independent convergence rate under spectral initialization and exact parameterization. Preserving its low computational overhead, we modify the preconditioner design by introducing a fixed damping term, which prevents the preconditioner itself from being ill-conditioned due to overparameterization. In the exact parameterization setting, our
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline parameterization & reference & algorithm & init. & iteration complexity \\ \hline \multirow{3}{*}{\(r>r_{\star}\)} & Stöger and Soltanolkotabi (2021) & GD & random & \(\kappa^{8}+\kappa^{6}\log(\kappa n/\varepsilon)\) \\ \cline{2-5} & Zhang et al. (2021) & PrecGD & spectral & \(\log(1/\varepsilon)\) \\ \cline{2-5} & **Theorem 2** & \(\mathsf{ScaledGD}(\lambda)\) & random & \(\log\kappa\cdot\log(\kappa n)+\log(1/\varepsilon)\) \\ \hline \multirow{3}{*}{\(r=r_{\star}\)} & Tong et al. (2021) & \(\mathsf{ScaledGD}\) & spectral & \(\log(1/\varepsilon)\) \\ \cline{2-5} & Stöger and Soltanolkotabi (2021) & GD & random & \(\kappa^{8}\log(\kappa n)+\kappa^{2}\log(1/\varepsilon)\) \\ \cline{2-5} & **Theorem 3** & \(\mathsf{ScaledGD}(\lambda)\) & random & \(\log\kappa\cdot\log(\kappa n)+\log(1/\varepsilon)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of iteration complexity with existing algorithms for low-rank matrix sensing under Gaussian designs. Here, \(n\) is the matrix dimension, \(r_{\star}\) is the true rank, \(r\) is the overparameterized rank, and \(\kappa\) is the condition number of the problem instance (see Section 2 for a formal problem formulation). It is important to note that in the overparameterized setting (\(r>r_{\star}\)), the sample complexity of Zhang et al. (2021) scales polynomially with the overparameterized rank \(r\), while that of Stöger and Soltanolkotabi (2021) and ours only scale polynomially with the true rank \(r_{\star}\).
result extends ScaledGD beyond local convergence by characterizing the number of iterations it takes to enter the local basin of attraction from a small random initialization.
Moreover, our results shed light on the power of preconditioning in accelerating the optimization process over vanilla GD while still guaranteeing generalization in overparameterized learning models (Amari et al., 2020). Remarkably, despite the existence of an infinite number of global minima in the landscape of (1) that do not generalize, i.e., not corresponding to the ground truth, starting from a small random initialization, GD (Li et al., 2018; Stoger and Soltanolkotabi, 2021) is known to converge to a generalizable solution without explicit regularization. However, GD takes \(O(\kappa^{8}+\kappa^{6}\log(\kappa n/\varepsilon))\) iterations to reach \(\varepsilon\)-accuracy, which is unacceptable even for moderate condition numbers. On the other hand, while common wisdom suggests that preconditioning accelerates convergence, it is yet unclear if it still converges to a generalizable global minimum. Our work answers this question in the affirmative for overparameterized low-rank matrix sensing, where ScaledGD(\(\lambda\)) significantly accelerates the convergence against the poor condition number--both in the initial phase and in the local phase--without hurting generalization, which is corroborated in Figure 1.
### Related work
Significant efforts have been devoted to understanding nonconvex optimization for low-rank matrix estimation in recent years, see Chi et al. (2019) and Chen and Chi (2018) for recent overviews. By reparameterizing the low-rank matrix into a product of factor matrices, also known as the Burer-Monteiro factorization (Burer and Monteiro, 2005), the focus point has been examining if the factor matrices can be recovered--up to invertible transformations--faithfully using simple iterative algorithms in a provably efficient manner. However, the majority of prior efforts suffer from the limitations that they assume an exact parameterization where the rank of the ground truth is given or estimated somewhat reliably, and rely on a carefully constructed initialization (e.g., using the spectral method (Chen et al., 2021)) in order to guarantee global convergence in a polynomial time. The analyses adopted in the exact parameterization case fail to generalize when overparameterization presents, and drastically new approaches are called for.
Overparameterization in low-rank matrix sensing. Li et al. (2018) made a theoretical breakthrough by showing that gradient descent converges globally to any prescribed accuracy even in the presence of full overparameterization (\(r=n\)), with a small random initialization, where their analyses were subsequently adapted and extended in Stoger and Soltanolkotabi (2021) and Zhuo et al. (2021). Ding et al. (2021) investigated robust low-rank matrix recovery with overparameterization from a spectral initialization, and Ma and Fattahi (2022) examined the same problem from a small random initialization with noisy measurements.
Figure 1: Comparison between ScaledGD(\(\lambda\)) and GD. The learning rate of GD has been fine-tuned to achieve fastest convergence for each \(\kappa\), while that of ScaledGD(\(\lambda\)) is fixed to \(0.3\). The initialization scale \(\alpha\) in each case has been fine-tuned so that the final accuracy is \(10^{-9}\). The details of the experiment are deferred to Section 5.
Zhang et al. (2022, 2021) developed a preconditioned gradient descent method for overparameterized low-rank matrix sensing. Last but not least, a number of other notable works that study overparameterized low-rank models include, but are not limited to, Geyer et al. (2020); Oymak and Soltanolkotabi (2019); Soltanolkotabi et al. (2018); Zhang (2021, 2022).
Global convergence from random initialization without overparameterization. Despite nonconvexity, it has been established recently that several structured learning models admit global convergence via simple iterative methods even when initialized randomly and without overparameterization. For example, Chen et al. (2019) showed that phase retrieval converges globally from a random initialization using a near-minimal number of samples through a delicate leave-one-out analysis. In addition, the efficiency of randomly initialized GD is established for complete dictionary learning (Bai et al., 2018; Gilboa et al., 2019), multi-channel sparse blind deconvolution (Qu et al., 2019; Shi and Chi, 2021), asymmetric low-rank matrix factorization (Ye and Du, 2021), and rank-one matrix completion (Kim and Chung, 2022). Moving beyond GD, Lee and Stoger (2022) showed that randomly initialized alternating least-squares converges globally for rank-one matrix sensing, whereas Chandrasekher et al. (2022) developed sharp recovery guarantees of alternating minimization for generalized rank-one matrix sensing with sample-splitting and random initialization.
Algorithmic or implicit regularization. Our work is related to the phenomenon of algorithmic or implicit regularization (Gunasekar et al., 2017), where the trajectory of simple iterative algorithms follows a path that maintains desirable properties without explicit regularization. Along this line, Chen et al. (2020); Li et al. (2021); Ma et al. (2019) highlighted the implicit regularization of GD for several statistical estimation tasks, Ma et al. (2021) showed that GD automatically balances the factor matrices in asymmetric low-rank matrix sensing, while Jiang et al. (2022) analyzed the algorithmic regularization in overparameterized asymmetric matrix factorization in a model-free setting.
## 2 Problem formulation
Section 2.1 introduces the problem of low-rank matrix sensing, and Section 2.2 provides background on the proposed ScaledGD(\(\lambda\)) algorithm developed for the possibly overparametrized case.
### Model and assumptions
Suppose that the ground truth \(M_{\star}\in\mathbb{R}^{n\times n}\) is a positive-semidefinite (PSD) matrix of rank \(r_{\star}\ll n\), whose (compact) eigendecomposition is given by
\[M_{\star}=U_{\star}\Sigma_{\star}^{2}U_{\star}^{\top}.\]
Here, the columns of \(U_{\star}\in\mathbb{R}^{n\times r_{\star}}\) specify the set of eigenvectors, and \(\Sigma_{\star}\in\mathbb{R}^{r_{\star}\times r_{\star}}\) is a diagonal matrix where the diagonal entries are ordered in a non-increasing fashion. Setting \(X_{\star}\coloneqq U_{\star}\Sigma_{\star}\in\mathbb{R}^{n\times r_{\star}}\), we can rewrite \(M_{\star}\) as
\[M_{\star}=X_{\star}X_{\star}^{\top}. \tag{2}\]
We call \(X_{\star}\) the ground truth low-rank factor matrix, whose condition number \(\kappa\) is defined as
\[\kappa\coloneqq\frac{\sigma_{\max}(X_{\star})}{\sigma_{\min}(X_{\star})}. \tag{3}\]
Here we recall that \(\sigma_{\max}(X_{\star})\) and \(\sigma_{\min}(X_{\star})\) are the largest and the smallest singular values of \(X_{\star}\), respectively.
Instead of having access to \(M_{\star}\) directly, we wish to recover \(M_{\star}\) from a set of random linear measurements \(\mathcal{A}(M_{\star})\), where \(\mathcal{A}:\operatorname{Sym}_{2}(\mathbb{R}^{n})\to\mathbb{R}^{m}\) is a linear map from the space of \(n\times n\) symmetric matrices to \(\mathbb{R}^{m}\), namely
\[y=\mathcal{A}(M_{\star}), \tag{4}\]
or equivalently,
\[y_{i}=\langle A_{i},M_{\star}\rangle,\qquad 1\leq i\leq m.\]
We are interested in recovering \(M_{\star}\) based on the measurements \(y\) and the sensing operator \(\mathcal{A}\) in a provably efficient manner, even when the true rank \(r_{\star}\) is unknown.
### ScaledGD(\(\lambda\)) for overparameterized low-rank matrix sensing
Inspired by the factorized representation (2), we aim to recover the low-rank matrix \(M_{\star}\) by solving the following optimization problem (Burer and Monteiro, 2005):
\[\min_{X\in\mathbb{R}^{n\times r}}\quad f(X)\coloneqq\frac{1}{4}\big{\|} \mathcal{A}(XX^{\top})-y\big{\|}_{2}^{2}, \tag{5}\]
where \(r\) is a predetermined rank parameter, possibly different from \(r_{\star}\). It is evident that for any rotation matrix \(O\in\mathcal{O}_{r}\), it holds that \(f(X)=f(XO)\), leading to an infinite number of global minima of the loss function \(f\).
A prelude: exact parameterization.When \(r\) is set to be the true rank \(r_{\star}\) of \(M_{\star}\), Tong et al. (2021) set forth a provable algorithmic approach called scaled gradient descent (ScaledGD)--gradient descent with a specific form of preconditioning--that adopts the following update rule
\[\textsf{ScaledGD}\colon\qquad X_{t+1}=X_{t}-\eta\underbrace{\mathcal{A}^{*}\mathcal{A}(X_{t}X_{t}^{\top}-M_{\star})X_{t}}_{=\nabla f(X_{t})}(X_{t}^{\top}X_{t})^{-1}. \tag{6}\]
Here, \(X_{t}\) is the \(t\)-th iterate, \(\nabla f(X_{t})\) is the gradient of \(f\) at \(X=X_{t}\), and \(\eta>0\) is the learning rate. Moreover, \(\mathcal{A}^{*}:\mathbb{R}^{m}\mapsto\operatorname{Sym}_{2}(\mathbb{R}^{n})\) is the adjoint operator of \(\mathcal{A}\), that is \(\mathcal{A}^{*}(y)=\sum_{i=1}^{m}y_{i}A_{i}\) for \(y\in\mathbb{R}^{m}\).
At the expense of light computational overhead, ScaledGD is remarkably robust to ill-conditioning compared with vanilla gradient descent (GD). It is established in Tong et al. (2021) that ScaledGD, when starting from spectral initialization, converges linearly at a constant rate--_independent_ of the condition number \(\kappa\) of \(X_{\star}\) (cf. (3)); in contrast, the iteration complexity of GD(Tu et al., 2016; Zheng and Lafferty, 2015) scales on the order of \(\kappa^{2}\) from the same initialization, therefore GD becomes exceedingly slow when the problem instance is even moderately ill-conditioned, a scenario that is quite commonly encountered in practice.
ScaledGD(\(\lambda\)): overparametrization under unknown rank.In this paper, we are interested in the so-called overparameterization regime, where \(r_{\star}\leq r\leq n\). From an operational perspective, the true rank \(r_{\star}\) is related to model order, e.g., the number of sources or targets in a scene of interest, which is often unavailable and makes it necessary to consider the misspecified setting. Unfortunately, in the presence of overparameterization, the original ScaledGD algorithm is no longer appropriate, as the preconditioner \((X_{t}^{\top}X_{t})^{-1}\) might become numerically unstable to calculate. Therefore, we propose a new variant of ScaledGD by adjusting the preconditioner as
\[\textsf{ScaledGD}(\lambda)\colon\qquad X_{t+1}=X_{t}-\eta\underbrace{\mathcal{ A}^{*}\mathcal{A}(X_{t}X_{t}^{\top}-M_{\star})X_{t}}_{=\nabla f(X_{t})}(X_{t}^{ \top}X_{t}+\lambda I)^{-1}, \tag{7}\]
where \(\lambda>0\) is a _fixed_ damping parameter. The new algorithm is dubbed as ScaledGD(\(\lambda\)), and it recovers the original ScaledGD when \(\lambda=0\). Similar to ScaledGD, a key property of ScaledGD(\(\lambda\)) is that the iterates \(\{X_{t}\}\) are equivariant with respect to the parameterization of the factor matrix. Specifically, taking a rotationally equivalent factor \(X_{t}O\) with an arbitrary \(O\in\mathcal{O}_{r}\), and feeding it into the update rule (7), the next iterate
\[X_{t}O-\eta\mathcal{A}^{*}\mathcal{A}(X_{t}X_{t}^{\top}-M_{\star})X_{t}O(O^{ \top}X_{t}^{\top}X_{t}O+\lambda I)^{-1}=X_{t+1}O\]
is rotated simultaneously by the same rotation matrix \(O\). In other words, the recovered matrix sequence \(M_{t}=X_{t}X_{t}^{\top}\) is invariant with respect to the parameterization of the factor matrix.
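For concreteness, the following NumPy sketch implements the update rule (7); the dense representation of the sensing operator as a stack of symmetric matrices and the variable names (`As`, `y`, `eta`, `lam`, `alpha`) are our own illustrative choices, not part of the paper.

```python
# Illustrative NumPy sketch of the ScaledGD(lambda) update (7).
import numpy as np

def A_op(As, M):
    """A(M): linear measurements <A_i, M> for a stack of symmetric matrices As."""
    return np.einsum('kij,ij->k', As, M)

def A_adj(As, v):
    """Adjoint A*(v) = sum_i v_i A_i."""
    return np.einsum('k,kij->ij', v, As)

def scaled_gd_lambda_step(X, As, y, eta, lam):
    """One ScaledGD(lambda) iteration: X <- X - eta * grad f(X) (X^T X + lam I)^{-1}."""
    r = X.shape[1]
    grad = A_adj(As, A_op(As, X @ X.T) - y) @ X          # = nabla f(X)
    precond = np.linalg.inv(X.T @ X + lam * np.eye(r))   # damped preconditioner
    return X - eta * grad @ precond

def scaled_gd_lambda(As, y, n, r, eta, lam, alpha, num_iters):
    """Run ScaledGD(lambda) from a small random initialization X_0 = alpha * G."""
    rng = np.random.default_rng(0)
    X = alpha * rng.standard_normal((n, r)) / np.sqrt(n)  # G has N(0, 1/n) entries
    for _ in range(num_iters):
        X = scaled_gd_lambda_step(X, As, y, eta, lam)
    return X
```

In practice one would apply `np.linalg.solve` to the right-hand side rather than forming the small \(r\times r\) inverse explicitly; the inverse is kept here only to mirror the notation of (7).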
_Remark_ 1.: We note that a related variant of ScaledGD, called PrecGD, has been proposed recently in Zhang et al. (2022, 2021) for the overparameterized setting, which follows the update rule
\[\textsf{PrecGD}\colon\qquad X_{t+1}=X_{t}-\eta\mathcal{A}^{*}\mathcal{A}(X_{t }X_{t}^{\top}-M_{\star})X_{t}(X_{t}^{\top}X_{t}+\lambda_{t}I)^{-1}, \tag{8}\]
where the damping parameters \(\lambda_{t}=\sqrt{f(X_{t})}\) are selected in an _iteration-varying_ manner assuming the algorithm is initialized properly. In contrast, ScaledGD(\(\lambda\)) assumes a fixed damping parameter \(\lambda\) throughout the iterations. We shall provide more detailed comparisons with PrecGD in Section 3.
## 3 Main results
Before formally presenting our theorems, let us introduce several key assumptions that will be in effect throughout this paper.
Restricted Isometry Property. A key property of the operator \(\mathcal{A}(\cdot)\) is the celebrated Restricted Isometry Property (RIP) (Recht et al., 2010), which says that the operator \(\mathcal{A}(\cdot)\) approximately preserves the distances between low-rank matrices. The formal definition is given as follows.
**Definition 1** (Restricted Isometry Property).: The linear map \(\mathcal{A}(\cdot)\) is said to obey rank-\(r\) RIP with a constant \(\delta_{r}\in[0,1)\), if for all matrices \(M\in\mathrm{Sym}_{2}(\mathbb{R}^{n})\) of rank at most \(r\), it holds that
\[(1-\delta_{r})\|M\|_{\mathsf{F}}^{2}\leq\left\|\mathcal{A}(M)\right\|_{2}^{2} \leq(1+\delta_{r})\|M\|_{\mathsf{F}}^{2}. \tag{9}\]
The Restricted Isometry Constant (RIC) is defined to be the smallest positive \(\delta_{r}\) such that (9) holds.
The RIP is a standard assumption in low-rank matrix sensing, which has been verified to hold with high probability for a wide variety of measurement operators. For example, if the entries of \(\{A_{i}\}_{i=1}^{m}\) are independent up to symmetry with diagonal elements sampled from \(\mathcal{N}(0,1/m)\) and off-diagonal elements from \(\mathcal{N}(0,1/2m)\), then with high probability, \(\mathcal{A}(\cdot)\) satisfies rank-\(r\) RIP with constant \(\delta_{r}\), as long as \(m\geq Cnr/\delta_{r}^{2}\) for some sufficiently large universal constant \(C>0\)(Candes and Plan, 2011).
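As an illustration, the snippet below generates the symmetric Gaussian design just described and empirically probes the ratio \(\|\mathcal{A}(M)\|_{2}^{2}/\|M\|_{\mathsf{F}}^{2}\) on a few random rank-\(r\) matrices; this is only a sanity check on sampled matrices, not a verification of the RIP, and the helper name `gaussian_design` is our own.

```python
# Symmetric Gaussian design: diagonal entries ~ N(0, 1/m), off-diagonal ~ N(0, 1/(2m)).
import numpy as np

def gaussian_design(m, n, rng):
    """Return m symmetric sensing matrices with the stated Gaussian entries."""
    A_upper = rng.normal(0.0, np.sqrt(1.0 / (2 * m)), size=(m, n, n))
    As = np.triu(A_upper, k=1)
    As = As + np.transpose(As, (0, 2, 1))                  # symmetrize the off-diagonal part
    diag = rng.normal(0.0, np.sqrt(1.0 / m), size=(m, n))
    As[:, np.arange(n), np.arange(n)] = diag
    return As

rng = np.random.default_rng(1)
n, r, m = 40, 2, 10 * 40 * 2
As = gaussian_design(m, n, rng)
for _ in range(3):
    B = rng.standard_normal((n, r))
    M = B @ B.T                                             # random PSD rank-r matrix
    ratio = np.sum(np.einsum('kij,ij->k', As, M) ** 2) / np.linalg.norm(M, 'fro') ** 2
    print(f"||A(M)||^2 / ||M||_F^2 = {ratio:.3f}")          # should be close to 1
```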
Throughout this paper, we make the following assumption about the operator \(\mathcal{A}(\cdot)\).
**Assumption 1**.: _The operator \(\mathcal{A}(\cdot)\) satisfies the rank-\((r_{\star}+1)\) RIP with \(\delta_{r_{\star}+1}\eqqcolon\delta\). Furthermore, there exist a sufficiently small constant \(c_{\delta}>0\) and a sufficiently large constant \(C_{\delta}>0\) such that_
\[\delta\leq c_{\delta}r_{\star}^{-1/2}\kappa^{-C_{\delta}}. \tag{10}\]
Small random initialization. Similar to Li et al. (2018); Stoger and Soltanolkotabi (2021), we set the initialization \(X_{0}\) to be a small random matrix, i.e.,
\[X_{0}=\alpha G, \tag{11}\]
where \(G\in\mathbb{R}^{n\times r}\) is some matrix considered to be normalized and \(\alpha>0\) controls the magnitude of the initialization. To simplify exposition, we take \(G\) to be a standard random Gaussian matrix, that is, \(G\) is a random matrix with i.i.d. entries distributed as \(\mathcal{N}(0,1/n)\).
Choice of parameters. Last but not least, the parameters of \(\mathsf{ScaledGD}(\lambda)\) are selected according to the following assumption.
**Assumption 2**.: _There exist some universal constants \(c_{\eta},c_{\lambda},C_{\alpha}>0\) such that \((\eta,\lambda,\alpha)\) in \(\mathsf{ScaledGD}(\lambda)\) satisfy the following conditions:_
\[(\mathsf{learning\ rate}) \eta\leq c_{\eta}, \tag{12a}\] \[(\mathsf{damping\ parameter}) \frac{1}{100}c_{\lambda}\sigma_{\min}^{2}(X_{\star})\leq\lambda \leq c_{\lambda}\sigma_{\min}^{2}(X_{\star}),\] (12b) \[(\mathsf{initialization\ size}) \log\frac{\|X_{\star}\|}{\alpha}\geq\frac{C_{\alpha}}{\eta}\log( 2\kappa)\cdot\log(2\kappa n). \tag{12c}\]
We are now in a position to present the main theorems.
### The overparameterization setting
We begin with our main theorem, which characterizes the performance of \(\mathsf{ScaledGD}(\lambda)\) with overparameterization.
**Theorem 2**.: _Suppose Assumptions 1 and 2 hold. With high probability (with respect to the realization of the random initialization \(G\)), there exists a universal constant \(C_{\min}>0\) such that for some \(T\leq T_{\min}\coloneqq\frac{C_{\min}}{\eta}\log\frac{\|X_{\star}\|}{\alpha}\), we have_
\[\|X_{T}X_{T}^{\top}-M_{\star}\|_{\mathsf{F}}\leq\alpha^{1/3}\|X_{\star}\|^{5/3}.\]
_In particular, for any prescribed accuracy target \(\varepsilon\in(0,1)\), by choosing a sufficiently small \(\alpha\) fulfilling both (12c) and \(\alpha\leq\varepsilon^{3}\|X_{\star}\|\), we have \(\|X_{T}X_{T}^{\top}-M_{\star}\|_{\mathsf{F}}\leq\varepsilon\|M_{\star}\|\)._
A few remarks are in order.
Iteration complexity. Theorem 2 shows that by choosing an appropriate \(\alpha\), \(\mathsf{ScaledGD}(\lambda)\) finds an \(\varepsilon\)-accurate solution, i.e., \(\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\leq\varepsilon\|M_{\star}\|\), in no more than an order of
\[\log\kappa\cdot\log(\kappa n)+\log(1/\varepsilon)\]
iterations. Roughly speaking, this asserts that \(\mathsf{ScaledGD}(\lambda)\) converges at a constant linear rate after an initial phase of approximately \(O(\log\kappa\cdot\log(\kappa n))\) iterations. Most notably, the iteration complexity is nearly independent of the condition number \(\kappa\), with a small overhead only through the poly-logarithmic additive term \(O(\log\kappa\cdot\log(\kappa n))\). In contrast, \(\mathsf{GD}\) requires \(O(\kappa^{8}+\kappa^{6}\log(\kappa n/\varepsilon))\) iterations to converge from a small random initialization to \(\varepsilon\)-accuracy; see Li et al. (2018); Stoger and Soltanolkotabi (2021). Thus, the convergence of \(\mathsf{GD}\) is much slower than \(\mathsf{ScaledGD}(\lambda)\) even for mildly ill-conditioned matrices.
Sample complexity. The sample complexity of \(\mathsf{ScaledGD}(\lambda)\) hinges upon Assumption 1. When the entries of \(\{A_{i}\}_{i=1}^{m}\) are independent up to symmetry with diagonal elements sampled from \(\mathcal{N}(0,1/m)\) and off-diagonal elements from \(\mathcal{N}(0,1/2m)\), this assumption is fulfilled as long as \(m\gtrsim nr_{\star}^{2}\cdot\mathsf{poly}(\kappa)\). Our sample complexity depends only on the true rank \(r_{\star}\), but not on the overparameterized rank \(r\) -- a crucial feature in order to provide meaningful guarantees when the overparameterized rank \(r\) is close to the full dimension \(n\). The dependency on \(\kappa\) in the sample complexity, on the other hand, has been generally unavoidable in nonconvex low-rank estimation (Chi et al., 2019).
Comparison with Zhang et al. (2022, 2021). As mentioned earlier, our proposed algorithm \(\mathsf{ScaledGD}(\lambda)\) is quite similar to \(\mathsf{PrecGD}\) proposed in Zhang et al. (2021) that adopts an iteration-varying damping parameter. In terms of theoretical guarantees, Zhang et al. (2021) only provides the local convergence for \(\mathsf{PrecGD}\) assuming an initialization close to the ground truth; in contrast, we provide global convergence guarantees where a small random initialization is used. More critically, the sample complexity of \(\mathsf{PrecGD}\) (Zhang et al., 2021) depends on the overparameterized rank \(r\), while ours only depends on the true rank \(r_{\star}\). While Zhang et al. (2022) also studied variants of \(\mathsf{PrecGD}\) with global convergence guarantees, they require additional operations such as gradient perturbations and switching between different algorithmic stages, which are harder to implement in practice. Our theory suggests that additional perturbation is unnecessary to ensure the global convergence of \(\mathsf{ScaledGD}(\lambda)\), as \(\mathsf{ScaledGD}(\lambda)\) automatically adapts to different curvatures of the optimization landscape throughout the entire trajectory.
### The exact parameterization setting
We now single out the exact parametrization case, i.e., when \(r=r_{\star}\). In this case, our theory suggests that \(\mathsf{ScaledGD}(\lambda)\) converges to the ground truth even from a random initialization with a fixed scale \(\alpha>0\).
**Theorem 3**.: _Assume that \(r=r_{\star}\). Suppose Assumptions 1 and 2 hold. With high probability (with respect to the realization of the random initialization \(G\)), there exist some universal constants \(C_{\min}>0\) and \(c>0\) such that for some \(T\leq T_{\min}=\frac{C_{\min}}{\eta}\log(\|X_{\star}\|/\alpha)\), we have for any \(t\geq T\)_
\[\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\leq(1-c\eta)^{t-T}\|M_{\star}\|.\]
Theorem 3 shows that with some fixed initialization scale \(\alpha\), \(\mathsf{ScaledGD}(\lambda)\) takes at most an order of
\[\log\kappa\cdot\log(\kappa n)+\log(1/\varepsilon)\]
iterations to converge to \(\varepsilon\)-accuracy for any \(\varepsilon>0\) in the exact parameterization case. Compared with ScaledGD (Tong et al., 2021) which takes \(O(\log(1/\varepsilon))\) iterations to converge from a spectral initialization, we only pay a logarithmic order \(O(\log\kappa\cdot\log(\kappa n))\) of additional iterations to converge from a random initialization. In addition, once the algorithms enter the local regime, both ScaledGD(\(\lambda\)) and ScaledGD behave similarly and converge at a fast constant linear rate, suggesting the effect of damping is locally negligible. Furthermore, compared with GD (Stoger and Soltanolkotabi, 2021) which requires \(O(\kappa^{8}\log(\kappa n)+\kappa^{2}\log(1/\varepsilon))\) iterations to achieve \(\varepsilon\)-accuracy, our theory again highlights the benefit of ScaledGD(\(\lambda\)) in boosting the global convergence even for mildly ill-conditioned matrices.
## 4 Analysis
In this section, we present the main steps for proving Theorem 2 and Theorem 3. The detailed proofs are collected in the Appendix. All of our statements will be conditioned on the following high probability event regarding the initialization matrix \(G\):
\[\mathcal{E}=\{\|G\|\leq C_{G}\}\cap\{\sigma_{\min}(\widehat{U}^{\top}G)\geq(2n )^{-C_{G}}\}, \tag{13}\]
where \(\widehat{U}\in\mathbb{R}^{n\times r_{\star}}\) is an orthonormal basis of the eigenspace associated with the \(r_{\star}\) largest eigenvalues of \(\mathcal{A}^{*}\mathcal{A}(M_{\star})\), and \(C_{G}>0\) is some sufficiently large universal constant. It is a standard result in random matrix theory that \(\mathcal{E}\) happens with high probability, as verified by the following lemma.
**Lemma 1**.: _With respect to the randomness in \(G\), the event \(\mathcal{E}\) happens with probability at least \(1-(cn)^{-C_{G}(r-r_{\star}+1)/2}-2\exp(-cn)\), where \(c>0\) is some universal constant._
Proof.: See Appendix A.1.
### Preliminaries: decomposition of \(X_{t}\)
Before embarking on the main proof, we present a useful decomposition (cf. (14)) of the iterate \(X_{t}\) into a signal term, a misalignment error term, and an overparametrization error term. Choose some matrix \(U_{\star,\perp}\in\mathbb{R}^{n\times(n-r_{\star})}\) such that \([U_{\star},U_{\star,\perp}]\) is orthonormal. Then we can define
\[S_{t}\coloneqq U_{\star}^{\top}X_{t}\in\mathbb{R}^{r_{\star}\times r},\quad \text{and}\quad N_{t}\coloneqq U_{\star,\perp}^{\top}X_{t}\in\mathbb{R}^{(n-r _{\star})\times r}.\]
Let the SVD of \(S_{t}\) be
\[S_{t}=U_{t}\Sigma_{t}V_{t}^{\top},\]
where \(U_{t}\in\mathbb{R}^{r_{\star}\times r_{\star}}\), \(\Sigma_{t}\in\mathbb{R}^{r_{\star}\times r_{\star}}\), and \(V_{t}\in\mathbb{R}^{r\times r_{\star}}\). Similar to \(U_{\star,\perp}\), we define the orthogonal complement of \(V_{t}\) as \(V_{t,\perp}\in\mathbb{R}^{r\times(r-r_{\star})}\). When \(r=r_{\star}\) we simply set \(V_{t,\perp}=0\).
We are now ready to present the main decomposition of \(X_{t}\), which we use repeatedly in later analysis.
**Proposition 1**.: _The following decomposition holds:_
\[X_{t}=\underbrace{U_{\star}\widetilde{S}_{t}V_{t}^{\top}}_{\text{signal}}+ \underbrace{U_{\star,\perp}\widetilde{N}_{t}V_{t}^{\top}}_{\text{ misalignment}}+\underbrace{U_{\star,\perp}\widetilde{O}_{t}V_{t,\perp}^{\top}}_{\text{ overparametrization}}, \tag{14}\]
_where_
\[\widetilde{S}_{t}\coloneqq S_{t}V_{t}\in\mathbb{R}^{r_{\star}\times r_{\star}},\quad\widetilde{N}_{t}\coloneqq N_{t}V_{t}\in\mathbb{R}^{(n-r_{\star})\times r _{\star}},\quad\text{and}\quad\widetilde{O}_{t}\coloneqq N_{t}V_{t,\perp}\in \mathbb{R}^{(n-r_{\star})\times(r-r_{\star})}. \tag{15}\]
Proof.: See Appendix A.2.
Several remarks on the decomposition are in order.
* First, since \(V_{t,\perp}\) spans the obsolete subspace arising from overparameterization, \(\widetilde{O}_{t}\) naturally represents the error incurred by overparameterization; in particular, in the well-specified case (i.e., \(r=r_{\star}\)), one has zero overparameterization error, i.e., \(\widetilde{O}_{t}=0\).
* Second, apart from the rotation matrix \(V_{t}\), \(\widetilde{S}_{t}\) documents the projection of the iterates \(X_{t}\) onto the signal space \(U_{\star}\). Similarly, \(\widetilde{N}_{t}\) characterizes the misalignment of the iterates with the signal subspace \(U_{\star}\). It is easy to observe that in order for \(X_{t}X_{t}^{\top}\approx M_{\star}\), one must have \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\approx\Sigma_{\star}^{2}\), and \(\widetilde{N}_{t}\approx 0\).
* Last but not least, the extra rotation induced by \(V_{t}\) is extremely useful in making the signal/misalignment terms rotationally invariant. To see this, suppose that we rotate the current iterate by \(X_{t}\mapsto X_{t}Q\) with some rotation matrix \(Q\in\mathcal{O}_{r}\); then \(S_{t}\mapsto S_{t}Q\) but \(\widetilde{S}_{t}\) remains unchanged, and similarly for \(\widetilde{N}_{t}\).
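In synthetic experiments where \(U_{\star}\) is known, the decomposition (14)-(15) can be computed directly; the sketch below is one way to do so (the helper name `decompose` and the use of QR factorizations to form the orthogonal complements are our own choices).

```python
# Sketch of the decomposition in Proposition 1 for an iterate X (n x r) and a
# ground-truth column space U_star (n x r_star with orthonormal columns).
import numpy as np

def decompose(X, U_star):
    n, r = X.shape
    r_star = U_star.shape[1]
    # Orthogonal complement of U_star via a full QR factorization
    Q, _ = np.linalg.qr(U_star, mode='complete')
    U_perp = Q[:, r_star:]
    S = U_star.T @ X                         # r_star x r
    N = U_perp.T @ X                         # (n - r_star) x r
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt.T                                 # top right singular vectors of S (r x r_star)
    QV, _ = np.linalg.qr(V, mode='complete')
    V_perp = QV[:, r_star:]                  # empty when r == r_star
    S_tilde = S @ V                          # signal component
    N_tilde = N @ V                          # misalignment component
    O_tilde = N @ V_perp                     # overparameterization error component
    return S_tilde, N_tilde, O_tilde, V, V_perp
```

Applied along a simulated trajectory of the earlier ScaledGD(\(\lambda\)) sketch, this gives the quantities \(\widetilde{S}_{t}\), \(\widetilde{N}_{t}\), and \(\widetilde{O}_{t}\) tracked in the analysis below.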
### Proof roadmap
Our analysis breaks into a few phases that characterize the dynamics of the key terms in the above decomposition; below we provide a roadmap to facilitate understanding. Denote
\[C_{\max}\coloneqq\begin{cases}4C_{\min},&r>r_{\star},\\ \infty,&r=r_{\star},\end{cases}\qquad\text{and}\qquad T_{\max}\coloneqq\frac{C_ {\max}}{\eta}\log(\|X_{\star}\|/\alpha),\]
where \(T_{\max}\) represents the largest index of the iterates that we maintain error control. The analysis boils down to the following phases, indicated by time points \(t_{1},t_{2},t_{3},t_{4}\) that satisfy
\[t_{1}\leq T_{\min}/16,\quad t_{1}\leq t_{2}\leq t_{1}+T_{\min}/16,\quad t_{2} \leq t_{3}\leq t_{2}+T_{\min}/16,\quad t_{3}\leq t_{4}\leq t_{3}+T_{\min}/16.\]
* _Phase I: approximate power iterations._ In the initial phase, \(\mathsf{ScaledGD}(\lambda)\) behaves similarly to GD, which is shown in Stoger and Soltanolkotabi (2021) to approximate the power method in the first few iterations up to \(t_{1}\). After this phase, namely for \(t\in[t_{1},T_{\max}]\), although the signal strength is still quite small, it begins to be aligned with the ground truth with the overparameterization error kept relatively small.
* _Phase II: exponential amplification of the signal._ In this phase, \(\mathsf{ScaledGD}(\lambda)\) behaves somewhat as a mixture of GD and \(\mathsf{ScaledGD}\) with a proper choice of the damping parameter \(\lambda\asymp\sigma_{\min}^{2}(X_{\star})\), which ensures the signal strength first grows exponentially fast to reach a constant level no later than \(t_{2}\), and then reaches the desired level no later than \(t_{3}\), i.e., \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\approx\Sigma_{\star}^{2}\).
* _Phase III: local linear convergence._ At the last phase, \(\mathsf{ScaledGD}(\lambda)\) behaves similarly to \(\mathsf{ScaledGD}\), which converges linearly at a rate independent of the condition number. Specifically, for \(t\in[t_{3},T_{\max}]\), the reconstruction error \(\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\) converges at a linear rate up to some small overparameterization error, until reaching the desired accuracy for any \(t\in[t_{4},T_{\max}]\).
### Phase I: approximate power iterations
It has been observed in Stoger and Soltanolkotabi (2021) that when initialized at a small scaled random matrix, the first few iterations of GD mimic the power iterations on the matrix \(\mathcal{A}^{*}\mathcal{A}(M_{\star})\). When it comes to \(\mathsf{ScaledGD}(\lambda)\), since the initialization size \(\alpha\) is chosen to be much smaller than the damping parameter \(\lambda\), the preconditioner \((X_{t}^{\top}X_{t}+\lambda I)^{-1}\) behaves like \((\lambda I)^{-1}\) in the beginning. This renders \(\mathsf{ScaledGD}(\lambda)\) akin to gradient descent in the initial phase. As a result, we also expect the first few iterations of \(\mathsf{ScaledGD}(\lambda)\) to be similar to the power iterations, i.e.,
\[X_{t}\approx\left(I+\frac{\eta}{\lambda}\mathcal{A}^{*}\mathcal{A}(M_{\star}) \right)^{t}X_{0},\qquad\text{when $t$ is small.}\]
Such proximity between \(\mathsf{ScaledGD}(\lambda)\) and power iterations can indeed be justified in the beginning period, which allows us to deduce the following nice properties _after_ the initial iterates of \(\mathsf{ScaledGD}(\lambda)\).
**Lemma 2**.: _Under the same setting as Theorem 2, there exists an iteration number \(t_{1}:t_{1}\leq T_{\min}/16\) such that_
\[\sigma_{\min}(\widetilde{S}_{t_{1}})\geq\alpha^{2}/\|X_{\star}\|, \tag{16}\]
_and that, for any \(t\in[t_{1},T_{\max}]\), \(\widetilde{S}_{t}\) is invertible and one has_
\[\|\widetilde{O}_{t}\|\leq(C_{2,b}\kappa n)^{-C_{2,b}}\|X_{\star}\|\sigma_{\min} \big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t}\big{)}, \tag{17a}\]
\[\|\widetilde{O}_{t}\| \leq\left(1+\frac{\eta}{12C_{\max}\kappa}\right)^{t-t_{1}}\alpha^{ 5/6}\|X_{\star}\|^{1/6}, \tag{17b}\] \[\|\widetilde{N}_{t}\widetilde{S}_{t}^{-1}\Sigma_{\star}\| \leq c_{2}\kappa^{-C_{\delta}/2}\|X_{\star}\|,\] (17c) \[\|\widetilde{S}_{t}\| \leq C_{2.a}\kappa\|X_{\star}\|, \tag{17d}\]
_where \(C_{2.a}\), \(C_{2.b}\), \(c_{2}\) are some positive constants satisfying \(C_{2.a}\lesssim c_{\lambda}^{-1/2}\), \(c_{2}\lesssim c_{\delta}/c_{\lambda}^{3}\), and \(C_{2.b}\) can be made arbitrarily large by increasing \(C_{\alpha}\)._
Proof.: See Appendix C.
_Remark_ 2. Let us record two immediate consequences of (17), which sometimes are more convenient for later analysis. From (17a), we may deduce
\[\|\widetilde{O}_{t}\|\leq(C_{2.b}\kappa n)^{-C_{2.b}}\|X_{\star}\|\sigma_{\min }(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\sigma_{\min}(\widetilde{S}_{t})\leq \kappa(C_{2.b}\kappa n)^{-C_{2.b}}\sigma_{\min}(\widetilde{S}_{t})\leq(C_{2.b }^{\prime}\kappa n)^{-C_{2.b}^{\prime}}\sigma_{\min}(\widetilde{S}_{t}), \tag{18}\]
where \(C_{2.b}^{\prime}=C_{2.b}/2\), provided \(C_{2.b}>4\). It is clear that \(C_{2.b}^{\prime}\) can also be made arbitrarily large by enlarging \(C_{\alpha}\). Similarly, from (17b), we may deduce
\[\|\widetilde{O}_{t}\|\leq\left(1+\frac{\eta}{12C_{\max}\kappa} \right)^{t-t_{1}}\alpha^{5/6}\|X_{\star}\|^{1/6} \leq\left(1+\frac{\eta}{12C_{\max}\kappa}\right)^{\frac{C_{\max} }{\eta}\log(\|X_{\star}\|/\alpha)}\alpha^{5/6}\|X_{\star}\|^{1/6}\] \[\leq(\|X_{\star}\|/\alpha)^{1/12}\alpha^{5/6}\|X_{\star}\|^{1/6}= \alpha^{3/4}\|X_{\star}\|^{1/4}. \tag{19}\]
Lemma 2 ensures the iterates of \(\mathsf{ScaledGD}(\lambda)\) maintain several desired properties after iteration \(t_{1}\), as summarized in (17). In particular, for any \(t\in[t_{1},T_{\max}]\): (i) the overparameterization error \(\|\widetilde{O}_{t}\|\) remains small relatively to the signal strength measured in terms of the scaled minimum singular value \(\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t} \big{)}\), and remains bounded with respect to the size of the initialization \(\alpha\) (cf. (17a) and (17b) and their consequences (18) and (19)); (ii) the scaled misalignment-to-signal ratio remains bounded, suggesting the iterates remain aligned with the ground truth signal subspace \(U_{\star}\) (cf. (17c)); (iii) the size of the signal component \(\widetilde{S}_{t}\) remains bounded (cf. (17d)). These properties play an important role in the follow-up analysis.
_Remark_ 3. It is worth noting that, the scaled minimum singular value \(\sigma_{\min}((\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t})\) plays a key role in our analysis, which is in sharp contrast to the use of the vanilla minimum singular value \(\sigma_{\min}(\widetilde{S}_{t})\) in the analysis of gradient descent (Stoger and Soltanolkotabi, 2021). This new measure of signal strength is inspired by the scaled distance for \(\mathsf{ScaledGD}\) introduced in Tong et al. (2021, 2022), which carefully takes the preconditioner design into consideration. Similarly, the metrics \(\|\widetilde{N}_{t}\widetilde{S}_{t}^{-1}\Sigma_{\star}\|\) in (17c) and \(\big{\|}\Sigma_{\star}^{-1}(\widetilde{S}_{t+1}\widetilde{S}_{t+1}^{\top}- \Sigma_{\star}^{2})\Sigma_{\star}^{-1}\big{\|}\) (to be seen momentarily) are also scaled for similar considerations to unveil the fast convergence (almost) independent of the condition number.
### Phase II: exponential amplification of the signal
By the end of Phase I, the signal strength is still quite small (cf. (16)), which is far from the desired level. Fortunately, the properties established in Lemma 2 allow us to establish an exponential amplification of the signal term \(\widetilde{S}_{t}\) thereafter, which can be further divided into two stages.
1. In the first stage, the signal is boosted to a constant level, i.e., \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\succeq\frac{1}{10}\Sigma_{\star}^{2}\);
2. In the second stage, the signal grows further to the desired level, i.e., \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\approx\Sigma_{\star}^{2}\).
We start with the first stage, which again uses \(\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t} \big{)}\) as a measure of signal strength in the following lemma.
**Lemma 3**.: _For any \(t\) such that (17) holds, we have_
\[\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t+1} \big{)}\geq(1-2\eta)\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2} \widetilde{S}_{t}\big{)}.\]
_Moreover, if \(\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t}\big{)} \leq 1/3\), then_
\[\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t+1} \big{)}\geq\left(1+\frac{1}{8}\eta\right)\sigma_{\min}\big{(}(\Sigma_{\star}^{2 }+\lambda I)^{-1/2}\widetilde{S}_{t}\big{)}.\]
Proof.: See Appendix D.1.
The second half of Lemma 3 uncovers the exponential growth of the signal strength \(\sigma_{\min}\big{(}(\Sigma_{\star}^{2}+\lambda I)^{-1/2}\widetilde{S}_{t} \big{)}\) until a constant level after several iterations, which resembles the exponential growth of the signal strength in GD(Stoger and Soltanolkotabi, 2021). This is formally established in the following corollary.
**Corollary 1**.: _There exists an iteration number \(t_{2}:t_{1}\leq t_{2}\leq t_{1}+T_{\min}/16\) such that for all \(t\in[t_{2},T_{\max}]\), we have_
\[\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\succeq\frac{1}{10}\Sigma_{\star}^{ 2}. \tag{20}\]
Proof.: See Appendix D.2.
We next aim to show that \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\approx\Sigma_{\star}^{2}\) after the signal strength is above the constant level. To this end, the behavior of \(\mathsf{ScaledGD}(\lambda)\) becomes closer to that of \(\mathsf{ScaledGD}\), and it turns out to be easier to work with \(\big{\|}\Sigma_{\star}^{-1}(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}-\Sigma_ {\star}^{2})\Sigma_{\star}^{-1}\big{\|}\) as a measure of the scaled recovery error of the signal component. We establish the approximate exponential shrinkage of this measure in the following lemma.
**Lemma 4**.: _For all \(t\in[t_{2},T_{\max}]\) with \(t_{2}\) given in Corollary 1, one has_
\[\big{\|}\Sigma_{\star}^{-1}(\widetilde{S}_{t+1}\widetilde{S}_{t+1}^{\top}- \Sigma_{\star}^{2})\Sigma_{\star}^{-1}\big{\|}\leq(1-\eta)\left\|\Sigma_{ \star}^{-1}(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}-\Sigma_{\star}^{2}) \Sigma_{\star}^{-1}\right\|+\frac{1}{100}\eta. \tag{21}\]
Proof.: See Appendix D.3.
With the help of Lemma 4, it is straightforward to establish the desired approximate recovery guarantee of the signal component, i.e., \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\approx\Sigma_{\star}^{2}\).
**Corollary 2**.: _There exists an iteration number \(t_{3}:t_{2}\leq t_{3}\leq t_{2}+T_{\min}/16\) such that for any \(t\in[t_{3},T_{\max}]\), one has_
\[\frac{9}{10}\Sigma_{\star}^{2}\preceq\widetilde{S}_{t}\widetilde{S}_{t}^{ \top}\preceq\frac{11}{10}\Sigma_{\star}^{2}. \tag{22}\]
Proof.: See Appendix D.4.
### Phase III: local convergence
Corollary 2 tells us that after iteration \(t_{3}\), we enter a local region in which \(\widetilde{S}_{t}\widetilde{S}_{t}^{\top}\) is close to the ground truth \(\Sigma_{\star}^{2}\). In this local region, the behavior of \(\mathsf{ScaledGD}(\lambda)\) becomes closer to that of \(\mathsf{ScaledGD}\) analyzed in Tong et al. (2021). We turn attention to the reconstruction error \(\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\) that measures the generalization performance, and show it converges at a linear rate independent of the condition number up to some small overparameterization error.
**Lemma 5**.: _There exists some universal constant \(c_{5}>0\) such that for any \(t:t_{3}\leq t\leq T_{\max}\), we have_
\[\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\leq(1-c_{5}\eta)^{t-t_{3}}\sqrt{ r_{\star}}\|M_{\star}\|+8c_{5}^{-1}\|M_{\star}\|\max_{t_{3}\leq\tau\leq t} \left(\frac{\|\widetilde{O}_{\tau}\|}{\|X_{\star}\|}\right)^{1/2}. \tag{23}\]
_In particular, there exists an iteration number \(t_{4}:t_{3}\leq t_{4}\leq t_{3}+T_{\min}/16\) such that for any \(t\in[t_{4},T_{\max}]\), we have_
\[\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathsf{F}}\leq\alpha^{1/3}\|X_{\star}\|^{5/ 3}\leq\varepsilon\|M_{\star}\|. \tag{24}\]
_Here, \(\varepsilon\) and \(\alpha\) are as stated in Theorem 2._
Proof.: See Appendix E.
### Proofs of main theorems
Now we are ready to collect the results in the preceding sections to prove our main results, i.e., Theorem 2 and Theorem 3.
We start with proving Theorem 2. By Lemma 2, Corollary 1, Corollary 2 and Lemma 5, the final \(t_{4}\) given by Lemma 5 is no more than \(4\times T_{\min}/16\leq T_{\min}/2\), thus (24) holds for all \(t\in[T_{\min}/2,T_{\max}]\), in particular, for some \(T\leq T_{\min}\), as claimed.
Now we consider Theorem 3. In case that \(r=r_{\star}\), it follows from definition that \(\widetilde{O}_{t}=0\) vanishes for all \(t\). It follows from Lemma 5, in particular from (23), that
\[\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathrm{F}}\leq(1-c_{5}\eta)^{t-t_{3}}\sqrt{r _{\star}}\|M_{\star}\|,\]
for any \(t\geq t_{3}\) (recall that \(T_{\max}=\infty\) by definition when \(r=r_{\star}\)). Note that \((1-c_{5}\eta)^{t}\sqrt{r_{\star}}\leq(1-c_{5}\eta)^{t-T+t_{3}}\) if \(T-t_{3}\geq 4\log(r_{\star})/(c_{5}\eta)\) given that \(\eta\leq c_{\eta}\) is sufficiently small. Thus for any \(t\geq T\) we have
\[\|X_{t}X_{t}^{\top}-M_{\star}\|_{\mathrm{F}}\leq(1-c_{5}\eta)^{t-T}\|M_{\star }\|.\]
It is clear that one may choose such \(T\) which also satisfies \(T\leq t_{3}+8/(c_{5}\eta)\leq t_{3}+T_{\min}/16\). We have already shown in the proof of Theorem 2 that \(t_{3}\leq 4\times T_{\min}/16\leq T_{\min}/4\), thus \(T\leq T_{\min}\) as desired.
_Remark_ 4. In the overparameterized setting, our theory guarantees the reconstruction error to be small until some iteration \(T_{\max}\). This is consistent with the phenomenon known as _early stopping_ in prior works of learning with overparameterized models (Li et al., 2018; Stoger and Soltanolkotabi, 2021).
## 5 Numerical experiments
In this section, we conduct numerical experiments to demonstrate the efficacy of \(\mathsf{ScaledGD}(\lambda)\) for solving overparametrized low-rank matrix sensing. We set the ground truth matrix \(X_{\star}=U_{\star}\Sigma_{\star}\in\mathbb{R}^{n\times r_{\star}}\) where \(U_{\star}\in\mathbb{R}^{n\times r_{\star}}\) is a random orthogonal matrix and \(\Sigma_{\star}\in\mathbb{R}^{r_{\star}\times r_{\star}}\) is a diagonal matrix whose condition number is set to be \(\kappa\). We set \(n=150\) and \(r_{\star}=3\), and use random Gaussian measurements with \(m=10nr_{\star}\). The overparameterization rank \(r\) is set to be \(5\).
Comparison with overparametrized GD. We run \(\mathsf{ScaledGD}(\lambda)\) and GD with random initialization and compare their convergence speeds under different condition numbers \(\kappa\) of the ground truth \(X_{\star}\); the result is depicted in Figure 1. Even for a moderate range of \(\kappa\), GD slows down significantly while the convergence speed of \(\mathsf{ScaledGD}(\lambda)\) remains almost the same with an almost negligible initial phase, which is consistent with our theory. The advantage of \(\mathsf{ScaledGD}(\lambda)\) grows as \(\kappa\) increases, and \(\mathsf{ScaledGD}(\lambda)\) is already more than 10x faster than GD when \(\kappa=7\).
Effect of initialization size. We study the effect of the initialization scale \(\alpha\) on the reconstruction accuracy of \(\mathsf{ScaledGD}(\lambda)\). We fix the learning rate \(\eta\) to be a constant and vary the initialization scale. We run \(\mathsf{ScaledGD}(\lambda)\) until it converges.1 The resulting reconstruction errors and their corresponding initialization scales are plotted in Figure 2. It can be inferred that the reconstruction error increases with respect to \(\alpha\), which is consistent with our theory.
Footnote 1: More precisely, in accordance with our theory which requires early stopping, we stop the algorithm once we detect that the training error no longer decreases significantly for a long time (e.g. 100 iterations).
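For reference, a rough, scaled-down reconstruction of this experiment is sketched below (not the authors' code); it reuses the `gaussian_design` and `scaled_gd_lambda_step` helpers from the earlier sketches, and shrinks \(n\) from 150 to 60 so that the dense sensing matrices fit comfortably in memory. The parameter values for \(\lambda\) and \(\alpha\) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_star, r, kappa = 60, 3, 5, 5          # the paper uses n = 150; shrunk here
m = 10 * n * r_star                        # so the dense design fits in memory

U_star, _ = np.linalg.qr(rng.standard_normal((n, r_star)))   # random orthonormal columns
Sigma_star = np.diag(np.linspace(kappa, 1.0, r_star))        # condition number = kappa
X_star = U_star @ Sigma_star
M_star = X_star @ X_star.T

As = gaussian_design(m, n, rng)            # from the earlier sketch
y = np.einsum('kij,ij->k', As, M_star)

eta, lam, alpha = 0.3, 0.1, 1e-8           # lam ~ 0.1 * sigma_min(X_star)^2 here
X = alpha * rng.standard_normal((n, r)) / np.sqrt(n)
for t in range(300):
    X = scaled_gd_lambda_step(X, As, y, eta, lam)   # from the earlier sketch
rel_err = np.linalg.norm(X @ X.T - M_star, 'fro') / np.linalg.norm(M_star, 'fro')
print(f"relative reconstruction error after 300 iterations: {rel_err:.2e}")
```

Tracking the error across iterations and sweeping \(\kappa\) or \(\alpha\) should qualitatively mirror Figures 1 and 2, although the exact numbers depend on the random seed and on the scaled-down problem size.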
## 6 Discussions
This paper demonstrates that an appropriately preconditioned gradient descent method, called \(\mathsf{ScaledGD}(\lambda)\), guarantees an accelerated convergence to the ground truth low-rank matrix in overparameterized low-rank matrix sensing, when initialized from a sufficiently small random initialization. Furthermore, in the case of exact parameterization, our analysis guarantees the fast global convergence of \(\mathsf{ScaledGD}(\lambda)\) from a small random initialization. Collectively, this complements and represents a major step forward from prior analyses of \(\mathsf{ScaledGD}\) (Tong et al., 2021) by allowing overparametrization and small random initialization. This work opens up a few exciting future directions that are worth exploring further.
* _Asymmetric case._ Our current analysis is confined to the recovery of low-rank positive semidefinite matrices, with only one factor matrix to be recovered. It remains to generalize this analysis to the recovery of general low-rank matrices with overparameterization.
* _Robust setting._ Many applications encounter corrupted measurements that call for robust recovery algorithms that optimize nonsmooth functions such as the least absolute deviation loss. One such example is the scaled subgradient method (Tong et al., 2021), which is the nonsmooth counterpart of ScaledGD robust to ill-conditioning, and it will be interesting to study its performance under overparameterization.
* _Noisy case._ For simplicity, our analysis focused on the noise-free case, while in practice, one often deals with noisy data. It is of great importance to understand the statistical error rates of ScaledGD(\(\lambda\)) under a variety of noise models.
* _Other overparameterized learning models._ Our work provides evidence on the power of preconditioning in accelerating the convergence without hurting generalization in overparameterized low-rank matrix sensing, which is one kind of overparameterized learning models. It will be greatly desirable to extend the insights developed herein to other overparameterized learning models, for example tensors (Dong et al., 2022; Tong et al., 2022) and neural networks (Wang et al., 2021).
## Acknowledgements
The work of Y. Chi is supported in part by Office of Naval Research under N00014-19-1-2404, and by National Science Foundation under CAREER ECCS-1818571, CCF-1806154, CCF-1901199 and ECCS-2126634.
|
2307.12042 | Catastrophe theoretic approach to the Higgs Mechanism | A geometric perspective of the Higgs Mechanism is presented. Using Thom's
Catastrophe Theory, we study the emergence of the Higgs Mechanism as a
discontinuous feature in a general family of Lagrangians obtained by varying
its parameters. We show that the Lagrangian that exhibits the Higgs Mechanism
arises as a first-order phase transition in this general family. We find that
the Higgs Mechanism (as well as Spontaneous Symmetry Breaking) need not occur
for a different choice of parameters of the Lagrangian, and further analysis of
these unconventional parameter choices may yield interesting implications for
beyond standard model physics. | Samyak Jain, Ameeya Bhagwat | 2023-07-22T10:21:02Z | http://arxiv.org/abs/2307.12042v2 | # Catastrophe theoretic approach to the Higgs Mechanism
###### Abstract
A geometric perspective of the Higgs Mechanism is presented. Using Thom's Catastrophe Theory, we study the emergence of the Higgs Mechanism as a discontinuous feature in a general family of Lagrangians. We show that the Lagrangian that exhibits the Higgs Mechanism arises as a first-order phase transition in this general family.
## I Introduction
Catastrophe Theory is a geometrical framework developed to study sudden and discontinuous changes in dynamical systems under smooth perturbations. Thom showed (see [1]) that any smooth function of \(n\) variables and \(r\) parameters (\(r\leq 5\)) can be mapped to one and only one of 11 known families of functions (catastrophes). These catastrophes have unique geometries, and their sudden discontinuous changes under smooth perturbations of their parameters have already been studied in detail, thus allowing us to study _any_ \(r\)-parameter smooth function by finding mappings that take us to one of these known catastrophes (see [2] and [3] for details). Catastrophe theory deals with systems that have these \(r\)-parameter functions as their potentials. We emphasize that these catastrophes are unique and cannot be related to each other by smooth variable transformations.
Proposals to study quantum systems using Catastrophe theory have been suggested (see [4]). Motivated by these, we hunt for discontinuous features typical of catastrophes in the Lagrangian demonstrating the Higgs Mechanism. In Section II, we give a brief introduction to the cusp catastrophe (following the terminology of [4]), which is one of the simplest but most commonly arising catastrophes. In Section III, following [5], we briefly outline how the Higgs Mechanism arises from a combination of Spontaneous Symmetry Breaking and local gauge invariance. In Section IV, we show how the Lagrangian (its potential) used in Section III is related to the Cusp Catastrophe, which describes a more general family of potentials. We show that varying the parameters of this general family leads to a first-order phase transition, as defined in [4]. We additionally observe a discontinuous change in an affine property of the system (the number of critical points of the Lagrangian's potential) as we arrive at the usual Lagrangian that shows the Higgs Mechanism.
## II The Cusp Catastrophe
As observed earlier, the cusp catastrophe is an extremely common catastrophe in physical systems. The cusp potential reads as:
\[V_{\rm cusp}(a,b;x)=x^{4}+ax^{2}+bx,\quad x\in\mathbb{R}\]
with \(a,b\) as parameters. The critical points are defined as points where the first derivative of this potential vanishes, and satisfy
\[4x_{c}^{3}+2ax_{c}+b=0 \tag{1}\]
This can be solved to see how the number of critical points \(x_{c}\) varies with \(a,b\). We obtain a single critical point (a local minimum) for all choices of parameters \(a,b\) except for the region
\[a<0,\quad|b|\leq\frac{4}{3\sqrt{6}}\sqrt{-a^{3}} \tag{2}\]
where we have a local maximum sandwiched between two local minima. Fig. 1 shows how this looks in the \(b-a\) plane. In Fig. 2 we show a clearer geometric picture by plotting Eq.(1) in the \(b-a-x\) space. This structure defines the so-called _catastrophe manifold_ (see [2]).
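As a quick numerical check of Eq.(2), the sketch below counts the real solutions of Eq.(1) for a fixed \(a<0\) while sweeping \(b\) across the boundary \(|b|=\frac{4}{3\sqrt{6}}\sqrt{-a^{3}}\); the value \(a=-1\) and the sample points are arbitrary choices for illustration.

```python
import numpy as np

a = -1.0
b_star = 4.0 / (3.0 * np.sqrt(6.0)) * np.sqrt(-a**3)   # boundary of Eq. (2)

def n_critical_points(a, b, tol=1e-9):
    # Real roots of 4 x^3 + 2 a x + b = 0, i.e. Eq. (1).
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    return int(np.sum(np.abs(roots.imag) < tol))

for b in [0.0, 0.5 * b_star, 0.9 * b_star, 1.1 * b_star, 2.0 * b_star]:
    print(f"b = {b:+.3f}: {n_critical_points(a, b)} critical point(s)")
```

Inside the region of Eq.(2) the count is three, outside it drops to one, in agreement with the catastrophe manifold of Fig. 2.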
Figure 1: Variation in the number of critical points of the cusp potential in the \(b-a\) plane.
Figure 2: The catastrophe manifold for the Cusp Catastrophe. We can see how the critical points (number and values) vary across the \(b-a\) plane.
Figure 3: The first-order phase transition in the cusp potential is shown. We fix \(a=-1\), vary \(b\) from \(\frac{-1}{4}\) to \(\frac{1}{4}\) and plot the cusp potential. The ground states for the potentials are indicated by red dots. a) For \(b=-\frac{1}{4}\), we have two negative critical points, and the ground state is at the positive critical point (a minimum). b) At \(b=0\), the minima become degenerate, and the ground state can be at either minimum. Up to this point, the ground state energy has been increasing with increasing \(b\). c) As soon as \(b>0\), the ground state shifts abruptly to the negative minimum. The ground state energy now suddenly starts decreasing with increasing \(b\). Since the ground state was never at \(x=0\), the rate of change of energy given by Eq.(4) changes from a strictly positive value to a strictly negative value. Thus the rate of change of energy has shifted discontinuously, completing the first-order phase transition.
As in [4], we assume that for any fixed choice of parameters, the energy of the system described by this potential dwells at the lowest minimum of the potential. Consider \(a<0\) and \(b\) varying from negative to positive values. We see that at \(b=0\), the minima become degenerate, and the lowest potential minimum shifts abruptly (see Fig. 3). We can show that while the ground state energy \(E_{g}\) is continuous (because the shift happens when the minima are degenerate), the rate of change of the ground state energy \(\frac{dE_{g}}{db}\) is discontinuous:
\[\frac{dE_{g}}{db}=\frac{dE(x_{c},a,b)}{db}=\frac{\partial E(x_{c},a,b)}{\partial b}+\frac{dE(x_{c},a,b)}{dx_{c}}\frac{dx_{c}}{db} \tag{3}\]
Note that \(x_{c}\) here is the particular critical point at which the ground state dwells. Now, the second term vanishes because \(\frac{dE(x_{c},a,b)}{dx_{c}}=0\) by the definition of \(x_{c}\). Thus
\[\frac{dE_{g}}{db}=x_{c} \tag{4}\]
Since \(x_{c}\) is discontinuous at \(b=0\), the first derivative of the ground-state energy is discontinuous as well. This is called a first-order phase transition ([4]).
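The jump can be made concrete with a few lines of code: for a fixed \(a=-1\) (an arbitrary illustrative value), we locate the global minimizer of the cusp potential just below and just above \(b=0\) and read off \(dE_{g}/db=x_{c}\) on either side.

```python
import numpy as np

def cusp(x, a, b):
    return x**4 + a * x**2 + b * x

def ground_state(a, b):
    # Real critical points from Eq. (1), then keep the one with the lowest potential.
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    xs = roots[np.abs(roots.imag) < 1e-9].real
    return xs[np.argmin(cusp(xs, a, b))]

a = -1.0
for b in [-1e-3, +1e-3]:
    xc = ground_state(a, b)
    print(f"b = {b:+.0e}: ground state at x_c = {xc:+.4f}, dE_g/db = x_c = {xc:+.4f}")
```

The minimizer flips sign as \(b\) crosses zero, so \(dE_{g}/db\) jumps from a strictly positive to a strictly negative value, exactly the first-order transition described above.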
As shown in [4], we obtain a second-order phase transition (the second derivative being discontinuous) at \(a=b=0\). We do not describe this in detail because, as we shall see in Section IV, we are concerned with only the first-order phase transition in this cusp potential.
## III The Higgs mechanism
We begin with the Mexican Hat Lagrangian that demonstrates spontaneous symmetry breaking [5]
\[{\cal L}=\frac{1}{2}(\partial_{\mu}\phi_{1}^{*})(\partial^{\mu}\phi_{1})+\frac{1}{2}(\partial_{\mu}\phi_{2}^{*})(\partial^{\mu}\phi_{2})+\frac{\mu^{2}}{2}(\phi_{1}^{2}+\phi_{2}^{2})-\frac{\lambda^{2}}{4}(\phi_{1}^{2}+\phi_{2}^{2})^{2} \tag{5}\]
We define
\[\phi=\phi_{1}+i\phi_{2},\quad\phi^{*}=\phi_{1}-i\phi_{2} \tag{6}\]
The Lagrangian thus becomes
\[{\cal L}=\frac{1}{2}(\partial_{\mu}\phi)^{*}(\partial^{\mu}\phi)+\frac{\mu^{2}}{2}(\phi^{*}\phi)-\frac{\lambda^{2}}{4}(\phi^{*}\phi)^{2} \tag{7}\]
Enforcing local gauge invariance, we introduce a field \(A^{\mu}\) that transforms like
\[A^{\mu}\to A^{\mu}+\partial^{\mu}\lambda \tag{8}\]
under a transformation
\[\phi\to e^{i\theta(x)}\phi \tag{9}\]
We further complete the Proca Lagrangian by introducing the term \(\frac{1}{16\pi}F^{\mu\nu}F_{\mu\nu}\). Finally, we replace partial derivatives with covariant derivatives to obtain the locally gauge-invariant Lagrangian:
\[{\cal L}=\frac{1}{2}\left(\left[\partial_{\mu}-iqA_{\mu}\right]\phi^{*}\right)\left(\left[\partial^{\mu}+iqA^{\mu}\right]\phi\right)+\frac{1}{2}\mu^{2}(\phi^{*}\phi)-\frac{1}{4}\lambda^{2}(\phi^{*}\phi)^{2}-\frac{1}{16\pi}F^{\mu\nu}F_{\mu\nu} \tag{10}\]
The Higgs Mechanism is seen by expanding this Lagrangian density about the minimum (ground state) of the potential of the initial Lagrangian:
\[V=\frac{\lambda^{2}}{4}(\phi^{*}\phi)^{2}-\frac{\mu^{2}}{2}(\phi^{*}\phi) \tag{11}\]
The ground state is obtained at
\[\phi^{*}\phi=\frac{\mu^{2}}{\lambda^{2}} \tag{12}\]
We conveniently define
\[\phi_{1}=\eta+\frac{\mu}{\lambda},\phi_{2}=\xi \tag{13}\]
As shown in [5], upon rewriting the Lagrangian in Eq. (10) in terms of these new fields we obtain a new particle defined by the field \(\eta\) (the Higgs) and other interaction terms. A Goldstone Boson (\(\xi\)) is obtained as well, along with a bilinear term
\[-2i\left(\frac{\mu}{\lambda}\frac{q}{\hbar c}\right)(\partial_{\mu}\xi)\left( A^{\mu}\right) \tag{14}\]
This term cannot be interpreted as an interaction term, because it implies that the particles defined by \(\xi\) and \(A^{\mu}\) cannot exist independently. To get rid of the Goldstone Boson and this bilinear term, [5] exploits the local gauge invariance of the Lagrangian defined by Eq.(10), and chooses
\[\theta=-\tan^{-1}\left(\frac{\phi_{2}}{\phi_{1}}\right) \tag{15}\]
In this gauge, \(\phi_{2}\) trivially vanishes, thus eliminating both the Goldstone Boson and the bilinear term. This completes the Higgs Mechanism.
We emphasize that at no step was the initial Lagrangian given by Eq.(10) altered; it was only rewritten via a simple re-parameterization of the initial fields \(\phi_{1}\) and \(\phi_{2}\), with a suitable \(\theta\) chosen at the end. At any point, we can revert to the initial choice of fields and arrive at the initial Lagrangian. The key features of the Mechanism are the emergence of the Higgs (\(\eta\)) and the elimination of both the Goldstone Boson (\(\xi\)) and the non-physical bilinear term (Eq.(14)) when the Lagrangian is expanded about its potential's minimum and a suitable gauge is chosen.
## IV Mapping to the cusp potential
In this section, we study the rather unique choice of Lagrangian defined by Eq.(7). The Higgs Mechanism as detailed above is a very specific phenomenon, and need not arise in a more general Lagrangian. We thus now apply a catastrophe theoretic approach to the potential defined by Eq.(11).
We see that the potential only depends on \(\phi^{*}\phi\), and effectively is a one-variable function. We define \(r=\sqrt{\phi^{*}\phi}\) and obtain:
\[V=\frac{\lambda^{2}}{4}r^{4}-\frac{\mu^{2}}{2}r^{2} \tag{16}\]
This potential can be mapped to the cusp potential of Section II via
\[x=\sqrt{\frac{\lambda}{2}}\,r,\quad a=-\frac{\mu^{2}}{\lambda},\quad b=0 \tag{17}\]
We note that this automatically assigns a unique geometry (that of the Cusp Catastrophe) to this family. The family cannot be mapped to any other catastrophe. As emphasized in Section I, each Catastrophe has a unique geometry, and thus a family of functions can never be mapped to more than one catastrophe. All the re-parameterizations of the initial Lagrangian in Section III can be mapped to (only) the Cusp Catastrophe, simply because the initial Lagrangian maps to the Cusp Catastrophe.
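A short numerical sanity check of this mapping is given below (with arbitrary illustrative values \(\mu=1\), \(\lambda=2\), and using the \(\mu^{2},\lambda^{2}\) convention of Eq.(10)): the mapped cusp potential coincides with \(V(r)\) and the minimum sits at \(r=\mu/\lambda\).

```python
import numpy as np

mu, lam = 1.0, 2.0                              # arbitrary illustrative values
r = np.linspace(0.0, 2.0, 2001)

V = lam**2 / 4.0 * r**4 - mu**2 / 2.0 * r**2    # potential of Eq. (16)
x = np.sqrt(lam / 2.0) * r                      # change of variables of Eq. (17)
a = -mu**2 / lam
V_cusp = x**4 + a * x**2                        # cusp potential with b = 0

print("max |V - V_cusp| =", np.max(np.abs(V - V_cusp)))        # numerically zero
print("minimizer r* =", r[np.argmin(V)], ", mu/lambda =", mu / lam)
```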
We immediately see that something special is indeed happening. We have already shown in Section II that a first-order phase transition occurs at \(b=0\) for a fixed \(a<0\), which is clearly the case for our potential. This means that the potential that gave rise to the Higgs Mechanism in Section III can be generalized by adding a term \(br\) with \(b\neq 0\). Thus the Higgs Mechanism arises with a first-order phase transition in the family of Lagrangians with potentials given by
\[V_{f}=\frac{\lambda^{2}}{4}r^{4}-\frac{\mu^{2}}{2}r^{2}+br,\quad b\in\mathbb{R} \tag{18}\]
We will later show that the Higgs Mechanism does not arise in a general Lagrangian of this family.
We further note that \(r=\sqrt{\phi^{*}\phi}\) is strictly non-negative. It is easy to check that if we fix \(a<0\), we have our critical points as
\[0<b<\frac{4}{3\sqrt{6}}\sqrt{-a^{3}}\longrightarrow\text{two positive, one negative}\]
\[b=0\longrightarrow\text{one positive, one negative and one as 0}\]
\[-\frac{4}{3\sqrt{6}}\sqrt{-a^{3}}<b<0\longrightarrow\text{two negative, one positive}\]
Enforcing \(r\geq 0\), we are left with 2 critical points for \(b\geq 0\) and 1 critical point for \(b<0\). Thus, in addition to the first-order phase transition, the number of critical points changes as well when the Higgs Mechanism arises. It is easy to verify that the Higgs Mechanism cannot show up with the new term
\[br=b\sqrt{\phi^{*}\phi}=b\sqrt{\left(\frac{\mu}{\lambda}+\eta\right)^{2}+\xi^{2}} \tag{19}\]
There is no way to interpret this term as an interaction, and the fields cannot be promoted to operators under canonical quantization. We also cannot exploit the local gauge invariance of \(\mathcal{L}\) to get rid of this term: \(\sqrt{\phi^{*}\phi}\) is trivially invariant under the transformation defined by Eq.(9), and vanishes only for \(\phi_{1}=\phi_{2}=0\). Thus the Higgs Mechanism does not show up in this general Lagrangian family; it arises suddenly only when \(b=0\).
## V Summary and conclusion
We presented a catastrophe theoretic interpretation of the Higgs Mechanism. We showed that the Lagrangian demonstrating the Higgs Mechanism (Eq.(7)) is a special member of a general family of Lagrangians. The potential of this family of Lagrangians can be easily mapped to the cusp catastrophe, thus assigning a unique geometry to the family. We found that the family demonstrates a first-order phase transition (as defined in Section II) when its parameters are varied continuously so as to arrive at the usual Lagrangian exhibiting the Higgs Mechanism.
## VI Acknowledgements
We would like to thank Urjit A. Yajnik for his valuable insight into our work. We thank Ajay Patvardhan for his suggestions. |
2305.04059 | Decentralised Semi-supervised Onboard Learning for Scene Classification
in Low-Earth Orbit | Onboard machine learning on the latest satellite hardware offers the
potential for significant savings in communication and operational costs. We
showcase the training of a machine learning model on a satellite constellation
for scene classification using semi-supervised learning while accounting for
operational constraints such as temperature and limited power budgets based on
satellite processor benchmarks of the neural network. We evaluate mission
scenarios employing both decentralised and federated learning approaches. All
scenarios achieve convergence to high accuracy (around 91% on EuroSAT RGB
dataset) within a one-day mission timeframe. | Johan Östman, Pablo Gomez, Vinutha Magal Shreenath, Gabriele Meoni | 2023-05-06T14:14:48Z | http://arxiv.org/abs/2305.04059v1 | # Decentralised Semi-supervised Onboard Learning for Scene Classification in Low-Earth Orbit
###### Abstract
Onboard machine learning on the latest satellite hardware offers the potential for significant savings in communication and operational costs. We showcase the training of a machine learning model on a satellite constellation for scene classification using semi-supervised learning while accounting for operational constraints such as temperature and limited power budgets based on satellite processor benchmarks of the neural network. We evaluate mission scenarios employing both decentralized and federated learning approaches. All scenarios achieve convergence to high accuracy (around 91% on EuroSAT RGB dataset) within a one-day mission timeframe.
## 1 Introduction
A new generation of satellites is currently bringing hardware suitable for machine learning (ML) onboard spacecraft into Earth orbit. Recent works [1] explored the possibility to train ML models in a distributed manner onboard satellite constellations. Distributed onboard training brings the potential to reduce communication requirements, operational cost and time, and improve autonomy by sharing ML models, trained close to the sensors, instead of the collected data. While previous missions have demonstrated the ability to perform inference onboard spacecraft for data processing [2], training onboard presents additional challenges. Convincingly addressing operational constraints is crucial, as the computational cost of training is significantly higher, and the lack of labeled examples during the mission can often be prohibitive.
In this work, we investigate the training of an ML model onboard a satellite constellation for scene classification. We employ a semi-supervised learning approach called MSMatch [3],
which we successfully distribute using decentralized learning techniques. Operational constraints such as temperature, communication windows, and limited power budgets are modeled using PASEOS [4], a specialized Python module. We provide detailed results on various scenarios involving decentralized and federated learning approaches.
## 2 Methods
This work is built upon three core components: the semi-supervised learning method MSMatch, modeling constraints with PASEOS, and adapting MSMatch for distributed implementation.
### MSMatch
One of the primary challenges for ML applications on spacecraft is the scarcity of labeled training data, particularly before launch. Often, there is an insufficient number of labeled examples for training a model on the ground, necessitating the use of semi- and self-supervised techniques in many instances. MSMatch is a semi-supervised classification method specifically designed for such scenarios [3], and has been proven to achieve high accuracy even when trained with merely a few labels per class.
MSMatch employs consistency regularization and pseudo-labeling to train primarily on unlabeled images. It fundamentally relies on the consistency between the model's predictions on two differently augmented (one strongly, one weakly) versions of the same image. A pseudo-label is generated for the weakly augmented version. Additionally, a supervised loss is applied to the limited available labeled examples. With as few as five labeled samples per class, MSMatch can achieve accuracies above 90% on established benchmarks such as EuroSAT [5]. The method has also demonstrated effectiveness with multispectral data.
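To make the training objective concrete, the sketch below shows one MSMatch/FixMatch-style loss evaluation in PyTorch. The confidence threshold, the unlabeled-loss weight and the model itself are placeholders; the snippet illustrates the mechanism rather than the exact MSMatch configuration.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_lab, y_lab, x_weak, x_strong,
                         threshold=0.95, lambda_u=1.0):
    """One MSMatch/FixMatch-style loss evaluation (illustrative values only)."""
    # Supervised loss on the few labeled examples.
    loss_sup = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from the weakly augmented view (no gradients through them).
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()

    # Consistency loss: the strongly augmented view must match the pseudo-label,
    # but only for samples where the model is already confident.
    loss_unsup = (F.cross_entropy(model(x_strong), pseudo, reduction="none") * mask).mean()

    return loss_sup + lambda_u * loss_unsup
```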
For our implementation, we built upon the existing open-source codebase available online1. We utilized the EfficientNet-lite models (efficientnet-lite0), derived from the original EfficientNets [6], as the backend. Due to their small memory footprint and efficiency, these models are well-suited for embedded systems and, therefore, onboard processors.
Footnote 1: [https://github.com/gomezz/MSMatch](https://github.com/gomezz/MSMatch) Accessed: 2023-02-27
### Paseos
Training machine learning models in space necessitates accounting for factors such as power budgets, thermal management, and communication windows, as these directly impact the viability of training [4]. Communication windows, in particular, are a critical factor in distributed computing scenarios [1], [7].
To model these constraints, we employ the open-source Python module PASEOS [4] (Version 0.1.3). PASEOS simulates spacecraft orbital dynamics and power budgets, taking into account power consumption, available solar panels, and eclipses. Thermal management is modeled using a single-node ordinary differential equation [4]. Packet communication is calculated based on the assumed available bandwidth and the presence of a line of sight between communication partners. PASEOS operates asynchronously to the training pipeline, thereby limiting the
ability to train and exchange models. A comprehensive description of the models can be found in the article dedicated to PASEOS [4].
### Decentralized MSMatch
In this study, we demonstrate the capability of training MSMatch in a distributed environment by leveraging well-established techniques for merging local models, such as federated averaging [1]. The Message Passing Interface (MPI) is employed to enable asynchronous training of multiple models while concurrently running PASEOS simulations for each satellite. It is assumed that labeled training data are available prior to launch and preloaded onto each satellite. Unlabeled training data, on the other hand, are randomly distributed among the satellites, with each satellite receiving a fixed number of distinct, unlabeled samples. The hyperparameters of the decentralized MSMatch are comparable to those in [3], with a few exceptions: the batch size is reduced to 32 for labeled data and 96 for unlabeled data, and the learning rate is increased to 0.03. MPI facilitates communication if PASEOS simulations indicate an available window for data exchange. A decentralized MSMatch scenario involving two satellites and a ground station is depicted in Fig. 1. It is important to note that while the satellites share the same labeled examples, the unlabeled data differ. The code for our work is openly accessible online2.
Footnote 2: [https://github.com/gomezzz/DistMSMatch](https://github.com/gomezzz/DistMSMatch) Accessed: 2023-04-05
Figure 1: Decentralized MSMatch for two satellites and a ground station.
## 3 Results
### Setup and Scenarios
To test the proposed method we rely on the EuroSAT dataset [5] used in the original MSMatch paper [3] to enable a direct comparison. The dataset is comprised of 27000 \(64\times 64\) pixel, 13-channel images from Sentinel-2A data classified into ten classes, such as forest or river. In our experimental results, we utilize only the RGB channels. The choice to employ solely the RGB channels stems from two factors: firstly, the RGB channels have already demonstrated satisfactory performance; and secondly, the inclusion of all channels would substantially prolong the training time. However, it should be noted that relying on only the RGB channels places us in a less favorable situation, and leveraging more available data would likely enhance performance further.
To demonstrate the ability to learn in a semi-supervised way, five labeled images from each class, i.e., 50 labeled images in total, are extracted and loaded onto each satellite. That is, the satellites are loaded with the same 50 images. The test set consists of 2700 images and the remaining 24250 images are treated as unlabeled data and are randomly split into eight partitions, i.e., 3031.25 images per partition on average.
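The data partitioning described above can be expressed in a few lines; the sketch below only manipulates indices and uses placeholder labels, so the actual EuroSAT loading and the true label array would have to be substituted in practice.

```python
import numpy as np

rng = np.random.default_rng(42)

n_total, n_classes, n_satellites = 27000, 10, 8
labels = rng.integers(0, n_classes, size=n_total)   # placeholder labels; use the real EuroSAT labels
idx = rng.permutation(n_total)

# 5 labeled images per class, shared by every satellite.
labeled = np.concatenate([idx[labels[idx] == c][:5] for c in range(n_classes)])
# Test set and unlabeled pool from the remaining images.
rest = np.setdiff1d(idx, labeled)
rng.shuffle(rest)
test = rest[:2700]
unlabeled = rest[2700:]                              # 24250 images
# Unlabeled data split into one distinct partition per satellite.
partitions = np.array_split(unlabeled, n_satellites)
print([len(p) for p in partitions])                  # ~3031 images each
```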
The satellites utilize a radio frequency (RF) link of 1 Mbps when communicating with a ground station, and optical inter-satellite links (ISL) in orbit at 100 Mbps. To account for tracking and alignment, we assume the ISL between two satellites to exhibit a setup-time of 30 seconds before every communication attempt [8]. The satellites are equipped with a 0.2772 MJ battery and solar panels that charge at 20 W. The parameters of the thermal model assume a mass of six kilograms, an initial temperature of 26.85 degree Celsius, sun absorptance of 0.9, and an infrared absorptance of 0.5. The sun-facing and Earth-facing area of each satellite are 0.015 m\({}^{2}\) and 0.01 m\({}^{2}\), respectively. The emissive area is 0.1 m\({}^{2}\) and the thermal capacity is 5000 J/(kg * K). The utilized EfficientNet-lite0 network occupies 12.7 MB of storage in a compressed state and the model exchange with the ground takes 201.78 seconds whereas the model exchange via ISL takes 32.03 seconds. Training a batch required 15.98 seconds on a Unibap iX10 satellite processor CPU. Further, communications (to ground), communications (ISL), training, and standby are assumed to consume 10 W, 13.5 W, 30 W, and 5 W, respectively. The PASEOS simulation is run with the default configuration of v0.1.3. The initial epoch of the simulation is 2022-Dec-17 14:42:42.
The investigated, distributed scenarios, as displayed in Fig. 2, involve a constellation in a Sentinel-like orbit (sun-synchronous at 786 km altitude with 98.62\({}^{\circ}\)inclination) featuring eight satellites. The first scenario (Ground Station) assumes model exchanges via three (linked) ground stations on Gran Canaria, Svalbard and in Matera, Italy. From a satellite perspective, the ground stations are viewed as a single unit. In the second (Swarm), eight satellites are assumed to communicate via ISL. Finally, in the third (Relay), one of the European Data Relay Satellite System relays (EDRS-C) is assumed to act as a central server for federated learning. Communication delays due to the relay potentially being busy are neglected. In the federated settings (Ground Station and Relay), the global model is updated asynchronously whenever a local model becomes available by a convex combination, with weight 0.4, of the global model and the newly received model similarly to [9] (with constant weighting function). Furthermore, each of
the satellites will attempt to share their local models every 1000 seconds.
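The asynchronous merge used in the Ground Station and Relay scenarios reduces to a parameter-wise convex combination; a minimal, framework-agnostic sketch operating on state-dict-like dictionaries is shown below, with the weight 0.4 taken from the description above.

```python
def merge_models(global_state, local_state, weight=0.4):
    """Asynchronously merge a newly received local model into the global model.

    new_global = (1 - weight) * global + weight * local, applied parameter-wise.
    """
    return {name: (1.0 - weight) * global_state[name] + weight * local_state[name]
            for name in global_state}
```

Each satellite then downloads the merged global model the next time a link is available and continues training from it locally.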
### Training Results
The results obtained from 24 hours of simulation time (equivalent to 14.34 orbital revolutions) have been averaged over three independent runs per scenario and are presented in Table 1. All three scenarios attain an average accuracy exceeding 91%, with the Ground Station scenario achieving the highest accuracy at 91.51%. It can also be seen that the standard deviation is lowest for Relay and largest for Swarm. This is expected as the Relay and Ground Station scenarios involve sharing a global model, in contrast to the Swarm scenario.
The Swarm scenario is the most communication-intensive, with satellites sharing an average of 1168.40 MB of data. This is because there are always neighboring satellites available to receive local models. In contrast, satellites in the Relay scenario transmit an average of 455.61 MB of data, which is considerably less than in the Swarm scenario. This difference is due to the relay satellite occasionally being obscured by Earth. The Ground Station scenario has the least data transmission, with satellites transmitting only 185.74 MB of data, as the link to the ground stations is infrequently available.
Over the simulated 24-hour period, satellites in the Ground Station, Swarm, and Relay
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Setup & Accuracy [\%] & Transmitted & Power & Time Training & Time Communicating & Time between \\ & & Data [MB] & [Wh] & [\% of total] & [\% of total] & Communications [s] \\ \hline Ground Station & 91.51\(\pm\)0.95 & 185.74 & 447.58 & 54.25 & 1.71 & 5913.6 \\ \hline Swarm & 90.96\(\pm\)1.34 & 1168.40 & 449.36 & 53.76 & 3.34 & 1878.4 \\ \hline Relay & 91.19\(\pm\)0.76 & 455.61 & 449.37 & 54.45 & 1.32 & 2349.3 \\ \hline \end{tabular}
\end{table}
Table 1: Results averaged over the eight satellites over three different runs.
Figure 2: Visualization of the different constellations and communication setups.
scenarios communicate their local models an average of 14.625, 46, and 35.875 times, respectively. It is important to note, however, that this is not reflected in the relative time spent communicating, as the Relay scenario spends the least time communicating due to the Ground Station relying on RF communications. The power consumption and total training time are similar across all three scenarios. Note, however, that the cost of operating the ground station or the relay satellite is not accounted for.
The convergence of the top-1 accuracy (averaged over three independent runs) for each satellite is depicted in Fig. 3. The performance among different satellites in the Swarm and Relay scenarios is more consistent due to frequent communication, which prevents satellites from deviating significantly. The Swarm scenario reaches a local optimum after approximately two orbital revolutions, resulting in an 88.5% top-1 accuracy, while the Relay scenario attains a less favorable local optimum in the same time frame, with an 86.7% top-1 accuracy. Interestingly, satellites in both scenarios escape their local optima after eight orbital revolutions and find a more favorable optimum after 12 revolutions. Although the Ground Station scenario is not as consistent as the other two scenarios, it exhibits similar behavior.
As previously mentioned, PASEOS enables accounting for constraints such as power and temperature. Figure 4 illustrates the average temperature and power consumption. In our numerical experiments, the spacecraft enters standby mode to recharge or cool down when the state of charge drops below 0.2 or the temperature exceeds 313.15 K (indicated by dashed lines in Figure 4). However, communication is prioritized and will always be performed.
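The operational rule just described amounts to a simple activity selector, sketched below with the thresholds quoted in the text; the evolution of temperature and state of charge themselves is left to the simulator and is not modeled here.

```python
def select_activity(state_of_charge, temperature_K, comm_window_open):
    """Pick the next spacecraft activity following the rules used in our experiments."""
    if comm_window_open:            # communication is always prioritized
        return "communicate"
    if state_of_charge < 0.2 or temperature_K > 313.15:
        return "standby"            # recharge and/or cool down
    return "train"
```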
The temperature behaves similarly across all settings, as the primary influencing factor is the training process. The temperature (represented by blue curves) can be observed to trigger standby mode after approximately 10 orbital revolutions. Subsequently, the satellites enter standby mode to cool down and initiate training as soon as the temperature no longer violates the constraint, causing the temperature to oscillate around the constraint temperature.
In contrast, the state of charge (depicted by red curves) exhibits different behavior across the three scenarios. In the Swarm scenario, spacecraft consistently communicate their models. However, in the Ground Station and Relay scenarios, satellites do not always have a communication link available to share local models, resulting in less predictable battery consumption among the satellites. Since training consumes the most power and the orbit allows the spacecraft
Figure 3: Top-1 accuracy on the test set for the satellites averaged over three runs. Different colors indicate different satellites.
to charge for most of the time, the more frequent standby mode triggered by temperature enables the spacecraft to recharge the battery.
The results depicted in Figure 3 are derived from 5 labeled examples per class, with the remaining data samples left unlabeled. To evaluate the influence of the labeled dataset size, we concentrate on the Swarm scenario and conduct the experiment with varying labeled dataset sizes. Figure 5 illustrates the average top-1 accuracy (across satellites and 3 independent runs).
As anticipated, the top-1 accuracy increases as the size of the labeled dataset expands. Notably, performance approaches 90% with just 3 labeled examples per class, and with 100 labels per class, the top-1 accuracy attains 96.2%. For comparison purposes, we include the performance of the centralized implementation of MSMatch [3]. It is important to recognize that this comparison may not be entirely fair, as the centralized version undergoes training for a substantially longer duration and employs a larger EfficientNet model. Nonetheless, the distributed version of MSMatch presented herein proves to be competitive and achieves comparable performance.
Figure 4: Average temperature and state of charge of the constellation. Temperatures in blue and state of charge in red.
Figure 5: Swarm top-1 test accuracy vs number of labeled examples per class. The points are averaged over satellites and three independent runs.
## 4 Conclusion
In this study, we illustrate the feasibility of training a state-of-the-art neural network in a semi-supervised manner, distributed across a satellite constellation using current satellite processors. Depending on the orbital configuration and assets, the constellation learns to classify the EuroSAT dataset with up to 91.51% accuracy after a simulated training duration of 24 hours. Moving forward, the incorporation of more intricate scenarios, communication schemes, and a refined satellite architecture will enable further optimizations and increased fidelity.
## 5 Acknowledgement
The authors would like to thank Unibap AB for providing the iX-10 100 device that was used for our experiments. The work of Johan Ostman was funded by Vinnova under grant 2020-04825 and the work of Vinutha Magal Shreenath under Vinnova grant 2021-03643 and under Swedish National Space Agency grant 2022-00013.
|
2302.06831 | Analytical Model of Nonlinear Fiber Propagation for General
Dual-Polarization Four-Dimensional Modulation Format | Coherent dual-polarization (DP) optical transmission systems encode
information on the four available degrees of freedom of an optical field: the
two polarization states, each with two quadrature components. Such systems
naturally operate based on a four-dimensional (4D) signal space. Having a
general analytical model to accurately estimate nonlinear interference (NLI) is
key to analyze such transmission systems as well as to study how different
DP-4D formats are affected by NLI. However, the available models in the
literature are not completely general. They either do not apply to the entire
DP-4D formats or do not consider all the NLI contributions. In this paper, we
develop a model that applies to all DP-4D modulation formats with independent
symbols. Our model takes self-channel interference, cross-channel interference
and multiple-channel interference effects into account. As an application of
our model, we further study the effects of signal-noise interactions in
long-haul transmission via the proposed model. When compared to previous
results in the literature, our model is more accurate at predicting the
contribution of NLI for both low and high dispersion fibers in single- and
multi-channel transmission systems. For the NLI, we report an average gap from
split step Fourier simulation results below 0.15 dB. The simulation results
further show that by considering signal-noise interactions, the proposed model
in long-haul transmission can reduce the transmission reach prediction error by
4%. | Zhiwei Liang, Bin Chen, Yi Lei, Gabriele Liga, Alex Alvarado | 2023-02-14T05:04:36Z | http://arxiv.org/abs/2302.06831v3 | Analytical Model of Nonlinear Fiber Propagation for General Dual-Polarization Four-Dimensional Modulation Formats
###### Abstract
Coherent dual-polarization (DP) optical transmission systems encode information on the four available degrees of freedom of an optical field: the two polarization states, each with two quadrature components. Such systems naturally operate based on a four-dimensional (4D) signal space. Having a general analytical model to accurately estimate nonlinear interference (NLI) is key to analyze such transmission systems as well as to study how different DP-4D formats are affected by NLI. However, the available models in the literature are not completely general. They either do not apply to the entire DP-4D formats or do not consider all the NLI contributions. In this paper we develop a model that applies to the entire class of DP-4D modulation formats. Our model takes self-channel interference, cross-channel interference and multiple-channel interference effects into account. As an application of our model, we further study the effects of signal-noise interactions in long-haul transmission via the proposed model. When compared to previous results in the literature, our model is more accurate at predicting the contribution of NLI for both low and high dispersion fibers in single- and multi-channel transmission systems. For the NLI, we report an average gap from split step Fourier simulation results below 0.15 dB. The simulation results further show that by considering signal-noise interactions, the proposed model in long-haul transmission improves the NLI power accuracy prediction by up to 8.5%.
Nonlinear interference model, Four-dimensional modulation formats, Signal-noise interaction, Optical fiber communications
## I Introduction
In optical communication systems, one of the main challenges is to make efficient use of existing network resources. To achieve this, signal shaping has been investigated in optical fiber communications as an effective approach to achieve high spectral efficiencies (SEs). Shaping can be performed by changing the probability or position of the constellation points, which are known as probabilistic shaping (PS) [2] and geometrical shaping (GS) [3], respectively.
Performing joint shaping over multiple dimensions (e.g., time slots, wavelengths, and spatial dimensions) to achieve large performance gains has recently received interest in the literature for both the additive white Gaussian noise (AWGN) [4, 5] and the optical fiber channel [5, 6, 7, 8]. When constellation shaping tailored to the AWGN channel is used in the nonlinear optical channel, however, a performance penalty is introduced. This penalty is caused by the modulation-dependent nonlinear interference (NLI). This effect was studied for example in [9, 10].
In order to harvest most of the gains in the nonlinear fiber channel, heuristic ideas have been used in the literature. These include, for example, adding constant-modulus constraints [11, 12] and shell shaping [7] in the optimization. Such heuristics have the potential to significantly reduce the NLI; however, to fully maximize the performance in the nonlinear channel, accurate NLI modelling is required.
As shown in [13, Fig. 3], using the split-step Fourier method (SSFM) for constellation optimization indeed makes the modulation format more robust against nonlinearities. The disadvantage of this approach is that it is a very time-consuming process, unless simple (single-span and/or single-channel) systems are considered. Therefore, a general NLI model that allows fast and accurate estimation to capture the effect of modulation-dependent NLI is essential for optimizing and analyzing the performance of modulation formats in optical communication systems.
In the last two decades, many analytical nonlinear models have been presented in the literature. The models can be broadly grouped into time-domain and frequency-domain models. Some of the most prominent ones are [14, 15, 16, 17, 18, 19, 20, 21, 22]. The most popular model is the Gaussian noise (GN) model, which was derived based on the assumption that the signal statistically behaves as Gaussian noise over uncompensated links. Soon after the introduction of the GN model, it was pointed out in [21, 22] that ignoring all modulation-format-dependent terms leads to a substantial NLI overestimation.
To analyze and reduce the limitations of the GN model, a number of modulation format-dependent correction formulas have been proposed, effectively lifting the Gaussianity assumption of the transmitted signal. The first part of Table I shows the details of _traditional_ models for 2D modulation formats.
As shown in Table I, the models in [21, 23] derived the correction terms including self-channel interference (SCI) and cross-phase modulation (XPM) in time domain. The model in [22] derived all the main NLI effects including SCI, cross-channel interference (XCI)1 and multiple-channel interference (MCI) were derived in frequency domain. A major drawback of all these traditional models is that they can only be applied to polarization-multiplexed 2D (PM-2D) modulation formats, where two identical 2D formats are used to transmit information independently over the two orthogonal polarizations. PM-2D formats are only a subset of all possible dual-polarization four-dimensional (DP-4D) modulation formats.
Footnote 1: Recall that there are 4 XCI terms, often called X1, X2, X3, and X4 (see Fig. 3 ahead, where X1 corresponds to XPM.
In order to fully explore the potential of DP-4D modulation formats in the nonlinear fiber channel, [24] introduced the first 4D NLI model as a tool to efficiently trade-off linear and nonlinear shaping gains. The frequency-domain model in [24] applies only to single-channel scenarios since it only considered SCI. Later in [25], a time-domain model was introduced that considers both SCI and cross-phase modulation (XPM). The model in [25] is only valid for 4D symmetric constellations2 and high-dispersion fiber systems (e.g., standard single mode fiber (SMF) in dispersion-uncompensated system). The model in [25] was then extended to take SCI, XCI and MCI into account in [26]. The model in [26] can be used for low dispersion fiber but still makes the assumption of 4D symmetric constellations. Recently, based on [25], a model for arbitrary 4D modulation formats was introduced in [27]. The model in [27] was derived in the time domain but only accounts partially for NLI effects (SCI and XPM terms only). The state-of-the-art NLI models for 4D modulation formats are summarized in the second part of Table I.
Footnote 2: Constellations which are symmetric with respect to the origin, and have the same power in both polarizations.
The contribution of this paper is twofold. First, we derive an "ultimate" 4D NLI model that covers the entire DP-4D class of modulation formats. We achieve this by extending the 4D NLI model (SCI-only) in [24] to consider all the main NLI contributions, including SCI, XCI and MCI. Secondly, following our preliminary results of considering signal-noise interactions for single-channel systems presented in [1], we also extend the analysis in [1] to wavelength division multiplexed (WDM) systems. We perform a comprehensive analysis of simulation results in multi-channel WDM systems with three different fibers with different dispersion. Our study is validated by analytically studying the effective signal-to-noise ratio (SNR) using general DP-4D formats. The numerical results show that the proposed 4D nonlinear model has a superior accuracy with a maximum deviation of 0.15 dB in terms of NLI power for all 4D modulation formats studied in this work. Moreover, by considering the signal-noise interactions in a multi-channel long-haul transmission system, the SNR estimation error can be reduced within 0.1 dB with respect to SSFM results, which can be translated into a 4% prediction accuracy improvement in terms of transmission reach.
This paper is structured as follows. In Sec. II, we present the system model and review the effective SNR considering the signal-signal interactions and signal-noise interactions. Sec. III presents the expression of the proposed NLI model and the key steps of its derivation. Sec. IV is devoted to validate this model and assess the contribution of signal-noise interaction via a wide range of 4D modulation formats. Sec. V concludes this paper and outlines the potential direction for future research. Finally, the appendix provides a detailed derivation of the NLI model.
## II System Model And Performance Metrics
### _System Model_
In this work, we consider the equivalent model of an optical fibre system shown in Fig. 1 (a). The physical channel is a multi-span, \(h\)-channel WDM fibre system using one Erbium-doped fibre amplifier (EDFA) per span. At the transmitter, the input bits are mapped into 4D symbols using a predefined DP-4D modulation format and its corresponding binary labeling. The 4D symbols are jointly modulated using a transmitter digital signal processing (DSP) block, including upsampling, root-raised cosine (RRC) pulse shaping and power setting. After transmitting the symbols over the physical channel, the received symbols are processed by a receiver DSP block, including chromatic dispersion compensation, matched filtering, sampling and ideal phase compensation for potential constant phase rotation. The symbols are then demapped by a 4D demapper to generate soft information (i.e. log-likelihood ratios), which estimates the transmitted bits, to get the system performance metrics.
There are mainly two ways of generating a sequence of 4D symbols. These are schematically shown in the Fig. 1 (b). The left hand side of Fig. 1 (b) shows the case of PM-2D formats, where the 4D points can be described using independent and identically distributed 2D points on each polarization. The right hand side of Fig. 1 (b) show the more general case called DP-4D, where the 2D constellations are jointly modulated on two orthogonal polarization state. In this case, the 2D projections in each polarization are not independent. Fig. 1 (c) shows the NLI effects that such transmitted signals will experience in a multi-channel WDM optical system due to the Kerr effect, which inlcude SCI, XCI and MCI. All these effects will be investigated separately in detail in the Sec. III.
### _Performance Metrics_
It is known that calculating performance metrics of optical transmission system using SSFM simulations is a time-consuming task. An NLI model can be an efficient way to solve this problem. The general idea is shown in Fig. 1 (d), where the NLI model is used to replace the time-consuming SSFM simulations in order to predict certain performance metrics. To explore the role of signal-noise interactions, in this paper we focus on predicting the effective SNR. To this end, we will use our proposed model to estimate NLI power coefficients \(\eta_{ss}\) and \(\eta_{sn}\), which are associated to signal-signal (SS) and signal-noise (SN) components, respectively. As we will show later, these two coefficients are sufficient to estimate the effective SNR in a multi-channel multi-span scenario for arbitrary DP-4D modulation formats.
Under an additive NLI noise assumption, the effective SNR is defined as:
\[\text{SNR}_{\text{eff}}\triangleq\frac{P}{N_{s}\sigma_{\text{ASE}}^{2}+\sigma _{\text{ss}}^{2}+\sigma_{\text{sn}}^{2}}, \tag{1}\]
where \(P\) denotes the transmitted signal power per channel, \(N_{s}\) is the number of spans. The total noise power consists of three parts: i) amplified spontaneous emission (ASE) noise over one span denoted as \(\sigma_{\text{ASE}}^{2}\), ii) signal-signal (S-S) NLI power denoted as \(\sigma_{\text{ss}}^{2}\) and iii) signal-ASE noise (S-N) NLI power denoted as \(\sigma_{\text{sn}}^{2}\). The effective SNR in (1) corresponds to the SNR after fiber propagation and the receiver DSP including chromatic dispersion compensation and phase compensation.
For dual-polarized signals over multi-span transmission, the signal-signal NLI power \(\sigma_{\text{ss}}^{2}\) in (1) can be approximated as [28, eq. (1)]
\[\sigma_{\text{ss}}^{2}\approx\tilde{\eta}_{\text{ss}}N_{s}^{1+\varepsilon}P^{ 3}=\eta_{\text{ss}}P^{3}, \tag{2}\]
where \(\varepsilon\) is a coherence factor for NLI, which is a function of fiber link parameters (attenuation, dispersion, span length, etc) [29, eq. (40)]. The signal-signal NLI power coefficient (over one span) is denoted by \(\tilde{\eta}_{\text{ss}}\). As shown in (2), from now on we will use \(\eta_{\text{ss}}\) to denote the accumulated signal-signal NLI power coefficient over \(N_{s}\) spans, where \(\eta_{\text{ss}}=\tilde{\eta}_{\text{ss}}N_{s}^{1+\varepsilon}\).
The ASE noise generated by EDFA leads not only to an additive white Gaussian noise (AWGN) but also to a nonlinear interference produced by the interaction of ASE noise and the transmitted signal [30]. Under the assumption of a flat transmitted signal spectrum and same propagated signal and ASE noise bandwidth, the signal-ASE noise NLI power coefficient can be estimated as \(\tilde{\eta}_{\text{sn}}=3\tilde{\eta}_{\text{ss}}\), where the \(\tilde{\eta}_{\text{sn}}\) is the signal-ASE noise NLI power coefficient over one span [31, Sec. 3] [28, eq. (1)]. Thus, by following [31, eq. (8)], the NLI power of signal-ASE noise interactions can be expressed as
\[\sigma_{\text{sn}}^{2}=\xi\tilde{\eta}_{\text{sn}}\sigma_{\text{ASE}}^{2}P^{ 2}=3\xi\tilde{\eta}_{\text{ss}}\sigma_{\text{ASE}}^{2}P^{2}, \tag{3}\]
where \(\xi=\sum_{n=1}^{N_{s}}n^{1+\varepsilon}\approx\frac{N_{s}^{2+\varepsilon}}{2+ \varepsilon}+\frac{N_{s}^{1+\varepsilon}}{2}\) is the signal-ASE noise NLI accumulation coefficient [31, Sec. 3].
Using (2) and (3) to estimate the total NLI power, effective SNR can be estimated as
\[\text{SNR}_{\text{eff}}\triangleq\frac{P}{N_{s}\sigma_{\text{ASE}}^{2}+\underbrace{\eta_{\text{ss}}P^{3}}_{\sigma_{\text{ss}}^{2}}+\underbrace{3\eta_{\text{ss}}\left(\frac{N_{s}}{2+\varepsilon}+\frac{1}{2}\right)\sigma_{\text{ASE}}^{2}P^{2}}_{\sigma_{\text{sn}}^{2}}}. \tag{4}\]
Note that \(\eta_{\text{ss}}\) is a constant value (for a given system configuration) linked to the contributions of both modulation-independent and modulation-dependent nonlinearities; thus, the NLI power is also a function of the chosen 4D modulation format.
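Eq. (4) is straightforward to evaluate once \(\eta_{\text{ss}}\), the per-span ASE power and the coherence factor \(\varepsilon\) are known; a small helper implementing it is shown below. The numerical values in the example call are placeholders and are not the parameters of Table II.

```python
import numpy as np

def snr_eff_dB(P_W, eta_ss, sigma_ase2_W, n_spans, epsilon):
    """Effective SNR of Eq. (4); eta_ss is the accumulated S-S NLI coefficient over n_spans."""
    ase = n_spans * sigma_ase2_W                                                   # total ASE noise power
    ss = eta_ss * P_W**3                                                           # signal-signal NLI
    sn = 3.0 * eta_ss * (n_spans / (2.0 + epsilon) + 0.5) * sigma_ase2_W * P_W**2  # signal-ASE NLI
    return 10.0 * np.log10(P_W / (ase + ss + sn))

# Example with placeholder values: 0 dBm launch power, 50 spans.
print(snr_eff_dB(P_W=1e-3, eta_ss=2e3, sigma_ase2_W=1e-6, n_spans=50, epsilon=0.05))
```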
## III The Nonlinear Model Derivation
In [24], a detailed derivation of the SCI term accounting for DP-4D formats was shown, which has also been validated in [1, 32]. Here, we only recall the main defining formulas and key conclusions from [1, 32]. Then, we extend this model to multi-channel WDM systems by including XCI and MCI.
### _SCI of The Nonlinear Model_
For single-channel systems, only the SCI contribution exists. Therefore, for general DP-4D formats, the modulation
Fig. 1: (a) System model under investigation in this work which consists of a general optical fibre system. (b) PM-2D vs. DP-4D formats. (c) An example of effects experienced by three frequency channels along the modulated dimensions. (d) A general modulation-dependent performance metrics estimation block.
-dependent coefficient \(\eta_{\text{ss}}=(\eta_{\text{ss,SCI}}^{\text{x}},\eta_{\text{ss,SCI}}^{\text{y}})\) can be calculated, for the x polarization, using [32, eq. (1)]
\[\eta_{\text{ss,SCI}}^{\text{x}}=\left(\frac{8}{9}\right)^{2}\frac{\gamma^{2}}{P^{3}}[R_{s}^{3}(\Phi_{1}\chi_{1}+\Phi_{2}\chi_{2}+\Phi_{3}\chi_{3})+R_{s}^{2}(\Psi_{1}\chi_{4}+2\Re\{\Psi_{2}\chi_{5}+\Psi_{3}\chi_{5}^{*}\}+\Psi_{4}\chi_{6}+2\Re\{\Lambda_{1}\chi_{7}+\Lambda_{2}\chi_{7}^{*}\}+\Lambda_{3}\chi_{8}+2\Re\{\Lambda_{4}\chi_{9}+\Lambda_{5}\chi_{9}^{*}\}+\Lambda_{6}\chi_{10})+R_{s}\Xi_{1}\chi_{11}], \tag{5}\]
where \(\Re\{\cdot\}\) denotes the real part of a complex number, and the coefficients \(\Phi_{i},i=1,2,3\), \(\Psi_{i},i=1,2,3,4\), \(\Lambda_{i},i=1,2,...,6\), and \(\Xi_{1}\) are modulation-format-dependent terms, given as functions of several intra- and cross-polarization moments, which can be found in [24, Table 8]. The coefficients \(\chi_{i},i=1,2,...,11\) (also in [24, Table 8]) are frequency-dependent integrals over the channel bandwidth, which do not depend on the shape of the modulation format. The expression for \(\eta_{\text{ss,SCI}}^{\text{y}}\) is obtained from (5) by simply applying the transformation x \(\rightarrow\) y and y \(\rightarrow\) x.
Fig. 2 shows the noise power, i.e., \(N_{s}\sigma_{\text{ASE}}^{2}\), \(\sigma_{\text{ss}}^{2}\) and \(\sigma_{\text{sn}}^{2}\) in (4), against transmission distance for two modulation formats: PM-256QAM and 4D-PRS64 [12]. The system parameters are shown in Table II. For all distances shown, the total ASE noise power is constellation-independent, and the S-S and S-N NLI contributions are smaller. Considering for example 4D-PRS64 at a distance of \(1600\) km, \(\sigma_{\text{sn}}^{2}\) differs from \(\sigma_{\text{ss}}^{2}\) by a factor of 17.2 dB, while the difference is reduced to 10.6 dB at 7500 km. The proportion of \(\sigma_{\text{sn}}^{2}\) in the NLI power keeps increasing as the number of fiber spans increases.
The dashed lines in Fig. 2 show the noise contributions for PM-256QAM. This figure shows the dependency of the NLI noise on the modulation format: a 0.3 dB gap can be observed when comparing these two modulation formats at 4000 km. The dashed lines also show the same trend as the solid lines, with the gap reducing from 17.2 dB at 1,500 km to 10.6 dB at 7,500 km. This indicates that the effect of signal-ASE noise NLI cannot be fully neglected in long-distance transmission. More results for other modulation formats are shown in Sec. IV.
### _XCI and MCI of the Nonlinear Model_
In this subsection, to allow the 4D NLI model proposed in [24] to be used for a wider range of purposes, we extend it to multi-channel WDM systems for general DP-4D modulation formats by deriving the expressions of XCI and MCI.
As shown in Fig. 1 (c), the Kerr effect includes SCI, XCI and MCI. Thus, the NLI power coefficient \(\eta_{\text{ss}}\) can be defined as
\[\eta_{\text{ss}}=\eta_{\text{ss,SCI}}+\eta_{\text{ss,XCI}}+\eta_{\text{ss, MCI}}, \tag{6}\]
where \(\eta_{\text{ss,SCI}}\) can be calculated using (5). For the other terms, we follow an approach similar to the EGN model [33]: we derive the XCI terms (marked as X1, X2, X3 and X4 in Fig. 3) and the MCI terms (marked as M0, M1, M2 and M3 in Fig. 3), which are used to calculate \(\eta_{\text{ss,XCI}}\) and \(\eta_{\text{ss,MCI}}\).
To find an analytical expression for the NLI power, a solution to the Manakov equation, the fundamental equation of dual-polarization nonlinear dispersive fiber propagation, must first be found. The Manakov equation can be written in the time domain as [34]
\[\frac{\partial E(t,z)}{\partial z}=-\frac{\alpha}{2}E(t,z)-j\frac{\beta_{2}}{ 2}\frac{\partial^{2}E(t,z)}{\partial t^{2}}+j\frac{8}{9}\gamma|E(t,z)|^{2}E(t,z), \tag{7}\]
where \(\alpha\) is the loss coefficient, \(\beta_{2}\) is the dispersion coefficient and \(\gamma\) is the nonlinear coefficient. As is well known, the Manakov equation has no general closed-form solution. Like most of the existing NLI power models in the literature [21, 22, 29], the model derived here operates within a first-order perturbative framework. In particular, a frequency-domain first-order regular perturbation (RP) approach in the \(\gamma\) coefficient is performed [35, 20]. Therefore, the first-order RP solution of the Manakov equation after \(N_{s}\) spans is expressed as [24]
\[E(f,N_{s},L_{s})=[E_{\text{x}}(f,N_{s},L_{s}),E_{\text{y}}(f,N_{s},L_{s})]^{\text{T}}=-j\frac{8}{9}\gamma\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}E^{\text{T}}(f_{1},0)E^{*}(f_{2},0)E(f-f_{1}+f_{2},0)\mu(f_{1},f_{2},f,N_{s},L_{s})df_{1}df_{2}. \tag{8}\]
Fig. 3: Example of regions of XCI and MCI for 5 channels.
Fig. 2: Noise power versus transmission distance at launch power of \(P=0.5\) dBm with a single channel. Noise is shown separately, as total ASE noise, signal-signal NLI and signal-ASE noise NLI in (4).
Due to the lumped amplification and identical spans, \(\mu(f_{1},f_{2},f,N_{s},L_{s})\) is defined in [22] as the 'link function', which weights the generation of NLI and depends on the channel parameters but not on the modulation format.
On the other hand, the XCI and MCI contributions in multi-channel WDM systems can be summed up independently, so here we only consider the channel of interest (COI), whose center frequency is set to zero for simplicity, and an interfering (INT) channel with center frequency \(f_{c}\). In addition, the transmitted signal is assumed to be periodic with period \(T\), where \(W\) symbols are transmitted every period \(T\). Thus, the transmitted signal model can be derived as (see Appendix A)
\[\begin{split} E(f,0)=\sqrt{\Delta_{f}}&\sum_{k=- \infty}^{+\infty}\nu_{k}\delta(f-k\Delta_{f})+\sqrt{\Delta_{f}}\\ &\cdot\sum_{k=-\infty}^{+\infty}\xi_{k}\delta(f-f_{c}-k\Delta_{f }),\end{split} \tag{9}\]
where \(\Delta_{f}=1/T\), \(f_{c}\) is the center frequency of INT channel and
\[\begin{split}\nu_{k}=[\nu_{\text{x},k},\nu_{\text{y},k}]^{\text{T }}=\sqrt{\Delta_{f}}P(k\Delta_{f})\sum_{n=-(W-1)/2}^{(W-1)/2}\mathbf{a}_{n}e^{-j2 \pi\frac{k_{\text{N}}}{W}},\end{split} \tag{10}\]
\[\begin{split}\xi_{k}=[\xi_{\text{x},k},\xi_{\text{y},k}]^{\text{T }}=\sqrt{\Delta_{f}}P(f_{c}+k\Delta_{f})\sum_{n=-(W-1)/2}^{(W-1)/2}\mathbf{b}_{n}e^ {-j2\pi\frac{k_{\text{N}}}{W}},\end{split} \tag{11}\]
in which \(\mathbf{a}_{n}=[a_{\text{x},n},a_{\text{y},n}]^{T}\) is the vector of random variables (RVs) transmitted by the COI, i.e., the complex symbols modulated on two arbitrary orthogonal polarization states x and y, respectively, and \(\mathbf{b}_{n}=[b_{\text{x},n},b_{\text{y},n}]^{T}\) is the corresponding vector of RVs transmitted by the INT channel.
Substituting the spectrum of the transmitted periodic signal (9) into (8), we obtain the PSD of the received NLI, which is dealt with in Appendix B. Therefore, the x-polarization component of \(\eta_{\text{ss,XCI}}=(\eta_{\text{ss,XCI}}^{\text{x}},\eta_{\text{ss,XCI}}^{\text{y}})\) can be written as
\[\begin{split}\eta_{\text{ss,XCI}}^{\text{x}}=\left(\frac{8}{9} \right)^{2}\frac{\gamma^{2}}{P^{3}}\Bigg{[}\sum_{X1_{i},X2_{i},X3_{i}}& \{R_{s}^{3}[\Phi_{1}\chi_{1}(f)+\Phi_{2}\chi_{2}(f)]\\ +& R_{s}^{2}\Psi_{1}\chi_{3}(f)\}+\bar{S}_{X4_{i}}^{ \text{x}}\Bigg{]},\end{split} \tag{12}\]
where \(\bar{S}_{X4_{i}}\) is similar to the term in [24, eq. (42)] after swapping \(a\to b\), because the summation over \(S_{i}\) (where the \((f_{1},f_{2},f_{3})\) triples are located in the COI) has the same form as the summation over \(X4_{i}\) (where the \((f_{1},f_{2},f_{3})\) triples are located in the INT channel), as shown in (24). The terms \(\Phi_{1},\Phi_{2},\Psi_{1}\) are given in Table IV (see Appendix C), while the terms \(\chi_{1},\chi_{2},\chi_{3}\) are expressed in Table VI (see Appendix C). By swapping the polarization labels x \(\rightarrow\) y and y \(\rightarrow\) x, \(\eta_{\text{ss,XCI}}^{\text{y}}\) can also be obtained from (12).
Fig. 4 shows the NLI power coefficient versus the number of spans for three different fiber types, i.e., standard SMF, non-zero dispersion-shifted fiber (NZDSF) and low dispersion fiber (LSF). The XPM approximation \(\eta_{\text{ss,XPM}}\) amounts to the X1 region only, shown as a green solid line. The red solid line represents \(\eta_{\text{ss,XCI}}\) given by (12). The markers represent the simulation results, which account for all NLI except SCI. Fig. 4 (a) shows that the XCI or XPM approximation is sufficient to represent the simulated NLI (except SCI) in high accumulated dispersion scenarios, for example standard SMF. According to the inset of Fig. 4 (b), the XPM approximation underestimates the simulated NLI by about 0.79 dB, while the XCI reduces this error to 0.37 dB. This suggests that in low dispersion scenarios the contribution of regions X2-X4 cannot be neglected. The inset of Fig. 4 (c) considers an ultra-low dispersion fiber, showing a larger underestimation error of about 1.19 dB for the XPM approximation and 0.94 dB for XCI. In such low accumulated dispersion scenarios, XCI alone cannot represent all NLI (except SCI), i.e., MCI must be considered in such low dispersion fibers.
Generally, MCI is neglected, as investigated in [36]. However, Fig. 4 (c) shows that MCI is necessary in ultra-low dispersion fibers. The reason is that the contribution to NLI is limited by the 'link function': when dispersion is relatively low, MCI is not negligible because the link function \(\mu\) decays much more slowly [33]. Therefore, to accurately predict the nonlinear interference in various scenarios, the MCI contribution was derived following an approach similar to [33], as shown in Fig. 3. The MCI is divided into four islands marked as M0, M1, M2 and M3. The M1 and M2 islands (yellow and red regions) have a structure similar to XCI in the region X1 (blue regions), and M3 (green regions) is similar to the region X3 (purple regions). In particular, M0 is treated entirely according to the GN-model, as the white regions in Fig. 3. Thus, \(\eta_{\text{ss,MCI}}=(\eta_{\text{ss,MCI}}^{\text{x}},\eta_{\text{ss,MCI}}^{\text{y}})\) is similar to (12) and can be expressed as
\[\begin{split}\eta_{\text{ss,MCI}}^{\text{x}}=\left(\frac{8}{9} \right)^{2}\frac{\gamma^{2}}{P^{3}}\Bigg{[}\sum_{M1_{i},M2_{i},M3_{i}}& \{R_{s}^{3}[\Phi_{1}\chi_{1}(f)+\Phi_{2}\chi_{2}(f)]\\ +& R_{s}^{2}\Psi_{1}\chi_{3}(f)\}+\bar{S}_{M0_{i}}^{ \text{x}}\Bigg{]}.\end{split} \tag{13}\]
Note that the terms \(\chi_{1},\chi_{2},\chi_{3}\) have different domains of integration in the different lozenge-shaped islands shown in Fig. 3.
## IV Simulation results and analysis
The numerical validation of the model in this work is performed via SSFM simulations in which optical nonlinearity is the only noise source. The simulated multi-span optical system is described in Table II. To verify the reliability of the proposed model, the 4D modulation formats considered in our simulations are taken from the sphere-packing database in [37], some recently proposed symmetric 4D formats such as 4D-PRS64 [12] and a family of 4D orthant-symmetric (OS) formats [38], and some non-symmetric modulation formats, for example c4_16 [39] and w4_64 [40].
In this section, we compare 4D modulation formats in terms of \(\eta\). To validate the \(\eta\) value, the SSFM simulations do not include ASE noise. In the absence of other noise sources, \(\eta\) can be estimated from the received SNR of the COI via the relationship
\[\eta\approx\frac{1}{\text{SNR}^{\text{est}}P_{ch}^{2}}, \tag{14}\]
where \(P_{ch}\) denotes the transmitted power per channel and \(\text{SNR}^{\text{est}}\) is estimated via [25, eq. (22)]
\[\text{SNR}^{\text{est}}=\frac{\sum_{i=1}^{M}|\bar{y}_{i}|^{2}}{\sum_{i=1}^{M}\mathbb{E}[|Y-\bar{y}_{i}|^{2}\,|\,X=x_{i}]}, \tag{15}\]
in which \(X\) and \(Y\) are RVs representing the transmitted and received symbols, respectively. \(M\) is the cardinality of the modulation format, i.e., the number of constellation points, and \(x_{i}\) represents the \(i\)-th constellation point. Here \(\bar{y}_{i}=\mathbb{E}\{Y|X=x_{i}\}\), where \(\mathbb{E}\{\cdot\}\) is the statistical expectation. Note that NLI models are derived under perturbation theory, i.e., they are meant to predict optical communication system performance in a relatively weak nonlinear regime. Thus, all channels used a flat launch power of \(P_{ch}=-20\) dBm.
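A minimal numpy sketch (our own, assuming the received symbols are already matched-filtered, synchronized and phase-recovered) of the estimators (14)-(15) is:

```python
import numpy as np

# eta ~ 1 / (SNR_est * P_ch^2), with SNR_est from (15): received symbols are
# grouped by the transmitted constellation point x_i and compared with the
# conditional means y_bar_i = E{Y | X = x_i}.
def estimate_snr(tx, rx, constellation):
    num, den = 0.0, 0.0
    for x_i in constellation:
        mask = np.isclose(tx, x_i)
        if not np.any(mask):
            continue
        y_bar = rx[mask].mean()                       # conditional mean
        num += np.abs(y_bar)**2
        den += np.mean(np.abs(rx[mask] - y_bar)**2)   # conditional variance
    return num / den

def estimate_eta(snr_est, p_ch):
    # p_ch in linear units (W); returns the NLI power coefficient of (14)
    return 1.0 / (snr_est * p_ch**2)
```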
### _Simulation Validation for Multi-Channel Transmission_
Fig. 4: Simulation results of the multi-span 9-channel optical fiber transmission system for three fiber types: (a) SMF, (b) NZDSF and (c) LSF. The non-symmetric constellation voronoi4_32 was propagated. The red solid line is \(\eta_{\text{ss,XCI}}\) ((12) of this paper). The green solid line is the XPM approximation \(\eta_{\text{ss,XPM}}\). The marks are SSFM simulations with single-channel nonlinearity (SCI) removed.

Fig. 5: Simulation results of the multi-span 9-channel optical fiber transmission system for three fiber types: (a) SMF, (b) NZDSF and (c) LSF. The non-symmetric constellation voronoi4_32 was propagated. The red solid line is \(\eta_{\text{ss,XCI}}+\eta_{\text{ss,MCI}}\) (i.e., XCI + MCI). The green solid line is the XPM approximation \(\eta_{\text{ss,XPM}}\). The marks are simulations with single-channel nonlinearity (SCI) removed.

Fig. 5 is a plot similar to Fig. 4, showing the NLI power coefficient \(\eta\) versus the number of spans in the link for multi-span transmission with 9 WDM channels and the voronoi4_32 modulation format. As expected, Fig. 5 (a)-(c) show that the 4D model with MCI taken into account matches the SSFM results well for all three considered fibers, including the low and ultra-low dispersion fibers. This also illustrates that the contribution of the MCI terms cannot be fully neglected, especially in low accumulated dispersion scenarios.
In Fig. 6 (a), \(\eta_{\text{x}}\) (left) and \(\eta_{\text{y}}\) (right) are estimated by using i) the 4D model including SCI, XCI and MCI (red bars), ii) the EGN model (blue bars), and iii) SSFM (brown bars) for 16-point, 64-point, 128-point, 256-point and 4096-point constellations, respectively. Note that \((\eta_{\text{x}},\eta_{\text{y}})\approx(P_{\text{x}}(\text{SNR}_{\text{x}}\cdot P^{3})^{-1},P_{\text{y}}(\text{SNR}_{\text{y}}\cdot P^{3})^{-1})\), where \(\eta_{\text{x}}\), \(\eta_{\text{y}}\), \(P_{\text{x}}\), \(P_{\text{y}}\), \(\text{SNR}_{\text{x}}\) and \(\text{SNR}_{\text{y}}\) are the NLI power coefficient, transmitted power and \(\text{SNR}^{\text{est}}\) over the x and y polarizations, respectively. For PM-2D modulation formats such as PM-64QAM, our model gives the same result as the EGN model and approximates the SSFM results. For non-symmetric constellations, the EGN model leads to an inaccuracy of up to 0.76 dB for \(\eta_{\text{x}}\) of "c4_16". Even for symmetric constellations, the EGN model leads to an inaccuracy of up to 0.60 dB for \(\eta_{\text{y}}\) of "ab4_256". Such errors between the EGN model results and the SSFM results indicate clear limitations of the EGN model in predicting the NLI of 4D modulation formats. For all constellations shown, the 4D model is able to predict the NLI of DP-4D modulation formats within an acceptable margin of error (\(<\) 0.07 dB).
To further validate the 4D model, more 4D modulation formats with different constellation cardinalities are investigated in terms of the NLI power coefficient \(\eta\). Among these, the minimum values of the NLI power coefficient \(\eta\) are shown in Fig. 6 (b) for each \(M\). The corresponding values are also listed in Table III for the x- and y-polarizations. For all constellations shown, the EGN model overestimates the NLI power coefficient \(\eta\) with deviations of up to 1.96 dB (\(M\)=8); the reason is illustrated in _Example 1_. Conversely, the 4D model is in excellent agreement with the simulation results, with a maximum deviation of only about 0.15 dB over all constellation cardinalities \(M\).
The weights of the signal-ASE noise interaction in the prediction of the effective SNR for general DP-4D modulation formats were shown in earlier work, suggesting that the signal-ASE noise interaction plays a role in long-haul transmission systems. Therefore, in this subsection, we validate this conclusion via (4), including both the signal-signal interaction and the signal-ASE noise interaction, and we further study it in multi-channel WDM systems. The effective-SNR estimation difference between a given model (4D with S-S only, or 4D with S-S and S-N) and the SSFM simulation is defined as \(\Delta\text{SNR}_{\text{eff}}^{\text{model}}\triangleq\text{SNR}_{\text{eff}}^{\text{model}}-\text{SNR}_{\text{eff}}^{\text{SSFM}}\).
Fig. 7 shows the NLI power estimation for various modulation formats with different cardinality \(M\) over 8000 km of SMF. For the studied constellations, the NLI powers estimated via the proposed 4D model with S-N are all closer to those estimated by SSFM, and the estimation accuracy is improved by up to 8.5%. The 4D model with S-S only leads to inaccuracies of up to 0.33 dB and 0.2 dB in \(\Delta\text{SNR}_{\text{eff}}^{\text{4D}}\) for "voronoi4_32" and "4D-OS512", respectively. "voronoi4_32" is a highly asymmetric constellation in X/Y-pol, while symmetric constellations such as 4D-OS512 and 4D-PRS64 have almost the same \(\Delta\text{SNR}_{\text{eff}}^{\text{4D}}\).
Fig. 8 shows the transmission performance estimation in terms of normalized generalized mutual information (NGMI) for the 4D model with S-N using the 4D-OS128 modulation format. In Fig. 8 (a) and (b), we observe that the 4D model with S-N reduces the transmission reach prediction error by 4% relative to the 4D model with S-S only at an NGMI of 0.8, for single-channel and multi-channel systems, respectively. The prediction accuracy gain comes from the fact that the 4D model with S-S only overestimates \(\text{SNR}_{\text{eff}}\) by 0.3 dB compared to the SSFM results. As shown in the insets of Fig. 8 (a) and (b), the proposed 4D model with S-N keeps the SNR deviation within 0.1 dB for both single-channel and multi-channel systems, compared to the 4D model with S-S only. Therefore, the 4D model with S-N provides better accuracy in performance prediction than the 4D model with S-S only, especially for long-distance transmission.
## V Conclusion
An "ultimate" 4D nonlinear interference model by accounting the intra- and inter-channel nonlinearity for the entire DP-4D class of modulation formats was proposed and validated in detail for both single and multi-channel optical transmission system. Unlike the EGN model ignoring the inter-polarization dependency, no more assumptions are made on either the marginal or joint statistics of the two polarization components of the transmitted 4D modulation formats besides being zero-mean. Therefore, the proposed model has the ability to predict the SCI, XCI and MCI nonlinear terms which divide into intra-polarization and cross-polarization terms for arbitrary DP-4D modulation formats. In addition, the proposed model is accurate for various scenarios, including both high and low dispersion fiber systems. By comparing the experienced NLI for different 4D modulation formats, the numerical results show that the EGN model overestimates the NLI power up to 1.96 dB, while the proposed 4D NLI model can reduce the NLI power estimation error within 0.15 dB from the SSFM simulation results.
On the other hand, we further assessed the signal-ASE noise interaction in multi-channel WDM systems. By lifting an underlying assumption of ignoring the signal-ASE noise nonlinear interactions, simulation results show a performance prediction improvement in terms of effective SNR with respect to the existing 4D model in a 9-channel transmission system.

Fig. 7: Simulation results of multi-span optical fiber transmission with a single channel. NLI power (\(P_{\text{NLI}}=\sigma_{\text{x}}^{2}+\sigma_{\text{y}}^{2}\)) for various 4D modulation formats at a distance of 8000 km.

Fig. 8: Transmission performance using the 4D-OS128 modulation format.
In summary, the 4D NLI model in this work provides a powerful analytical tool for designing nonlinear-tolerant 4D modulation formats. Future work will focus on deriving a simple closed-form formula to enable real-time use.
## Appendix A Derivation of Transmitted signal model
The transmitted signal model is crucial in the derivation of the NLI model. Under the assumption that the transmitted signal is periodic with period \(T\), with \(W\) symbols transmitted every period \(T\), the WDM transmitted signal can be expressed as
\[E(t,0)=\sum_{h=1}^{\#ch}\widetilde{E}_{h}(t,0)e^{j2\pi f_{c,h}t}, \tag{16}\]

where \(h\) is the WDM channel index, \(f_{c,h}\) is the center frequency of the \(h\)-th channel, and
\[\widetilde{E}(t,0)=\sum_{n=-(W-1)/2}^{(W-1)/2}\mathbf{a}_{n}p(t-nT_{s}), \tag{17}\]
is the transmitted signal of a single (generic) channel, in which \(T_{s}=1/R_{s}=T/W\) is the symbol period and \(R_{s}\) is the symbol rate.
Taking the Fourier transform of \(E(t,0)\), we obtain
\[E(f,0)=\sqrt{\Delta_{f}}\sum_{h=1}^{\#ch}\sum_{k=-\infty}^{+\infty}\zeta_{k,h}\delta(f-f_{c,h}-k\Delta_{f}), \tag{18}\]
where \(\Delta_{f}=1/T\) and
\[\zeta_{k,h}=[\zeta_{x,k,h},\zeta_{y,k,h}]^{\mathrm{T}}=\sqrt{\Delta_{f}}P(f_{c,h}+k\Delta_{f})\sum_{n=-(W-1)/2}^{(W-1)/2}\mathbf{a}_{n}e^{-j2\pi\frac{kn}{W}}, \tag{19}\]
is the discrete Fourier transform of the symbol sequence transmitted on the \(h\)-th channel (\(\mathbf{a}_{n}\) or \(\mathbf{b}_{n}\)).
Here, we only consider a COI, whose center frequency is set to zero, and an INT channel with center frequency \(f_{c}\). Therefore, the transmitted signal model for the two channels (COI and INT) can be simplified as
\[\begin{split} E(f,0)=&\sqrt{\Delta_{f}}\sum_{k=- \infty}^{+\infty}\nu_{k}\delta(f-k\Delta_{f})\\ &+\sqrt{\Delta_{f}}\sum_{k=-\infty}^{+\infty}\xi_{k}\delta(f-f_{ c}-k\Delta_{f}),\end{split} \tag{20}\]
where
\[\nu_{k}=[\nu_{\mathrm{x},k},\nu_{\mathrm{y},k}]^{\mathrm{T}}=\sqrt{\Delta_{f}} P(k\Delta_{f})\sum_{n=-(W-1)/2}^{(W-1)/2}\mathbf{a}_{n}e^{-j2\pi\frac{kn}{W}}, \tag{21}\]
\[\begin{split}\xi_{k}&=[\xi_{\mathrm{x},k},\xi_{ \mathrm{y},k}]^{\mathrm{T}}=\sqrt{\Delta_{f}}P(f_{c}+k\Delta_{f})\sum_{n=-(W-1 )/2}^{(W-1)/2}\mathbf{b}_{n}e^{-j2\pi\frac{kn}{W}}.\end{split} \tag{22}\]
## Appendix B PSD of first order NLI
Here we use the frequency-domain regular perturbation (RP) approach, to first order in the nonlinear coefficient \(\gamma\). The derivation of the Manakov solution is similar to [24]. The solution after \(N_{s}\) spans is expressed as (8), where \(\mu(f_{1},f_{2},f,N_{s},L_{s})\) can be expressed as
\[\mu(f_{1},f_{2},f,N_{s},L_{s})\triangleq\frac{1-e^{-\alpha L_{s}}e^{j4\pi^{2}\beta_{2}(f-f_{1})(f_{2}-f_{1})L_{s}}}{\alpha-j4\pi^{2}\beta_{2}(f-f_{1})(f_{2}-f_{1})}\cdot\sum_{l=1}^{N_{s}}e^{-j4\pi^{2}\beta_{2}(f-f_{1})(f_{2}-f_{1})(l-1)L_{s}}. \tag{23}\]
This formula is a function of the link parameters only and does not depend on the characteristics of the launched signal, so it is called the 'link function'. As shown in this formula, the link function represents the contribution of the NLI field at \((f_{1},f_{2},f)\) accumulated over the different spans. On the other hand, the link function depends on the fiber parameters, as mentioned in Sec. III.
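A small numerical helper (our own sketch, not the paper's code, and assuming the per-span accumulation phase enters as \((l-1)L_{s}\) as in (23)) that evaluates the link function is:

```python
import numpy as np

# Link function mu(f1, f2, f) of (23) for a homogeneous link of n_spans identical
# spans: alpha [1/m], beta2 [s^2/m], span_len [m]; frequencies in Hz.
def link_function(f1, f2, f, n_spans, span_len, alpha, beta2):
    dphi = 4 * np.pi**2 * beta2 * (f - f1) * (f2 - f1)      # phase-mismatch rate
    span_term = (1 - np.exp(-alpha * span_len) * np.exp(1j * dphi * span_len)) \
                / (alpha - 1j * dphi)
    # coherent (phased-array) accumulation over the spans
    array_factor = np.sum(np.exp(-1j * dphi * np.arange(n_spans) * span_len))
    return span_term * array_factor
```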
Substituting the spectrum of the transmitted periodic signal (9) in (8), for instance, we obtain the x component,
\[\begin{split}& E_{\mathrm{x}}(f,N_{s},L_{s})=-j\frac{8}{9}\gamma \Delta_{f}^{3/2}\sum_{i=-\infty}^{\infty}\delta(f-i\Delta_{f})\\ &\left[\sum_{S_{i}}(\nu_{\mathrm{x},k}\nu_{\mathrm{x},m}^{*}\nu_ {\mathrm{x},n}+\nu_{\mathrm{y},k}\nu_{\mathrm{y},m}^{*}\nu_{\mathrm{x},n})\mu( S_{i},N_{s},L_{s})\right.\\ &\left.+\sum_{X1_{i}}(2\nu_{\mathrm{x},k}\xi_{\mathrm{x},m}^{*} \xi_{\mathrm{x},n}+\nu_{\mathrm{y},k}\xi_{\mathrm{y},m}^{*}\xi_{\mathrm{x},n}+ \nu_{\mathrm{x},k}\xi_{\mathrm{y},m}^{*}\xi_{\mathrm{y},n})\mu(X1_{i},N_{s},L_{ s})\right.\\ &\left.+\sum_{X2_{i}}(2\nu_{\mathrm{x},k}\nu_{\mathrm{x},m}^{*} \xi_{\mathrm{x},n}+\nu_{\mathrm{y},k}\nu_{\mathrm{y},m}^{*}\xi_{\mathrm{x},n}+ \nu_{\mathrm{x},k}\nu_{\mathrm{y},m}^{*}\xi_{\mathrm{y},n})\mu(X2_{i},N_{s},L_{ s})\right.\\ &\left.+\sum_{X3_{i}}(\nu_{\mathrm{x},k}\xi_{\mathrm{x},m}^{*}\nu_ {\mathrm{x},n}+\nu_{\mathrm{y},k}\xi_{\mathrm{y},m}^{*}\nu_{\mathrm{x},n})\mu(X3 _{i},N_{s},L_{s})\right.\\ &\left.+\sum_{X4_{i}}(\xi_{\mathrm{x},k}\xi_{\mathrm{x},m}^{*}\xi_ {\mathrm{x},n}+\xi_{\mathrm{y},k}\xi_{\mathrm{y},m}^{*}\xi_{\mathrm{x},n})\mu(X4 _{i},N_{s},L_{s})\right.\\ &\left.+\sum_{X5_{i}}(\xi_{\mathrm{x},k}\nu_{\mathrm{x},m}^{*}\xi_ {\mathrm{x},n}+\xi_{\mathrm{y},k}\nu_{\mathrm{y},m}^{*}\xi_{\mathrm{x},n})\mu(X 5_{i},N_{s},L_{s})\right],\end{split} \tag{24}\]
where
\[\begin{split} X1_{i}&=S_{i}=\{(k,m,n):(k-m+n)\Delta_{f}=i \Delta_{f}\}\\ X2_{i}&=X4_{i}=\{(k,m,n):(k-m+n)\Delta_{f}+f_{c}=i \Delta_{f}\}\\ X3_{i}&=\{(k,m,n):(k-m+n)\Delta_{f}-f_{c}=i\Delta_{f} \}\\ X5_{i}&=\{(k,m,n):(k-m+n)\Delta_{f}+2f_{c}=i\Delta_{f} \}\,,\end{split} \tag{25}\]
are the integration regions.
The first summation \(S_{i}\) is SCI, which is dealt with in [24]. Note that the summation of \(X5_{i}\) is always zero [33,
Appendix C] when the channels do not overlap, i.e. when the INT channel center frequency satisfies \(f_{c}\geq R_{s}\).
The PSD of the first-order NLI is defined as [24, eq. (20)]
\[\bar{S}(f,N_{s},L_{s}) =[\bar{S}_{\text{x}}(f,N_{s},L_{s}),\bar{S}_{\text{y}}(f,N_{s},L_{s })]^{\text{T}}\] \[=[\mathbb{E}\{|E_{\text{x}}(f,N_{s},L_{s})|^{2}\},\mathbb{E}\{|E_ {\text{y}}(f,N_{s},L_{s})|^{2}\}], \tag{26}\]
where \(\mathbb{E}\{\cdot\}\) is the statistical expectation.
Substituting the expression (24) in (26), we obtain the PSD of received NLI. The PSD of NLI can be written as
\[S(f,N_{s},L_{s})= S_{\text{SCI},S_{i}}(f,N_{s},L_{s})+S_{\text{XCI},X1_{i}}(f,N_{ s},L_{s})\] \[+S_{\text{XCI},X2_{i}}(f,N_{s},L_{s})+S_{\text{XCI},X3_{i}}(f,N_{ s},L_{s})\] \[+S_{\text{XCI},X4_{i}}(f,N_{s},L_{s}), \tag{27}\]
where \(S_{\text{XCI},X1_{i}}(f,N_{s},L_{s})\), \(S_{\text{XCI},X2_{i}}(f,N_{s},L_{s})\), \(S_{\text{XCI},X3_{i}}(f,N_{s},L_{s})\) and \(S_{\text{XCI},X4_{i}}(f,N_{s},L_{s})\) are the second, third, fourth and fifth terms of (24), respectively.
For the sake of brevity, we only present the detailed derivation of the set \(X1_{i}\) for the x component. The field on the y polarization can be found by swapping the subscripts x and y, and the other sets can be derived following the same approach.
In region \(X1_{i}\), we have
\[S_{\text{XCI},X1_{i},x}(f,N_{s},L_{s})=\left(\frac{8}{9}\right) ^{2}\gamma^{2}\Delta_{f}^{3}\sum_{i=-\infty}^{\infty}\delta(f-i\Delta_{f})\] \[\cdot\sum_{k,m,n\in X1_{i}}\sum_{k^{\prime},m^{\prime},n^{\prime} \in X1_{i}}\mu(X1_{i},N_{s},L_{s})\mu^{*}(X1_{i},N_{s},L_{s})\] \[\cdot[4\mathbb{E}\{\nu_{x,k}\nu_{x,k^{\prime}}^{*}\}\mathbb{E}\{ \xi_{x,m}^{*}\xi_{x,n}\xi_{x,m^{\prime}}\xi_{x,n^{\prime}}^{*}\}+2\mathbb{E}\{ \nu_{x,k}\nu_{x,k^{\prime}}^{*}\}\] \[\cdot\mathbb{E}\{\xi_{x,m}^{*}\xi_{x,n}\xi_{y,m^{\prime}}\xi_{x,n ^{\prime}}^{*}\}+2\mathbb{E}\{\nu_{x,k}\nu_{x,k^{\prime}}^{*}\}\mathbb{E}\{ \xi_{x,m}^{*}\xi_{x,n}\xi_{y,m^{\prime}}\xi_{y,n^{\prime}}^{*}\}\] \[+2\mathbb{E}\{\nu_{y,k}\nu_{x,k^{\prime}}^{*}\}\mathbb{E}\{\xi_{y,m}^{*}\xi_{x,m}\xi_{x,m^{\prime}}^{*}\}+\mathbb{E}\{\nu_{y,k}\nu_{x,k^{ \prime}}^{*}\}\mathbb{E}\{\xi_{y,m}^{*}\xi_{x,n^{\prime}}^{*}\}\] \[+2\mathbb{E}\{\nu_{x,k}\nu_{x,k^{\prime}}^{*}\}\mathbb{E}\{\xi_{y,m}^{*}\xi_{x,m}\xi_{x,m^{\prime}}^{*}\}+\mathbb{E}\{\nu_{x,k}\nu_{x,k^{ \prime}}^{*}\}\mathbb{E}\{\nu_{x,k}\nu_{y,k^{\prime}}^{*}\}\] \[\cdot\mathbb{E}\{\xi_{y,m}^{*}\xi_{y,m}^{*}\xi_{x,m^{\prime}}^{*} \}+\mathbb{E}\{\nu_{x,k}\nu_{x,k^{\prime}}^{*}\}\mathbb{E}\{\xi_{y,m}^{*}\xi_{y,m}\xi_{y,m^{\prime}}^{*}\xi_{y,n^{\prime}}^{*}\}\}. \tag{28}\]
This is now a six-dimensional sum, and the complete autocorrelation function consists of nine terms. For ease of analysis, we simply rewrite (28) as
\[S_{\text{XCI},X1_{i},x}(f,N_{s},L_{s})=\left(\frac{8}{9}\right) ^{2}\gamma^{2}\Delta_{f}^{3}\sum_{i=-\infty}^{\infty}\delta(f-i\Delta_{f})\] \[\cdot\sum_{k,m,n\in X1_{i}}\sum_{k^{\prime},m^{\prime},n^{\prime} \in X1_{i}}\left\{\mu(X1_{i},N_{s},L_{s})\mu^{*}(X1_{i},N_{s},L_{s})\right.\] \[\cdot\sum_{i\in\{0,1,\ldots,W-1\}^{6}}[A_{1,\mathbf{i}}(k,m,n,k^{ \prime},m^{\prime},n^{\prime})\] \[\left.+A_{2,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})+A_{3, \mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\right.\] \[\left.+A_{4,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})+A_{5, \mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\right.\] \[\left.+A_{6,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})+A_{7, \mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\right]\}, \tag{29}\]
where \(\mathbf{i}\triangleq(i_{1},i_{2},...,i_{6})\) and
\[A_{1,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\triangleq 4\Delta_{f}^{3}\,\mathbb{E}\{a_{x,i_{1}}a_{x,i_{4}}^{*}\}\mathbb{E}\{b_{x,i_{2}}^{*}b_{x,i_{3}}b_{x,i_{5}}b_{x,i_{6}}^{*}\}\cdot e^{-j\frac{2\pi}{W}(ki_{1}-mi_{2}+ni_{3}-k^{\prime}i_{4}+m^{\prime}i_{5}-n^{\prime}i_{6})}. \tag{30}\]
The other \(A_{l,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\) are similar to \(A_{1,\mathbf{i}}(k,m,n,k^{\prime},m^{\prime},n^{\prime})\).
Note that the sequences of vector RVs \(\mathbf{a}_{n}\) and \(\mathbf{b}_{n}\) are assumed to be independent, identically distributed (i.i.d.), with \(\mathbb{E}\{\mathbf{a}_{n}\}=\mathbb{E}\{\mathbf{b}_{n}\}=0\). Some combinations of \((i_{1},i_{2},i_{3},i_{4},i_{5},i_{6})\) give zero contribution; for example, in the case of \(i_{1}=i_{4}\neq i_{2}=i_{3}=i_{5}\neq i_{6}\), we have
\[\mathbb{E}\{a_{x,i_{1}}a_{x,i_{4}}^{*}\}\mathbb{E}\{b_{x,i_{2}}^{*}b_{x,i_{3}}b_{x,i_{5}}b_{x,i_{6}}^{*}\}=\mathbb{E}\{|a_{x,i_{1}}|^{2}\}\,\mathbb{E}\{|b_{x,i_{2}}|^{2}b_{x,i_{3}}\}\cdot\mathbb{E}\{b_{x,i_{6}}^{*}\}=0. \tag{31}\]
It follows that any combination involving a first-order moment together with correlations of other orders gives zero contribution. Therefore, we are left with four possible combinations:
\[\begin{split}1)\quad & i_{1}=i_{4},\quad i_{2}=i_{3}=i_{5}=i_{6},\\ 2)\quad & i_{1}=i_{4},\quad i_{2}=i_{3},\quad i_{5}=i_{6},\\ 3)\quad & i_{1}=i_{4},\quad i_{2}=i_{5},\quad i_{3}=i_{6},\\ 4)\quad & i_{1}=i_{4},\quad i_{2}=i_{6},\quad i_{3}=i_{5}.\end{split}\]
Here we give a detailed derivation of (30); the others can be derived following the same approach. As shown in (32), its second-order moment corresponds to the set \(i_{1}=i_{4}\),

\[\mathbb{E}\{\nu_{x,k}\nu_{x,k^{\prime}}^{*}\}=\Delta_{f}|P(k\Delta_{f})|^{2}\mathbb{E}\{|a_{x}|^{2}\}\sum_{i_{1}=i_{4}=0}^{W-1}e^{-j\frac{2\pi}{W}(k-k^{\prime})i_{1}}=R_{s}|P(k\Delta_{f})|^{2}\mathbb{E}\{|a_{x}|^{2}\}\,\delta_{k-k^{\prime}-pW}, \tag{32}\]

where \(p\in\mathbb{Z}\).
* The third case \(3)\) is defined as \(i_{2}=i_{5}\neq i_{3}=i_{6}\). Its \(4^{th}\)-order moment is \[\mathbb{E}^{3}\{\xi_{x,m}^{*}\xi_{x,n}\xi_{x,m^{\prime}}\xi_{x,n^{\prime}}^{*}\}=\mathcal{P}_{mnm^{\prime}n^{\prime}}\mathbb{E}^{2}\{|b_{x}|^{2}\}(R_{s}^{2}\delta_{m^{\prime}-m-pW}\delta_{n-n^{\prime}-pW}-R_{s}\Delta_{f}\delta_{n-m+m^{\prime}-n^{\prime}-pW}). \tag{37}\]
* The fourth case \(4)\) is defined as \(i_{2}=i_{6}\neq i_{3}=i_{5}\). Its \(4^{th}\)-order moment is \[\mathbb{E}^{4}\{\xi_{x,m}^{*}\xi_{x,n}\xi_{x,m^{\prime}}\xi_{x,n^{\prime}}^{*}\}=\mathcal{P}_{mnm^{\prime}n^{\prime}}|\mathbb{E}\{b_{x}^{2}\}|^{2}(R_{s}^{2}\delta_{-m-n^{\prime}-pW}\delta_{m^{\prime}+n-pW}-R_{s}\Delta_{f}\delta_{n-m+m^{\prime}-n^{\prime}-pW}).\]
Note that we remove the terms with \(n=m\) or \(n^{\prime}=m^{\prime}\) because they have been shown to contribute only a frequency-flat, constant phase shift which can be compensated at the receiver. Therefore, summing these contributions together, we obtain the solution of (30).
## Appendix C Correlation coefficient
In this section, the detailed expressions of the terms in (12) are given.
|
2306.02178 | When the Fourier transform is one loop exact? | We investigate the question: for which functions
$f(x_1,...,x_n),~g(x_1,...,x_n)$ the asymptotic expansion of the integral $\int
g(x_1,...,x_n) e^{\frac{f(x_1,...,x_n)+x_1y_1+...+x_ny_n}{\hbar}}dx_1...dx_n$
consists only of the first term. We reveal a hidden projective invariance of
the problem which establishes its relation with geometry of projective
hypersurfaces of the form $\{(1:x_1:...:x_n:f)\}$. We also construct various
examples, in particular we prove that Kummer surface in $\mathbb{P}^3$ gives a
solution to our problem. | Maxim Kontsevich, Alexander Odesskii | 2023-06-03T19:13:50Z | http://arxiv.org/abs/2306.02178v2 | # When the Fourier transform is one loop exact?
###### Abstract
We investigate the question: for which functions \(f(x_{1},...,x_{n}),\ g(x_{1},...,x_{n})\) the asymptotic expansion of the integral \(\int g(x_{1},...,x_{n})e^{\frac{f(x_{1},...,x_{n})+x_{1}y_{1}+...+x_{n}y_{n}}{ \hbar}}dx_{1}...dx_{n}\) consists only of the first term. We reveal a hidden projective invariance of the problem which establishes its relation with geometry of projective hypersurfaces of the form \(\{(1:x_{1}:...:x_{n}:f)\}\). We also construct various examples, in particular we prove that Kummer surface in \(\mathbb{P}^{3}\) gives a solution to our problem.
###### Contents
* 1 Introduction
* 2 Formal wave functions
* 3 Reformulation of the problem in terms of conical germs
* 3.1 Admissible pairs and projectively dual hypersurfaces
* 3.2 Projective invariance of the problem
* 3.3 Generalization to the projectively dual lower-dimensional cones
* 4 Reformulation of the problem in terms of constraints on wave functions
* 4.1 Abstract formalism
* 4.2 Explicit formulas in general case
* 4.3 Algebraic case
* 4.4 Explicit formulas in the algebraic case
* 4.5 A question about monodromic regular holonomic \(D\)-modules
* 5 Examples of admissible hypersurfaces and corresponding pairs
* 5.1 Quadratic hypersurfaces
* 5.2 Hypersurfaces admitting quadratic parametrization
* 5.3 Ruled surfaces in \(\mathbb{P}^{3}\)
* 5.4 Steiner Roman surface
* 5.5 Generalized Steiner Roman hypersurfaces
* 5.6 Kummer surfaces in \(\mathbb{P}^{3}\)
* 5.7 Extensions of admissible pairs and families of surfaces of degree four
* 5.8 Toric hypersurfaces
* 5.9 Segre cubic in \(\mathbb{P}^{4}\)
* 6 Classification of admissible pairs of functions in one variable
* 7 Toward a classification of admissible pairs of functions in two variables
* 8 A potential application to generalized Dirichlet series
* 9 Conjectures and open questions
* Appendix. Explicit formulas for equations
## 1 Introduction
This paper is inspired by beautiful results of [1] where a number of surprising identities for the Fourier transforms were discovered, for example
\[\int_{\mathbb{R}^{2}}sign(x_{2})|x_{2}|^{-\frac{2}{3}}e^{i\frac{x_{1}^{3}}{x_{2} }}e^{i(x_{1}y_{1}+x_{2}y_{2})}dx_{1}dx_{2}=\frac{2\pi i}{\sqrt{3}}\,sign(y_{2}) |y_{2}|^{-\frac{2}{3}}e^{\frac{i}{27}\frac{y_{1}^{3}}{y_{2}}}.\]
The problem studied in [1] is the following. Let \(F\) be a local field, \(\psi\) a nontrivial unitary additive character of \(F\), and \(\chi_{1},...,\chi_{k}\) multiplicative characters of \(F\). A complex valued distribution of the form
\[\psi(Q(x_{1},...,x_{n}))\prod_{j=1}^{k}\chi_{j}(P_{j}(x_{1},...,x_{n}))\]
is called elementary if \(Q\) is a rational function and \(P_{j}\) are polynomials. The problem is when the Fourier transform of such a distribution is also elementary. It was observed in [1] that one can formally set \(\psi(x)=e^{\frac{ix}{\hbar}}\) and use the formal stationary phase method. This observation leads us to the following formal version of the problem, where we no longer consider actual integrals and deal only with the asymptotic expansion of oscillating integrals. Moreover, our functions \(f,g\) (the analogues of \(Q,P_{j}\) from [1]) are no longer assumed to be rational and can be locally analytic or just formal germs.
Let \(f(x_{1},...,x_{n})\) be a function in \(n\) variables such that its Hessian is not identically zero:
\[\det\left(\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}\right)_{i,j} \neq 0. \tag{1.1}\]
Let \(\vec{x}=\vec{t}\) be a critical point of the function1
Footnote 1: Here and in the sequel we use vector notations like \(\vec{x}=(x_{1},...,x_{n})\), \(\vec{t}=(t_{1},...,t_{n})\) etc.
\[f(x_{1},...,x_{n})+x_{1}y_{1}+...+x_{n}y_{n}. \tag{1.2}\]
It follows from (1.1) that the mapping \(\vec{y}\mapsto\vec{t}\) has a non-degenerate Jacobian at a generic point. Indeed, equating to zero the first derivatives of (1.2) with respect to the variables \(x_{1},...,x_{n}\) we get
\[y_{i}=-\frac{\partial f(t_{1},...,t_{n})}{\partial t_{i}},\quad i=1,...,n \tag{1.3}\]
and the Jacobian of this map is proportional to the Hessian (1.1) at \(\vec{x}=\vec{t}\). Let \(g(x_{1},...,x_{n})\) be another function in \(x_{1},...,x_{n}\). By the formal Fourier transform of the function \(g(\vec{x})e^{\frac{f(\vec{t})}{\hbar}}\) we mean the perturbative expansion of the formal integral
\[\int g(x_{1},...,x_{n})e^{\frac{f(x_{1},...,x_{n})+x_{1}y_{1}+...+x_{n}y_{n}}{ \hbar}}dx_{1}...dx_{n}\]
given by the stationary phase method at the critical point \(\vec{x}=\vec{t}\). Recall that this expansion has the form
\[\int g\ e^{\frac{f+x_{1}y_{1}+...+x_{n}y_{n}}{h}}dx_{1}...dx_{n}=(2\pi\hbar)^{\frac {n}{2}}e^{\frac{f}{h}}\det\Bigg{(}-\frac{\partial^{2}f}{\partial x_{i}\partial x _{j}}\Bigg{)}_{i,j}^{-\frac{1}{2}}\Bigg{|}_{\vec{x}=\vec{t}}\ \sum_{k=0}^{\infty}A_{k}\hbar^{k} \tag{1.4}\]
where the coefficients \(A_{k}\) of the formal power series in \(\hbar\) can be written as differential polynomials in \(f(t_{1},...,t_{n}),\ g(t_{1},...,t_{n})\) with coefficients in \(\mathbb{Q}\), divided by \(\det\left(\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}\right)_{i,j}^{3k}\Big{|}_{\vec{x}=\vec{t}}\), see Appendix.
We want to study the following question: for which functions \(f,g\) we have
\[A_{k}=0\quad\mbox{ for all }\quad k\geq 1. \tag{1.5}\]
This condition can be written more explicitly in the form2
Footnote 2: Here and in the sequel such identities for integrals with unspecified domain of integration mean that the r.h.s. is the perturbative expansion of the l.h.s. given by stationary phase method. Moreover, we often write the r.h.s. up to multiplication by a fourth root of unity \(\pm 1,\pm i\).
\[\int g(\vec{x})e^{\frac{f(\vec{x})+\vec{x}\cdot\vec{y}}{h}}d^{n}\vec{x}=(2\pi \hbar)^{\frac{n}{2}}\hat{g}(\vec{y})e^{\frac{f(\vec{y})}{h}}. \tag{1.6}\]
Here
\[\hat{f}(\vec{y})=(f(\vec{x})+\vec{x}\cdot\vec{y})\Big{|}_{\vec{x}=\vec{t}} \tag{1.7}\]
is the Legendre transform of function \(f\), i.e. the critical value of \(f(\vec{x})+\vec{x}\cdot\vec{y}\), and
\[\hat{g}(\vec{y})=g(\vec{x})\cdot\det\Bigg{(}-\frac{\partial^{2}f}{\partial x_ {i}\partial x_{j}}\Bigg{)}_{i,j}^{-\frac{1}{2}}\Bigg{|}_{\vec{x}=\vec{t}}. \tag{1.8}\]
By a loose analogy with the terminology from physics literature we call the condition (1.6) the 1-loop exactness of the formal Fourier transform.3
Footnote 3: The name is not totally precise as the Feynman graphs appearing in the expansion (1.4) are not necessarily connected, see Appendix.
**Definition 1.1.** The pair of functions \(f,g\) is called admissible if the Hessian of \(f\) is not identically zero (see (1.1)), the function \(g\) is not identically zero, and the identity (1.6) holds.
Notice that if we fix a function \(f\), then the set of functions \(g\) such that the pair \(f,g\) is admissible or \(g=0\), is a vector space, which we denote by \(V_{f}\).
**Definition 1.2.** The rank of a function \(f\) is the dimension of the vector space of functions
\[V_{f}=\{g\ |\ f,g\mbox{ is an admissible pair or }g=0\}.\]
The rank of an admissible pair \(f,g\) is the rank of \(f\).
**Definition 1.3.** A function \(f\) is admissible if its rank is larger than zero. In other words, \(f\) is admissible if there exists a (non-zero) function \(g\) such that pair \(f,g\) is admissible.
Sometimes, if \(f\) is fixed and clear from the context, we will call \(g\) admissible if \(f,g\) is an admissible pair.
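For instance, a purely quadratic function gives the simplest admissible pair (the corresponding quadratic hypersurfaces are discussed in Section 5.1): for

\[f(\vec{x})=\frac{1}{2}\langle Q\vec{x},\vec{x}\rangle,\quad\det Q\neq 0,\qquad g\equiv 1,\]

where \(Q\) is a non-degenerate symmetric matrix, the Gaussian integral is computed exactly by completing the square,

\[\int e^{\frac{\frac{1}{2}\langle Q\vec{x},\vec{x}\rangle+\vec{x}\cdot\vec{y}}{\hbar}}d^{n}\vec{x}=(2\pi\hbar)^{\frac{n}{2}}\det(-Q)^{-\frac{1}{2}}e^{-\frac{\langle Q^{-1}\vec{y},\vec{y}\rangle}{2\hbar}},\]

so \(\hat{f}(\vec{y})=-\frac{1}{2}\langle Q^{-1}\vec{y},\vec{y}\rangle\), \(\hat{g}=\det(-Q)^{-\frac{1}{2}}\), and all the coefficients \(A_{k}\), \(k\geq 1\), vanish since the stationary phase expansion has no cubic or higher vertices.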
**Remark 1.1.** First, in the case when \(f\) is a concave function defined globally on \(\mathbb{R}^{n}\), and satisfying the condition
\[\lim_{|\vec{x}|\rightarrow\infty}\frac{f(\vec{x})}{|\vec{x}|}=-\infty\quad( \mbox{e.g.}\quad f(\vec{x})=-\frac{1}{2}\sum_{i=1}^{n}x_{i}^{2}\ ),\]
and \(g\) is also defined globally,
\[|g(\vec{x})|\leq C_{1}e^{C_{2}|\vec{x}|},\]
the integral \(\int_{\mathbb{R}^{n}}g(\vec{x})e^{\frac{f(\vec{x})+\vec{x}\cdot\vec{y}}{\hbar}}d\vec{x}\) is convergent for \(\hbar>0\) and admits the asymptotic expansion as in (1.4).
In the general case where \(f,g\) are germs of analytic functions, or even formal power series with coefficients in a field \({\bf k}\) of characteristic zero, by the formal integral we mean the expression (1.4) where all terms \(A_{i}\) are differential polynomials divided by an integer power of the Hessian and therefore make sense, and the front factors are considered as formal symbols.
Also, in the usual Fourier transform one uses \(\sqrt{-1}\) in the exponent. We omit it for our convenience, in order to simplify formulas.
In the sequel we will also replace \(\det\Big{(}-\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}\Big{)}_{i,j}\) by \(\det\Big{(}\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}\Big{)}_{i,j}\), which changes the result by a fourth root of unity.
**Remark 1.2.** Here we explain an explicit procedure producing the coefficients \(A_{i}\) above, see Appendix for details. The calculation of the integral in the l.h.s. of (1.4) near a non-degenerate critical point can be reduced (after some shift of the variables \(x_{1},...,x_{n}\)) to the following case.
Let \(F=F_{2}+F_{3}+...\) be a formal power series in \(n\) variables \(x_{1},...,x_{n}\) with coefficients in a field \({\bf k}\supset\mathbb{Q}\) where \(F_{i}\) are homogeneous polynomials of degree \(i\), and \(F_{2}\) is a non-degenerate quadratic form. We denote by \(F_{2}^{\prime\prime}\) the corresponding symmetric matrix. Let \(g=1+...\in{\bf k}[[x_{1},...,x_{n}]]\) be another power series (series \(g\) starts with \(1\) just for convenience). The formal integral
\[\frac{(\det F_{2}^{\prime\prime})^{\frac{1}{2}}}{(2\pi\hbar)^{\frac{n}{2}}} \int g(\vec{x})e^{\frac{F(\vec{x})}{\hbar}}dx_{1}...dx_{n}=1+...\in{\bf k}[[ \hbar]]\]
can be defined in the following way:
First we rescale variables \(x_{i}=\sqrt{\hbar}\ \tilde{x}_{i}\). After that we have
\[\frac{F(\vec{x})}{\hbar}=F_{2}(\tilde{x}_{1},...,\tilde{x}_{n})+\hbar^{\frac{ 1}{2}}F_{3}(\tilde{x}_{1},...,\tilde{x}_{n})+\hbar^{\frac{2}{2}}F_{4}(\tilde{x }_{1},...,\tilde{x}_{n})+...\]
\[g(\vec{x})e^{\frac{F(\vec{x})}{\hbar}}=g(\hbar^{\frac{1}{2}}\tilde{\vec{x}})e^{F_{2}(\tilde{\vec{x}})}e^{\hbar^{\frac{1}{2}}F_{3}(\tilde{\vec{x}})+\hbar^{\frac{2}{2}}F_{4}(\tilde{\vec{x}})+...}\]
The r.h.s. of this formula can be expanded as a power series in \(\hbar^{\frac{1}{2}}\) and \(\tilde{x}_{1},...,\tilde{x}_{n}\). In this way we reduce to computing integrals of the form \(\int he^{F_{2}}d\tilde{x}_{1}...d\tilde{x}_{n}\) where \(h\) is a monomial in \(\tilde{x}_{1},...,\tilde{x}_{n}\). This formal integral is defined to be zero if the degree of \(h\) is odd, and as \(\frac{1}{m!}\Delta_{F_{2}}^{m}(h)\int e^{F_{2}}d\tilde{x}_{1}...d\tilde{x}_{n}\) if the degree of \(h\) is \(2m\). Here \(\Delta_{F_{2}}=-\frac{1}{2}\sum_{i,j}b_{ij}\partial_{\tilde{x}_{i}}\partial_{\tilde{x}_{j}}\) where \((b_{ij})=(F_{2}^{\prime\prime})^{-1}\), and we formally declare \(\int e^{F_{2}}d\tilde{x}_{1}...d\tilde{x}_{n}:=\frac{(2\pi\hbar)^{\frac{n}{2}}}{(\det F_{2}^{\prime\prime})^{\frac{1}{2}}}\).
Notice that the final expression \(\sum_{i=0}^{\infty}A_{i}\hbar^{i}\) is a power series in \(\hbar\) because integrals with monomials of odd degree are zero.
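As an illustration, the following small sympy sketch (our own, for \(n=1\), with \(F\) and \(g\) truncated to the polynomials written in the comments) implements this procedure literally and returns the first coefficients \(A_{0},A_{1},A_{2}\).

```python
import sympy as sp

# Sketch of the procedure of Remark 1.2 for n = 1 (our own illustration):
# F = -x^2/2 + c3*x^3 + c4*x^4,  g = 1 + g1*x + g2*x^2.
x, hbar, c3, c4, g1, g2 = sp.symbols('x hbar c3 c4 g1 g2')

a = sp.Integer(-1)        # F2 = a*x^2/2 (non-degenerate)
b = 1/a                   # (F2'')^{-1}
order = 2                 # number of hbar-orders kept

# rescale x -> sqrt(hbar)*x: non-Gaussian part of F/hbar and the prefactor g
rest = sp.sqrt(hbar)*c3*x**3 + hbar*c4*x**4
g_resc = 1 + sp.sqrt(hbar)*g1*x + hbar*g2*x**2

# k-th power of 'rest' is O(hbar^{k/2}), so k <= 2*order suffices for A_0..A_order
integrand = sp.expand(g_resc*sum(rest**k/sp.factorial(k) for k in range(2*order + 1)))

def moment(p):
    """Normalized Gaussian moment <x^p> = (1/m!)*Delta_{F2}^m(x^p); zero for odd p."""
    if p % 2:
        return sp.Integer(0)
    m = p // 2
    return sp.factorial2(2*m - 1)*(-b)**m

# Wick contraction: replace every monomial x^p by its moment
result = sp.expand(sum(coeff*moment(mono[0])
                       for mono, coeff in sp.Poly(integrand, x).terms()))

A = [sp.simplify(result.coeff(hbar, k)) for k in range(order + 1)]
print(A)    # A[0] = 1, and A[k] is the coefficient of hbar^k in (1.4)
```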
**Remark 1.3.** If \(f,g\) is an admissible pair, then the pair \(\hat{f},\hat{g}\) of functions from the r.h.s. of (1.6) is also admissible, because the Fourier transform is essentially an involution. Moreover, we have isomorphism of vector spaces \(V_{f}\cong V_{\hat{f}}\) given by \(g\mapsto\hat{g}\). In particular, \(\dim V_{f}=\dim V_{\hat{f}}\) so \(f\) and \(\hat{f}\) have the same rank.
**Remark 1.4.** Let \(\Sigma\subset\mathbb{P}^{n+1}\) be a projective hypersurface locally defined by
\[x_{n+1}=x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}. \tag{1.9}\]
Then its projective dual hypersurface \(\widehat{\Sigma}\subset\mathbb{P}^{n+1}\) is locally defined by
\[y_{0}=y_{n+1}\hat{f}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}} \Big{)} \tag{1.10}\]
where \(\hat{f}\) is the Legendre transform of \(f\) given by (1.7) and coordinates \((y_{0},...,y_{n+1})\) are dual to \((x_{0},...,x_{n+1})\). This projective duality plays an important role in this paper.
Let us describe the content of the paper.
In Section 2 we introduce the formalism of formal wave functions which allows to write expressions involving exponentials, integrals and delta functions in a purely algebraic context over an arbitrary field \({\bf k}\) of characteristic zero.
In Section 3 we reformulate our problem in terms of projectively dual hypersurfaces defined by (1.9) and (1.10). We show that admissibility condition does not depend on the choice of projective coordinates. In this way we deduce a projective invariance of the original problem: if \(f\) is admissible and we make an arbitrary linear change of variables
\[x_{i}=\sum_{j=0}^{n+1}a_{i,j}\tilde{x}_{j},\quad i=0,...,n+1\]
in (1.9), solve with respect to \(\tilde{x}_{n+1}\) writing the result in the form
\[\tilde{x}_{n+1}=\tilde{x}_{0}\tilde{f}\Big{(}\frac{\tilde{x}_{1}}{\tilde{x}_{0}},...,\frac{\tilde{x}_{n}}{\tilde{x}_{0}}\Big{)}, \tag{1.11}\]
then \(\tilde{f}\) is also admissible for arbitrary non-degenerate matrix \((a_{i,j})\in GL(n+2)\).
In Section 4 we explain how to reduce our problem in the case of algebraic hypersurfaces to a finite system of differential equations for \(f,g\). Notice that in general our system of differential equations (1.5) is infinite. In the special case when \(g\) is a solution of a regular holonomic system of differential equations, our question about admissible pairs essentially reduces to the following interesting problem concerning algebraic holonomic \(D\)-modules:
_find regular holonomic \(D\)-modules \(M\) on the affine space \(\mathbb{A}^{n+2}\) which are monodromic (i.e. the action of the Euler operator \(\sum_{i=0}^{n+1}x_{i}\partial_{x_{i}}\) is locally finite) and such that the singular support of \(M\) does not contain conormal bundles to \(\mathbb{A}^{n+2}\) and \(\{0\}\subset\mathbb{A}^{n+2}\)._
In Section 5 we study numerous examples. In particular, we prove that Kummer quartic surface in \(\mathbb{P}^{3}\) and Segre cubic in \(\mathbb{P}^{4}\) are both admissible. We also construct a huge family of admissible functions for arbitrary \(n\geq 2\). These functions are defined parametrically as
\[f=\frac{\phi_{n+1}}{\phi_{0}},\quad x_{i}=\frac{\phi_{i}}{\phi_{0}},\quad i=1,...,n,\]
\[\phi_{i}=\frac{1}{2}\sum_{j,k=0}^{n}a_{i,j,k}u_{j}u_{k},\quad i=0,...,n+1\]
where \(u_{0},...,u_{n}\) are coordinates on the corresponding hypersurface, \(a_{i,j,k}\in\mathbf{k}\) are constants and \(a_{i,k,j}=a_{i,j,k}.\) We prove that any non-degenerate function of this family (i.e. with the generically non-vanishing Hessian) is admissible and \(\dim V_{f}\geq n+1\) for \(n>2\). We also construct examples of such functions for arbitrary \(n\geq 2\) with the rank equal to \((n+1)!\).
In Section 6 we present the classification of admissible pairs of functions in one variable.
In Section 7 we show some classification results of admissible pairs of functions in two variables.
In Section 8 we outline a potential application of our studies to the construction of generalized Dirichlet series based on the Poisson summation formula and the Mellin transform.
In Section 9 we formulate several conjectures and open questions. Notice that some other conjectures and open questions are discussed in the main part of the paper.
In Appendix we recall how to write explicitly the infinite system of differential equations (1.5) for \(f,g\) based on the stationary phase method and the Feynman diagrams technique.
## 2 Formal wave functions
Here we introduce a rigorous language of wave functions which allows us to use expressions like exponential functions depending on small parameter \(\hbar\) and written as \(\exp(f(x_{1},\ldots,x_{n})/\hbar)\), or delta functions \(\delta(f(x_{1},\ldots,x_{n}))\) etc, in a purely algebraic situation, when neither coordinates nor functions take real or complex values. The whole calculus makes sense over any field \(\mathbf{k}\) of characteristic zero.
Let \(M=\mathbb{A}^{2n}\) be an affine space of dimension \(2n\) over \(\mathbf{k}\), endowed with a translationally-invariant symplectic structure. In other words, we have global coordinates4\((x_{1},\ldots,x_{2n})\) on \(M\) defined up to affine-symplectic transformations
Footnote 4: In the sequel we will often use notation \(y_{1}=x_{n+1},...,y_{n}=x_{2n}\).
\[x_{i}\mapsto\sum_{j=1}^{2n}a_{ij}x_{j}+b_{i}\]
where \((a_{ij})_{1\leq i,j\leq 2n}\) is an invertible matrix preserving the standard symplectic 2-form
\[\omega=\sum_{i=1}^{n}dx_{i}\wedge dx_{i+n}\,.\]
Denote by \((\gamma_{ij})_{1\leq i,j\leq 2n}\) the tensor for the inverse bi-vector field:
\[\gamma_{ij}:=\begin{cases}1&\text{ if }j=i+n,\quad 1\leq i\leq n,\\ -1&\text{ if }i=j+n,\quad 1\leq j\leq n,\\ 0&\text{ otherwise}\end{cases}\]
Let \(\hbar\) be a formal variable. We define the canonical Moyal star-product on the vector space
\[\mathbf{k}[x_{1},\ldots,x_{2n},\hbar]\]
by the formula
\[f\star g:=\sum_{k=0}^{\infty}\frac{\hbar^{k}}{k!}\left[\left(\sum_{1\leq i,j\leq 2n}\frac{\gamma_{ij}}{2}\frac{\partial}{\partial x_{i}^{(1)}}\frac{\partial}{\partial x_{j}^{(2)}}\right)^{k}\left(f((x_{i}^{(1)})_{1\leq i\leq 2n})\cdot g((x_{i}^{(2)})_{1\leq i\leq 2n})\right)\,\right|_{x_{i}^{(1)}=x_{i}^{(2)}=x_{i},\ i=1,\ldots,2n}\]
where in the square brackets we consider functions on \(M\times M\) endowed with coordinates
\[(x_{1}^{(1)},\ldots,x_{2n}^{(1)}),\ (x_{1}^{(2)},\ldots,x_{2n}^{(2)})\]
(i.e. two copies of the original coordinates on \(M\)). One can rewrite the above formula as
\[f\star g=\left[\exp\left(\hbar\sum_{1\leq i,j\leq 2n}\frac{1}{2}\gamma_{ij}\frac{\partial}{\partial x_{i}^{(1)}}\boxtimes\frac{\partial}{\partial x_{j}^{(2)}}\right)\left(f\boxtimes g\right)\right]\bigg{|}_{\text{diagonal}}.\]
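For instance, for \(n=1\), with coordinates \(x_{1}=x,\ x_{2}=y\), only the terms with \(k\leq 1\) survive and one gets

\[x\star y=xy+\frac{\hbar}{2},\qquad y\star x=xy-\frac{\hbar}{2},\qquad x\star y-y\star x=\gamma_{12}\,\hbar=\hbar\,.\]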
Algebra \(({\bf k}[x_{1},\ldots,x_{2n},\hbar],\star)\) over \({\bf k}\) is an _associative unital_ algebra generated by \((2n+1)\) elements \((\hat{x}_{i})_{i=1,\ldots,2n},\hbar\) where \(\hat{x}_{i}\) correspond to \(x_{i}\in{\bf k}[x_{1},\ldots,x_{2n},\hbar]\), satisfying the relations
\[[\hat{x}_{i},\hat{x}_{j}]=\gamma_{ij}\hbar,\quad[\hat{x}_{i},\hbar]=0\]
and can be identified with the algebra5 of polynomial \(\hbar\)-differential operators in \(n\) variables \(x_{1},\ldots,x_{n}\) by
Footnote 5: We will see later that it is more natural to consider differential operators acting on _half-densities_ instead of functions.
\[\hat{x}_{i}\mapsto x_{i},\quad\hat{x}_{n+i}\mapsto\hbar\frac{\partial}{ \partial x_{i}},\qquad\forall i=1,\ldots,n\,.\]
The Moyal product is covariant with respect to the action of the group of affine-symplectic automorphisms of \(M\). It extends to the \({\bf k}[[\hbar]]\)-linear product on \(A[[\hbar]]\) where \(A\) is the algebra of functions on a Zariski open subset of \(M\), or analytic functions in a Stein open domain in \(M\) if \({\bf k}={\mathbb{C}}\), or \(C^{\infty}\)-functions in an open domain if \({\bf k}={\mathbb{R}}\). Also, for general field \({\bf k}\) of characteristic zero, one can take \(A\) to be the algebra of formal power series at a given point \(m\in M\).
Let \(L\) be a Lagrangian submanifold of \(M\) in a broad sense, i.e. an algebraic subvariety, or analytic/smooth in the case \({\bf k}={\mathbb{C}}\) or \({\bf k}={\mathbb{R}}\), or a germ of such subvariety, or even a _formal germ_ at some point of \(M\).
We assume that a spin-structure on \(L\) is given, which means that we are given a line bundle on \(L\) whose tensor square is identified with the canonical bundle \(K_{L}\), i.e. the bundle \(\wedge^{n}T_{L}^{*}\) of top-degree forms on \(L\). The chosen square root bundle we denote by \(K_{L}^{\otimes 1/2}\). In the case of a germ, or a formal germ at point \(m\in M\), it is sufficient to choose a square root of the fiber \((K_{L})_{Im}\) at the base point \(m\).
Our main goal is the following
**Construction 1**. _With a given pair \((L,K_{L}^{\otimes 1/2})\) we associate a module (or, more precisely, a sheaf of modules) \({\cal WF}_{\,\,\,L,K_{L}^{\otimes 1/2}}\) over the quantum algebra \(A[[\hbar]]\) where \(A\) is the algebra of functions in the formal completion of \(M\) at \(L\)._
Elements of the vector space \({\cal WF}_{\,\,\,L,K_{L}^{\otimes 1/2}}\) are called _formal wave functions supported on \(L\)_.
There are several approaches to this construction. The one presented below is an explicit one in affine symplectic coordinates, but it is based on a nontrivial consistency check6.
Footnote 6: There is another approach (not described in this paper) based on Gelfand-Kazhdan type formal geometry.
First, we give the definition of the module \({\cal WF}_{\,\,\,L,K_{L}^{\otimes 1/2}}\) in local coordinates. Let \((x_{1},\ldots,x_{2n})\) be global affine symplectic coordinates on \(M\) such that the symplectic form is the standard one, and such that locally near a point of \(L\) the projection \(\pi\) to the coordinate space \({\mathbb{A}}^{n}\) by first \(n\) coordinates \((x_{1},\ldots,x_{n})\) has a _non-zero_ Jacobian. Hence, locally \(L\) is a graph of a closed
1-form on \(\mathbb{A}^{n}\):
\[x_{n+i}=\alpha_{i}(x_{1},\dots,x_{n})\quad\forall i=1,\dots,n,\qquad d\alpha=0, \quad\alpha:=\sum_{i=1}^{n}\alpha_{i}dx_{i}\,.\]
Let us also choose a generator of \(K_{L}^{\otimes 1/2}\) whose tensor square is \(\pi^{*}(dx_{1}\wedge\dots\wedge dx_{n})\). We will denote in short this generator by \((dx_{1}\wedge\dots\wedge dx_{n})^{1/2}\), it is well-defined up to a sign.
After we make the choices from above (i.e. affine symplectic coordinate system \((x_{1},\dots,x_{2n})\) and the generator \((dx_{1}\wedge\dots\wedge dx_{n})^{1/2}\)), we declare \({\cal W}F_{{}_{L,K_{L}^{\otimes 1/2}}}\) to be the \({\bf k}[[\hbar]]\)-module
\[{\cal O}(L)[[\hbar]]\simeq{\cal O}[\pi(L)][[\hbar]]\]
where \({\cal O}(L)\) is the algebra of functions on \(L\), identified with functions on open domain (or a formal germ) \(\pi(L)\subset\mathbb{A}^{n}\), with the action of generators \(\hat{x}_{i},\quad i=1,\dots,2n\) of the quantum algebra given by
\[\hat{x}_{i}\mapsto x_{i},\quad\hat{x}_{n+i}\mapsto\hbar\frac{\partial}{ \partial x_{i}}+\alpha_{i},\qquad\forall i=1,\dots,n\,.\]
It is a nontrivial fact that this action extends by continuity from polynomial elements to the functions on the affine space \(\mathbb{A}^{2n}\simeq M\) defined in the formal completion of \(L\subset M\).
Notationally, it is convenient to choose locally a primitive of the closed 1-form \(\alpha\), i.e. a function \(f(x_{1},\dots,x_{n})\) defined on the domain \(\pi(L)\subset\mathbb{A}^{n}\), such that
\[\alpha=df\iff\alpha_{i}=\frac{\partial f}{\partial x_{i}}\quad\forall i=1, \dots,n\,.\]
Then the element of \({\cal W}F_{{}_{L,K_{L}^{\otimes 1/2}}}\) corresponding to a series
\[g=g(x_{1},\dots,x_{n};\hbar)=\sum_{k\geq 0}g_{k}\hbar^{k}\in{\cal O}[\pi(L)][[ \hbar]]\]
we denote by the formal product
\[\psi=e^{\frac{f(x_{1},\dots,x_{n})}{\hbar}}g(x_{1},\dots,x_{n};\hbar)\cdot(dx _{1}\wedge\dots\wedge dx_{n})^{1/2}\,.\]
The choice of the primitive is irrelevant: if we shift it by a constant \(f\to f+const\) then we formally multiply the expression above by \(e^{\frac{const}{\hbar}}\) without affecting series \(g\).
Now we want to study the dependence of the description of \({\cal W}F_{{}_{L,K_{L}^{\otimes 1/2}}}\) under the change of choices made. First, for a given affine symplectic coordinate system \((x_{1},\dots,x_{2n})\), if we change the square root \((dx_{1}\wedge\dots\wedge dx_{n})^{1/2}\) by sign, then the isomorphism
\[{\cal W}F_{{}_{L,K_{L}^{\otimes 1/2}}}\stackrel{{\sim}}{{\longrightarrow}}O[ \pi(L)][[\hbar]]\]
will also change by a sign.
Next, let us change the affine symplectic coordinate system \((x_{1},\ldots,x_{2n})\) preserving first \(n\) coordinates \(x_{1},\ldots,x_{n}\):
\[x_{i}\to x_{i},\quad x_{n+i}\to x_{n+i}+\sum_{j=1}^{n}b_{ij}x_{j}+c_{i},\qquad i =1,\ldots,n,\quad b_{ij}=b_{ji}\in{\bf k},\quad c_{i}\in{\bf k}\,. \tag{2.12}\]
Then we multiply the corresponding formal wave function \(\psi\) by
\[\psi\to e^{\frac{1}{\hbar}\left(\frac{1}{2}\sum_{i,j}b_{ij}x_{i}x_{j}+\sum_{i }c_{i}x_{i}\right)}\psi\]
If we apply an affine symplectic transformation associated with an invertible \((n\times n)\)-matrix \(a=(a_{ij})_{1\leq i,j\leq n}\):
\[x_{i}\to x_{i}^{\prime}=\sum_{j=1}^{n}a_{ij}x_{j},\quad x_{n+i}\to x_{n+i}^{ \prime}=\sum_{j=1}^{n}(a^{-1})_{ji}x_{n+j}\qquad\forall i=1,\ldots,n \tag{2.13}\]
then we change the wave function by
\[e^{\frac{1}{\hbar}f}g\to e^{\frac{1}{\hbar}f^{\prime}}g^{\prime},\qquad f^{ \prime}(\vec{x}):=f(\vec{x}\,^{\prime}(\vec{x})),\quad g^{\prime}(\vec{x}; \hbar):=g(\vec{x}\,^{\prime}(\vec{x});\hbar)\cdot\det(a)^{-1/2}\,.\]
We can now describe the dependence of the description of \({\cal W}\!F_{\,L,K_{L}^{\otimes 1/2}}\) under the (almost) general affine symplectic transformation. Namely, let us assume that \((x_{1},\ldots,x_{2n})\) and \((x_{1}^{\prime},\ldots,x_{2n}^{\prime})\) are two affine symplectic coordinate systems on \(M\) such that both projections from \(L\) to the affine space \(\mathbb{A}^{n}\) given by \((x_{1},\ldots,x_{n})\) and \((x_{1}^{\prime},\ldots,x_{n}^{\prime})\) are open embeddings. Let us make an assumption (which is an open condition) that \((x_{1},\ldots,x_{n},x_{1}^{\prime},\ldots,x_{n}^{\prime})\) form a system of coordinates on \(M\). By applying the above modifications (2.12),(2.13), we may assume that
\[x_{i}^{\prime}=x_{n+i},\quad x_{n+i}^{\prime}=-x_{i},\qquad\forall i=1,\ldots,n\]
Then we declare that the corresponding formal wave functions undergo the _formal Fourier transform_:
\[g(\vec{x};\hbar)e^{\frac{f(\vec{x})}{\hbar}}\to\tilde{g}(\vec{y};\hbar)e^{\frac{\tilde{f}(\vec{y})}{\hbar}}:=\frac{1}{(2\pi\hbar)^{n/2}}\int g(\vec{x};\hbar)e^{\frac{f(\vec{x})+\vec{x}\cdot\vec{y}}{\hbar}}d^{n}\vec{x}\]
where the integral in the r.h.s. is understood as the asymptotic expansion, calculated via the stationary phase method. In particular, the exponent \(\tilde{f}\) is the Legendre transform of \(f\):
\[\tilde{f}(\vec{y})=\mbox{Critical value of }f(\vec{x})+\vec{x}\cdot\vec{y}\,.\]
The consistency check mentioned before, says that for _three_ affine symplectic coordinate systems on \(M\): \((x_{1},\ldots,x_{2n})\), \((x_{1}^{\prime},\ldots,x_{2n}^{\prime})\) and \((x_{1}^{\prime\prime},\ldots,x_{2n}^{\prime\prime})\), the passage from the first to the second, and then from the second to the third, coincides with the passage from the first to the third. Here we will give the sketch of the proof which is not purely algebraic and is based partially on analysis.
After making the assumption that the triple of coordinate systems under consideration is sufficiently generic, the consistency question can be reduced to the following equality
\[\int\Bigg{(}e^{-\frac{\vec{y}\cdot\vec{y}}{2\hbar}}\Big{(}\int g(\vec{x})e^{\frac{f(\vec{x})+\vec{x}\cdot\vec{y}}{\hbar}}d^{n}\vec{x}\Big{)}\Bigg{)}e^{\frac{\vec{y}\cdot\vec{z}}{\hbar}}d^{n}\vec{y}=(2\pi\hbar)^{n/2}e^{\frac{\vec{z}\cdot\vec{z}}{2\hbar}}\int g(\vec{x})e^{\frac{1}{\hbar}(f(\vec{x})+\frac{\vec{x}\cdot\vec{x}}{2}+\vec{x}\cdot\vec{z})}d^{n}\vec{x} \tag{2.14}\]
where \(f(\vec{x})\) is a formal power series starting with quadratic terms
\[f(\vec{x})=-\frac{1}{2}\sum_{ij}b_{ij}x_{i}x_{j}+\ldots \tag{2.15}\]
where symmetric matrix \((b_{ij})_{1\leq i,j\leq n}\) has no eigenvalues equal to \(0\) or \(1\).
In the case \({\bf k}=\mathbb{R}\), with \(f\) a globally defined strictly concave smooth real-valued function on \(\mathbb{R}^{n}\) such that \(f(\vec{x})\leq-\frac{c}{2}\vec{x}\cdot\vec{x}\) for some \(c>1\), and \(g\) any smooth function of at most exponential growth at infinity, all the integrals in (2.14) are absolutely convergent. The equality follows because the l.h.s. can be rewritten as
\[\int\int g(\vec{x})e^{\frac{1}{\hbar}(f(\vec{x})+\vec{x}\cdot\vec{y}+\vec{y} \cdot\vec{z}-\frac{\vec{y}\cdot\vec{y}}{2})}d^{n}\vec{x}\,d^{n}\vec{y}\]
and then identified with r.h.s. using the rewriting
\[\vec{x}\cdot\vec{y}+\vec{y}\cdot\vec{z}-\frac{\vec{y}\cdot\vec{y}}{2}=-\frac {\vec{w}\cdot\vec{w}}{2}+\frac{\vec{x}\cdot\vec{x}}{2}+\frac{\vec{z}\cdot\vec {z}}{2}+\vec{x}\cdot\vec{z},\quad\vec{w}:=\vec{y}-\vec{x}-\vec{z}\,.\]
Each term in the \(\hbar\)-expansion of the equality (2.14) is a polynomial identity with rational coefficients involving _finitely many_ Taylor coefficients of \(f,g\). The fact that it holds for real \(C^{\infty}\) examples as above giving Zariski dense subsets of possible Taylor coefficients of finite jets of \(f,g\) at \(0\) implies that (2.14) holds for arbitrary formal series \(f,g\) with coefficients in any field \({\bf k}\supset\mathbb{Q}\). This concludes the proof of the consistency check.
**Remark 2.1.** One can further generalize equality (2.14). Namely, each term in \(\hbar\)-expansion is an equality of certain finite sums of numbers obtained by contraction of upper and lower indices for certain symmetric tensors in \(n\)-dimensional space. The tensors under consideration are Taylor coefficients of series \(f,g\) and the inverses to symmetric matrices \(B\) and \(B-{\bf 1}_{n}\) where \(B=(b_{ij})_{1\leq i,j\leq n}\) is the (negative) Hessian of \(f\) at \(0\) (see (2.15)) and \({\bf 1}_{n}\) is the identity matrix (could be replaced by any non-degenerate quadratic form). The fact that the equality holds in _any_ positive dimension \(n\geq 0\) implies by the Weyl's fundamental theorem in invariant theory that it holds by purely formal reasons, as the cancellation of linear combination of oriented graphs controlling the contraction of indices. Therefore, the equality (2.14) makes sense and holds in arbitrary \(\mathbb{Q}\)-linear rigid symmetric monoidal category, like, e.g. finite-dimensional super vector spaces. Hence the construction
\[(L,K_{L}^{\otimes 1/2})\rightsquigarrow\mathcal{WF}_{L,K_{L}^{\otimes 1/2}}\]
can be extended to the case of super manifolds.
In what follows we will not specify the choice of \(K_{L}^{\otimes 1/2}\) and hence omit it from the notation.
**Remark 2.2.** Let \(f,g\) be an admissible pair. The formal product
\[\psi=g(x_{1},...,x_{n})e^{\frac{1}{\hbar}f(x_{1},...,x_{n})}(dx_{1}\wedge... \wedge dx_{n})^{\frac{1}{2}}\]
is an element of \(\mathcal{WF}_{L}\) where \(L\subset\mathbb{A}^{2n}\) is the graph of \(df\), a germ of Lagrangian submanifold in \(\mathbb{A}^{2n}=T^{*}\mathbb{A}^{n}\). Simultaneously, \(L\subset\mathbb{A}^{2n}=T^{*}(\mathbb{A}^{n})^{*}\) is the graph of \(d\hat{f}\) where \(\hat{f}(y_{1},...,y_{n})\) is the Legendre transform of \(f\), see (1.7).
The admissible pair \(\hat{f},\hat{g}\) obtained by the Fourier transform from \(f,g\) gives _the same_ element \(\hat{\psi}=\psi\in\mathcal{WF}_{L}\). Our initial question about finding admissible pairs can be reformulated as the question about finding elements \(\psi\in\mathcal{WF}_{L}\) such that in two descriptions of \(\mathcal{WF}_{L}\) corresponding to the projections either to coordinates \(x_{1},...,x_{n}\) or to \(x_{n+1},...,x_{2n}\), the functions \(g(x_{1},...,x_{n},\hbar)\) (and \(\hat{g}(x_{n+1},...,x_{2n},\hbar)\)) associated with \(\psi=\hat{\psi}\) do not depend on \(\hbar\).
Finally, we explain how to interpret, in terms of formal wave functions, expressions involving delta functions. Let us assume that we have a germ of a \(k\)-dimensional submanifold in an \(n\)-dimensional affine space. After making an affine change of coordinates, we may assume that the submanifold under consideration is the graph of a map from an open domain in \(\mathbb{A}^{k}\) to \(\mathbb{A}^{n-k}\), i.e. given by
\[x_{k+1}=\phi_{1}(x_{1},\ldots,x_{k})\] \[x_{k+2}=\phi_{2}(x_{1},\ldots,x_{k})\] \[\ldots\] \[x_{n}=\phi_{n-k}(x_{1},\ldots,x_{k})\,.\]
Assume that we are also given two functions \(f(x_{1},\ldots,x_{k})\) and \(g(x_{1},\ldots,x_{k})\). We would like to make sense of the following expression:
\[g(x_{1},\ldots,x_{k})e^{\frac{1}{\hbar}f(x_{1},\ldots,x_{k})}\prod_{i=1}^{n-k} \delta(x_{k+i}-\phi_{i}(x_{1},\ldots,x_{k}))\,.\]
which is an element of \(\mathcal{WF}_{L}\) where \(L\subset M\simeq\mathbb{A}^{2n}=T^{*}\mathbb{A}^{n}\) is the conormal bundle to the \(k\)-dimensional submanifold in \(\mathbb{A}^{n}\) defined above. This can be achieved by making the Fourier transform in variables \((x_{k+1},\ldots,x_{n})\):
\[\int g(x_{1},\ldots,x_{k})e^{\frac{1}{\hbar}(f(x_{1},\ldots,x_{k} )+\sum_{i=1}^{n-k}x_{k+i}y_{i})}\prod_{i=1}^{n-k}\delta(x_{k+i}-\phi_{i}(x_{1},\ldots,x_{k}))\prod_{i=1}^{n-k}dx_{k+i}=\\ =g(x_{1},\ldots,x_{k})e^{\frac{1}{\hbar}(f(x_{1},\ldots,x_{k})+ \sum_{i=1}^{n-k}\phi_{i}(x_{1},\ldots,x_{k})y_{i})}\,.\]
So we see that the new exponent is the following function of the variables \(x_{1},\ldots,x_{k};y_{1},\ldots,y_{n-k}\):
\[f(x_{1},\ldots,x_{k})+\sum_{i=1}^{n-k}\phi_{i}(x_{1},\ldots,x_{k})y_{i}\]
which happens to be a linear function in the \((n-k)\) variables \(y_{1},\ldots,y_{n-k}\).
In our formalism we have
\[\delta(x)=\frac{1}{2\pi\hbar}\int e^{\frac{xy}{\hbar}}dy. \tag{2.16}\]
Recall that all our identities hold up to fourth root of unity (see Introduction). In the case of the actual Dirac distribution \(\delta(x)\) on \(\mathbb{R}\), the exact formula is
\[\delta(x)=\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}e^{\frac{xy}{\hbar}}dy, \quad\hbar>0.\]
**Remark 2.3.** Algebra \(\mathbf{k}[x_{1},...,x_{2n},\hbar]\) with Moyal star product (as well as its completions associated with open domain in \(\mathbb{A}^{2n}\) or formal completions) has a canonical derivation over \(\mathbf{k}\) given by
\[\tau(x_{i})=\frac{1}{2}x_{i},\quad\tau(\hbar)=\hbar.\]
If \(L\subset\mathbb{A}^{2n}\) is conical, then \(\tau\) admits a natural extension \(\tau_{L}\) to \(\mathcal{WF}_{L}\). Locally, if \(x_{1},...,x_{n}\) are coordinates, then
\[L=\text{graph }dF_{L}(x_{1},...,x_{n})\]
where \(F_{L}\) is homogeneous of degree \(2\). Let
\[\psi=G_{L}(x_{1},...,x_{n},\hbar)e^{\frac{F_{L}(x_{1},...,x_{n})}{\hbar}}(dx_{ 1}...dx_{n})^{\frac{1}{2}}\in\mathcal{WF}_{L}.\]
We define
\[\tau_{L}(\psi)=(\hbar\partial_{\hbar}G_{L}(x_{1},...,x_{n},\hbar)+\frac{1}{2} \sum_{i=1}^{n}x_{i}\partial_{x_{i}}G_{L}(x_{1},...,x_{n},\hbar))e^{\frac{F_{L }(x_{1},...,x_{n})}{\hbar}}(dx_{1}...dx_{n})^{\frac{1}{2}}.\]
## 3 Reformulation of the problem in terms of conical germs
### Admissible pairs and projectively dual hypersurfaces
**Definition 3.1.1.** A germ of smooth hypersurface \(\Sigma\) in projective space \(\mathbb{P}(V)\) is called non-degenerate if the Gauss map \(\Sigma\rightarrow\mathbb{P}(V^{*})\) given by \(x\mapsto T_{x}\Sigma\) is an immersion.
Projective duality identifies germs of smooth non-degenerate hypersurfaces in \(\mathbb{P}(V)\) and \(\mathbb{P}(V^{*})\).
**Theorem 3.1.1.** There is one to one correspondence between admissible pairs of germs of functions \(f,g\) in \(n\) variables and germs of distributions in \(n+2\) variables which are smooth densities on a conical germ of a hypersurface in \(\mathbb{A}^{n+2}\), independent of \(\hbar\), satisfying certain genericity constraints explained below, homogeneous of degree \(-\frac{n+2}{2}\) and such that their Fourier transform is (up to the formal factor \((2\pi\hbar)^{\frac{n+2}{2}}\)) a distribution with the same properties on the dual space. This correspondence is given by
\[(f,g)\quad\leftrightsquigarrow\quad G(x_{0},...,x_{n+1})=\delta\Big{(}\frac{x_{n+1}}{x_{0}}-f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\Big{)}\,g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\,\,x_{0}^{-\frac{n+2}{2}} \tag{3.17}\]
and we have
\[\int G(x_{0},...,x_{n+1})e^{\frac{1}{\hbar}(x_{0}y_{0}+...+x_{n+1}y_{n+1})}dx _{0}...dx_{n+1}=(2\pi\hbar)^{\frac{n+2}{2}}\hat{G}(y_{0},...,y_{n+1}) \tag{3.18}\]
where \(\hat{G}\) is given by (3.19) for some germs of functions \(\hat{f},\hat{g}\).
The genericity constraints7 for a smooth conical germ \(C\) at point \(p\in\mathbb{A}^{n+2}\backslash 0\) are the following:
Footnote 7: For any non-degenerate hypersurface in \(\mathbb{P}(\mathbb{A}^{n+2})\) the corresponding germ of cones satisfies above constraints at generic point.
**1.** The projectivization of \(C\) is non-degenerate,
**2.** Coordinate \(x_{0}\) of point \(p\) is non-zero,
**3.** The tangent space \(T_{p}C\) does not contain the vector \((0,...,0,1)\).
**Proof** Let \(f,g\) be admissible. Notice that the formula (3.17) for \(G\) describes a germ of a general smooth distribution in \(n+2\) variables supported on a conical germ of a hypersurface in \(\mathbb{A}^{n+2}\), and homogeneous of degree \(-\frac{n+2}{2}\). Let us check that Fourier transform of \(G\) can be represented by a similar formula multiplied by \((2\pi\hbar)^{\frac{n+2}{2}}\). Indeed, we have
\[\int G(x_{0},...,x_{n+1})e^{\frac{1}{\hbar}(x_{0}y_{0}+...+x_{n+1}y_{n+1})}dx _{0}...dx_{n+1}\stackrel{{(1)}}{{=}}\]
\[\int\delta\Big{(}x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_ {0}}\Big{)}\Big{)}\,\,g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}} \Big{)}\,\,x_{0}^{-\frac{n}{2}}e^{\frac{1}{\hbar}(x_{0}y_{0}+...+x_{n+1}y_{n+1 })}dx_{0}...dx_{n+1}\stackrel{{(2)}}{{=}}\]
\[\int g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\,\,x_{0}^{- \frac{n}{2}}e^{\frac{1}{\hbar}(x_{0}y_{n+1}f\big{(}\frac{x_{1}}{x_{0}},..., \frac{x_{n}}{x_{0}}\big{)}+x_{0}y_{0}+...+x_{n}y_{n})}dx_{0}...dx_{n} \stackrel{{(3)}}{{=}}\]
\[\int g\big{(}x_{1},...,x_{n}\big{)}\,\,x_{0}^{\frac{n}{2}}e^{\frac{x_{0}y_{0}}{ \hbar}+\frac{x_{0}y_{n+1}}{\hbar}\big{(}f(x_{1},...,x_{n})+x_{1}\frac{y_{1}}{ y_{n+1}}+...+x_{n}\frac{y_{n}}{y_{n+1}}\big{)}}dx_{0}...dx_{n}\stackrel{{ (4)}}{{=}}\]
\[\int x_{0}^{\frac{n}{2}}e^{\frac{x_{0}y_{0}}{\hbar}}\,\,\Bigg{(}\frac{2\pi \hbar}{y_{n+1}x_{0}}\Bigg{)}^{\frac{n}{2}}\hat{g}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}e^{\frac{y_{n+1}x_{0}}{\hbar}\hat{f}\Big{(} \frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}}dx_{0}\stackrel{{ (5)}}{{=}}\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Big{(}y_{0}+y_{n+1}\hat{f}\Big{(}\frac{y_{1}} {y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\Big{)}\hat{g}\Big{(}\frac{y_{1}}{y_{ n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{-\frac{n}{2}}\stackrel{{ (6)}}{{=}}\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Big{(}\frac{y_{0}}{y_{n+1}}+\hat{f}\Big{(} \frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\Big{)}\hat{g}\Big{(} \frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{-\frac{n+2}{2}}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\hat{G}(y_{0},...,y_{n+1})\]
where
\[\hat{G}(y_{0},...,y_{n+1})=\delta\Big{(}\frac{y_{0}}{y_{n+1}}+\hat{f}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\Big{)}\hat{g}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{-\frac{n+2}{2}} \tag{3.19}\]
and this formula is indeed similar to (3.17).
Here is an explanation of all steps in this calculation:
(1) We use the homogeneity property of the delta function.
(2) We integrate over \(x_{n+1}\), removing the delta function.
(3) We make the change of variables \(x_{i}\mapsto x_{0}x_{i},\ i=1,...,n\).
(4) We integrate over \(x_{1},...,x_{n}\) using our assumption that \(f,g\) is admissible, see (1.6).
(5) We integrate over \(x_{0}\) using the integral representation of the delta function, see (2.16).
(6) We use the homogeneity property of the delta function again.
Conversely, let us assume that the Fourier transform of \(G\) is supported on a conical germ and also is homogeneous of degree \(-\frac{n+2}{2}\), i.e.
\[\int\delta\Big{(}x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_ {0}}\Big{)}\Big{)}\ g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)} \ x_{0}^{-\frac{n}{2}}e^{\frac{1}{\hbar}(x_{0}y_{0}+...+x_{n+1}y_{n+1})}dx_{0 }...dx_{n+1}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Big{(}y_{0}+y_{n+1}\hat{f}\Big{(}\frac{y_{1 }}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\Big{)}\hat{g}\Big{(}\frac{y_{1}} {y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{-\frac{n}{2}}\]
where \(f,g,\hat{f},\hat{g}\) are some germs of functions in \(n\) variables independent of \(\hbar\). Integrating the l.h.s. by \(x_{n+1}\) and making change of variables \(x_{i}\mapsto x_{0}x_{i},\ \ y_{i}\mapsto y_{n+1}y_{i},\ i=1,...,n\) we obtain:
\[\int g\big{(}x_{1},...,x_{n}\big{)}\ x_{0}^{\frac{n}{2}}e^{\frac{x_{0}y_{0}}{ \hbar}+\frac{x_{0}y_{n+1}}{\hbar}(f(x_{1},...,x_{n})+x_{1}y_{1}+...+x_{n}y_{n} )}dx_{0}...dx_{n}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Big{(}y_{0}+y_{n+1}\hat{f}\Big{(}y_{1},...,y _{n}\Big{)}\Big{)}\hat{g}\Big{(}y_{1},...,y_{n}\Big{)}y_{n+1}^{-\frac{n}{2}}.\]
Finally, we multiply this equation by \(\frac{1}{2\pi\hbar}e^{-\frac{y_{0}}{\hbar y_{n+1}}}\), integrate by \(y_{0}\), and after subsequent integration of the l.h.s. by \(x_{0}\) we obtain (1.6). \(\square\)
**Remark 3.1.1.** One can check that if the densities \(G\) and \(\hat{G}\) are related by the Fourier transform (3.18), then their supports are projectively dual. See also Remark 1.4.
### Projective invariance of the problem
Notice that the conditions on distribution \(G\) from the Theorem 3.1.1 (ignoring the genericity constraints **2**, **3** from Section 3.1) are invariant under \(GL(n+2)\) acting on \(\mathbb{A}^{n+2}\).
**Corollary 3.2.1.**\(GL(n+2)\) acts on germs of admissible pairs at generic point.
**Definition 3.2.1.** A cone \(C\) (as well as its projectivization) is called admissible if it is locally defined by (1.9) where function \(f\) is admissible. The rank of \(C\) is the rank of any such \(f\). We denote the rank of \(C\) by \(rk(C)\). A cone \(C\) is admissible iff \(rk(C)>0\). We will also use notation \(rk(\Sigma)\) for \(rk(C)\) where \(\Sigma\subset\mathbb{P}^{n+1}\) is the projectivization of \(C\).
The admissibility of cones defined above does not depend on the choice of projective coordinates.
In order to write explicit formulas for \(GL(n+2)\) action (see also (1.11)), it is convenient to describe \(f,g\) parametrically as
\[f=\phi_{0}(u_{1},...,u_{n}),\quad x_{i}=\phi_{i}(u_{1},...,u_{n}),\ i=1,...,n, \tag{3.20}\]
\[g=\psi(u_{1},...,u_{n}).\]
Then the pair \(\tilde{f}(x_{1},...,x_{n}),\tilde{g}(x_{1},...,x_{n})\) defined parametrically by
\[\tilde{f}=\tilde{\phi}_{0}(u_{1},...,u_{n}),\quad x_{i}=\tilde{\phi}_{i}(u_{1 },...,u_{n}),\ i=1,...,n, \tag{3.21}\]
\[\tilde{g}=\tilde{\psi}(u_{1},...,u_{n})\]
is also admissible, where
\[\tilde{\phi}_{i}=\frac{\sum_{j=0}^{n}a_{i,j}\phi_{j}+a_{i,n+1}}{\sum_{j=0}^{n }a_{n+1,j}\phi_{j}+a_{n+1,n+1}},\ i=0,...,n, \tag{3.22}\]
\[\tilde{\psi}=\psi\cdot\frac{\det\left(\frac{\partial\phi_{i}}{\partial u_{j} }\right)_{1\leq i,j\leq n}}{\det\left(\frac{\partial\tilde{\phi}_{i}}{\partial u _{j}}\right)_{1\leq i,j\leq n}}\cdot\Big{(}\sum_{j=0}^{n}a_{n+1,j}\phi_{j}+a_ {n+1,n+1}\Big{)}^{-\frac{n+2}{2}}.\]
Here \((a_{i,j})_{0\leq i,j\leq n+1}\) is an arbitrary non-degenerate constant matrix.
**Remark 3.2.1.** The projective invariance of admissible pairs is proven by a direct but not very transparent calculation. It can be explained in another way. First, notice that the group \(Aff(n+1)\) of affine transformations of \(\mathbb{A}^{n+1}\) acts on admissible pairs as
\[\tilde{\phi}_{i}=\sum_{j=0}^{n}a_{i,j}\phi_{j}+b_{i},\ i=0,...,n, \tag{3.23}\]
\[\tilde{\psi}=\psi\cdot\frac{\det\left(\frac{\partial\phi_{i}}{\partial u_{j} }\right)_{1\leq i,j\leq n}}{\det\left(\frac{\partial\tilde{\phi}_{i}}{\partial u _{j}}\right)_{1\leq i,j\leq n}}.\]
Here \((a_{i,j})_{0\leq i,j\leq n}\) is an arbitrary non-degenerate constant matrix and \(b_{i}\) are arbitrary constants. Notice that the last equation can be also written as8
Footnote 8: In invariant terms, we have a hypersurface in affine space \(\mathbb{A}^{n+1}\), endowed with a volume element.
\[\tilde{\psi}\ d\tilde{\phi}_{1}\wedge...d\tilde{\phi}_{n}=\psi\ d\phi_{1}\wedge...\wedge d\phi_{n}.\]
Indeed, after the change of variables
\[\hbar\mapsto\frac{\hbar}{y_{0}},\ y_{i}\mapsto\frac{y_{i}}{y_{0}}\]
the equation (1.6) can be written as
\[\int e^{\frac{\phi_{0}y_{0}+\phi_{1}y_{1}+...+\phi_{n}y_{n}}{\hbar}}\ \psi\ d\phi_{1}\wedge...d\phi_{n}=(2\pi\hbar)^{\frac{n}{2}}\cdot y_{0}^{-\frac{ n}{2}}\ \hat{g}\Bigg{(}\frac{y_{1}}{y_{0}},...,\frac{y_{n}}{y_{0}}\Bigg{)}\cdot e^{\frac{y_{ 0}}{\hbar}\hat{f}\left(\frac{y_{1}}{y_{0}},...,\frac{y_{n}}{y_{0}}\right)}. \tag{3.24}\]
The l.h.s. of the equation (3.24) is manifestly invariant (up to multiplication by a constant independent of \(u_{1},...,u_{n}\)) with respect to the affine action (3.23) and the dual action of \(GL(n+1)\) on variables \(y_{0},...,y_{n}\), so that the form \(\phi_{0}y_{0}+\phi_{1}y_{1}+...+\phi_{n}y_{n}\) is invariant. This gives the action (3.23) of the group \(Aff(n+1)\) on the set of admissible pairs.
By definition, the Fourier transform acts as an involution (up to the reflection \(x_{i}\rightarrow-x_{i},i=1,\ldots,n\)) on the set of admissible pairs. Conjugating by the Fourier transform the action of the group \(Aff(n+1)\) described above, we obtain the _second action_ of the same group on the set of admissible pairs. One can check that these two actions generate the action of \(GL(n+2)\).
### Generalization to the projectively dual lower-dimensional cones
It seems to be natural to generalize previous considerations in the following way:
1. Densities \(G(x_{0},...,x_{n+1}),\ \hat{G}(y_{0},...,y_{n+1})\) are homogeneous, related by the Fourier transform but supported on projectively dual cones \(C,\hat{C}\) of lower dimensions, not necessarily hypersurfaces.
2. Densities \(G,\hat{G}\) are both finite linear combinations of derivatives of delta functions, not necessarily just proportional to delta functions.
We want to write explicitly the conditions for \(G,\hat{G}\). To simplify formulas, we assume that both \(G,\hat{G}\) are proportional to delta functions; in the general case the computations are similar.
Suppose that our cone \(C\subset\mathbb{A}^{n+2}\) is defined parametrically by
\[x_{i}=\phi_{i}(u_{0},...,u_{m_{1}}),\quad i=0,...,n+1\]
where \(u_{0},...,u_{m_{1}}\) are coordinates on \(C\), functions \(\phi_{i}\) are homogeneous of degree \(d_{1}\), and \(\dim C=m_{1}+1\).
Similarly, suppose that \(\hat{C}\subset(\mathbb{A}^{n+2})^{*}\) is defined parametrically by
\[y_{i}=\psi_{i}(v_{0},...,v_{m_{2}}),\quad i=0,...,n+1\]
where \(v_{0},...,v_{m_{2}}\) are coordinates on \(\hat{C}\), functions \(\psi_{i}\) are homogeneous of degree \(d_{2}\), and \(\dim\hat{C}=m_{2}+1\).
We can write our densities \(G,\hat{G}\) as
\[G(x_{0},...,x_{n+1})=\int\prod_{i=0}^{n+1}\delta(x_{i}-\phi_{i}(u_{0},...,u_{m _{1}}))g(u_{0},...,u_{m_{1}})du_{0}...du_{m_{1}},\]
\[\hat{G}(y_{0},...,y_{n+1})=\int\prod_{i=0}^{n+1}\delta(y_{i}-\psi_{i}(v_{0},..., v_{m_{2}}))\hat{g}(v_{0},...,v_{m_{2}})dv_{0}...dv_{m_{2}}\]
where \(g,\hat{g}\) are also homogeneous with certain homogeneous degrees. The condition (3.18) after the integration with respect to variables \(x_{0},...,x_{n+1}\) in the l.h.s. reads
\[\int g(u_{0},...,u_{m_{1}})e^{\frac{1}{h}\sum_{i=0}^{n+1}\phi_{i}(u_{0},...,u_ {m_{1}})y_{i}}du_{0}...du_{m_{1}}= \tag{3.25}\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\int\prod_{i=0}^{n+1}\delta(y_{i}-\psi_{i}(v_{0},...,v_{m_{2}}))\hat{g}(v_{0},...,v_{m_{2}})dv_{0}...dv_{m_{2}}.\]
The homogeneity properties of the Fourier transform imply
\[\deg g+m_{1}+1=\frac{n+2}{2}\ d_{1},\quad\deg\hat{g}+m_{2}+1=\frac{n+2}{2}\ d_{2}.\]
One can deal with condition (3.25) in the following way.
a) Choose a partition \(\{0,...,n+1\}=I_{1}\sqcup I_{2}\) with \(|I_{1}|=m_{2}+1\), such that \((y_{i})_{i\in I_{1}}\) form a local system of coordinates on \(\hat{C}\).
b) Perform integration in the r.h.s. with respect to \(v_{0},...,v_{m_{2}}\) removing delta functions \(\delta(y_{i}-\psi_{i}(v_{0},...,v_{m_{2}})),\ i\in I_{1}.\) This reduces the number of delta functions in the r.h.s. by \(m_{2}+1\).
c) Multiply the equation (3.25) by \(\frac{1}{(2\pi\hbar)^{n-m_{2}+1}}e^{\sum_{i\in I_{2}}\frac{y_{i}x_{i}}{\hbar}}\) and integrate with respect to \(y_{i},\ i\in I_{2}\). After doing that, delta functions of the form \(\prod_{i\in I_{2}}\delta(x_{i}+\phi_{i}(u_{0},...,u_{m_{1}}))\) appear in the l.h.s., and we can remove them by integrating with respect to variables \(u_{j},\ j\in J\) where \(J\subset\{0,...,m_{1}\}\) and \(|J|=n-m_{2}+1\). In this way we remove all delta functions. Notice that we should have \(n\leq m_{1}+m_{2}\); this is always true for projectively dual cones.
d) After the removal of all delta functions from (3.25) one can take an expansion of the l.h.s. at a critical point.
**Example 3.3.1.** Let \(C\subset\mathbb{A}^{5}\) be defined parametrically by
\[C=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\mid x_{2}=h_{1}(u)x_{0}+h_{2}(u)x_{1},x_{3}=h_{3}(u)x_{0}+h_{4}(u)x_{1},x_{4}=h_{5}(u)x_{0}+h_{6}(u)x_{1}\}\]
where \(u\) is a coordinate on \(C\) and functions \(h_{1},...,h_{6}\) satisfy conditions
\[h_{4}^{\prime}(u)h_{1}^{\prime}(u)=h_{2}^{\prime}(u)h_{3}^{\prime}(u),\quad h_{6} ^{\prime}(u)h_{1}^{\prime}(u)=h_{2}^{\prime}(u)h_{5}^{\prime}(u)\]
and certain genericity constraints. One can check that the dual cone \(\hat{C}\) can also be defined parametrically as
\[\hat{C}=\{(y_{0},y_{1},y_{2},y_{3},y_{4})\ |\ y_{0}=(p_{1}(v)h_{1}(v)-h_{3}(v))y_{ 3}+(p_{2}(v)h_{1}(v)-h_{5}(v))y_{4},\]
\[y_{1}=(p_{1}(v)h_{2}(v)-h_{4}(v))y_{3}+(p_{2}(v)h_{2}(v)-h_{6}(v))y_{4},y_{2}= -p_{1}(v)y_{3}-p_{2}(v)y_{4}\}\]
where \(v\) is a coordinate on \(\hat{C}\) and
\[p_{1}(v)=\frac{h_{3}^{\prime}(v)}{h_{1}^{\prime}(v)}=\frac{h_{4}^{\prime}(v)} {h_{2}^{\prime}(v)},\quad p_{2}(v)=\frac{h_{5}^{\prime}(v)}{h_{1}^{\prime}(v) }=\frac{h_{6}^{\prime}(v)}{h_{2}^{\prime}(v)}.\]
We have \(\dim C=\dim\hat{C}=3\) and therefore both \(C\) and its dual \(\hat{C}\) have codimension 2 in \(\mathbb{A}^{5}\).
Admissibility condition (3.25) in this case can be written as
\[\int g(x_{0},x_{1},u)e^{\frac{1}{\hbar}(x_{0}y_{0}+x_{1}y_{1}+(h_{1}(u)x_{0}+h_{2}(u)x_{1})y_{2}+(h_{3}(u)x_{0}+h_{4}(u)x_{1})y_{3}+(h_{5}(u)x_{0}+h_{6}(u)x_{1})y_{4})}dx_{0}dx_{1}du\]
\[=(2\pi\hbar)^{\frac{5}{2}}\int\hat{g}(y_{3},y_{4},v)\delta(y_{2}+p_{1}(v)y_{3 }+p_{2}(v)y_{4})\]
\[\delta((p_{1}(v)h_{1}(v)-h_{3}(v))y_{3}+(p_{2}(v)h_{1}(v)-h_{5}(v))y_{4}-y_{0})\]
\[\delta((p_{1}(v)h_{2}(v)-h_{4}(v))y_{3}+(p_{2}(v)h_{2}(v)-h_{6}(v))y_{4}-y_{1} )dv.\]
Multiplying both sides by \(\frac{1}{(2\pi\hbar)^{2}}e^{-\frac{1}{\hbar}(y_{0}z_{0}+y_{1}z_{1})}\), integrating by \(y_{0},y_{1}\), and in the l.h.s. integrating also by \(x_{0},x_{1}\) we obtain
\[\int g(z_{0},z_{1},u)e^{\frac{1}{\hbar}((h_{1}(u)z_{0}+h_{2}(u)z_{1})y_{2}+(h_ {3}(u)z_{0}+h_{4}(u)z_{1})y_{3}+(h_{5}(u)z_{0}+h_{6}(u)z_{1})y_{4})}du=\]
\[(2\pi\hbar)^{\frac{1}{2}}\int\hat{g}(y_{3},y_{4},v)\delta(y_{2}+p_{1}(v)y_{3}+ p_{2}(v)y_{4})\]
\[e^{-\frac{1}{\hbar}(((p_{1}(v)h_{1}(v)-h_{3}(v))y_{3}+(p_{2}(v)h_{1}(v)-h_{5}( v))y_{4})z_{0}+((p_{1}(v)h_{2}(v)-h_{4}(v))y_{3}+(p_{2}(v)h_{2}(v)-h_{6}(v))y_{4})z_{1 })}dv.\]
We can integrate over \(v\) in the r.h.s., removing the delta function, and integrate over \(u\) in the l.h.s. by taking the expansion at a critical point. In this way we obtain admissibility conditions for \(C\) explicitly. It would be interesting to classify such admissible cones and to generalize them to higher dimensions.
## 4 Reformulation of the problem in terms of constraints on wave functions
### Abstract formalism
In the previous Section we reformulated our original problem of finding admissible pairs \(f,g\) in \(n\) variables in terms of finding distributions \(G,\hat{G}\) in \(n+2\) variables, given by (3.17), (3.19), which are related by the Fourier transform (3.18). Let us apply the formalism of wave functions from Section 2 to these distributions.
Conical germs \(C\) and \(\hat{C}\) on which distributions \(G\), \(\hat{G}\) are supported, are given by equations
\[\frac{x_{n+1}}{x_{0}}-f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)} =0,\hskip 28.452756pt\frac{y_{0}}{y_{n+1}}+\hat{f}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}=0. \tag{4.26}\]
Denote by \(L\) the conormal bundle to cone \(C\subset\mathbb{A}^{n+2}\). It is a Lagrangian subvariety of \(\mathbb{A}^{2(n+2)}=T^{*}\mathbb{A}^{n+2}\) invariant under the action of \(GL(1)\times GL(1)\) on \(\mathbb{A}^{2(n+2)}\) where the first \(GL(1)\) rescales coordinates \(x_{0},...,x_{n+1}\) and the second \(GL(1)\) rescales coordinates \(y_{0},...,y_{n+1}\). In particular, \(L\) is conical9.
Footnote 9: Generically, the invariance under the second copy of \(GL(1)\) means that \(L\) is a conormal bundle to a subvariety in \(\mathbb{A}^{n+2}\) of arbitrary dimension. The invariance under the first copy of \(GL(1)\) means that this subvariety is conical.
Nonvanishing of the Hessian of germ \(f\) (and hence of \(\hat{f}\)) can be reformulated as the following genericity condition on the smooth \(GL(1)\times GL(1)\) invariant germ \(L\) of Lagrangian submanifold at point \(p\in\mathbb{A}^{2(n+2)}\), similar to conditions **1, 2, 3** in Section 3.1:
1) coordinates \(x_{0}\) and \(y_{n+1}\) of \(p\) are non-zero.
2) projections from \(T_{p}L\) to \(\mathbb{A}^{n+2}_{x_{0},...,x_{n},y_{n+1}}\) and to \(\mathbb{A}^{n+2}_{y_{0},...,y_{n},x_{n+1}}\) are one-to-one.
The space \(\mathbb{A}^{2(n+2)}\) with the symplectic structure \(\sum_{i=0}^{n+1}dx_{i}\wedge dy_{i}\) is simultaneously the cotangent space to \(\mathbb{A}^{n+2}_{x_{0},...,x_{n+1}}\) and to \(\mathbb{A}^{n+2}_{y_{0},...,y_{n+1}}\).
Projective duality between the projectivizations of cones \(C\) and \(\hat{C}\) can be reformulated as the property of \(L\) to be simultaneously the conormal bundle to \(C\) and \(\hat{C}\).
Let
\[F_{1}(x_{0},...,x_{n+1})=0,\hskip 28.452756ptF_{2}(y_{0},...,y_{n+1})=0 \tag{4.27}\]
be equations for the cones \(C\) and \(\hat{C}\) respectively. Here \(F_{1},\ F_{2}\) are germs at non-zero points of homogeneous functions of some homogeneous degrees which are proportional, with invertible factors, to the l.h.s. of equations (4.26).
Distributions \(G,\hat{G}\) can be understood as elements \(\psi,\hat{\psi}\in\mathcal{WF}_{L}\). Similarly to Remark 2.2 the fact that \(\hat{G}\) is the Fourier transform of \(G\) can be written as equality \(\psi=\hat{\psi}\).
Let us interpret \(F_{1},\ F_{2}\) as elements of the quantum algebra acting on \(\mathcal{WF}_{L}\). The property
of \(G\) to be a smooth density on \(C\) can be rewritten as10
Footnote 10: Recall that \(C\) is defined by equation \(F_{1}=0\) and therefore, \(G\) is proportional to \(\delta(F_{1})\). For example, we have \(P(x_{0},...,x_{n+1})\delta(P(x_{0},...,x_{n+1}))=0\) for an arbitrary polynomial \(P\).
**1.**\(F_{1}\cdot\psi=0\).
Similarly the dual condition on \(\hat{G}\) gives
\({\bf 1^{\prime}}.\)\(F_{2}\cdot\psi=0\).
The condition on \(G,\hat{G}\) of being homogeneous of degree \(-\frac{n+2}{2}\) can be rewritten as
**2.**\(\Big{(}\sum_{i=0}^{n+1}y_{i}\star x_{i}\Big{)}\cdot\psi=0\).
Indeed, the homogeneity of \(\psi\) (considered as a distribution in variables \(x_{0},...,x_{n+1}\)) of degree \(-\frac{n+2}{2}\) means
\[\sum_{i=0}^{n+1}x_{i}\partial_{x_{i}}\psi+\frac{n+2}{2}\psi=0,\]
therefore \(\Big{(}\sum_{i=0}^{n+1}y_{i}\star x_{i}\Big{)}\cdot\psi=\frac{1}{2}\Big{(} \sum_{i=0}^{n+1}\hbar\partial_{x_{i}}x_{i}+\hbar x_{i}\partial_{x_{i}}\Big{)} \cdot\psi=\frac{\hbar}{2}\sum_{i=0}^{n+1}\Big{(}2x_{i}\partial_{x_{i}}+1 \Big{)}\psi=0\).
Finally, the property that \(G,\hat{G}\) are given by densities independent of \(\hbar\) can be rewritten as
**3.**\(\tau_{L}\psi=-\frac{n}{4}\psi\) where \(\tau_{L}\) is defined in Remark 2.3 in Section 2. The eigenvalue \(-\frac{n}{4}\) can be seen from the formula (4.29) in Section 4.2.
The discussion above can be summarized as
**Proposition 4.1.1.** Fix functions \(f,\hat{f}\) and the corresponding cones \(C,\hat{C}\) given by equations (4.27). There is one-to-one correspondence between the space of \(g\) such that pair \(f,g\) is admissible and non-zero elements \(\psi\in{\cal WF}_{L}\) satisfying properties \({\bf 1},\)\({\bf 1^{\prime}},{\bf 2},\)\({\bf 3}\) where \(L\) is a \(GL(1)\times GL(1)\)-invariant Lagrangian germ satisfying the genericity condition above. \(\Box\)
### Explicit formulas in general case
Let us write equations on \(\psi\) explicitly. Conditions **1, 2, 3** give
\[\psi=g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\ x_{0}^{- \frac{n+2}{2}}\delta\Big{(}\frac{x_{n+1}}{x_{0}}-f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\Big{)} \tag{4.28}\]
for some function \(g\). Indeed, condition **1** means that \(\psi\) is proportional to the delta function of cone \(C\), condition **2** gives homogeneity condition for the coefficient of proportionality, and condition **3** means that function \(g\) does not depend on \(\hbar\). However, condition \({\bf 1^{\prime}}\) becomes singular in these coordinates because the action of \(F_{2}(\hbar\partial_{x_{0}},...,\hbar\partial_{x_{n+1}})\) in general is not defined on \(\psi\) given by (4.28). In order to overcome this problem, let us make the Fourier transform with respect to the coordinate \(x_{n+1}\). Geometrically this means that we choose the projection
of our cone \(C\) to coordinates \(x_{0},...,x_{n},y_{n+1}\). After making the Fourier transform in \(x_{n+1}\) the element \(\psi\) given by (4.28) takes the form
\[\psi=x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\right)}g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x _{0}}\Big{)} \tag{4.29}\]
and condition \(\mathbf{1}^{\prime}\) means that
\[F_{2}(\hbar\partial_{x_{0}},...\hbar\partial_{x_{n}},y_{n+1})\cdot\psi=0\]
and can be rewritten as an equation on \(g\)
\[F_{2}\Big{(}\hbar\partial_{x_{0}}-\frac{n\hbar}{2x_{0}}+Q_{0},\hbar\partial_{ x_{1}}+Q_{1},...,\hbar\partial_{x_{n}}+Q_{n},y_{n+1}\Big{)}\cdot g\Big{(} \frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}=0 \tag{4.30}\]
where operator \(F_{2}\) in (4.30) is equal to
\[\left(x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x _{0}},...,\frac{x_{n}}{x_{0}}\right)}\right)^{-1}\cdot F_{2}(\hbar\partial_{ x_{0}},...\hbar\partial_{x_{n}},y_{n+1})\cdot x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1} }{\hbar}f\left(\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\right)}\]
and, therefore, \(Q_{0},...,Q_{n}\) are defined by
\[\left(x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x _{0}},...,\frac{x_{n}}{x_{0}}\right)}\right)^{-1}\cdot\hbar\partial_{x_{0}} \cdot x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x_{ 0}},...,\frac{x_{n}}{x_{0}}\right)}=\hbar\partial_{x_{0}}-\frac{n\hbar}{2x_{0} }+Q_{0},\]
\[\left(x_{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x_{ 0}},...,\frac{x_{n}}{x_{0}}\right)}\right)^{-1}\cdot\hbar\partial_{x_{i}}\cdot x _{0}^{-\frac{n}{2}}e^{\frac{x_{0}y_{n+1}}{\hbar}f\left(\frac{x_{1}}{x_{0}},...\frac{x_{n}}{x_{0}}\right)}=\hbar\partial_{x_{i}}+Q_{i},\quad i=1,...,n.\]
Explicitly, we have
\[Q_{0}=y_{n+1}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}-\frac {y_{n+1}}{x_{0}}\sum_{i=1}^{n}x_{i}f_{i}\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x _{n}}{x_{0}}\Big{)},\]
\[Q_{i}=y_{n+1}f_{i}\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}, \quad i=1,...,n\]
where by \(f_{i}\) we denote partial derivative of \(f\) with respect to its \(i-\)th argument.
Notice that \(Q_{0},...,Q_{n}\) are independent of \(\hbar\).
The l.h.s. of (4.30) makes sense as a power series in \(\hbar\). For example, if we choose \(F_{2}\) in the simplest form
\[F_{2}(y_{0},...,y_{n+1})=\frac{y_{0}}{y_{n+1}}+\hat{f}\Big{(}\frac{y_{1}}{y_{n +1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\]
as in (4.26), and take Taylor expansion11 of the l.h.s. of (4.30) at \(\hbar=0\) we obtain a power series in \(\hbar\) where each term is a differential operator (with coefficients written in terms of \(f\)) applied to the function \(g\). This can be written as a system of partial differential equations for unknown functions \(f,g\) linear in \(g\). Notice that this system is infinite in general.
Footnote 11: Computation of this Taylor series is not a straightforward problem because, while the arguments of \(F_{2}\) pairwise commute, the parts of these arguments at different powers of \(\hbar\) do not; for example \(\hbar\partial_{x_{i}}\) does not commute with \(Q_{i}\).
### Algebraic case
Consider the _algebraic_ case where \(f\) is an algebraic function and, therefore, \(\hat{f},\ C,\ \hat{C}\) are algebraic. Let us choose \(F_{1}(x_{0},...,x_{n+1}),\ F_{2}(y_{0},...,y_{n+1})\) to be homogeneous polynomials of certain degrees, such that
\[F_{1}(1,x_{1},...,x_{n},f(x_{1},...,x_{n}))=0,\ \ \ \ \ \ \ F_{2}(1,y_{1},...,y_{n},\hat{f}(y_{1},...,y_{n}))=0.\]
In this special case one can eliminate the parameter \(\hbar\), and the problem of finding an element \(\psi\) satisfying properties \({\bf 1},\ {\bf 1^{\prime}},\ {\bf 2},\ {\bf 3}\) turns into the following purely algebraic question.
Find homogeneous polynomials \(F_{1}(x_{0},...,x_{n+1}),\ F_{2}(y_{0},...,y_{n+1})\) in two dual groups of variables (or more abstractly, in dual vector spaces) such that there exists a non-zero cyclic module \(M\) over the ring of polynomial differential operators
\[{\bf k}[x_{0},...,x_{n+1}][\partial_{x_{0}},...,\partial_{x_{n+1}}]\]
generated by an element \(\psi\in M\) such that
\[\begin{array}{l}F_{1}(x_{0},...,x_{n+1})\cdot(\psi)=0,\\ F_{2}(\partial_{x_{0}},...,\partial_{x_{n+1}})\cdot(\psi)=0,\\ \Big{(}\sum_{i=0}^{n+1}x_{i}\partial_{x_{i}}+\frac{n+2}{2}\Big{)} \psi=0.\end{array} \tag{4.31}\]
Notice that the first equation means that
\[\psi=G_{1}\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}x_{0}^{m} \delta(F_{1}(x_{0},...,x_{n+1}))\]
where the factor in front of the delta function is a homogeneous function of degree \(m\). The third equation means that
\[m-\deg F_{1}=-\frac{n+2}{2}\,.\]
The second equation gives a finite system of partial differential equations on \(F_{1},G_{1}\) linear in \(G_{1}\).
One has a universal module \(M_{\psi}\) generated by the cyclic vector \(\psi\) satisfying the above equations.
Roughly speaking, the study of admissible pairs with algebraic function \(f\) (or equivalently, \(\hat{f}\)) can be rephrased as the study of projectively dual algebraic hypersurfaces in \(\mathbb{P}^{n+1}\), \((\mathbb{P}^{n+1})^{*}\) such that the corresponding finitely generated module \(M_{\psi}\) is not zero.
We will see in the sequel both holonomic and nonholonomic12 examples of modules \(M_{\psi}\).
Footnote 12: Holonomicity at generic point means that the space of \(G_{1}\) is finite-dimensional.
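As an illustration of this dictionary, consider the smooth quadric. For \(F_{1}=2x_{0}x_{n+1}+x_{1}^{2}+...+x_{n}^{2}\) one can take \(F_{2}=2y_{0}y_{n+1}+y_{1}^{2}+...+y_{n}^{2}\); indeed \(F_{2}(\partial_{x_{0}}F_{1},...,\partial_{x_{n+1}}F_{1})=4F_{1}\equiv 0\mod F_{1}\), so the two quadrics are projectively dual. In this case the system (4.31) admits the non-zero solutions
\[\psi=g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\,x_{0}^{\frac{2-n}{2}}\,\delta(F_{1}(x_{0},...,x_{n+1}))\]
with \(g\) an arbitrary harmonic function, as follows from Theorem 3.1.1 combined with Lemma 5.1.1 and Theorem 5.1.1 below.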
**Remark 4.3.1.** Let \(\Sigma\subset\mathbb{P}^{n+1}\) and \(\widehat{\Sigma}\subset(\mathbb{P}^{n+1})^{*}\) be projectively dual hypersurfaces. Let \(\Sigma\) (resp, \(\widehat{\Sigma}\)) be defined by a homogeneous irreducible polynomial \(F_{1}(x_{0},...,x_{n+1})\) (resp. \(F_{2}(y_{0},...,y_{n+1})\)). Given polynomial \(F_{1}\) one can determine polynomial \(F_{2}\) (up to multiplication by a non-zero constant) from the condition: \(F_{2}\) is a homogeneous polynomial of smallest degree such that
\[F_{2}(\partial_{x_{0}}F_{1},...,\partial_{x_{n+1}}F_{1})=0\mod F_{1},\]
and similarly one can find polynomial \(F_{1}\) if polynomial \(F_{2}\) is given. We can define polynomials \(H_{1}\in\mathbf{k}[x_{0},...,x_{n+1}]\) and \(H_{2}\in\mathbf{k}[y_{0},...,y_{n+1}]\) by
\[F_{1}(\partial_{y_{0}}F_{2},...,\partial_{y_{n+1}}F_{2})=(f_{2}-1)F_{2}(y_{0},...,y_{n+1})H_{2}(y_{0},...,y_{n+1}), \tag{4.32}\]
\[F_{2}(\partial_{x_{0}}F_{1},...,\partial_{x_{n+1}}F_{1})=(f_{1}-1)F_{1}(x_{0},...,x_{n+1})H_{1}(x_{0},...,x_{n+1})\]
where \(f_{1}=\deg F_{1},\ f_{2}=\deg F_{2}\). Notice that
\[\deg H_{1}=\deg H_{2}=f_{1}f_{2}-f_{1}-f_{2}.\]
The mappings
\[x_{i}\mapsto\partial_{y_{i}}F_{2},\quad y_{i}\mapsto\partial_{x_{i}}F_{1}, \quad i=1,...,n\]
define birational isomorphisms between projective hypersurfaces \(\Sigma,\ \widehat{\Sigma}\). They are mutually inverse as birational mappings of projective varieties. This can be written algebraically as
\[\partial_{y_{i}}F_{2}\ \Big{|}_{y_{0}=\partial_{x_{0}}F_{1},...,y_{n+1}= \partial_{x_{n+1}}F_{1}}=x_{i}H_{1}\mod\ F_{1},\quad i=1,...,n\]
and
\[\partial_{x_{i}}F_{1}\ \Big{|}_{x_{0}=\partial_{y_{0}}F_{2},...,x_{n+1}= \partial_{y_{n+1}}F_{2}}=y_{i}H_{2}\mod\ F_{2},\quad i=1,...,n.\]
These formulas can be verified by computing derivatives of equations (4.32), reducing modulo \(F_{1}\) or \(F_{2}\) and using Euler's homogeneous function theorem.
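For example, for the smooth quadric one may take \(F_{1}=\sum_{i=0}^{n+1}x_{i}^{2}\) and \(F_{2}=\sum_{i=0}^{n+1}y_{i}^{2}\); then \(F_{2}(\partial_{x_{0}}F_{1},...,\partial_{x_{n+1}}F_{1})=\sum_{i}(2x_{i})^{2}=4F_{1}\), so \(H_{1}=4\) (and similarly \(H_{2}=4\)) is a constant, in agreement with \(\deg H_{1}=\deg H_{2}=f_{1}f_{2}-f_{1}-f_{2}=0\). Moreover \(\partial_{y_{i}}F_{2}\big{|}_{y_{j}=\partial_{x_{j}}F_{1}}=4x_{i}=x_{i}H_{1}\), illustrating the birational formulas above.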
**Remark 4.3.2.** Projective duality interchanges polynomials \(F_{1}(x_{0},...,x_{n+1}),\ F_{2}(y_{0},...,y_{n+1})\). Dual element \(\hat{\psi}\) has a form (up to a constant)
\[\hat{\psi}=G_{1}\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}x_{0} ^{m}\ \Big{|}_{x_{0}=\partial_{y_{0}}F_{2},...,x_{n+1}=\partial_{y_{n+1}}F_{2}}\det \Big{(}\partial_{y_{i}}\partial_{y_{j}}F_{2}\Big{)}^{\frac{1}{2}}H_{2}^{-1} \delta(F_{2}(y_{0},...,y_{n+1}))\]
where \(H_{2}\in\mathbf{k}[y_{0},...,y_{n+1}]\) (and the similar polynomial \(H_{1}\)) are defined above in Remark 4.3.1.
**Remark 4.3.3.** The question of finding pairs of polynomials
\[F_{1}\in\mathbf{k}[x_{0},...,x_{n+1}],\quad F_{2}\in\mathbf{k}[\partial_{x_{0} },...,\partial_{x_{n+1}}]\]
generating a non-zero cyclic module \(M\) over the ring of differential operators, can be generalized to an associative algebra with two maximal commutative subalgebras. In the case of differential operators homogeneity of \(F_{1},\ F_{2}\) follows automatically from the nonvanishing of \(M\) (assuming that both \(F_{1},\ F_{2}\) are irreducible). In other natural cases, such as quantum torus or ring of difference operators we are not aware of any nontrivial example.
**Remark 4.3.4.** The condition that both \(F_{1},\ F_{2}\) are irreducible polynomials defining projectively dual non-degenerate hypersurfaces can be generalized.
First, one can consider the case where \(F_{1}\) is a power of an irreducible polynomial. In terms of the distributions \(G,\hat{G}\) it means that they are finite linear combinations of derivatives of delta functions supported on the corresponding cones. In the original formulation it means that \(g,\hat{g}\) are polynomials in \(\hbar\).
Second, for a pair of projectively dual projective manifolds one (or even both) of them could have codimension larger than one, see Section 3.3.
### Explicit formulas in the algebraic case
Let us choose \(F_{1}\) in the non-polynomial form
\[F_{1}=x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\]
where \(f\) is an algebraic function, and
\[F_{2}\in\mathbf{k}[\partial_{x_{0}},...,\partial_{x_{n+1}}]\]
is an irreducible polynomial as above. In this case properties **1, 2, 3** give (see (4.28))
\[\psi=g\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\ x_{0}^{- \frac{n}{2}}\delta\Big{(}x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x _{n}}{x_{0}}\Big{)}\Big{)} \tag{4.33}\]
and the property \(\mathbf{1}^{\prime}\) reads
\[F_{2}(\partial_{x_{0}},...,\partial_{x_{n+1}})\cdot(\psi)=0 \tag{4.34}\]
where \(\psi\) is given by (4.33). The cyclic \(D\)-module over the ring \(\mathbf{k}[\partial_{x_{0}},...,\partial_{x_{n+1}}]\) generated by \(\psi\) is contained in the bigraded vector space whose \((a,b)\)-graded component is spanned by expressions of the form
\[h\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\ x_{0}^{a}\delta^{( b)}\Big{(}x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}} \Big{)}\Big{)}\]
where \(h\) is a function in \(n\) variables, \(a\in-\frac{n}{2}-\mathbb{Z}_{\geq 0}\), \(b\in\mathbb{Z}_{\geq 0}\). We can identify the above bigraded vector space, as a module over \(\mathbf{k}[\partial_{x_{0}},...,\partial_{x_{n+1}}]\), with another one spanned by
\[h\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\ \frac{x_{0}^{a}}{ \Big{(}x_{n+1}-x_{0}f\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)} \Big{)}^{b+1}}\]
where \(h,a,b\) are as above. The isomorphism is given by \(s\delta^{(b)}(t)\mapsto s\frac{(-1)^{b}b!}{t^{b+1}}\).
The l.h.s. of (4.34) lies in the direct sum of bigraded components with \(a=-\frac{n}{2}-k,\ b=\deg F_{2}-k\) where \(k=0,...,\deg F_{2}\).
Introduce new variables \(v_{0},...,v_{n+1}\) such that
\[v_{i}=\frac{x_{i}}{x_{0}},\ i=1,...,n,\ \ \ \ v_{0}=x_{0},\ \ \ \ v_{n+1}=x_{n+1}-x_{0}f \Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}.\]
In these variables element \(\psi\) given by (4.33) reads as
\[\psi=g\big{(}v_{1},...,v_{n}\big{)}\ v_{0}^{-\frac{n}{2}}v_{n+1}^{-1}. \tag{4.35}\]
Partial derivatives \(\partial_{x_{0}},...,\partial_{x_{n+1}}\) in new variables become more complicated differential operators
\[\partial_{x_{i}}\mapsto D_{i},\ i=0,...,n+1\]
where
\[D_{0}=-\frac{1}{v_{0}}\sum_{j=1}^{n}v_{j}\partial_{v_{j}}+\partial_{v_{0}}+ \Big{(}\sum_{j=1}^{n}v_{j}f_{v_{j}}-f\Big{)}\partial_{v_{n+1}}\]
\[D_{i}=\frac{1}{v_{0}}\partial_{v_{i}}-f_{v_{i}}\partial_{v_{n+1}},\ i=1,...,n, \ \ \ \ D_{n+1}=\partial_{v_{n+1}}\]
where \(f=f(v_{1},...,v_{n}),\ f_{v_{i}}=\frac{\partial f}{\partial v_{i}}\).
The final conclusion is that in coordinates \(v_{0},...,v_{n+1}\) the system of equations for \(f,g\) can be written as
\[F_{2}(D_{0},...,D_{n+1})\cdot\Big{(}g(v_{1},...,v_{n})v_{0}^{-\frac{n}{2}}v_{ n+1}^{-1}\Big{)}=0. \tag{4.36}\]
To obtain a system of partial differential equations for \(f,g\) one should equate coefficients at all powers of \(v_{0},v_{n+1}\) in (4.36) to zero. In this way we get a system of \(\deg F_{2}\) differential equations on \(f,g\).
Notice that if our cone \(C\) is given parametrically by (3.20), we can rewrite our system (4.36) in parametric form by doing the corresponding change of variables.
### A question about monodromic regular holonomic \(D\)-modules
One can try to look for admissible pairs of a special type, when both homogeneous distributions \(G,\ \hat{G}\) generate _regular holonomic_\(D\)-modules.
Recall the classical result by J. L. Brylinski [2] which says that regular holonomic \(D\)-modules on a vector space \(V\) such that their Fourier transforms are also regular holonomic are exactly those for which the action of Euler vector field \(E=\sum_{i}x_{i}\partial_{x_{i}}\) is locally finite. Such \(D\)-modules are called _monodromic regular holonomic_.
The singular support \(SS(M)\) of such a \(D\)-module \(M\) is \(GL(1)\times GL(1)\)-invariant (possibly reducible) Lagrangian cone \(L\subset T^{*}V=V\oplus V^{*}\), and it coincides with the singular support of its Fourier transform \(SS({\cal F}(M))\) under the identification \(T^{*}V=V\oplus V^{*}=T^{*}V^{*}\).
Thus, we arrive at the following problem: study monodromic regular holonomic \(D\)-modules \(M\) such that \(SS(M)\) does not contain \(V\times\{0\}\) and \(\{0\}\times V^{*}\). Indeed, in this case any non-zero element \(\psi\in M\) homogeneous with respect to \(E\) is killed by some non-trivial homogeneous polynomials \(F_{1}(x_{i}),\ F_{2}(\partial_{x_{i}})\).
Riemann-Hilbert correspondence identifies monodromic regular holonomic \(D\)-modules with so-called monodromic perverse sheaves on \(V\). Thus, one can reformulate the above problem in purely topological terms, concerning finite-dimensional representations of the fundamental group of \(L\setminus L^{sing}\).
Finally, for an irreducible algebraic cone \(C\subset V\), such that \(C\neq 0,V\) there is a natural \(GL(1)\times GL(1)\) invariant irreducible Lagrangian cone \(L\) which is the conormal bundle to \(C\). Notice that \(L=L_{C}\neq V\times\{0\},\ \{0\}\times V^{*}\). So \(L_{C}\) is a natural candidate for the singular support.
**Question 4.5.1.** For which \(C\) there exists a monodromic regular holonomic \(D\)-module whose singular support is \(L_{C}\), may be with multiplicities?
## 5 Examples of admissible hypersurfaces and corresponding pairs
In this Section we assume that \({\bf k}=\bar{\bf k}\).
### Quadratic hypersurfaces
**Lemma 5.1.1.** Let \(f(x_{1},...,x_{n})=-\frac{1}{2}(x_{1}^{2}+...+x_{n}^{2})\). Then pair \(f,g\) is admissible iff the function \(g(x_{1},...,x_{n})\) is harmonic, i.e.
\[\frac{\partial^{2}g}{\partial x_{1}^{2}}+...+\frac{\partial^{2}g}{\partial x_ {n}^{2}}=0.\]
We have in this case
\[\int g(x_{1},...,x_{n})e^{\frac{1}{\hbar}(-\frac{1}{2}x_{1}^{2}-...-\frac{1}{ 2}x_{n}^{2}+x_{1}y_{1}+...+x_{n}y_{n})}dx_{1}...dx_{n}=(2\pi\hbar)^{\frac{n}{2 }}g(y_{1},...,y_{n})e^{\frac{1}{2\hbar}(y_{1}^{2}+...+y_{n}^{2})}.\]
**Proof.** For arbitrary function \(g\) we have
\[\int g(x_{1},...,x_{n})e^{\frac{1}{\hbar}(-\frac{1}{2}x_{1}^{2}-...-\frac{1}{ 2}x_{n}^{2}+x_{1}y_{1}+...+x_{n}y_{n})}dx_{1}...dx_{n}=(2\pi\hbar)^{\frac{n}{2 }}e^{\frac{1}{2\hbar}(y_{1}^{2}+...+y_{n}^{2})}\sum_{i=0}^{\infty}\frac{\hbar ^{i}}{i!}\Delta^{i}g(y_{1},...,y_{n})\]
where \(\Delta=\frac{\partial^{2}}{\partial y_{1}^{2}}+...+\frac{\partial^{2}}{ \partial y_{n}^{2}}\). The r.h.s. consists only of the first term iff \(\Delta g=0\). \(\square\)
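As a simple illustration, for \(n=1\) the harmonic functions are exactly the affine-linear ones \(g(x_{1})=c_{0}+c_{1}x_{1}\), and the corresponding identity
\[\int(c_{0}+c_{1}x_{1})e^{\frac{1}{\hbar}(-\frac{1}{2}x_{1}^{2}+x_{1}y_{1})}dx_{1}=(2\pi\hbar)^{\frac{1}{2}}(c_{0}+c_{1}y_{1})e^{\frac{y_{1}^{2}}{2\hbar}}\]
is an elementary Gaussian moment computation; this explains the rank-two claim for \(n=1\) in Theorem 5.1.1 below.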
**Theorem 5.1.1.** Let \(Q=\sum_{i,j=0}^{n+1}a_{i,j}x_{i}x_{j}\) be a non-degenerate quadric. Here \(a_{i,j}=a_{j,i}\) and \(\det(a_{i,j})\neq 0\). Then the hypersurface in \(\mathbb{P}^{n+1}\) defined by \(Q=0\) is admissible. Its rank is two if \(n=1\) and infinity otherwise.
**Proof.** Any such quadric is \(GL(n+2)\) equivalent to \(Q_{0}=2x_{0}x_{n+1}+x_{1}^{2}+...+x_{n}^{2}\). The projective hypersurface \(Q_{0}=0\) is the projectivization of the affine hypersurface defined by the equation \(x_{n+1}=-\frac{1}{2}(x_{1}^{2}+...+x_{n}^{2})\), which is admissible by Lemma 5.1.1. Therefore, the hypersurface \(Q=0\) is also admissible by projective invariance. \(\square\)
**Remark 5.1.1.** Solving the equation
\[Q(1,x_{1},...,x_{n},f)=0\]
for an arbitrary non-degenerate quadric \(Q\) we obtain a family of functions \(f\) which depends on a lot of parameters \(a_{i,j}\). One can see that by applying the group of transformations
\[x_{i}\mapsto\sum_{j=1}^{n}q_{i,j}x_{j}+b_{i},\quad f\mapsto\lambda f+\sum_{j=1}^{n}\mu_{j}x_{j}+\nu\]
one can reduce \(f\) to one of the following forms:
**a)**\(f(x_{1},...,x_{n})=(x_{1}^{2}+...+x_{n}^{2}+1)^{\frac{1}{2}}\),
**b)**\(f(x_{1},...,x_{n})=(x_{1}^{2}+...+x_{n-1}^{2}+x_{n})^{\frac{1}{2}}\) for \(n>1\), and \(f(x_{1})=x_{1}^{\frac{1}{2}}\) for \(n=1\),
**c)**\(f(x_{1},...,x_{n})=\frac{x_{1}^{2}+...+x_{n-1}^{2}+1}{x_{n}}\) for \(n>1\), and \(f(x_{1})=\frac{1}{x_{1}}\) for \(n=1\),
**d)**\(f(x_{1},...,x_{n})=-x_{1}^{2}-...-x_{n}^{2}\).
In each of the cases above pair \(f,g\) is admissible iff \(g\) satisfies a certain second order partial differential equation which can be obtained from harmonic equation in Lemma 5.1.1 by the action of \(GL(n+2)\), see Section 3.2.
It would be interesting to lift, if possible, the corresponding formal integrals of the form (1.6) to actual convergent integral identities. For example, it is known [3] that if
\[g(\vec{x})=\frac{1}{\sqrt{1+\vec{x}^{2}}\cdot(1+\sqrt{1+\vec{x}^{2}})^{\frac{ n-2}{2}}}\]
then
\[\int_{\mathbb{R}^{n}}g(\vec{x})e^{-\frac{1}{\hbar}\sqrt{1+\vec{x}^{2}}+\frac{i }{\hbar}\vec{x}\vec{y}}d\vec{x}=(2\pi\hbar)^{\frac{n}{2}}g(\vec{y})e^{-\frac{1 }{\hbar}\sqrt{1+\vec{y}^{2}}}\]
### Hypersurfaces admitting quadratic parametrization
Let \(C\subset\mathbb{A}^{n+2}\) be a cone given parametrically by
\[x_{i}=\frac{1}{2}\sum_{j,k=0}^{n}a_{i,j,k}u_{j}u_{k},\quad i=0,...,n+1\]
where \(a_{i,j,k}\in\mathbf{k}\) are constants and \(a_{i,k,j}=a_{i,j,k}.\) The dual cone \(\hat{C}\) is given by
\[\det\Bigg{(}\sum_{i=0}^{n+1}a_{i,j,k}y_{i}\Bigg{)}_{0\leq j,k\leq n}=0.\]
Indeed, let \(Q=x_{0}y_{0}+...+x_{n+1}y_{n+1}=\frac{1}{2}\sum_{i=0}^{n+1}\sum_{j,k=0}^{n}a_{ i,j,k}y_{i}u_{j}u_{k}.\) The point \((y_{0},...,y_{n+1})\in\mathbb{A}^{n+2}\) belongs to the dual cone \(\hat{C}\) iff the linear system \(\partial_{u_{i}}Q=0,\ i=0,...,n\) for \(u_{0},...,u_{n}\) has a non-zero solution.
It follows from Theorem 3.1.1 that cone \(C\) is admissible iff
\[\int g_{1}\Big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\Big{)}\ x_{0}^{- \frac{n}{2}}e^{\frac{1}{\hbar}(x_{0}y_{n+1}f\big{(}\frac{x_{1}}{x_{0}},..., \frac{x_{n}}{x_{0}}\big{)}+x_{0}y_{0}+...+x_{n}y_{n})}dx_{0}...dx_{n}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Big{(}\frac{y_{0}}{y_{n+1}}+\hat{f}\Big{(} \frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}\Big{)}g_{2}\Big{(} \frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{-\frac{n+2}{2}}\]
for some functions \(g_{1},g_{2},\) where \(x_{0}y_{n+1}f\big{(}\frac{x_{1}}{x_{0}},...,\frac{x_{n}}{x_{0}}\big{)}=x_{n+1}\) and \(\hat{f}\) corresponds to the dual cone \(\hat{C}\). Substituting our quadratic parametrization in the l.h.s. and taking into account the equation for the dual cone we get
\[\int g(u_{0},...,u_{n})e^{\frac{1}{2\hbar}\sum_{i=0}^{n+1}\sum_{j,k=0}^{n}a_{ i,j,k}y_{i}u_{j}u_{k}}du_{0}...du_{n}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\delta\Bigg{(}\det\Bigg{(}\sum_{i=0}^{n+1}a_{i,j,k }y_{i}\Bigg{)}_{0\leq j,k\leq n}\Bigg{)}\hat{g}\Big{(}\frac{y_{1}}{y_{n+1}},...,\frac{y_{n}}{y_{n+1}}\Big{)}y_{n+1}^{\frac{n}{2}}\]
where \(g\) is a homogeneous function of degree 1.
**Theorem 5.2.1.** Any non-degenerate cone \(C\) admitting a quadratic parametrization is admissible, with rank greater than or equal to \(n+1\). Moreover, any linear homogeneous function \(g\) is admissible.
**Proof.** Without any loss of generality we can set \(g(u_{0},...,u_{n})=u_{0}\) because of the \(GL(n+1)\)-action on variables \(u_{0},...,u_{n}\). In order to compute the integral
\[\int u_{0}e^{\frac{1}{2\hbar}\sum_{i=0}^{n+1}\sum_{j,k=0}^{n}a_{i,j,k}y_{i}u_{ j}u_{k}}du_{0}...du_{n} \tag{5.37}\]
in our formalism we need the following
**Lemma 5.2.1.** The following identities hold
\[\int e^{\frac{1}{\hbar}(\frac{1}{2}\sum_{i,j=1}^{n}a_{i,j}u_{i}u_{j}+\sum_{i=1}^{ n}b_{i}u_{i})}du_{1}...du_{n}=\frac{(2\pi\hbar)^{\frac{n}{2}}}{(\det A)^{\frac{ 1}{2}}}e^{-\frac{1}{2\hbar}\vec{b}A^{-1}\vec{b}}\]
\[\vec{b}A^{-1}\vec{b}^{t}-c=-\frac{\det\left(\begin{matrix}A&\vec{b}^{t}\\ \vec{b}&c\end{matrix}\right)}{\det A}\]
where \(A=(a_{i,j})\) is a non-degenerate symmetric \(n\times n\) matrix, \(\vec{b}=(b_{1},...,b_{n})\) is a vector and \(c\) is a constant.
**Proof.** The first identity is a standard Gaussian integral up to multiplication by \(\pm i\). The second identity is obtained by the last row and the last column expansion of the determinant in numerator in the r.h.s. \(\square\)
**Lemma 5.2.2.** The following identity holds
\[\frac{1}{2\pi\hbar}\int ue^{\frac{au^{2}}{\hbar}}du=\delta(a).\]
**Proof.** Multiply both sides of this identity by \(e^{\frac{av}{\hbar}}\) and integrate over \(a\). In the r.h.s. we get 1, and in the l.h.s.
\[\frac{1}{2\pi\hbar}\int ue^{\frac{au^{2}+av}{\hbar}}duda=\int\delta(u^{2}+v)udu=1.\]
\(\square\)
To compute the integral (5.37) we first integrate with respect to \(u_{1},...,u_{n}\) using identities from Lemma 5.2.1 and get
\[\frac{(2\pi\hbar)^{\frac{n}{2}}}{\Big{(}\det\Big{(}\sum_{i=0}^{n+1}a_{i,j,k}y_ {i}\Big{)}_{1\leq j,k\leq n}\Big{)}^{\frac{1}{2}}}\int u_{0}\ e^{\frac{u_{0}^{2}}{2 \hbar}\frac{\det\left(\sum_{i=0}^{n+1}a_{i,j,k}y_{i}\right)_{0\leq j,k\leq n} }{\det\left(\sum_{i=0}^{n+1}a_{i,j,k}y_{i}\right)_{1\leq j,k\leq n}}}\ du_{0}.\]
Finally, we integrate by \(u_{0}\) using Lemma 5.2.2 and get
\[\int u_{0}e^{\frac{1}{2\hbar}\sum_{i=0}^{n+1}\sum_{j,k=0}^{n}a_{i,j,k}y_{i}u_{ j}u_{k}}du_{0}...du_{n}=\]
\[(2\pi\hbar)^{\frac{n+2}{2}}\Big{(}\det\Big{(}\sum_{i=0}^{n+1}a_{i,j,k}y_{i} \Big{)}_{1\leq j,k\leq n}\Big{)}^{\frac{1}{2}}2\delta\Bigg{(}\det\Bigg{(}\sum _{i=0}^{n+1}a_{i,j,k}y_{i}\Bigg{)}_{0\leq j,k\leq n}\Bigg{)}.\]
\(\square\)
One can reformulate the previous theorem using formalism from Section 3.3. Namely, consider projectively dual cones
\[C_{univ}=\{X\in Mat((n+1)\times(n+1))\ |\ X^{t}=X,\ rkX\leq 1\}\subset\mathbb{A}^{ \frac{(n+1)(n+2)}{2}},\]
\[\hat{C}_{univ}=\{Y\in Mat((n+1)\times(n+1))\ |\ Y^{t}=Y,\ \det(Y)=0\}\subset( \mathbb{A}^{\frac{(n+1)(n+2)}{2}})^{*}.\]
Notice that \(\dim C_{univ}=n+1\) while \(\hat{C}_{univ}\) is a hypersurface. Cone \(C_{univ}\) admits quadratic parametrization \(X=(x_{i,j})_{0\leq i,j\leq n}\) where \(x_{i,j}=u_{i}u_{j}\) (passing to the projectivization we obtain the Veronese map \(\mathbb{P}^{n}\rightarrow\mathbb{P}^{\frac{(n+1)(n+2)}{2}-1}\) from the classical algebraic geometry).
**Theorem 5.2.2.** The Fourier transform of the density \(u_{0}du_{0}...du_{n}\) on \(C_{univ}\) is supported on \(\hat{C}_{univ}\). More precisely
\[\int e^{\frac{1}{2\hbar}\sum_{j,k=0}^{n}y_{j,k}u_{j}u_{k}}u_{0}du_{0}...du_{n} =(2\pi\hbar)^{\frac{n+2}{2}}(\det(y_{j,k})_{1\leq j,k\leq n})^{\frac{1}{2}}2 \delta(\det(y_{j,k})_{0\leq j,k\leq n}).\]
The proof is omitted as it is essentially the same as for Theorem 5.2.1.
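In the smallest case \(n=1\) the statement reads
\[\int u_{0}e^{\frac{1}{2\hbar}(y_{0,0}u_{0}^{2}+2y_{0,1}u_{0}u_{1}+y_{1,1}u_{1}^{2})}du_{0}du_{1}=(2\pi\hbar)^{\frac{3}{2}}\,y_{1,1}^{\frac{1}{2}}\,2\,\delta(y_{0,0}y_{1,1}-y_{0,1}^{2}),\]
i.e. the Fourier transform of the density \(u_{0}du_{0}du_{1}\) on the cone of rank one symmetric \(2\times 2\) matrices is supported on the cone of degenerate symmetric \(2\times 2\) matrices.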
One can deduce geometrically Theorem 5.2.1 from Theorem 5.2.2 in the following way. The choice of \((a_{i,j,k})\) gives a linear projection \(\mathbb{A}^{\frac{(n+1)(n+2)}{2}}\twoheadrightarrow\mathbb{A}^{n+2}\). The image of \(C_{univ}\) becomes a conical hypersurface in \(\mathbb{A}^{n+2}\). The Fourier transform of the density \(u_{0}du_{0}...du_{n}\) on the image is the restriction to the subspace \((\mathbb{A}^{n+2})^{*}\hookrightarrow(\mathbb{A}^{\frac{(n+1)(n+2)}{2}})^{*}\) of the corresponding density on \(\hat{C}_{univ}\) and is supported on \((\mathbb{A}^{n+2})^{*}\cap\hat{C}_{univ}=\hat{C}\).
**Remark 5.2.1.** It would be interesting to study homogeneous densities supported on conical orbit of reductive groups in finite-dimensional representations such that their Fourier transforms are supported on hypersurfaces (or cones of lower dimensions). By a similar consideration as above, this could lead to a new class of admissible conical hypersurfaces beyond those admitting quadratic parametrization.
**Remark 5.2.2.** One can write system of differential equations for admissible functions \(g(u_{0},...,u_{n})\) using results of Section 4.3, see for example (4.36). This system depends on tensor \(a_{i,j,k}\). It could be holonomic or nonholonomic depending on \(a_{i,j,k}\). If it is holonomic, then \(rk(C)\) is finite, otherwise it is infinite.
**Conjecture 5.2.1.** If \(rk(C)\) is finite, then any admissible \(g\) has a form
\[g(u_{0},...,u_{n})=\frac{Q}{P_{1}...P_{k}},\quad k=0,1,...\]
where \(Q\) is a homogeneous polynomial in \(u_{0},...,u_{n}\) of degree \(k+1\), and \(P_{1},...,P_{k}\) are homogeneous linear polynomials in \(u_{0},...,u_{n}\).
**Remark 5.2.3.** Theorem 5.2.1 gives the lower bound \(n+1\) for the rank \(rk(C)\) of cone \(C\) admitting quadratic parametrization. Experiments show that if \(n\geq 3\) and tensor \(a_{i,j,k}\) is generic, then this bound is attained and \(rk(C)=n+1\). If \(n=2\) and tensor \(a_{i,j,k}\) is generic, then \(rk(C)=6\), see Section 5.4. On the other hand, \(rk(C)\) could be infinite for \(n\geq 2\). For example, the standard smooth quadric hypersurface admits a quadratic parametrization, and its rank is infinite for \(n\geq 2\). It would be interesting to find \(rk(C)\) in terms of algebraic properties of tensor \(a_{i,j,k}\). In particular, it would be interesting to find conditions for tensor \(a_{i,j,k}\) equivalent to the property \(rk(C)>n+1\).
**Conjecture 5.2.2.** If \(rk(C)\) is finite, then \(rk(C)\leq(n+1)!\).
It would be interesting to prove this Conjecture and classify all tensors \(a_{i,j,k}\) such that \(rk(C)=(n+1)!\). See Section 5.4 where we construct an example of such cone for arbitrary \(n\).
### Ruled surfaces in \(\mathbb{P}^{3}\)
**Theorem 5.3.1.** Let \(\Sigma\subset\mathbb{P}^{3}\) be a surface defined parametrically by
\[x_{0}=1,\quad x_{1}=p_{1}(u_{2})+q_{1}(u_{2})u_{1},\quad x_{2}=p_{2}(u_{2})+q_ {2}(u_{2})u_{1},\quad x_{3}=p_{3}(u_{2})+q_{3}(u_{2})u_{1}\]
where \(p_{1},p_{2},p_{3},q_{1},q_{2},q_{3}\) are arbitrary generic functions in one variable. Then \(\Sigma\) is admissible and its rank is infinity.
The corresponding admissible pairs \(f(x_{1},x_{2}),g(x_{1},x_{2})\) have the following parametrization
\[f=p_{3}(u_{2})+q_{3}(u_{2})u_{1},\quad x_{1}=p_{1}(u_{2})+q_{1}(u_{2})u_{1},\quad x_{2}=p_{2}(u_{2})+q_{2}(u_{2})u_{1}, \tag{5.38}\]
\[g=\frac{h(u_{2})}{q_{1}(u_{2})\big{(}p_{2}^{\prime}(u_{2})+q_{2}^{\prime}(u_{2 })u_{1}\big{)}-q_{2}(u_{2})\big{(}p_{1}^{\prime}(u_{2})+q_{1}^{\prime}(u_{2}) u_{1}\big{)}}\]
where \(h(u_{2})\) is an arbitrary function.
**Proof.** Substituting the parametric form (5.38) in the l.h.s. of (1.6) we get
\[\int h(u_{2})e^{\frac{1}{\hbar}(p_{3}(u_{2})+q_{3}(u_{2})u_{1}+(p_{1}(u_{2})+q_{1}(u_{2})u_{1})y_{1}+(p_{2}(u_{2})+q_{2}(u_{2})u_{1})y_{2})}du_{1}du_{2}\]
After integrating by \(u_{1}\) we obtain
\[2\pi\hbar\int\delta(q_{3}(u_{2})+q_{1}(u_{2})y_{1}+q_{2}(u_{2})y_{2})h(u_{2})e^{\frac{1}{\hbar}(p_{3}(u_{2})+p_{1}(u_{2})y_{1}+p_{2}(u_{2})y_{2})}du_{2}\]
and integrating by \(u_{2}\) we obtain an expression of a form of the r.h.s. of (1.6). \(\square\)
**Remark 5.3.1.** Notice that the parametric representation (5.38) admits the change of variables \(u_{1}\to a(u_{2})u_{1}+b(u_{2}),\ u_{2}\to c(u_{2})\), so the family of admissible functions (5.38) is
parameterized by four functions in one variable. We can set, for example, \(p_{1}(u_{2})=0,q_{1}(u_{2})=1,q_{2}(u_{2})=u_{2}\).
**Remark 5.3.2.** For a generic ruled surface the whole space \(V_{f}\) of admissible functions \(g\) is given by (5.38). However, in special cases the space \(V_{f}\) of admissible functions \(g\) can be bigger. For example, it is bigger if \(f\) corresponds to a quadric. If \(f(x_{1},x_{2})=x_{1}x_{2}\), then \(V_{f}\) consists of functions of the form \(g(x_{1},x_{2})=h_{1}(x_{1})+h_{2}(x_{2})\) where \(h_{1},h_{2}\) are arbitrary functions in one variable. It would be interesting to study in detail for which ruled surfaces the space \(V_{f}\) is bigger than the one given by (5.38).
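The claim about \(f(x_{1},x_{2})=x_{1}x_{2}\) can be checked directly using (2.16): integrating first over \(x_{1}\) (respectively over \(x_{2}\)) one obtains
\[\int(h_{1}(x_{1})+h_{2}(x_{2}))e^{\frac{1}{\hbar}(x_{1}x_{2}+x_{1}y_{1}+x_{2}y_{2})}dx_{1}dx_{2}=2\pi\hbar\,(h_{1}(-y_{2})+h_{2}(-y_{1}))e^{-\frac{y_{1}y_{2}}{\hbar}},\]
so the pair is admissible with \(\hat{f}(y_{1},y_{2})=-y_{1}y_{2}\) and \(\hat{g}(y_{1},y_{2})=h_{1}(-y_{2})+h_{2}(-y_{1})\).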
### Steiner Roman surface
Hypersurfaces admitting quadratic parametrization in the case \(n=2\) are equivalent to each other for the generic tensor \(a_{i,j,k}\). The corresponding projective variety is called Steiner Roman surface [4].
**Theorem 5.4.1.** Let \(\Sigma\subset\mathbb{P}^{3}\) be a surface defined parametrically by
\[x_{i}=q_{i}(u_{1},u_{2}),\ i=0,...,3\]
where \(q_{i}\) are generic nonhomogeneous quadratic polynomials in \(u_{1},u_{2}\). Then \(\Sigma\) is admissible and its rank is \(6\).
**Proof.** It is known that any such surface is equivalent to projectivization of the affine surface defined by \(x_{3}=f(x_{1},x_{2})\) where
\[f(x_{1},x_{2})=x_{1}x_{2}+\frac{x_{1}}{x_{2}}+\frac{x_{2}}{x_{1}}.\]
In this case pair \(f,g\) is admissible iff
\[g(x_{1},x_{2})=c_{1}+\frac{c_{2}}{x_{1}}+\frac{c_{3}}{x_{2}}+c_{4}\,\left(\frac {1}{x_{1}^{2}}-\frac{1}{x_{2}^{2}}\right)+c_{5}\,\left(x_{1}-\frac{x_{1}}{x_{2 }^{2}}\right)+c_{6}\,\left(x_{2}-\frac{x_{2}}{x_{1}^{2}}\right)\]
where \(c_{1},...,c_{6}\) are constants. One can check this by solving the system of differential equations (4.36) with given function \(f\). \(\Box\)
### Generalized Steiner Roman hypersurfaces
Let cone \(C\subset\mathbb{A}^{n+2}\) for \(n\geq 2\) be given by
\[x_{0}^{\frac{1}{2}}+x_{1}^{\frac{1}{2}}+...+x_{n+1}^{\frac{1}{2}}=0.\]
**Remark 5.5.1.** If \(n=2\), then \(C\) is equivalent to Steiner Roman surface (Section 5.4).
**Theorem 5.5.1.** The cone \(C\) (and therefore its dual \(\hat{C}\) ) is admissible with the rank equal to \((n+1)!\).
**Proof.** The equation for the dual cone \(\hat{C}\) can be written as
\[\frac{1}{y_{0}}+\frac{1}{y_{1}}+...+\frac{1}{y_{n+1}}=0\]
or, equivalently, as
\[\sum_{i=0}^{n+1}y_{0}...\widehat{y_{i}}...y_{n+1}=0.\]
Therefore, we can choose the formal wave function in the form
\[\tilde{g}(x_{0},...,x_{n})\delta\Big{(}x_{0}^{\frac{1}{2}}+x_{1}^{\frac{1}{2}}+...+x_{n+1}^{\frac{1}{2}}\Big{)}\]
and it should be annihilated by differential operator
\[\sum_{i=0}^{n+1}\partial_{x_{0}}...\widehat{\partial}_{x_{i}}...\partial_{x_{n +1}}.\]
Let us make the change of variables \(x_{i}=u_{i}^{2},\ \ i=0,...,n+1\). In the new variables our wave function reads
\[\psi=g(u_{0},...,u_{n})\delta(u_{0}+u_{1}+...+u_{n+1})\]
and the differential operator (up to a coefficient and after multiplication by \(u_{0}u_{1}...u_{n+1}\)) takes the form
\[D=\sum_{i=0}^{n+1}u_{i}\partial_{u_{0}}...\widehat{\partial}_{u_{i}}... \partial_{u_{n+1}}.\]
The equation \(D(\psi)=0\) is equivalent to the following system of differential equations for the function \(g(u_{0},...,u_{n})\)
\[\sum_{i=0}^{n}u_{i}\partial_{u_{i}}g+(n+1)g=0 \tag{5.39}\]
\[\sum_{0\leq i_{1}<i_{2}\leq n}(u_{i_{1}}+u_{i_{2}})\partial_{u_{i_{1}}} \partial_{u_{i_{2}}}g+n\sum_{i=0}^{n}\partial_{u_{i}}g=0\]
\[..
**Lemma 5.5.1.** The system (5.39) is holonomic and has no more than \((n+1)!\)-dimensional space of solutions at generic point \((u_{0},...,u_{n})\).
**Proof.** The system (5.39) defines a cyclic \(D\)-module \(M\) generated by \(g\). The filtration \(D^{\leq i}\) by the degree of differential operators, after being applied to the generator \(g\), induces a good filtration on \(M\). The associated graded space \(grM\) is a graded module over the polynomial ring \({\bf k}[u_{0},...,u_{n},v_{0},...,v_{n}]\) where \(\deg u_{i}=0,\ \deg v_{i}=1\) and \(v_{i}\) are the images of \(\partial_{u_{i}}\) in the polynomial ring. The module \(grM\) is also cyclic and defined by equations
\[\Big{(}u_{0}v_{0}+...+u_{n}v_{n}\Big{)}h=0\]
\[\sum_{0\leq i_{1}<i_{2}\leq n}(u_{i_{1}}+u_{i_{2}})v_{i_{1}}v_{i_{2}}h=0\]
\[................................................................................................\]
\[(u_{0}+...+u_{n})v_{0}...v_{n}h=0.\]
One can check that for generic \(u_{i}\) (e.g. \(u_{0}=...=u_{n}=1\)) the above system gives a \(0\)-dimensional scheme which is a complete intersection of length \((n+1)!\), the product of degrees in \(v_{i}\) of our equations. \(\Box\)
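For instance, for \(n=1\) and \(u_{0}=u_{1}=1\) the equations become \((v_{0}+v_{1})h=0\) and \(2v_{0}v_{1}h=0\), a complete intersection of degrees \(1\) and \(2\) supported at the origin of the \((v_{0},v_{1})\)-plane, of length \(2=(n+1)!\).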
**Lemma 5.5.2.** Let \(\{0,1,...,n+1\}=P_{1}\sqcup...\sqcup P_{m+1},\ \ m=1,...,n+1\) be an arbitrary partition. Set \(u_{n+1}=-u_{0}-...-u_{n}\). Then the following functions
\[g=\frac{1}{u_{0}...u_{n+1}}\cdot\frac{u_{i_{1}}...u_{i_{m+1}}}{\sum_{\alpha\in P _{1}}u_{\alpha}\cdot\sum_{\alpha\in P_{2}}u_{\alpha}\cdot...\cdot\sum_{\alpha \in P_{m}}u_{\alpha}},\ \ \ i_{1}\in P_{1},...,i_{m+1}\in P_{m+1}\]
satisfy the system (5.39). The vector space spanned by these functions is \((n+1)!\)-dimensional.
**Proof.** This can be proved by induction as follows. Without loss of generality we can set \(P_{1}=\{0,...,k\}\) and \(i_{1}=k\) because of \(S_{n+1}\) action on the system (5.39). Write
\[g(u_{0},...,u_{n})=\frac{g_{1}(u_{k+1},...,u_{n})}{u_{0}...u_{k-1}(u_{0}+...+u _{k})}\]
and check that \(g\) satisfies (5.39) iff \(g_{1}\) satisfies similar system with \(n-k\) variables \(u_{k+1},...,u_{n}\). \(\Box\)
**Example 5.5.1.** Let \(n=4\), \(P_{1}=\{0,1\},\ P_{2}=\{2,3\},\ P_{3}=\{4,5\}\) and choose \(0\in P_{1},\ 2\in P_{2},\ 4\in P_{3}\). We have
\[g=\frac{1}{u_{0}u_{1}u_{2}u_{3}u_{4}u_{5}}\cdot\frac{u_{0}u_{2}u_{4}}{(u_{0}+u _{1})(u_{2}+u_{3})}.\]
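As a sanity check (ours, not from the paper), one can verify symbolically that this \(g\), with \(u_{5}=-(u_{0}+...+u_{4})\), satisfies the first two equations of the system (5.39) written out above.

```python
# Sketch (ours): verify that the g of Example 5.5.1 satisfies the first two
# equations of (5.39) for n = 4, after eliminating u_5 = -(u_0 + ... + u_4).
import sympy as sp
from itertools import combinations

u = sp.symbols('u0:5')
u5 = -sum(u)
g = (u[0]*u[2]*u[4]) / (u[0]*u[1]*u[2]*u[3]*u[4]*u5*(u[0] + u[1])*(u[2] + u[3]))

eq1 = sum(ui*sp.diff(g, ui) for ui in u) + 5*g
eq2 = (sum((ui + uj)*sp.diff(g, ui, uj) for ui, uj in combinations(u, 2))
       + 4*sum(sp.diff(g, ui) for ui in u))

print(sp.simplify(eq1), sp.simplify(eq2))   # expected: 0 0
```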
**Remark 5.5.2.** We have \(\deg C=2^{n}\) and \(\deg\hat{C}=n+1\). In particular, \(\hat{C}\) is not equivalent to \(C\) and gives another example of admissible cone with the rank equal to \((n+1)!\).
### Kummer surfaces in \(\mathbb{P}^{3}\)
Recall that the Kummer surface is a quartic surface in \(\mathbb{P}^{3}\) with 16 double points. Abstractly it is the quotient of a principally polarized two dimensional abelian variety by the antipodal involution.
**Theorem 5.6.1.** Any Kummer surface is admissible of rank 6.
**Proof.** We need to check (4.31) for the Kummer surfaces. Choose equation for Kummer surface in the form [5]
\[F_{1}=(x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+a(x_{0}x_{1}+x_{2}x_{3})+b(x_{0} x_{2}+x_{1}x_{3})+c(x_{0}x_{3}+x_{1}x_{2}))^{2}\]
\[-4(a^{2}+b^{2}+c^{2}-abc-4)x_{0}x_{1}x_{2}x_{3}.\]
Computing equation for dual polynomial we get
\[F_{2}=(4a-2bc)(\partial_{x_{0}}\partial_{x_{1}}\partial_{x_{2}}^{2}+\partial_ {x_{0}}^{2}\partial_{x_{2}}\partial_{x_{3}}+\partial_{x_{1}}^{2}\partial_{x_ {2}}\partial_{x_{3}}+\partial_{x_{0}}\partial_{x_{1}}\partial_{x_{3}}^{2})+\]
\[(4b-2ac)(\partial_{x_{0}}\partial_{x_{1}}^{2}\partial_{x_{2}}+\partial_{x_{0} }^{2}\partial_{x_{1}}\partial_{x_{3}}+\partial_{x_{1}}\partial_{x_{2}}^{2} \partial_{x_{3}}+\partial_{x_{0}}\partial_{x_{2}}\partial_{x_{3}}^{2})+\]
\[(4c-2ab)(\partial_{x_{0}}^{2}\partial_{x_{1}}\partial_{x_{2}}+\partial_{x_{0} }\partial_{x_{1}}^{2}\partial_{x_{3}}+\partial_{x_{0}}\partial_{x_{2}}^{2} \partial_{x_{3}}+\partial_{x_{1}}\partial_{x_{2}}\partial_{x_{3}}^{2})+\]
\[(a^{2}-4)(\partial_{x_{0}}^{2}\partial_{x_{1}}^{2}+\partial_{x_{2}}^{2} \partial_{x_{3}}^{2})+(b^{2}-4)(\partial_{x_{0}}^{2}\partial_{x_{2}}^{2}+ \partial_{x_{1}}^{2}\partial_{x_{3}}^{2})+(c^{2}-4)(\partial_{x_{0}}^{2} \partial_{x_{3}}^{2}+\partial_{x_{1}}^{2}\partial_{x_{2}}^{2})+\]
\[(4abc-2a^{2}-2b^{2}-2c^{2}-8)\partial_{x_{0}}\partial_{x_{1}}\partial_{x_{2}} \partial_{x_{3}}.\]
Let us choose \(g\) in the form
\[g=x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}}\Big{(}(b\sqrt{c^{2}-4}+2a-bc)(x_{0}^ {2}+cx_{0}x_{3}+x_{3}^{2})+2\sqrt{c^{2}-4}(x_{0}x_{2}+x_{1}x_{3})+\]
\[+(c\sqrt{c^{2}-4}-c^{2}+4)(x_{0}x_{1}+x_{2}x_{3})\Big{)}^{\frac{1}{2}}.\]
One can check by direct calculation that
\[F_{2}\Big{(}g\cdot\delta(F_{1})\Big{)}=0\]
where \(F_{2}\) is a differential operator and \(g,F_{1}\) are the expressions written above. Notice that \(F_{1},F_{2}\) are invariant with respect to the action of the group of permutations \(S_{4}\) acting on \(x_{0},...,x_{3}\) and simultaneously on \(a,b,c\). Applying this action to \(g\) we obtain other admissible functions. We can also change the sign of the square root \(\sqrt{c^{2}-4}\). One can check that in this way we obtain a six-dimensional space of admissible functions \(g\). On the other hand, we know (see Section 7) that if an admissible surface is not ruled, then \(\dim V_{f}\leq 6\). \(\square\)
The space \(V_{f}\) for a Kummer surface can be constructed as follows. Let \(\mathcal{A}_{2}\) be a two-dimensional principally polarized abelian variety. Let \(\sigma:\mathcal{A}\rightarrow\mathcal{A}\) be its involution with a fixed point. Construct the corresponding Kummer surface by \(\mathcal{K}=\mathcal{A}_{2}/\sigma.\) Let \(\mathcal{L}\) be a line bundle of degree 2 on \(\mathcal{A}_{2}\) (hence with 4 sections) and such that \(\mathcal{L}^{\sigma}=\mathcal{L}\). It is known that \(\sigma\) acts as identity operator on the space \(\Gamma(\mathcal{L})\) of sections of \(\mathcal{L}\). We can identify \(x_{0},...,x_{3}\) with a basis in
\(\Gamma({\cal L})\). It is also known that \(\dim\Gamma({\cal L}^{2})=16\) and \(\Gamma({\cal L}^{2})=\Gamma^{+}\oplus\Gamma^{-}\) where \(\sigma\) acts as identity on \(\Gamma^{+}\) and as negative identity on \(\Gamma^{-}\). We have \(\dim\Gamma^{+}=10\) and \(x_{i}x_{j},\ 0\leq i\leq j\leq 3\) is a basis in \(\Gamma^{+}\). The space \(V_{f}\) can be identified with \(\Gamma^{-}\).
These spaces of sections can be constructed explicitly over \(\mathbb{C}\) using theta functions in two variables.
Define the functions \(\theta_{a,b}(u_{1},u_{2}),\ a,b\in\mathbb{Z}/2\) by
\[\theta_{00}(u_{1},u_{2})=\sum_{i,j\in\mathbb{Z}}e^{2\pi\sqrt{-1}\ \left(2iu_{1}+2 ju_{2}+i(i-1)\tau_{11}+2ij\tau_{12}+j(j-1)\tau_{22}\right)},\]
\[\theta_{10}(u_{1},u_{2})=\sum_{i,j\in\mathbb{Z}}e^{2\pi\sqrt{-1}\ \left((2i+1)u_{1}+2 ju_{2}+i^{2}\tau_{11}+(2i+1)j\tau_{12}+j(j-1)\tau_{22}\right)},\]
\[\theta_{01}(u_{1},u_{2})=\sum_{i,j\in\mathbb{Z}}e^{2\pi\sqrt{-1}\ \left(2iu_{1}+(2j+1)u_{2}+i(i-1)\tau_{11}+i(2j+1)\tau_{12}+j^{2}\tau_{22} \right)},\]
\[\theta_{11}(u_{1},u_{2})=\sum_{i,j\in\mathbb{Z}}e^{2\pi\sqrt{-1}\ \left((2i+1)u_{1}+(2j+1)u_{2}+i^{2}\tau_{11}+(2ij+i+j)\tau_{12}+j^{2}\tau_{22} \right)}\]
where \(\tau_{11},\tau_{12},\tau_{22}\) are constants such that the matrix \(\begin{pmatrix}\tau_{11}&\tau_{12}\\ \tau_{12}&\tau_{22}\end{pmatrix}\) belongs to the Siegel upper half-space of genus two, i.e. its imaginary part is positive definite. The functions \(\theta_{a,b}(u_{1},u_{2})\) are holomorphic and satisfy the following periodicity and quasiperiodicity properties
\[\theta_{a,b}(u_{1}+1,u_{2})=\theta_{a,b}(u_{1},u_{2}),\ \ \ \theta_{a,b}(u_{1},u_{2}+1)=\theta_{a,b}(u_{1},u_{2}),\]
\[\theta_{a,b}(u_{1}+\tau_{11},u_{2}+\tau_{12})=e^{-2\pi\sqrt{-1}\ 2u_{1}} \theta_{a,b}(u_{1},u_{2}),\ \ \theta_{a,b}(u_{1}+\tau_{12},u_{2}+\tau_{22})=e^{-2\pi\sqrt{-1}\ 2u_{2}} \theta_{a,b}(u_{1},u_{2}).\]
Moreover, any holomorphic function with these properties is a linear combination of \(\theta_{a,b}(u_{1},u_{2})\), \(a,b\in\mathbb{Z}/2\).
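The periodicity and quasiperiodicity properties above are easy to confirm numerically by truncating the lattice sums. The sketch below (ours, not from the paper) does this for \(\theta_{00}\) with an arbitrarily chosen period matrix whose imaginary part is positive definite; the other \(\theta_{a,b}\) can be checked the same way.

```python
# Numerical sanity check (ours) of the (quasi)periodicity of theta_00.
import cmath

# Assumed period matrix (our choice); Im part [[1.0, 0.2], [0.2, 1.5]] is positive definite.
tau11, tau12, tau22 = 0.1 + 1.0j, 0.05 + 0.2j, -0.3 + 1.5j

def theta00(u1, u2, N=30):
    s = 0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            s += cmath.exp(2j*cmath.pi*(2*i*u1 + 2*j*u2 + i*(i - 1)*tau11
                                        + 2*i*j*tau12 + j*(j - 1)*tau22))
    return s

u1, u2 = 0.13 + 0.07j, -0.21 + 0.05j
t0 = theta00(u1, u2)
print(abs(theta00(u1 + 1, u2) - t0))                                  # ~ 0
print(abs(theta00(u1 + tau11, u2 + tau12)
          - cmath.exp(-2j*cmath.pi*2*u1)*t0))                         # ~ 0
```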
We can define Kummer surface and corresponding functions \(f(x_{1},x_{2}),g(x_{1},x_{2})\) parametrically by
\[f=\frac{\theta_{00}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})},\ x_{1}=\frac{ \theta_{10}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})},\ x_{2}=\frac{\theta_{01}(u _{1},u_{2})}{\theta_{11}(u_{1},u_{2})},\]
\[g=\frac{1}{\Delta}\,\left(c_{1}\!\left(\frac{\theta_{00}(u_{1},u_{2})}{\theta _{11}(u_{1},u_{2})}\right)_{u_{1}}+c_{2}\!\left(\frac{\theta_{10}(u_{1},u_{2}) }{\theta_{11}(u_{1},u_{2})}\right)_{u_{1}}\!\!\!+c_{3}\!\left(\frac{\theta_{01 }(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})}\right)_{u_{1}}\!\!\!+\right.\]
\[\left.c_{4}\!\left(\frac{\theta_{00}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})} \right)_{u_{2}}\!\!\!+c_{5}\!\left(\frac{\theta_{10}(u_{1},u_{2})}{\theta_{11} (u_{1},u_{2})}\right)_{u_{2}}\!\!\!+c_{6}\!\left(\frac{\theta_{01}(u_{1},u_{2}) }{\theta_{11}(u_{1},u_{2})}\right)_{u_{2}}\!\!\!\right)\]
where \(c_{1},...,c_{6}\) are constants, indexes \(u_{1},u_{2}\) stand for partial derivatives and
\[\Delta=\left(\frac{\theta_{10}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})}\right) _{u_{1}}\left(\frac{\theta_{01}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2})}\right) _{u_{2}}\!\!\!-\left(\frac{\theta_{01}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2}) }\right)_{u_{1}}\left(\frac{\theta_{10}(u_{1},u_{2})}{\theta_{11}(u_{1},u_{2}) }\right)_{u_{2}}\!\!\!.\]
Then pair \(f,g\) is admissible.
**Remark 5.6.1.** Kummer surfaces have many degenerations which are not Kummer surfaces anymore but also give examples of admissible surfaces. For example, Steiner Roman surface can be obtained in this way.
**Remark 5.6.2.** It would be interesting to find a more transparent proof of Theorem 5.6.1, for example, using parametrization of Kummer surface by theta functions or using relation of Kummer surface with the quadratic line complex. Notice that our space of sections \(\Gamma^{-}\) described above has a basis \(y_{1},...,y_{6}\) such that
\[y_{1}^{2}+...+y_{6}^{2}=0,\]
\[\lambda_{1}y_{1}^{2}+...+\lambda_{6}y_{6}^{2}=0,\]
\[\lambda_{1}^{2}y_{1}^{2}+...+\lambda_{6}^{2}y_{6}^{2}=0\]
where \(\lambda_{1},...,\lambda_{6}\in\mathbb{C}\) are constants. Therefore, the space \(\Gamma^{-}\) gives an embedding of the quadratic line complex to \(\mathbb{P}^{5}\).
### Extensions of admissible pairs and families of surfaces of degree four
**Theorem 5.7.1.** Let a pair of functions \(\tilde{f}(x_{1},...,x_{k},y_{k+1},...,y_{n}),\ \tilde{g}(x_{1},...,x_{k},y_{k+1},...,y_{n})\) be admissible as functions in variables \(x_{1},...,x_{k}\) where \(y_{k+1},...,y_{n}\) are considered as parameters, and let this pair be also admissible as functions in variables \(y_{k+1},...,y_{n}\) where \(x_{1},...,x_{k}\) are considered as parameters. Let us construct a new pair of functions \(f(x_{1},...,x_{n}),\ g(x_{1},...,x_{n})\) by doing the Fourier transform with respect to variables \(y_{k+1},...,y_{n}\) and using our admissibility assumption
\[\int\tilde{g}(x_{1},...,x_{k},y_{k+1},...,y_{n})e^{\frac{\tilde{f}(x_{1},...,x _{k},y_{k+1},...,y_{n})-x_{k+1}y_{k+1}-...-x_{n}y_{n}}{\hbar}}dy_{k+1}...dy_{ n}=\]
\[(2\pi\hbar)^{\frac{n-k}{2}}g(x_{1},...,x_{n})e^{\frac{f(x_{1},...,x_{n})}{\hbar}}.\]
Then the pair of functions in \(n\) variables \(f,g\) is admissible.
**Proof.** Notice that the pair \(f,g\) is also admissible with respect to variables \(x_{k+1},...,x_{n}\) as the Fourier transform of an admissible pair. We compute the integral in the l.h.s. of (1.6) in two steps so that in each step the integral is one loop exact. First, we integrate with respect to \(x_{k+1},...,x_{n}\) using the admissibility with respect to these variables. Second, we integrate with respect to \(x_{1},...,x_{k}\) using admissibility of \(\tilde{f},\tilde{g}\) with respect to these variables. \(\square\)
**Example 5.7.1.** Let us define functions \(\tilde{f}(x_{1},y_{2}),\ \tilde{g}(x_{1},y_{2})\) by
\[\tilde{f}(x_{1},y_{2})=\sqrt{x_{1}^{2}-1}\sqrt{y_{2}^{2}-1}+(a_{1}x_{1}+a_{2} )\sqrt{y_{2}^{2}-1}+(a_{3}y_{2}+a_{4})\sqrt{x_{1}^{2}-1}+a_{5}x_{1}y_{2},\]
\[\tilde{g}(x_{1},y_{2})=\frac{c_{1}}{\sqrt{x_{1}+1}\sqrt{y_{2}+1}}+\frac{c_{2}}{ \sqrt{x_{1}-1}\sqrt{y_{2}+1}}+\frac{c_{3}}{\sqrt{x_{1}+1}\sqrt{y_{2}-1}}+\frac{c _{4}}{\sqrt{x_{1}-1}\sqrt{y_{2}-1}}\]
where \(a_{1},...,a_{5},c_{1},...,c_{4}\) are constants. It follows from results of Section 6 (see Theorem 6.1) that the pair \(\tilde{f},\tilde{g}\) is admissible with respect to variable \(x_{1}\) and with respect to variable \(y_{2}\) (but it is not admissible as a pair of functions in two variables). After the Fourier transform with respect to \(y_{2}\) we get the following admissible pair of functions in two variables
\[f(x_{1},x_{2})=a_{4}\sqrt{x_{1}^{2}-1}+\]
\[\sqrt{x_{2}-(a_{1}+a_{5})x_{1}-a_{2}-(a_{3}+1)\sqrt{x_{1}^{2}-1}}\sqrt{x_{2}+( a_{1}-a_{5})x_{1}+a_{2}-(a_{3}-1)\sqrt{x_{1}^{2}-1}},\]
\[g(x_{1},x_{2})=\frac{\frac{c_{1}}{\sqrt{x_{1}+1}}+\frac{c_{2}}{\sqrt{x_{1}-1}}} {\sqrt{x_{2}-(a_{1}+a_{5})x_{1}-a_{2}-(a_{3}+1)\sqrt{x_{1}^{2}-1}}}+\]
\[\frac{\frac{c_{3}}{\sqrt{x_{1}+1}}+\frac{c_{4}}{\sqrt{x_{1}-1}}}{\sqrt{x_{2}+( a_{1}-a_{5})x_{1}+a_{2}-(a_{3}-1)\sqrt{x_{1}^{2}-1}}}.\]
A similar example of an admissible pair coming from Theorem 5.7.1 is given below, where explicit formulas for \(\tilde{f},\tilde{g}\) are omitted.
**Theorem 5.7.2.** Define functions \(f(x_{1},x_{2}),g(x_{1},x_{2})\) parametrically as
\[f=u_{1}u_{2}^{4}+2r_{1}r_{2}u_{1}u_{2}^{3}+(r_{1}^{2}r_{2}^{2}+2)u_{1}u_{2}^{2} +2u_{1}u_{2}r_{1}r_{2}+u_{1}+\frac{1}{u_{1}},\ x_{1}=u_{2}^{2}r_{2}-u_{2}r_{1}+ r_{2}+\frac{1}{u_{1}},\ x_{2}=u_{2}^{2}\]
\[g=\sqrt{u_{1}}\ C_{1}+\sqrt{u_{1}}\ \frac{C_{2}}{u_{2}}+u_{1}^{3/2}(r_{1}r_{2}u_{ 2}+u_{2}^{2}+1)C_{3}+u_{1}^{3/2}(r_{1}r_{2}u_{2}+u_{2}^{2}+1)\frac{C_{4}}{u_{2}}\]
where \(r_{1},r_{2},C_{1},C_{2},C_{3},C_{4}\) are constants. Then the pair \(f(x_{1},x_{2}),g(x_{1},x_{2})\) is admissible and its rank is 4.
**Proof.** Since \(f\) is an algebraic function, we just need to check (4.36) which can be done by a direct calculation. Alternatively, notice that for any fixed \(x_{2}\) the pair \(f,g\) is admissible as a pair of functions in one variable \(x_{1}\). Moreover, the integral \(\int g(x_{1},x_{2})e^{\frac{f(x_{1},x_{2})+x_{1}y_{1}+x_{2}y_{2}}{\hbar}}dx_{1}dx_{2}\) can be computed in two steps: first with respect to \(x_{1}\), and then with respect to \(x_{2}\). In each step the integral with respect to one variable is one loop exact. \(\Box\)
**Remark 5.7.1.** One can check that Example 5.7.1 and Theorem 5.7.2 both give a two parametric family of admissible cones in \(\mathbb{P}^{3}\) of degree 4. In both cases these cones have non-isolated singularities. We have not checked if these cones are projectively equivalent.
**Remark 5.7.2.** One can generalize the construction from Theorem 5.7.1 considering for example functions in three or more groups of variables. It might be interesting to study admissible pairs obtained in this way. On the other hand, these admissible pairs look degenerate in a certain sense.
### Toric hypersurfaces
Admissible pairs of the form \(f=x_{1}^{a_{1}}...x_{n}^{a_{n}},\ g=x_{1}^{b_{1}}...x_{n}^{b_{n}}\) were completely described in [1]. Here we just reformulate their results in terms of our formalism.
**Theorem 5.8.1.** Let \(C\subset\mathbb{P}^{n+1}\) be a toric projective hypersurface defined by
\[x_{0}^{p_{0}}...x_{m-1}^{p_{m-1}}=x_{m}^{p_{m}}...x_{n+1}^{p_{n+1}} \tag{5.40}\]
where \(p_{0},...,p_{n+1}>0\) are integers and \(p_{0}+...+p_{m-1}=p_{m}+...+p_{n+1}\). Then \(C\) is admissible if there exist numbers \(r_{0},...,r_{n+1}\in\mathbb{Q}\) such that the following identity holds as an identity for functions in one variable \(t\)
\[\Gamma(p_{0}t+r_{0})...\Gamma(p_{m-1}t+r_{m-1})=ae^{bt}\Gamma(p_{m}t+r_{m})... \Gamma(p_{n+1}t+r_{n+1}) \tag{5.41}\]
where \(a,b\) are constants.
We expect that the rank of \(C\) is equal to the number of vectors \((r_{0},...,r_{n+1})\in\mathbb{Q}^{n+2}\), up to translations of the form \(r_{i}\to r_{i}+p_{i}u,\ i=0,...,n+1\) for some \(u\), such that the identity (5.41) holds.
**Remark 5.8.1.** The transcendental condition (5.41) is equivalent to a combinatorial one: the union of sets of poles (with multiplicities) in the l.h.s. and in the r.h.s. of equation (5.41) coincide:
\[\bigcup_{i=0}^{m-1}\frac{1}{p_{i}}(\mathbb{Z}_{\geq 0}+r_{i})=\bigcup_{i=m}^{n+ 1}\frac{1}{p_{i}}(\mathbb{Z}_{\geq 0}+r_{i}).\]
Notice that here we multiply \(t\) by \(-1\).
**Proof.** Let \(f(x_{1},...,x_{n})=x_{1}^{a_{1}}...x_{n}^{a_{n}}\) where \(a_{1},...,a_{n}\in\mathbb{Q}\setminus 0\). Let us set
\[g(x_{1},...,x_{n})=x_{1}^{b_{1}}...x_{n}^{b_{n}}.\]
Let
\[I(\vec{y})=\int x_{1}^{b_{1}}...x_{n}^{b_{n}}e^{\frac{1}{\hbar}(x_{1}^{a_{1}}...x_{n}^{a_{n}}+x_{1}y_{1}+...+x_{n}y_{n})}dx_{1}...dx_{n}. \tag{5.42}\]
**Lemma 5.8.1.** The following equality of the formal wave functions holds
\[e^{\frac{x}{\hbar}}=\int\frac{(\frac{x}{\hbar})^{\frac{t}{\hbar}}}{\Gamma(\frac{t}{\hbar}+1)}\frac{dt}{\hbar}. \tag{5.43}\]
**Proof.** First, let us show that the r.h.s. of this equation makes sense as a formal wave function. Let
\[S(t)=\exp\Big{(}-\frac{\zeta(-1)}{1}t-\frac{\zeta(-3)}{3}t^{3}-\frac{\zeta(-5) }{5}t^{5}-...\Big{)}\in\mathbb{Q}[[t]]\]
where \(\zeta\) stands for the Riemann zeta function. Notice that
\[S(t)S(-t)=1.\]
Recall Stirling's formula
\[\Gamma(t+1)\sim(2\pi t)^{\frac{1}{2}}t^{t}e^{-t}S(t^{-1})\]
where \(\sim\) means the asymptotic expansion for \(t\rightarrow\infty\). Since the r.h.s. of (5.43) is also understood as an asymptotic expansion, we can replace the gamma function with the r.h.s. of Stirling's formula and rewrite the equation (5.43) as
\[(2\pi\hbar)^{\frac{1}{2}}e^{\frac{x}{\hbar}}=\int t^{-\frac{1}{2}}S\Big{(}- \frac{\hbar}{t}\Big{)}e^{\frac{t}{\hbar}(1+\ln x-\ln t)}dt.\]
The r.h.s. of this integral can be computed by the expansion at the critical point \(t=x\).
Now let us prove the identity (5.43). Let \(\phi_{l}(x)\) (resp. \(\phi_{r}(x)\)) be the l.h.s. (resp. the r.h.s.) of (5.43). One can check that these functions satisfy the same differential equations \(\partial_{x}\phi_{l}(x)=\frac{1}{\hbar}\phi_{l}(x),\;\partial_{x}\phi_{r}(x)=\frac{1}{\hbar}\phi_{r}(x)\). Therefore, we have \(\phi_{l}(x)=c\phi_{r}(x)\) where \(c\) is independent of \(x\). To compute \(c\) we just compute the first term of (5.43) by the stationary phase method. \(\Box\)
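A quick numerical confirmation (ours, not from the paper) of the ingredients used here: with \(\zeta(-1)=-\frac{1}{12}\) and \(\zeta(-3)=\frac{1}{120}\), the truncation \(S(1/t)\approx\exp\big(\frac{1}{12t}-\frac{1}{360t^{3}}\big)\) reproduces Stirling's formula to high accuracy.

```python
# Numerical sanity check (ours) of Stirling's formula with the first terms of S(t).
from mpmath import mp, gamma, sqrt, exp, pi

mp.dps = 30
t = mp.mpf(10)
S_trunc = exp(1/(12*t) - 1/(360*t**3))          # truncated S(1/t)
approx = sqrt(2*pi*t) * t**t * exp(-t) * S_trunc
print(gamma(t + 1)/approx - 1)                  # ~ 1e-8, set by the dropped terms of S
```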
Using Lemma 5.8.1 we replace \(e^{\frac{1}{\hbar}x_{1}^{a_{1}}...x_{n}^{a_{n}}}\) in \(I(\vec{y})\) with the corresponding r.h.s. of the identity (5.43) and obtain
\[I(\vec{y})=\int\prod_{i=1}^{n}x_{i}^{b_{i}+\frac{a_{i}t}{\hbar}}e^{\frac{x_{i} y_{i}}{\hbar}}\;\frac{1}{\Gamma(\frac{t}{\hbar}+1)\hbar^{\frac{t}{\hbar}}}\; \frac{dt}{\hbar}\;dx_{1}...dx_{n}.\]
**Lemma 5.8.2.** Let \(a>0\). The following equalities between formal wave functions hold
\[\int x^{b+\frac{a}{\hbar}}e^{\frac{xy}{\hbar}}dx=\Big{(}-\frac{y}{\hbar}\Big{)} ^{-b-\frac{a}{\hbar}-1}\Gamma(b+\frac{a}{\hbar}+1),\]
\[\int x^{b-\frac{a}{\hbar}}e^{\frac{xy}{\hbar}}dx=\Big{(}\frac{y}{\hbar}\Big{)} ^{-b+\frac{a}{\hbar}-1}\frac{2\pi i}{\Gamma(-b+\frac{a}{\hbar})}.\]
**Proof.** For the first equation notice that if \(x>0,\;y<0,\;\hbar>0\), then we have the usual Euler integral representation of the gamma function in the form
\[\int_{0}^{\infty}x^{b+\frac{a}{\hbar}}e^{\frac{xy}{\hbar}}dx=\Big{(}-\frac{y} {\hbar}\Big{)}^{-b-\frac{a}{\hbar}-1}\Gamma(b+\frac{a}{\hbar}+1)\]
which implies our equality for the formal wave functions. The second equation makes sense only on the level of wave functions and follows from the first one and the equation (5.44). \(\Box\)
Using these formulas we rewrite \(I(\vec{y})\) in (5.45) as
\[I(\vec{y})=\int\frac{1}{\Gamma(\frac{t}{\hbar}+1)\hbar^{\frac{t}{\hbar}}} \prod_{i=1}^{n}\Big{(}\pm\frac{y_{i}}{\hbar}\Big{)}^{-b_{i}-\frac{a_{i}t}{\hbar }-1}\cdot\frac{\prod_{i=1}^{k}\Gamma(b_{i}+\frac{a_{i}t}{\hbar}+1)}{\prod_{i=k+ 1}^{n}\Gamma(-b_{i}-\frac{a_{i}t}{\hbar})}\cdot\frac{dt}{\hbar}\]
where we assume \(a_{1},...,a_{k}>0,\ a_{k+1},...,a_{n}<0\). It follows from this expression that the integral \(I(\vec{y})\) is one loop exact if there exist \(a,b\in\mathbb{Q}\), \(a\neq 0\) such that
\[\frac{1}{\Gamma(\frac{t}{\hbar}+1)}\cdot\frac{\prod_{i=1}^{k}\Gamma(b_{i}+ \frac{a_{i}t}{\hbar}+1)}{\prod_{i=k+1}^{n}\Gamma(-b_{i}-\frac{a_{i}t}{\hbar})}= c_{1}c_{2}^{t}\Gamma(b+\frac{at}{\hbar}+1)\quad\mbox{or}\quad\frac{c_{1}c_{2}^{t}}{ \Gamma(-b-\frac{at}{\hbar})}.\]
Notice that we must have \(a=a_{1}+...+a_{n}-1\). Reformulating this condition in terms of toric hypersurface (5.40) we obtain the condition (5.41). \(\square\)
The statement and the proof of this theorem can be generalized to the field \(\mathbb{R}\) instead of \(\mathbb{Q}\).
**Remark 5.8.2.** If an identity of the form (5.41) holds, then it holds by virtue of Gauss multiplication formula. This can be shown by considering poles of the l.h.s. and the r.h.s. of (5.41).
**Example 5.8.1.** Let \(C\subset\mathbb{P}^{3}\) be a toric projective surface defined by
\[x_{0}x_{1}x_{2}=x_{3}^{3}.\]
Any vector \((r_{0},r_{1},r_{2},r_{3})\in\mathbb{R}^{4}\) such that
\[\Gamma(t+r_{0})\Gamma(t+r_{1})\Gamma(t+r_{2})=ae^{bt}\Gamma(3t+r_{3})\]
is equal to \((0,\frac{1}{3},\frac{2}{3},0)\) up to translation by \((u,u,u,3u)\) and permutation of \(r_{0},r_{1},r_{2}\). Therefore, \(C\) is admissible with rank \(3!=6\).
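Condition (5.41) is easy to test numerically: the logarithm of the ratio of the two sides must be an affine function of \(t\), so its second difference must vanish. The sketch below (ours, not from the paper) checks this for Example 5.8.1, for both solutions of Example 5.8.3 below, and shows a failing vector for comparison.

```python
# Numerical check (ours) of (5.41): log of prod Gamma(p t + r) over the left data
# divided by the same product over the right data must be affine in t.
from mpmath import mp, gammaprod, log, mpf

mp.dps = 40

def log_ratio(left, right, t):
    num = [p*t + r for p, r in left]
    den = [p*t + r for p, r in right]
    return log(gammaprod(num, den))

def second_difference(left, right, t=mpf('1.3'), h=mpf('0.25')):
    return (log_ratio(left, right, t + h) - 2*log_ratio(left, right, t)
            + log_ratio(left, right, t - h))

# Example 5.8.1:  Gamma(t) Gamma(t+1/3) Gamma(t+2/3)  vs  Gamma(3t)
print(second_difference([(1, 0), (1, mpf(1)/3), (1, mpf(2)/3)], [(3, 0)]))
# Example 5.8.3, both solutions:
print(second_difference([(1, 0), (1, mpf(1)/2), (2, mpf(1)/2)], [(4, 0)]))
print(second_difference([(1, mpf(1)/4), (1, mpf(3)/4), (2, 0)], [(4, 0)]))
# A vector not satisfying (5.41) gives a visibly non-zero answer:
print(second_difference([(1, 0), (1, mpf(1)/4), (1, mpf(1)/2)], [(3, 0)]))
```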
**Example 5.8.2.** More generally, let \(C\subset\mathbb{P}^{n+1}\) be a toric projective hypersurface defined by
\[x_{0}...x_{n}=x_{n+1}^{n+1}.\]
Then its rank is equal to \((n+1)!\).
**Example 5.8.3.** Let \(C\subset\mathbb{P}^{3}\) be a toric projective surface defined by
\[x_{0}x_{1}x_{2}^{2}=x_{3}^{4}.\]
Any vector \((r_{0},r_{1},r_{2},r_{3})\in\mathbb{R}^{4}\) such that
\[\Gamma(t+r_{0})\Gamma(t+r_{1})\Gamma(2t+r_{2})=ae^{bt}\Gamma(4t+r_{3})\]
is equal to either \((0,\frac{1}{2},\frac{1}{2},0)\) or \((\frac{1}{4},\frac{3}{4},0,0)\) up to translation by \((u,u,2u,4u)\) and permutation of \(r_{0},r_{1}\). Therefore, \(C\) is admissible with rank \(2\cdot 2=4\).
**Example 5.8.4.** Let \(C\subset\mathbb{P}^{3}\) be a toric projective surface defined by
\[x_{0}x_{1}^{p}=x_{2}x_{3}^{p}\]
for some \(p>0\). Any vector \((r_{0},r_{1},r_{2},r_{3})\in\mathbb{R}^{4}\) such that
\[\Gamma(t+r_{0})\Gamma(pt+r_{1})=ae^{bt}\Gamma(t+r_{2})\Gamma(pt+r_{3})\]
is equal to \((0,v,0,v)\) for arbitrary \(v\), up to translation by \((u,pu,u,pu)\). Therefore, \(C\) is admissible with infinite rank.
**Remark 5.8.3.** One can show that if \(n=2\), then any admissible toric surface of the form (5.40) is given by Examples 5.8.1, 5.8.3, 5.8.4 (up to permutations of \(x_{0},...,x_{3}\) and multiplication of \(p_{0},...,p_{3}\) by a common factor).
### Segre cubic in \(\mathbb{P}^{4}\)
**Theorem 5.9.1.** Define a cone in \(\mathbb{A}^{5}\) by
\[x_{0}x_{1}x_{2}+x_{1}x_{2}x_{3}+x_{2}x_{3}x_{4}+x_{3}x_{4}x_{0}+x_{4}x_{0}x_{1}=0.\]
This cone is admissible with rank 24.
**Proof.** Let
\[F_{1}=x_{0}x_{1}x_{2}+x_{1}x_{2}x_{3}+x_{2}x_{3}x_{4}+x_{3}x_{4}x_{0}+x_{4}x_{0}x _{1}.\]
Computing dual polynomial we obtain
\[F_{2}=\partial_{x_{0}}^{2}\partial_{x_{1}}^{2}+\partial_{x_{1}}^{2}\partial_{x _{2}}^{2}+\partial_{x_{2}}^{2}\partial_{x_{3}}^{2}+\partial_{x_{3}}^{2} \partial_{x_{4}}^{2}+\partial_{x_{4}}^{2}\partial_{x_{0}}^{2}+\]
\[+2\partial_{x_{0}}\partial_{x_{1}}\partial_{x_{2}}\partial_{x_{3}}+2\partial_ {x_{1}}\partial_{x_{2}}\partial_{x_{3}}\partial_{x_{4}}+2\partial_{x_{2}} \partial_{x_{3}}\partial_{x_{4}}\partial_{x_{0}}+2\partial_{x_{3}}\partial_{x _{4}}\partial_{x_{0}}\partial_{x_{1}}+2\partial_{x_{4}}\partial_{x_{0}}\partial _{x_{1}}\partial_{x_{2}}\]
\[-2\partial_{x_{0}}\partial_{x_{1}}^{2}\partial_{x_{2}}-2\partial_{x_{1}} \partial_{x_{2}}^{2}\partial_{x_{3}}-2\partial_{x_{2}}\partial_{x_{3}}^{2} \partial_{x_{4}}-2\partial_{x_{3}}\partial_{x_{4}}^{2}\partial_{x_{0}}-2 \partial_{x_{4}}\partial_{x_{0}}^{2}\partial_{x_{1}}.\]
Let \(g\) be a cyclic permutation of one of the following expressions
\[\sqrt{\frac{x_{0}x_{1}x_{2}}{x_{3}x_{4}}},\ \sqrt{\frac{x_{0}(x_{1}+x_{3})(x_{1} +x_{4})}{x_{1}x_{2}}},\ \sqrt{\frac{x_{3}(x_{2}+x_{0})(x_{2}+x_{4})}{x_{1}x_{2}}},\]
\[\sqrt{\frac{x_{0}x_{4}(x_{0}+x_{2})}{x_{1}(x_{0}+x_{3})}},\ \sqrt{\frac{x_{0}x_{1}(x_{0}+x_{3})}{x_{4}(x_{0}+x_{2})}},\ \sqrt{\frac{x_{1}x_{2}x_{3}}{(x_{2}+x_{4})(x_{2}+x_{0})}}.\]
Thus we have 30 possible functions \(g\). One can check that the vector space spanned by these functions is 24-dimensional and
\[F_{2}\Big{(}g\delta(F_{1})\Big{)}=0\]
for any such \(g\). One can also check that the system for \(g\) is holonomic and has a 24-dimensional space of solutions at a generic point. \(\square\)
**Remark 5.9.1.** Let \(H=\det((\partial_{x_{i}}\partial_{x_{j}}F_{1})_{0\leq i,j\leq 4})\). One can check that the divisor on the Segre cubic \(F_{1}=0\) defined by \(H=0\) is equivalent to the divisor defined by
\[x_{0}x_{1}x_{2}x_{3}x_{4}(x_{0}+x_{2})(x_{1}+x_{3})(x_{2}+x_{4})(x_{3}+x_{0})( x_{4}+x_{1})=0.\]
## 6 Classification of admissible pairs of functions in one variable
In the case of functions \(f(x),g(x)\) in one variable, the classification of admissible pairs \(f,g\) can be done directly, by equating the higher loop contributions to zero and solving the corresponding system of differential equations.
**Lemma 6.1.** If \(f(x)\) is admissible, then it has one of the following forms:
**1)**\(f(x)=(a_{0}+a_{1}x+a_{2}x^{2})^{\frac{1}{2}}+b_{0}+b_{1}x\) where \(a_{0},a_{1},a_{2},b_{0},b_{1}\) are arbitrary constants such that the polynomial \(a_{0}+a_{1}x+a_{2}x^{2}\) has distinct roots. In particular, if \(a_{2}=0\), then \(a_{1}\neq 0\) (root at infinity has multiplicity at most one).
**2)**\(f(x)=\frac{1}{a_{0}+a_{1}x}+b_{0}+b_{1}x\) where \(a_{0},a_{1},b_{0},b_{1}\) are arbitrary constants and \(a_{1}\neq 0\).
**3)**\(f(x)=b_{0}+b_{1}x+b_{2}x^{2}\) where \(b_{0},b_{1},b_{2}\) are arbitrary constants and \(b_{2}\neq 0\).
**Proof.** Equating to zero the terms \(A_{2},A_{3},A_{4}\) of the expansion (1.4) one obtains a system of polynomial differential equations for \(f(x),g(x)\). Assuming that \(f^{\prime\prime}(x),g(x)\neq 0\) one can check by direct computations that this system is equivalent to the following:
\[9f^{\prime\prime}(x)^{2}f^{(5)}(x)-45f^{\prime\prime}(x)f^{\prime\prime\prime }(x)f^{(4)}(x)+40f^{\prime\prime\prime}(x)^{3}=0, \tag{6.46}\]
\[12f^{\prime\prime}(x)^{2}\cdot g^{\prime\prime}(x)-12f^{\prime\prime}(x)f^{ \prime\prime\prime}(x)\cdot g^{\prime}(x)+(5f^{\prime\prime\prime}(x)^{2}-3f^{ \prime\prime}(x)f^{(4)}(x))\cdot g(x)=0.\]
The first equation can be written as
\[\frac{d^{3}}{dx^{3}}\Big{(}f^{\prime\prime}(x)^{-\frac{2}{3}}\Big{)}=0\]
which gives
\[f^{\prime\prime}(x)=(a_{0}+a_{1}x+a_{2}x^{2})^{-\frac{3}{2}}\]
where \(a_{0},a_{1},a_{2}\) are arbitrary constants such that the vector \((a_{0},a_{1},a_{2})\) is non-zero. After the integration we obtain the statement of the Lemma. \(\Box\)
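This step is easy to confirm symbolically (our sketch, not part of the paper): \(f^{\prime\prime}(x)=(a_{0}+a_{1}x+a_{2}x^{2})^{-\frac{3}{2}}\) indeed solves the first equation of (6.46), equivalently \(\frac{d^{3}}{dx^{3}}\big(f^{\prime\prime}(x)^{-\frac{2}{3}}\big)=0\).

```python
# Sketch (ours): f''(x) = (a0 + a1 x + a2 x^2)^(-3/2) solves the first equation of (6.46).
import sympy as sp

x, a0, a1, a2 = sp.symbols('x a0 a1 a2')
f2 = (a0 + a1*x + a2*x**2)**sp.Rational(-3, 2)             # f''(x)
f3, f4, f5 = (sp.diff(f2, x, k) for k in (1, 2, 3))        # f''', f'''', f'''''

lhs = 9*f2**2*f5 - 45*f2*f3*f4 + 40*f3**3
print(sp.simplify(lhs))                                    # expected: 0
print(sp.simplify(sp.diff(f2**sp.Rational(-2, 3), x, 3)))  # expected: 0
```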
**Theorem 6.1.** The following pairs \(f(x),g(x)\) are admissible:
**1)**\(f(x)=\sqrt{x^{2}-1},\quad g(x)=\frac{c_{1}}{\sqrt{x+1}}+\frac{c_{2}}{\sqrt{x-1}}\).
**2)**\(f(x)=\frac{1}{x},\quad g(x)=c_{1}x^{-\frac{1}{2}}+c_{2}x^{-\frac{3}{2}}\).
**3)**\(f(x)=x^{\frac{1}{2}},\quad g(x)=c_{1}+c_{2}x^{-\frac{1}{2}}\).
**4)**\(f(x)=x^{2},\quad g(x)=c_{1}+c_{2}x\).
Moreover, any admissible pair is equivalent to one of these under transformations
\[x\mapsto\lambda_{0}+\lambda_{1}x,\quad f\mapsto\mu_{0}f+\mu_{1}+\mu_{2}x\]
where \(\lambda_{1},\mu_{0}\neq 0\). In particular, the rank of any admissible pair is equal to 2.
**Proof.** In each of the cases **1), 2), 3), 4)** the corresponding cone \(C\) is defined by a non-degenerate quadric. Therefore, all these cases are equivalent with respect to the projective action. But the pair **4)** is admissible because it corresponds to a Gaussian integral. On the other hand, it follows from Lemma 6.1 that any admissible pair is equivalent to one of those listed above. The corresponding functions \(g(x)\) can be found by solving the second equation of the system (6.46) for a known function \(f(x)\). \(\Box\)
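For the pair **2)** the one-loop exactness can also be seen in completely elementary terms: in Euclidean form, \(\int_{0}^{\infty}x^{-\frac{1}{2}}e^{-\frac{a}{x}-bx}dx=\sqrt{\frac{\pi}{b}}\,e^{-2\sqrt{ab}}\), which is exactly the saddle-point value multiplied by the one-loop Gaussian factor, with no higher corrections. A quick numerical illustration (ours, with arbitrarily chosen \(a,b\)):

```python
# Numerical illustration (ours): for f(x) = 1/x, g(x) = x^(-1/2) the Laplace-type
# integral is one-loop exact -- the quadrature value, the closed form and the
# saddle-point (one-loop) approximation all coincide.
from mpmath import mp, mpf, quad, sqrt, exp, pi, inf

mp.dps = 25
a, b = mpf(2), mpf(3)                         # assumed sample values

exact = quad(lambda x: x**mpf('-0.5')*exp(-a/x - b*x), [0, inf])
closed = sqrt(pi/b)*exp(-2*sqrt(a*b))

# Saddle point of phi(x) = a/x + b x is at x* = sqrt(a/b); phi''(x*) = 2a/x*^3.
xs = sqrt(a/b)
one_loop = xs**mpf('-0.5')*sqrt(2*pi/(2*a/xs**3))*exp(-(a/xs + b*xs))

print(exact, closed, one_loop)                # all three agree
```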
## 7 Toward a classification of admissible pairs of functions in two variables
The following results are based on extensive direct computations, and therefore we give only a rough scheme of the proofs.
**Theorem 7.1.** Let \(f(x_{1},x_{2})\) be admissible. Then its rank is infinite iff the corresponding surface is ruled. If the corresponding surface is not ruled, then \(\dim V_{f}\leq 6\). Moreover, \(\dim V_{f}\) cannot be equal to \(5\).
**Theorem 7.2.** Let \(f(x_{1},x_{2})\) be admissible and \(\dim V_{f}=6\). Then the corresponding surface is either Kummer (including degenerations of Kummer surfaces such as Steiner Roman surface) or a toric surface given by
\[x_{0}x_{1}x_{2}=x_{3}^{3}.\]
**Proof.** Consider first three equations (9.47) for \(f,g\) (see Appendix) as a system of linear equations for \(g\) and denote its space of solutions by \(\widetilde{V}_{f}\). We have \(V_{f}\subset\widetilde{V}_{f}\). Bringing this system of three equations to an involutive form as a system for \(g\) one can see that it has an infinite-dimensional space of solutions iff \(f\) corresponds to a ruled surface. Moreover, if the surface is not ruled, then \(\dim\widetilde{V}_{f}\leq 6\) and \(\dim\widetilde{V}_{f}=6\) only in the cases listed above. On the other hand, we know that for ruled surfaces \(\dim V_{f}\) is infinite and for Kummer surfaces as well as for a toric variety defined above the dimension of \(V_{f}\) is equal to \(6\). One can check that \(\dim V_{f}\neq 5\) in a similar way. \(\square\)
**Remark 7.1.** It would be interesting to obtain the full classification of admissible pairs of functions in two variables. This means
1. Find detailed classification of admissible pairs in the case of ruled surfaces.
2. Find all admissible pairs such that \(\dim V_{f}=1,2,3,4\).
**Remark 7.2.** Equations (9.47) become much simpler if one writes them in terms of local projective invariants. This can be done in the case \(n=2\) using so-called asymptotic coordinates on a hypersurface in \(\mathbb{P}^{3}\)[6]. It would be a good idea to use this approach for the classification of admissible pairs in two variables.
## 8 A potential application to generalized Dirichlet series
In the special case when \(f,g\) are both homogeneous and the equation (1.6) can be lifted to an actual identity between distributions, one can try to imitate the proof of the functional equation for the Riemann \(\zeta\)-function based on the Poisson summation formula and the Mellin transform.
Let
\[\int_{\mathbb{R}^{n}}g(\vec{x})e^{-\frac{f(\vec{x})}{\hbar}+\frac{i}{\hbar}\vec{x}\vec{y}}d\vec{x}=(2\pi\hbar)^{\frac{n}{2}}\hat{g}(\vec{y})e^{-\frac{\hat{f}(\vec{y})}{\hbar}}.\]
If the Poisson summation formula is applicable (possibly after some regularization), we have
\[\sum_{\vec{x}\in\mathbb{Z}^{n}}g(\vec{x})e^{-\frac{f(\vec{x})}{\hbar}}=(2\pi\hbar)^{\frac{n}{2}}\sum_{\vec{y}\in\mathbb{Z}^{n}}\hat{g}(2\pi\hbar\vec{y})e^{-\frac{\hat{f}(2\pi\hbar\vec{y})}{\hbar}}.\]
Multiplying this equation by \(\hbar^{s-1}\), integrating over \(\hbar\) from \(0\) to \(\infty\), and making the change of variable \(t=\frac{1}{\hbar}\), we get in the l.h.s.
\[\sum_{\vec{x}\in\mathbb{Z}^{n}}\int_{0}^{\infty}\hbar^{s-1}g(\vec{x})e^{-\frac{f(\vec{x})}{\hbar}}d\hbar=\sum_{\vec{x}\in\mathbb{Z}^{n}}\int_{0}^{\infty}t^{-1-s}g(\vec{x})e^{-f(\vec{x})t}dt=\Gamma(-s)\sum_{\vec{x}\in\mathbb{Z}^{n}}g(\vec{x})f(\vec{x})^{s},\]
and in the r.h.s. similarly, assuming that \(\hat{f},\hat{g}\) are homogeneous, \(\deg\hat{f}=a,\ \deg\hat{g}=b\), and with the change of variable \(t=\hbar^{a-1}\), we get
\[(2\pi)^{\frac{n}{2}}\sum_{\vec{y}\in\mathbb{Z}^{n}}\int_{0}^{\infty}\hbar^{\frac{n}{2}+s-1}\hat{g}(2\pi\hbar\vec{y})e^{-\frac{\hat{f}(2\pi\hbar\vec{y})}{\hbar}}d\hbar=(2\pi)^{\frac{n}{2}+b}\sum_{\vec{y}\in\mathbb{Z}^{n}}\int_{0}^{\infty}\hbar^{\frac{n}{2}+b+s}\hat{g}(\vec{y})e^{-\hat{f}(2\pi\vec{y})\hbar^{a-1}}\frac{d\hbar}{\hbar}=\]
\[\frac{(2\pi)^{\frac{n}{2}+b}}{a-1}\sum_{\vec{y}\in\mathbb{Z}^{n}}\hat{g}(\vec {y})\int_{0}^{\infty}t^{\frac{\frac{n}{2}+b+s}{a-1}}e^{-\hat{f}(2\pi\vec{y})t }\frac{dt}{t}=\frac{(2\pi)^{\frac{n}{2}+b}}{a-1}\Gamma\Bigg{(}\frac{\frac{n}{2 }+b+s}{a-1}\Bigg{)}\sum_{\vec{y}\in\mathbb{Z}^{n}}\hat{g}(\vec{y})\hat{f}(2\pi \vec{y})^{-\frac{\frac{n}{2}+b+s}{a-1}}.\]
Equating the results of these calculations for the l.h.s. and the r.h.s. we get
\[\Gamma(-s)\sum_{\vec{x}\in\mathbb{Z}^{n}}g(\vec{x})f(\vec{x})^{s}=\frac{(2\pi )^{\frac{n}{2}+b}}{a-1}\Gamma\Bigg{(}\frac{\frac{n}{2}+b+s}{a-1}\Bigg{)}\sum_{ \vec{y}\in\mathbb{Z}^{n}}\hat{g}(\vec{y})\hat{f}(2\pi\vec{y})^{-\frac{\frac{ n}{2}+b+s}{a-1}}.\]
The above arguments do not make rigorous sense for non-smooth functions, and one has to find a way to regularize the sums and integrals above, potentially obtaining an example of a functional equation, possibly with some correction terms related to the singularities.
For example, it would be interesting to extract a functional equation from the actual identity in Schwartz space \(S^{\prime}(\mathbb{R}^{2})\) of distributions of moderate growth:
Let \(\hbar>0\) and
\[\phi(x_{1},x_{2})=\frac{1}{(x_{1}-i0)^{\frac{3}{2}}}e^{\frac{i}{\hbar}(x_{1} +\sqrt{+\imath x_{1}}\sqrt{-\imath x_{2}})^{2}}.\]
This function is the boundary value of a holomorphic function in \(x_{1}\) with Im \(x_{1}<0\) and in \(x_{2}\) with Im \(x_{2}>0\). Then
\[\iint_{\mathbb{R}^{2}}\phi(x_{1},x_{2})e^{\frac{i}{\hbar}(x_{1}y_{1}+x_{2}y_{ 2})}dx_{1}dx_{2}=2\pi i\hbar\ \overline{\phi(y_{2},y_{1}+y_{2})}.\]
## 9 Conjectures and open questions
Let \(\Sigma\subset\mathbb{P}^{n+1}\) be a projective hypersurface (possibly a non-algebraic germ), non-degenerate at the generic point, and \(C\subset\mathbb{A}^{n+2}\) the corresponding cone. Assume that \(n\geq 2\).
**1.** It would be interesting to determine when \(rk(C)\) is infinite in terms of projective differential geometry of \(\Sigma\). We know the answer in the simplest nontrivial case \(n=2\): \(rk(C)\) is infinite iff \(\Sigma\) is a ruled surface, see Section 5.3. Notice that if \(\Sigma\subset\mathbb{P}^{3}\) is a ruled surface, then its projective dual \(\widehat{\Sigma}\subset\mathbb{P}^{3}\) is also ruled but the property of being ruled is not self dual if \(n>2\). For arbitrary \(n\geq 2\) we can only suggest the following
**Conjecture 9.1.** If \(rk(C)\) is infinite, then both \(\Sigma\subset\mathbb{P}^{n+1}\) and \(\widehat{\Sigma}\subset\mathbb{P}^{n+1}\) are ruled.
Notice that the Segre cubic has rank \(24\) and is ruled, but its dual is not ruled, see Section 5.9.
**2.** Let \(rk(C)\) be finite. It would be interesting to understand which values \(rk(C)\) can take. For example, if \(n=2\), then we know examples with \(rk(C)=4,\ 6\); we also know that \(rk(C)\neq 5\) and \(rk(C)\leq 6\).
**Conjecture 9.2.** If \(rk(C)\) is finite, then \(rk(C)\leq(n+1)!\).
It would be interesting to classify all \(C\) with the largest possible finite \(rk(C)\) for a given \(n\).
**Conjecture 9.3.** If \(rk(C)\) is finite, then \(\Sigma\) is algebraic.
**3.** Let \(\Sigma\) be algebraic and \(rk(C)\) finite.
**Conjecture 9.4.** If \(rk(C)>0\) and finite, then \(\Sigma\) is singular.
In all _interesting_ known examples \(\Sigma\) has only isolated singularities (double points). The only exception is described in Section 5.7; these families of surfaces have a one-dimensional singularity locus.
Let \(\mathfrak{S}_{n,d}\) be the set of algebraic hypersurfaces \(\Sigma\subset\mathbb{P}^{n+1}\) of degree \(d>2\) with the largest possible number of double points. For some values of \(n,d\) all hypersurfaces from \(\mathfrak{S}_{n,d}\) are admissible (with rank \((n+1)!\)). For example \(\mathfrak{S}_{2,4}\) are Kummer surfaces (16 double points), and \(\mathfrak{S}_{3,3}\) is the Segre cubic (10 double points). It would be interesting to determine for which values of \(n,d\) elements of \(\mathfrak{S}_{n,d}\) are admissible.
**4.** Let \(\Sigma\subset\mathbb{P}^{n+1}\) be algebraic and defined by an irreducible polynomial \(F_{1}(x_{0},...,x_{n+1})\), and its projective dual \(\widehat{\Sigma}\) defined by an irreducible polynomial \(F_{2}(y_{0},...,y_{n+1})\). Recall that the natural birational isomorphism \(\sigma:\ \Sigma\rightarrow\widehat{\Sigma}\) is given by \(y_{i}=\partial_{x_{i}}F_{1},\ i=0,...,n+1\) (see also Section 4.3, Remark 4.3.1). Let \(H=\det((\partial_{x_{i}}\partial_{x_{j}}F_{1})_{0\leq i,j\leq n+1})\). Notice that \(H\) is the determinant of the Jacobian of \(\sigma\), and therefore the divisor \(\mathcal{D}\) on \(\Sigma\) defined by \(H=0\) is the singularity locus of \(\sigma\). In certain examples of admissible hypersurfaces \(\Sigma\) the corresponding \(D\)-module is holonomic, has regular singularities on \(\mathcal{D}\), and \(\pi_{1}(\Sigma\setminus\mathcal{D})\) acts on the space of admissible \(g\). See for example Section 5.9. It would be interesting to study this class of admissible hypersurfaces. See also Section 4.5.
**5.** The family of surfaces of degree four in Section 5.7, Theorem 5.7.2 was obtained as all possible deformations of the toric surface
\[x_{0}x_{1}x_{2}^{2}=x_{3}^{4}\]
in the class of surfaces of rank \(4\). It would be interesting to study deformations of other admissible toric hypersurfaces preserving their rank. We have checked that the toric surface
\[x_{0}x_{1}x_{2}=x_{3}^{3}\]
with rank 6 (see Section 5.8) does not have such deformations.
**6.** It would be interesting to study admissible pairs of the form
\[f(x_{1},...,x_{N})=\sum_{i=0}^{N-m}\phi(x_{i+1},...,x_{i+m}),\quad g(x_{1},...,x _{N})=\sum_{i=0}^{N-m}\psi(x_{i+1},...,x_{i+m}),\quad N\gg m\]
and similarly with variables \(x_{i_{1},...,i_{k}}\), \(k>1\), \(1\leq i_{1},...,i_{k}\leq N\). Such admissible pairs could be regarded as "integrable lattice models" of a certain type.
**7.** It would be interesting to prove or disprove the following
**Conjecture 9.5.** Let \(f(x_{1},...,x_{n})\) be a function in \(n\) variables such that its Hessian is not identically zero and such that
\[\int g(\vec{x},\hbar)e^{\frac{f(\vec{x})+\vec{x}\cdot\vec{y}}{\hbar}}d\vec{x}= (2\pi\hbar)^{\frac{n}{2}}\hat{g}(\vec{y},\hbar)e^{\frac{\hat{f}(\vec{y})}{ \hbar}}\]
where \(g(\vec{x},\hbar),\ \hat{g}(\vec{y},\hbar)\) are both non-zero polynomials in \(\hbar\). Then the function \(f\) is admissible.
This conjecture is based on some calculations in the case \(n=1\) and is not supported by any calculations for \(n\geq 2\). Notice that if \(g\) does not depend on \(\hbar\) and \(\hat{g}\) is a polynomial in \(\hbar\), then the pair \(f,g\) is not necessarily admissible even in the case \(n=1\).
## Appendix. Explicit formulas for equations
Recall that \(f,g\) are functions in variables \(\vec{x}=(x_{1},x_{2},\ldots,x_{n})\) and we assume that the Hessian matrix \(\partial^{2}f:=(\partial_{i}\partial_{j}f)_{1\leq i,j\leq n}\) is non-degenerate. Denote by
\[(p^{ij})_{1\leq i,j\leq n}:=(\partial^{2}f)^{-1}\]
the inverse matrix-valued function.
Main notation: for \(k\geq 1\) (all summation variables and indices are assumed to be integers),
\[A_{k}:=\sum_{v\in[0,2k]}\sum_{\begin{subarray}{c}d_{0}\in[0,\infty);\\ d_{1},\ldots,d_{v}\in[3,\infty)\end{subarray}}\sum_{\begin{subarray}{c}a_{ij}\in[0,\infty)\\ \text{where }0\leq i\leq j\leq v,\\ \text{satisfying }\forall i:\\ d_{i}=\sum_{j<i}a_{ji}+\\ +2a_{ii}+\sum_{j>i}a_{ij}\end{subarray}}\frac{(-1)^{v}}{Sym_{(d_{i}),(a_{ij})}}\sum_{\begin{subarray}{c}b_{ijl},c_{ijl}\in[1,n]\\ \text{where }0\leq i\leq j\leq v\\ \text{and }1\leq l\leq a_{ij}\end{subarray}}\left(\prod_{\begin{subarray}{c}i,j\in[0,v]\\ \text{such that }i\leq j,\\ l\in[1,a_{ij}]\end{subarray}}p^{b_{ijl},c_{ijl}}\right)\]
\[\cdot\left[\prod_{l_{1}\in[1,a_{00}]}(\partial_{b_{00l_{1}}}\partial_{c_{00l_{1}}})\prod_{\begin{subarray}{c}i\in[1,v]\\ l_{2}\in[1,a_{0i}]\end{subarray}}\partial_{b_{0il_{2}}}g\right]\cdot\prod_{i\in[1,v]}\left[\prod_{\begin{subarray}{c}j\in[0,i)\\ l_{1}\in[1,a_{ji}]\end{subarray}}\partial_{c_{jil_{1}}}\prod_{l_{2}\in[1,a_{ii}]}(\partial_{b_{iil_{2}}}\partial_{c_{iil_{2}}})\prod_{\begin{subarray}{c}j\in(i,v]\\ l_{3}\in[1,a_{ij}]\end{subarray}}\partial_{b_{ijl_{3}}}f\right]\]
Here the symmetry factor is defined by
\[Sym_{(d_{i}),(a_{ij})}=\prod_{i}m_{i}!\cdot\prod_{\begin{subarray}{c}i,j\in[0,v]\\ \text{such that }\\ i\leq j\end{subarray}}a_{ij}!\cdot\prod_{i\in[0,v]}2^{a_{ii}}\]
where \(m_{1},m_{2},\cdots\geq 1\) are multiplicities of the repeating terms in sequence \((d_{1},d_{2},\ldots,d_{v})\), i.e.
\[d_{1}=\cdots=d_{m_{1}}>d_{m_{1}+1}=\cdots=d_{m_{1}+m_{2}}>d_{m_{1}+m_{2}+1}=\cdots\]
Meaning: let us expand \(f\) at some point \(\vec{x}^{(0)}\) as
\[f=f_{0}+f_{1}+f_{2}+f_{\geq 3}\]
where \(f_{0}=f(\vec{x}^{(0)})\) is a constant, \(f_{1},f_{2}\) are homogeneous polynomials in \(\vec{x}-\vec{x}^{(0)}\) of degree \(1\) and \(2\) respectively, and \(f_{\geq 3}\) is a series in \(\vec{x}-\vec{x}^{(0)}\) containing terms of degrees \(\geq 3\) only.
Then the formal Fourier transform, at point
\[\vec{y}^{(0)}=\partial f_{|\vec{x}^{(0)}}:=(\partial_{1}f,\ldots,\partial_{n} f)_{|\vec{x}^{(0)}}\]
is equal, after normalization, to
\[\int g\,e^{-\frac{f-f_{0}-f_{1}}{\hbar}}d^{n}\vec{x}=\int g\,e^{-\frac{f_{2}+f_{\geq 3}}{\hbar}}d^{n}\vec{x}=\\ =\sum_{v\geq 0}\frac{(-1)^{v}}{v!}\hbar^{-v}\int gf^{v}_{\geq 3}\,e^{-\frac{f_{2}}{\hbar}}d^{n}\vec{x}=\\ =(2\pi\hbar)^{n/2}\det(\partial^{2}f_{|\vec{x}^{(0)}})^{-1/2}\left(g_{|\vec{x}^{(0)}}+\sum_{k\geq 1}\hbar^{k}A_{k|\vec{x}^{(0)}}\right)\]
In terms of (not connected) Feynman graphs, \(v\geq 0\) denotes the number of vertices at which we put Taylor coefficients of \(f_{\geq 3}\) (and at exactly one exceptional vertex we put Taylor coefficients of \(g\)). We label vertices by \(\{0,1,\ldots,v\}=[0,v]\cap\mathbb{Z}\) where \(0\) corresponds to \(g\), and the rest to \(f_{\geq 3}\). Moreover, we assume that the ordering of vertices is chosen in such a way that \(d_{1}\geq d_{2}\geq\cdots\geq d_{v}\geq 3\) where for all \(i\in[0,v]\) number \(d_{i}\) is the degree (valency) of vertex labeled by \(i\). Denote by \(a_{ij}\geq 0\) the number of edges connecting vertices \(i\) and \(j\). We enumerate edges connecting \(i\) and \(j\) by \(\{1,\ldots,a_{ij}\}\). Then we put two space indices \(b_{ijl},c_{ijl}\in[1,n]\) on two ends of the edge corresponding to \(l\in[1,a_{ij}]\). The factors \(\prod_{i}m_{i}!\), \(\prod_{ij}a_{ij}!\) and \(\prod_{i}2^{a_{ii}}\) come from symmetry, the rest is the usual Wick formula.
The total number of edges \(e\) satisfies constraints:
\[e\geq\frac{3}{2}v,\quad k=e-v\implies e\in\{k,\ldots,3k\}\,,\]
hence in the expression \(A_{k}\) the propagator \((p^{ij})_{1\leq i,j\leq n}\) appears at most \(3k\) times.
One-loop exactness is equivalent to an infinite sequence of differential equations:
\[A_{1}=0,\ A_{2}=0,\ldots \tag{9.47}\]
Up to symmetry, the number of distinct graphs for \(A_{1},A_{2},A_{3}\) is \(5,41,378\) respectively.
For example, \(5\) graphs appearing in \(A_{1}\) are the following:
(Figure: the five graphs \(\Gamma_{1},\ldots,\Gamma_{5}\) contributing to \(A_{1}\).)
## Acknowledgements
We are grateful to Robert Bryant and Joseph M. Landsberg for useful discussions. We are grateful to Nikolai Perkhunkov for useful advice and help with managing huge Maple computations. A.O. is grateful to IHES for invitations and an excellent working atmosphere.
|
2305.03736 | Warp drive solutions in spherical coordinates with anisotropic matter
configurations | In this work we study the influence of isotropic and anisotropic fluids on
the spherically symmetric warp metric. We evaluate the energy conditions and
the influence of including a cosmological constant type term. We find that,
considering this term, there is a trade-off between the weak and strong energy
conditions. The obtained solutions are numerical and we solve the system for
both the stationary and the full regime. The influence of imposing the zero
expansion condition has been explored. We find a wide diversity of behaviours
for the solutions. In general there are regions of spacetime where the energy
conditions can be at least partially satisfied. Finally, we calculate the value
of the total mass using the density found in the numerical simulations, finding
examples where it remains positive during the entire evolution of the system. | Gabriel Abellán, Nelson Bolívar, Ivaylo Vasilev | 2023-05-04T21:16:26Z | http://arxiv.org/abs/2305.03736v1 | # Warp drive solutions in spherical coordinates with anisotropic matter configurations
###### Abstract
In this work we study the influence of isotropic and anisotropic fluids on the spherically symmetric warp metric. We evaluate the energy conditions and the influence of including a cosmological constant type term. We find that, considering this term, there is a trade-off between the weak and strong energy conditions. The obtained solutions are numerical and we solve the system for both the stationary and the full regime. The influence of imposing the zero expansion condition has been explored. We find a wide diversity of behaviours for the solutions. In general there are regions of spacetime where the energy conditions can be at least partially satisfied. Finally, we calculate the value of the total mass using the density found in the numerical simulations, finding examples where it remains positive during the entire evolution of the system.
warpdrive, energy conditions, spherical symmetric warp, anisotropic solutions
## 1 Introduction
It is known that in general relativity particles can travel globally at superluminal velocities [1; 2; 3; 4]. Alcubierre explored this idea [5] and proposed a way to drive matter at velocities higher than the speed of light. The mechanism proposed in his work creates a distortion of spacetime, called a warp bubble, in which a portion of spacetime contracts in front of the bubble and expands behind it as the bubble moves along a geodesic. The line element proposed by Alcubierre was,
\[ds^{2}=-dt^{2}+\left(dx-f(r_{s})v_{s}dt\right)^{2}+dy^{2}+dz^{2}\, \tag{1}\]
with \(r_{s}=\sqrt{(x-x_{s})^{2}+y^{2}+z^{2}}\) and \(v_{s}=\frac{dx_{s}}{dt}\). This corresponds to an ADM-like decomposition of the line element [6; 7; 8]. Studying this metric, Alcubierre already noted that it implied the violation of energy conditions, since it seemed that a negative energy density would be necessary to sustain such a bubble.
Since Alcubierre's seminal work, there have been numerous contributions aimed at improving and better understanding the physics behind the warp metric he proposed. A key aspect has been to understand the properties of spacetimes that would allow superluminal velocities to be achieved [3; 4]. Among the most relevant contributions are modifications of the original metric that allow for a significant decrease in the energy involved [9]. Other explorations involved imposing constraints on the system, either on the spacetime or on the matter fluid, as can be seen in Natario's work [10], where he proposed a warp drive with zero expansion. Lobo and Visser [11] also discussed several characteristics of the energy-matter content of the warp bubble, determining some interesting properties, for instance, that it must be massless at the centre. It is worth remarking that in these studies the amount of energy needed is significantly reduced. One aspect that has received much attention from the very beginning has been the study of energy conditions as a means of validating the physical feasibility of the warp drive [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. The occurrence of horizons and closed timelike curves, as well as the development of instabilities due to the presence of quantum matter, has also been studied [26; 27; 28; 29; 30; 31]. This has been the subject of extensive debate in the community and, although much progress has been made, there is still no definitive consensus.
In the works mentioned so far, the starting point is the metric, and from there the properties of the matter that supports this spacetime are deduced. However, it is possible to proceed in the reverse way, that is, by fixing some properties of the matter and studying what the corresponding spacetime must look like. In a series of papers [32; 33; 34; 35; 36], Santos et al. proposed to study the warp problem from this point of view. They examined how the elements of the warp metric are constrained if some kind of matter configuration is imposed, for instance dust or a perfect fluid. In this way they were able to obtain some relations for the deformation function given by the Einstein equations. The
present work adheres to this approach. In a previous publication we have proposed an alternative metric to study the issues associated with warp drive [37]. This metric exploits the advantages of having spherical symmetry in the description of spacetime.
In this work we analyse in depth the consequences for the spacetime of considering different types of fluids.
In section 2 we discuss the most important results related to the warp metric in spherical coordinates. We also present the energy-momentum tensor that we will use to describe the types of fluids under consideration, and we describe the system of equations obtained in terms of traveling patterns, which allows us to study the stationary behaviour of the system. Section 3 is devoted to evaluating the energy conditions associated with the problem, both from the fluid and from the metric points of view. We then look at the consequences for the system of imposing the zero expansion condition. Subsequently, in sections 4 and 5 we solve numerically the system of equations obtained, studying solutions for isotropic fluids and then for anisotropic fluids, both in the steady state and in the full regime. We also include a discussion of the effect of adding a cosmological constant type term to the system. In section 6 we use the results of the simulations to determine the behaviour of the mass as the system evolves. Finally, we make some concluding remarks and considerations for future work.
## 2 Warp drive in spherical coordinates
In this section we present the ingredients of the model we intend to build. We want to write Einstein's equations for the metric proposed. That is
\[G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=8\pi T_{\mu\nu}\;. \tag{2}\]
Here \(R_{\mu\nu}\) is the Ricci tensor, \(R\) the Ricci scalar, \(g_{\mu\nu}\) the metric and \(T_{\mu\nu}\) the energy-momentum tensor. Also we are working in geometric units where \(G=c=1\).
### Metric and Einstein Tensor
In a recent work [37], motivated by Alcubierre's metric, we proposed a warp line element using spherical coordinates
\[ds^{2}=-dt^{2}+(dr-\beta dt)^{2}+r^{2}d\Omega^{2}\;, \tag{3}\]
with \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\) and \(\beta\) the form function. Note the non-diagonal element and the fact that \(\beta=0\) recovers flat Minkowski spacetime.
Using this metric we calculate the elements of the Einstein tensor, which are
\[G_{00} = \frac{\beta}{r^{2}}\Bigg{[}\left(1-\beta^{2}\right)\left(\beta+2r \frac{\partial\beta}{\partial r}\right)-2r\beta\frac{\partial\beta}{\partial t} \Bigg{]}, \tag{4}\] \[G_{01} = \frac{\beta}{r^{2}}\left(\beta^{2}+2r\beta\frac{\partial\beta}{ \partial r}+2r\frac{\partial\beta}{\partial t}\right),\] (5) \[G_{11} = -\frac{1}{r^{2}}\left(\beta^{2}+2r\beta\frac{\partial\beta}{ \partial r}+2r\frac{\partial\beta}{\partial t}\right),\] (6) \[G_{22} = -r\Bigg{\{}\beta\left(2\frac{\partial\beta}{\partial r}+r\frac{ \partial^{2}\beta}{\partial r^{2}}\right)+\frac{\partial\beta}{\partial t}+ \ r\left[\left(\frac{\partial\beta}{\partial r}\right)^{2}+\frac{\partial^{2 }\beta}{\partial t\partial r}\right]\Bigg{\}},\] (7) \[G_{33} = -r\sin^{2}\theta\Bigg{\{}\beta\left(2\frac{\partial\beta}{ \partial r}+r\frac{\partial^{2}\beta}{\partial r^{2}}\right)+\frac{\partial \beta}{\partial t}+r\left[\left(\frac{\partial\beta}{\partial r}\right)^{2}+ \frac{\partial^{2}\beta}{\partial t\partial r}\right]\Bigg{\}}. \tag{8}\]
In general, we consider the form function as a quantity dependent on both the time and radial coordinates. Note also that we only have five non-zero components. This represents a remarkable simplification with respect to Alcubierre's metric written in Cartesian coordinates.
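The components (4)-(8) can be reproduced with a computer algebra system. The following sketch (ours, not part of the paper) computes the Einstein tensor of the metric (3) with sympy and compares the \(G_{00}\) component with Eq. (4); the remaining components can be compared in the same way.

```python
# Sketch (ours): recompute the Einstein tensor of the metric (3) and compare G_00 with Eq. (4).
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
beta = sp.Function('beta')(t, r)

g = sp.Matrix([[-(1 - beta**2), -beta, 0, 0],
               [-beta, 1, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, r**2*sp.sin(th)**2]])
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
               for a in range(n))
    expr += sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                for a in range(n) for d in range(n))
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, ricci)
Rscal = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(n) for b in range(n)))
G00 = sp.simplify(Ric[0, 0] - Rscal*g[0, 0]/2)

G00_paper = beta/r**2*((1 - beta**2)*(beta + 2*r*sp.diff(beta, r))
                       - 2*r*beta*sp.diff(beta, t))
print(sp.simplify(G00 - G00_paper))   # expected: 0
```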
### Energy-Momentum tensor
The next step in writing the Einstein equations consists in fixing the material content of the system under study. For this we consider an anisotropic fluid given by
\[T^{(a)}_{\mu\nu}=(\rho+p_{\perp})u_{\mu}u_{\nu}+p_{\perp}g_{\mu\nu}+(p_{r}-p_{ \perp})s_{\mu}s_{\nu}\;, \tag{9}\]
where we have used the following four-vectors
\[u_{\mu}=(-1,0,0,0)\;,\qquad s_{\mu}=(-\beta,1,0,0)\;, \tag{10}\]
these vectors are timelike and spacelike respectively and satisfy the relations \(u^{\mu}u_{\mu}=-1\), \(s^{\mu}s_{\mu}=1\) and \(u^{\mu}s_{\mu}=0\). In (9) \(\rho\) is the relativistic energy density, \(p_{r}\) the radial pressure and \(p_{\perp}\) the tangential pressure.
We are interested in considering the effects of including a cosmological constant type term. To facilitate the subsequent numerical solution of the system of equations, we will include this term in the energy-momentum tensor, namely
\[T^{(\Lambda)}_{\mu\nu}=-\frac{\Lambda}{8\pi}g_{\mu\nu}=-\rho_{\Lambda}g_{\mu \nu}\;, \tag{11}\]
with \(\Lambda\) (and \(\rho_{\Lambda}\)) a constant.
So, the total energy-momentum tensor is \(T_{\mu\nu}\!=\!T_{\mu\nu}^{(a)}+T_{\mu\nu}^{(\Lambda)}\). Using equations (9) and (11) we can write in matrix form
\[T_{\mu\nu}=\left[\begin{array}{cccc}\rho+\beta^{2}p_{r}+(1-\beta^{2})\rho_{\Lambda}&-\beta(p_{r}-\rho_{\Lambda})&0&0\\ -\beta(p_{r}-\rho_{\Lambda})&p_{r}-\rho_{\Lambda}&0&0\\ 0&0&r^{2}(p_{\perp}-\rho_{\Lambda})&0&\\ 0&0&0&r^{2}\sin^{2}\!\theta\left(p_{\perp}-\rho_{\Lambda}\right)\end{array}\right]. \tag{12}\]
It is interesting to compare with the tensor proposed in [36]. We can easily check that the system admits anisotropic configurations.
### Einstein's equations
Using the Einstein tensor components (4)-(8) and the energy-momentum tensor (12) we find the Einstein's equations for this system
\[\beta\left(\beta+2r\frac{\partial\beta}{\partial r}\right) = 8\pi r^{2}\rho^{(e)}\,, \tag{13}\] \[\beta\left(\beta+2r\frac{\partial\beta}{\partial r}\right)+2r \frac{\partial\beta}{\partial t} = -8\pi r^{2}p_{r}^{(e)},\] (14) \[\beta^{2}+r\frac{\partial\beta}{\partial t}-r^{2}\left[\frac{ \partial}{\partial r}\left(\beta\frac{\partial\beta}{\partial r}\right)+ \frac{\partial^{2}\beta}{\partial t\partial r}\right] = 8\pi r^{2}\Delta\,, \tag{15}\]
with the effective quantities \(\rho^{(e)}=\rho+\rho_{\Lambda}\), \(p_{r}^{(e)}=p_{r}-\rho_{\Lambda}\) and \(\Delta=p_{\perp}-p_{r}\) the anisotropy factor. From the Einstein tensor components (5) and (6) the single equation (14) arises. The same is true for the components (7) and (8), which produce the same equation for \(p_{\perp}^{(e)}=p_{\perp}-\rho_{\Lambda}\). That is, we have just three independent equations to solve. The system (13)-(15) has four degrees of freedom: \(\beta\), \(\rho\), \(p_{r}\) and \(\Delta\).
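As a consistency check (ours, not part of the paper), one can build \(T_{\mu\nu}\) from (9)-(11) with the four-vectors (10), combine it with the components (4) and (6) of the Einstein tensor through \(G_{\mu\nu}=8\pi T_{\mu\nu}\), and recover Eqs. (13) and (14).

```python
# Sketch (ours): build T_{mu nu} from (9)-(11) and recover Eqs. (13)-(14).
import sympy as sp

t, r, th = sp.symbols('t r theta')
b = sp.Function('beta')(t, r)
rho, pr, pt, rhoL = sp.symbols('rho p_r p_perp rho_Lambda')

g = sp.Matrix([[-(1 - b**2), -b, 0, 0],
               [-b, 1, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, r**2*sp.sin(th)**2]])
u = sp.Matrix([-1, 0, 0, 0])       # u_mu, Eq. (10)
s = sp.Matrix([-b, 1, 0, 0])       # s_mu, Eq. (10)
T = (rho + pt)*u*u.T + pt*g + (pr - pt)*s*s.T - rhoL*g   # Eqs. (9) + (11)

G00 = b/r**2*((1 - b**2)*(b + 2*r*sp.diff(b, r)) - 2*r*b*sp.diff(b, t))   # Eq. (4)
G11 = -(b**2 + 2*r*b*sp.diff(b, r) + 2*r*sp.diff(b, t))/r**2              # Eq. (6)

sol = sp.solve([sp.Eq(G00, 8*sp.pi*T[0, 0]), sp.Eq(G11, 8*sp.pi*T[1, 1])], [rho, pr])
print(sp.simplify(8*sp.pi*r**2*(sol[rho] + rhoL)))   # beta*(beta + 2 r beta_r), Eq. (13)
print(sp.simplify(-8*sp.pi*r**2*(sol[pr] - rhoL)))   # beta^2 + 2 r beta beta_r + 2 r beta_t, Eq. (14)
```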
### Stationary, traveling-type solutions
We are interested in the possibility that the system of equations produces a traveling wave type solution. For this we use the parametrization
\[\beta(t,r)=f(r_{s})v_{s}(t)\;. \tag{16}\]
Here \(r_{s}=\|r-v_{s}(t)t\|\), with \(v_{s}(t)\) a time-dependent spatial velocity function. In the following we consider this velocity to be constant, so the displacement is uniform in the radial direction.
Using this parametrization we find that the equations to be solved for \(r-v_{s}t\geq 0\) are
\[f^{2}+2(r_{s}+v_{s}t)ff^{\prime}=\frac{8\pi}{v_{s}^{2}}(r_{s}+v_ {s}t)^{2}\rho^{(e)}\;, \tag{17}\] \[f^{2}+2(r_{s}+v_{s}t)(f-1)f^{\prime}=-\frac{8\pi}{v_{s}^{2}}(r_{ s}+v_{s}t)^{2}p_{r}^{(e)}\;, \tag{18}\]
\[f^{2}-(r_{s}+v_{s}t)f^{\prime}-(r_{s}+v_{s}t)^{2}[(f^{\prime})^{2}+(f-1)f^{\prime \prime}]=\frac{8\pi}{v_{s}^{2}}(r_{s}+v_{s}t)^{2}\Delta\,, \tag{19}\]
where primes denote derivatives with respect to \(r_{s}\). Moreover, for \(r-v_{s}t<0\) the system vanishes and we obtain the traveling-wave behavior.
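A short symbolic check (ours, not part of the paper) that the substitution (16), with constant \(v_{s}\), turns the left-hand side of Eq. (14) into the structure of Eq. (18):

```python
# Sketch (ours): beta = v_s f(r - v_s t) turns Eq. (14) into v_s^2 [f^2 + 2(r_s + v_s t)(f - 1) f'].
import sympy as sp

t, r, vs = sp.symbols('t r v_s', positive=True)
f = sp.Function('f')
rs = r - vs*t                       # r_s for r - v_s t >= 0
beta = vs*f(rs)
fprime = sp.diff(f(rs), r)          # f'(r_s), since d(r_s)/dr = 1

lhs14 = beta**2 + 2*r*beta*sp.diff(beta, r) + 2*r*sp.diff(beta, t)
lhs18 = vs**2*(f(rs)**2 + 2*(rs + vs*t)*(f(rs) - 1)*fprime)
print(sp.simplify(lhs14 - lhs18))   # expected: 0
```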
The systems of equations (13)-(15) and (17)-(19) describe the complete and the stationary system, respectively. To solve them it is necessary to provide boundary conditions. In both cases we will use
\[\lim_{r\to 0}\beta=\lim_{r\rightarrow\infty}\beta=0\;. \tag{20}\]
An important point to note is that only the complete system requires initial conditions. This is one of the advantages of working with the equations describing travelling patterns.
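To illustrate how the stationary system is used in the following sections, the sketch below (ours, not from the paper) evaluates the effective density of Eq. (17) and the expansion (33) at \(t=0\) for an assumed trial form function \(f(r_{s})\), a smooth bump satisfying the boundary conditions (20); the profile and its parameters are our own choice.

```python
# Illustrative sketch (ours): effective density (17) and expansion (33) for an
# assumed profile f(r_s) = (r_s/R)^2 exp(1 - (r_s/R)^2), vanishing at r = 0 and infinity.
import numpy as np

R, vs, t = 1.0, 0.5, 0.0                  # assumed parameters (our choice)
r = np.linspace(1e-4, 5.0, 4000)
rs = np.abs(r - vs*t)

f = (rs/R)**2*np.exp(1 - (rs/R)**2)
fp = np.gradient(f, rs)                   # df/dr_s

beta = vs*f
rho_eff = vs**2*(f**2 + 2*(rs + vs*t)*f*fp)/(8*np.pi*(rs + vs*t)**2)   # Eq. (17)
theta = 2*beta/r + vs*fp                                               # Eq. (33)

print("rho^(e) range:", rho_eff.min(), rho_eff.max())
print("expansion range:", theta.min(), theta.max())
print("WEC (22) everywhere?", bool(np.all(beta*(beta + 2*r*vs*fp) >= -1e-12)))
```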
## 3 Energy conditions and expansion
In this section we study energy conditions, which are constraints imposed on the energy-momentum tensor so that we can evaluate the viability of the model and control non-physical aspects of the system [7; 38; 39; 40; 41; 42].
### Weak energy condition
The weak energy condition (WEC) requires that, for every timelike vector \(u^{\mu}\), \(T_{\mu\nu}u^{\mu}u^{\nu}\geq 0\). That is
\[T_{\mu\nu}u^{\mu}u^{\nu}=\rho+\rho_{\Lambda}\geq 0\;. \tag{21}\]
Here \(u_{\mu}\) is given by (10) and is a timelike vector. In fact, this vector defines an Eulerian observer for the system. So, we find that weak energy condition is satisfied if \(\rho+\rho_{\Lambda}\geq 0\). Using Einstein's equations, the equivalent condition on the \(G_{\mu\nu}\) requires that
\[\beta\left(\beta+2r\frac{\partial\beta}{\partial r}\right)\geq 0\;. \tag{22}\]
For this result we have used \(r^{2}>0\), which simplifies the final expression.
### Dominant energy condition
The dominant energy condition (DEC) requires, in addition to the WEC, that \(T_{\nu}^{\mu}u^{\nu}\) be a future-pointing causal vector. Thus, the weak energy condition and \(F^{\mu}F_{\mu}\leq 0\), with \(F^{\mu}=T^{\mu\nu}u_{\nu}\), must both be satisfied.
After some straightforward calculations, we find that
\[F^{\mu}F_{\mu}=-(\rho+\rho_{\Lambda})^{2}\leq 0\;, \tag{23}\]
Note that, if the conservation of \(T_{\mu\nu}\) is also considered, both conditions (21) and (23) guarantee the causal structure in local matter configurations. In terms of the metric components, the null energy condition is equivalent to
\[\frac{\partial\beta}{\partial t}\leq 0\;. \tag{31}\]
That is, as time evolves, \(\beta\) decreases.
Finally, a comment about the cosmological-constant-type term. It is important to note that the addition of \(\rho_{\Lambda}\) to the model gives additional freedom, allowing a shift to be made when a condition is not satisfied on its own. We consider this to be important because it can help to establish more general procedures to deal with the problems that warp drives have traditionally suffered with respect to the energy conditions. However, there is an interplay between the weak and strong conditions: we can manipulate the term to enforce one of them, but this necessarily makes the other worse. We will see examples of this when studying the solutions.
### Expansion
As can be seen in (3), the geometry of the 3-space is flat, and therefore the extrinsic curvature tensor \(K_{ij}\) encodes the information about the curvature. The extrinsic curvature tensor describes how the 3-dimensional hypersurfaces are embedded in the 4-dimensional spacetime. It is given by the expression
\[K_{ij}=\frac{1}{2\alpha}\Big{(}D_{i}\beta_{j}+D_{j}\beta_{i}-\frac{\partial g _{ij}}{\partial t}\Big{)}\;, \tag{32}\]
where \(D_{i}\) is the covariant derivative relative to the 3-space metric, and \(\alpha\) is the lapse function, which we set to \(\alpha=1\) as usual. This choice means that the timelike curves normal to the 3-space hypersurfaces are geodesics, i.e., the Eulerian observers are in free fall. According to these observers, the expansion \(\Theta\) of the volume elements is written in terms of \(K_{ij}\) and is given by
\[\Theta = -\mathrm{Tr}[K_{ij}] \tag{33}\] \[= 2\frac{\beta}{r}+\frac{\partial\beta}{\partial r}\;.\]
In fig. 1 it is possible to observe the behaviour of the expansion for a typical \(\beta\) solution (see the next sections). Analogously to Alcubierre's findings, there is a deformation that lives in a compact domain: the volume elements are expanding behind and contracting at a sort of wavefront, and there is a flat region inside the bubble. All this will have effects on the dynamical equations and the energy conditions, as we will see next.
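To make this qualitative picture concrete, the expansion (33) can be evaluated numerically for a bump-like profile. The sketch below is illustrative only: the Gaussian shape (mirroring the initial condition (40)) and the grid are our own choices, not the actual solution behind fig. 1.

```python
# Evaluate Theta = 2*beta/r + d(beta)/dr of Eq. (33) for an assumed Gaussian beta(r).
import numpy as np

r_e = 10.0
r = np.linspace(1e-3, r_e, 2001)                 # avoid r = 0
beta = np.exp(-(r - 0.5*r_e)**2 / (0.25*r_e))    # same shape as Eq. (40)

theta = 2.0*beta/r + np.gradient(beta, r)

print("max expansion   Theta = %+.3f at r = %.2f" % (theta.max(), r[theta.argmax()]))
print("max contraction Theta = %+.3f at r = %.2f" % (theta.min(), r[theta.argmin()]))
```

The positive and negative extrema sit on opposite sides of the bump, reproducing the expansion/contraction pattern described above.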
It is possible to use equation (33) and impose a zero expansion condition on the Einstein equations. This restricts the possibilities of fluid evolution and hence the shape of spacetime. With \(\Theta=0\), we find that
\[\beta^{2} = -\frac{8\pi}{3}r^{2}\rho\;,\] \[\frac{2}{r}\frac{\partial\beta}{\partial t}-\frac{3}{r^{2}}\beta^ {2} = -8\pi p_{r}\;, \tag{34}\]
\[\frac{3}{r}\frac{\partial\beta}{\partial t}-\frac{9}{r^{2}}\beta^{2}-\frac{\beta} {r^{2}}\frac{dr}{dt} = 8\pi\Delta\;. \tag{35}\]
These equations are interesting because it is evident that, in order to have a real \(\beta\), \(\rho\) must be negative. So we have a close relation between the expansion factor and the energy density, as was already pointed out in [10].
To conclude this section we find it is worthwhile to evaluate the energy conditions using the null expansion condition. Using the constraint \(\Theta=0\) over
Figure 1: The plot depicts the expansion of the volume elements \(\Theta\) of the spherical warp bubble in the non-stationary case with \(\rho_{0}\) being a form of parametrization to account for the coordinates “perpendicular to \(r\)”. The full expansion is shown in the upper plot. A transverse cut is shown in the lower plot, where the flat central region and the regions of expansion and contraction can be noted.
expression (33) we find in terms of \(\beta\) that
\[\text{WEC:}\qquad\beta^{2}\leq 0\;, \tag{36}\] \[\text{DEC:}\qquad\beta^{2}\geq 0\;,\] (37) \[\text{SEC:}\qquad\beta^{2}\geq 0\;,\] (38) \[\text{NEC:}\qquad\frac{\partial\beta}{\partial t}\leq 0\;. \tag{39}\]
It is clear that the weak condition cannot be satisfied under the zero expansion regime. However, the dominant and strong conditions are always satisfied. The null condition does not undergo any modification using the constraint.
## 4 Case 1: isotropic solutions
Since the system of equations is highly non-linear, we will solve it numerically. In order to do so, we consider several cases separately.
### Non-stationary solutions (\(\beta(t,r)\))
First we solve the system (13)-(15) without imposing any constraint. In this case we set \(\Delta=0\), which reduces the degrees of freedom to three, and the system closes. We also set \(\rho_{\Lambda}=0\); the effects of including this term are considered later. In order to solve the system it is necessary to give an initial condition for \(\beta\). For convenience we will use a Gaussian given by
\[\beta(0,r)=\exp\Big{[}-\frac{(r-0.5r_{e})^{2}}{0.25r_{e}}\Big{]}\;. \tag{40}\]
Here \(r_{e}\) represents the spatial maximum value for the numerical integration domain.
In figure 2 we have the form function \(\beta\), the radial pressure \(p_{r}\) and the empirical equation of state \(p_{r}(\rho)\) for several times. We can notice that \(\beta\) decreases in amplitude as it becomes wider. We also see that the radial pressure supporting this warp bubble has regions where it is positive and others where it is negative. This is interesting because it tells us that this distribution of matter generates non-zero pressure gradients that could affect the motion of the system. On the other hand, the empirical equation of state shows a completely non-linear behaviour and is multivalued.
In figure 3 we show the energy conditions. In general, we can observe that for all of them there are regions where they are fulfilled and others where they are not. However, in all cases they are bounded and therefore do not show singular behaviour. This is relevant since it allows us to use the cosmological constant term to enforce some of them, although we again see the interplay between the strong and weak conditions mentioned above. We also find that the null condition is violated in some regions, as has been suggested in [24].
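One way to carry out this kind of pointwise check is sketched below. The \(\beta(t,r)\) array is only a placeholder (the actual numerical solution is not reproduced here) and all array names are ours; the effective density and radial pressure follow directly from (13)-(14).

```python
# Recover rho_eff and p_r_eff from a gridded beta(t, r) and check signs pointwise.
import numpy as np

r = np.linspace(1e-3, 10.0, 1001)
t = np.linspace(0.0, 8.0, 401)
T, R = np.meshgrid(t, r, indexing="ij")
beta = np.exp(-(R - 5.0)**2 / 2.5) / (1.0 + T)        # placeholder profile

beta_t = np.gradient(beta, t, axis=0)
beta_r = np.gradient(beta, r, axis=1)

rho_eff = beta*(beta + 2*R*beta_r) / (8*np.pi*R**2)                   # Eq. (13)
p_r_eff = -(beta*(beta + 2*R*beta_r) + 2*R*beta_t) / (8*np.pi*R**2)   # Eq. (14)

wec = rho_eff                  # WEC:  rho_eff >= 0
nec = rho_eff + p_r_eff        # equals -2*R*beta_t/(8*pi*R**2); sign reproduces d(beta)/dt <= 0

for name, q in [("WEC", wec), ("NEC", nec)]:
    print(f"{name} violated on {100*np.mean(q < 0):.1f}% of the grid points")
```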
### Stationary solutions (\(v_{s}f(r_{s})\))
The traveling-wave-type solutions are interesting because they serve the purpose of what was originally proposed by Alcubierre. They also represent a sort of stationary behavior for the system under study. In figs. 4 and 5 we show the stationary solutions for \(\beta\), the pressure \(p_{r}\) and the empirical equation of state \(p_{r}(\rho)\). These are shown for different velocities \(v_{s}\), ranging from \(v_{s}=0.3\) (subluminal) to \(v_{s}=1.2\) (superluminal). In fig. 4 the velocities \(v_{s}=0.3\) and \(v_{s}=0.8\) are depicted. As expected, the amplitude of \(\beta\) decreases with time, although the lower velocity shows the largest initial amplitudes. This behavior is also seen in the superluminal case, fig. 5. The form function \(\beta\) displaces its origin with the velocity, and the bubble shape shrinks with larger \(v_{s}\).
The radial pressure as in the non-stationary case has positive and negative regions constrained to the size of \(\beta\) for each time and velocity. Nevertheless, the distribution of the pressures differs from the non-stationary case in what
Figure 2: Non–stationary Case 1. Form function \(\beta\), radial pressure \(p_{r}\) and empirical equation of state for \(t=1\) (blue), \(3\) (orange), \(5\) (green) and \(8\) (red).
we think is an interesting feature. For instance, the boundary values of the pressure that sustains the bubble are finite: negative at lower \(r\) (the origin point of the \(\beta\) form) and positive near the external boundary. This is in contrast with the zero boundary values in the non-stationary scenarios. It is worth noting that, for the larger subluminal velocities, the pressure distribution suggests a pressure gradient along the bubble.
Looking at the empirical equations of state, one can notice the appearance of regions where \(p_{r}\) can be linearly approximated. Nevertheless, as with the non-stationary solutions, we note that the relationship between \(p_{r}\) and \(\rho\) is multivalued.
We can divide the empirical plots into quadrants and observe that the system possesses regions with different sign combinations. We want to highlight the upper right quadrant, which shows positive signs for both \(\rho\) and \(p_{r}\). These positive regions have larger contributions for the times chosen when \(v_{s}=0.3\), but as the velocity increases the contributions at larger times become smaller, as can be seen in fig. 4 and, even more pronounced, in fig. 5, where only the shorter times have both contributions positive.
Figure 3: Non–stationary case 1. From the top we show NEC, SEC and WEC for \(t=1\) (blue), \(t=3\) (orange), \(t=5\) (green) and \(t=8\) (red).
## 5 Case 2: anisotropic solutions
In this case we consider an arbitrary anisotropy factor \(\Delta\), so it is necessary to give an additional condition to close the system.
First we will give an equation of state in the extended polytrope form
\[p_{r}=K\rho^{\gamma}+\epsilon\rho\;, \tag{41}\]
with \(K\), \(\gamma\) and \(\epsilon\) parameters that need to be provided.
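A minimal helper implementing the closure (41) might look as follows; the default parameter values are those quoted in the caption of fig. 8 (left column) and are used here only as defaults.

```python
def p_r_polytrope(rho, K=1.0, gamma=2.0, eps=-0.5):
    """Extended polytrope closure of Eq. (41): p_r = K*rho**gamma + eps*rho."""
    return K * rho**gamma + eps * rho

print(p_r_polytrope(0.1), p_r_polytrope(-0.1))   # approx -0.04 and 0.06
```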
### Non-stationary solutions (\(\beta(t,r)\))
Solving for this system we show the results in figs. 8 and 9. For the chosen set of parameters, we can observe in fig. 8 that the form function \(\beta\) is positive
Figure 4: Stationary case 1 for \(v_{s}=0.3\) (left column) and \(v_{s}=0.8\) (right column). Each curve corresponds to times \(t=15\) (orange), \(t=20\) (blue) and \(t=25\) (green)
and evolves towards a shock wave type behaviour as time progresses. On the other hand, the pressure has regions where it is positive and negative and becomes narrower as time progresses. The empirical equation of state shows a practically linear behaviour for \(K=1\) (left column); however, for \(K=50\) (right column) the system moves further away from linearity, especially when the density acquires negative values.
In fig. 9 we have the energy conditions for the anisotropic case. We can observe again that for all of them there are regions where they are fulfilled and others where they are not. Moreover, in all cases we notice that the energy conditions are bounded and therefore do not have a singular or divergent behaviour. As in the isotropic case, it is possible to use the cosmological constant term to enforce either the weak condition or the strong condition, but not both, as mentioned above. We also find that the null condition is violated in some regions.
Figure 5: Stationary case 1 for \(v_{s}=1.0\) (left column) and \(v_{s}=1.2\) (right column). Each curve corresponds to times \(t=15\) (orange), \(t=20\) (blue) and \(t=25\) (green)
### Stationary solutions (\(v_{s}f(r_{s})\))
Now we study the traveling wave behaviour of the anisotropic system. In figs. 10 - 15 we show the stationary solutions of \(\beta\), pressure \(p_{r}\) and the empirical equation of state \(p_{r}(\rho)\). These are shown for different velocities, from \(v_{s}=0.3\) (subluminal) to \(v_{s}=1.2\) (superluminal). Different sets of parameters have been chosen for the equation of state so that the behaviour of the system in different regimes of the fluid under consideration can be observed.
In general, the form function \(\beta\) displaces its origin with the velocity. For \(\epsilon=-0.5\) in (41), it can be seen in figs. 10 and 11 that \(\beta\) is decreasing as time elapses for \(v_{s}=0.3\). However, for \(v_{s}=0.8,1.0,1.2\), we observe that \(\beta\) is negative and its value for \(t=15\) is null, while for intermediate times an increase in the amplitude of the form function is obtained. It is worth mentioning that as \(v_{s}\) gets larger, \(\beta\) gets narrower, which affects the shape of the bubble, as can be deduced from (33). In the case of the radial pressure \(p_{r}\), we find that there are regions where it is positive and others where it is negative. We can
Figure 6: Stationary case 1. From the top down we show the NEC, SEC and WEC for \(t=15\) (orange), \(t=20\) (blue) and \(t=25\) (green). Left column \(v_{s}=0.3\) and right column \(v_{s}=0.8\).
also see that as \(v_{s}\) increases, the pressure becomes narrower in accordance with \(\beta\). Next, we see the equation of state which produces essentially a linear relationship between \(p_{r}\) and \(\rho\) except for \(v_{s}=0.3\) where some non-linearity can be observed.
The analysis for figs. 12-15 is similar to that of the previous paragraph. However, it is worth noting that for \(\epsilon=-10^{-17}\) one obtains a completely non-linear equation of state, because both terms in (41) are of comparable order of magnitude.
As in case 1, it is observed in figs. 16 and 17 that the energy conditions have regions where they are satisfied and others where they are violated. Again, it is possible to use the cosmological constant term to fix either the WEC or the SEC, and
Figure 7: Stationary case 1. From the top down we show the NEC, SEC and WEC for \(t=15\) (orange), \(t=20\) (blue) and \(t=25\) (green). Left column \(v_{s}=1.0\) and right column \(v_{s}=1.2\).
the interplay between them is observed: if one is fulfilled, the other is necessarily worsened.
One aspect worth noting is what happens with the weak energy condition which corresponds to the energy density. We can observe that the energy density takes positive or negative values in various regions of the domain. However, we notice that in some cases there are time values where the density is completely positive. This behaviour is then partially lost, which may suggest some kind of instability.
Figure 8: Non-Stationary case 2. Each curve corresponds to times \(t=1\) (orange), \(t=3\) (blue) and \(t=5\) (green). The polytrope parameters are \(K=1\) (left column), \(K=50\) (right column). The solutions are numerically stable around \(\epsilon=-0.5\), \(\gamma=2\).
## 6 Warp density energy requirements
Using the results obtained, it is natural to investigate how the mass is described by the matter configurations used. We are interested in determining to what extent the system violates the energy conditions, specifically the local weak energy condition. To this end we begin from the relation,
\[T^{0}_{\ 0}=\frac{G^{0}_{\ 0}}{8\pi}\, \tag{42}\]
which gives an expression for the density
\[\rho_{\rm warp}\equiv\rho=\frac{\beta}{8\pi r^{2}}\left(\beta+2r\frac{\partial \beta}{\partial r}\right)-\rho_{\Lambda}. \tag{43}\]
Figure 9: Non-Stationary energy conditions case 2. From the top down we show the NEC, SEC and WEC for \(t=1\) (orange), \(t=3\) (blue) and \(t=5\) (green). The polytrope parameters are \(K=1\) (left column), \(K=50\) (right column). The solutions are numerically stable around \(\epsilon=-0.5\), \(\gamma=2\).
Next, we can obtain a numerical form of (43) from the numerical solutions obtained for \(\beta\). We use the "volume integral quantifier" proposed by Visser in [43], which amounts to calculating a definite integral for the relevant coordinate domains.
\[M_{\rm warp}=\int_{\mathcal{V}}\rho_{\rm warp}=4\pi\int\!\!dr\ r^{2}T^{0}_{\ 0}=\frac{1}{2}\int dr\ r^{2}G^{0}_{\ 0}. \tag{44}\]
In this way we can quantify the degree to which the energy density violates (or not) the positivity requirement, according to whether these integrals become positive or negative at the selected times.
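A sketch of how (44) can be evaluated on a gridded solution with \(\rho_{\Lambda}=0\) is given below: in that case (43) gives \(r^{2}G^{0}_{\ 0}=\beta(\beta+2r\,\partial\beta/\partial r)\). The \(\beta\) profile shown is a placeholder, and the simple rectangle rule is our own choice.

```python
# M_warp = (1/2) * Integral dr  beta * (beta + 2 r dbeta/dr)   (rho_Lambda = 0)
import numpy as np

r = np.linspace(1e-3, 10.0, 4001)
beta = np.exp(-(r - 5.0)**2 / 2.5)          # placeholder profile
beta_r = np.gradient(beta, r)

integrand = beta*(beta + 2.0*r*beta_r)
m_warp = 0.5 * np.sum(integrand) * (r[1] - r[0])
print(f"M_warp (geometric units) = {m_warp:.3e}")
```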
Integrating over the appropriate numerical domain \(\mathcal{D}\) of our solutions \(\beta\), with \(\rho_{\Lambda}=0\), we obtain the behaviour for the analysed cases at several values of \(t\), which can be seen in Figs. 22-25. We note first of all that for both examples taken from case 1 we obtain positive mass values throughout the whole time progression. The same is true for the non-stationary case 2.
Figure 10: Stationary case 2 for \(v_{s}=0.3\) (left column) and \(v_{s}=0.8\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-0.5\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
On the other hand, for the stationary case 2 we find some negative mass values. However, most of the points are positive. This result is remarkable and contrasts with the widely debated original Alcubierre's warp. Apart from observing a large sample of positive values for the required masses of the warp, we also notice that they are of a rather low order of magnitude compared to what has been reported in previous work.
For instance, if we pick the mass point of case 1, fig. 23, associated with time \(t=2.0\), which has order of magnitude \(10^{-11}\), it corresponds to masses of order \(10^{16}\) kg. Compared to some familiar Newtonian masses, it is equivalent to \(10^{-7}\) times the mass of the Moon, or 57 times the mass of Mount Everest.
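For transparency, the conversion behind these figures can be sketched as follows, under the assumption that the quoted value \(10^{-11}\) is a mass in geometric units (i.e. a length in meters) converted through \(M=c^{2}L/G\).

```python
# Back-of-the-envelope unit conversion (assumption: geometric-units mass = length in meters).
c, G_N = 2.998e8, 6.674e-11                  # SI values
m_geom = 1.0e-11                             # order of magnitude read off fig. 23
m_kg = c**2 / G_N * m_geom
print(f"{m_kg:.2e} kg")                      # ~1.3e16 kg
print(f"{m_kg / 7.35e22:.1e} lunar masses")  # ~2e-7, consistent with the comparison above
```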
Figure 11: Stationary case 2 for \(v_{s}=1.0\) (left column) and \(v_{s}=1.2\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-0.5\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
It is expected that, since the null energy condition in equation (31) demands that the \(\beta\) function diminish in time, the values of \(M_{\rm warp}\) will tend to zero. An interesting aspect of these solutions is the oscillating behavior shown: the quantity \(M_{\rm warp}\) seems to oscillate and damp as time evolves. A remarkable difference occurs in fig. 24, where, although the warp mass stays positive, it seems to increase in time. We believe that, because the method for finding these solutions is unstable for long times, we were not able to find the expected decaying regions.
## 7 Final Remarks
In this paper we have explored various solutions of a spherically symmetric metric describing a warp drive [37]. As a source of matter we have considered a generic fluid which admits anisotropy and a cosmological constant. Once the Einstein equations for the system under study have been established,
Figure 12: Stationary case 2 for \(v_{s}=0.3\) (left column) and \(v_{s}=0.8\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-50\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
we realise that the anisotropic fluid is the most general source that we can describe with the proposed metric. The metric used here has obvious advantages over the traditional metric originally proposed by Alcubierre, due to the incorporation of spherical symmetry in describing the residual flat space that remains in any warp metric. Doing so yields a much neater set of equations, ready to be solved.
We have rewritten the set of equations found to accommodate travelling wave solutions. We found this useful for several reasons. Firstly, it is this type of solution that was originally proposed by Alcubierre when he gave an explicit form of the metric, so we wanted to get as close as possible to this approach in order to be able to compare. Secondly, the set of differential equations becomes dependent on a single variable and this greatly simplifies their solution. And thirdly, as a consequence of the previous point, we avoid the use of initial conditions, which can add a certain degree of arbitrariness to
Figure 13: Stationary case 2 for \(v_{s}=1.0\) (left column) and \(v_{s}=1.2\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-50\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
the solutions obtained. The solutions described in this way can be considered as the stationary regime to the problem posed.
As we have already described, the problem of energy conditions has dominated the discussion regarding the feasibility of warp solutions. For this reason, we include a detailed analysis of the energy conditions from the point of view of both the energy-momentum tensor and the metric coefficients. Trivially the dominant energy condition is always satisfied. On the other hand the null energy condition imposes that the \(\beta\) form function must decrease in time or at least remain unchanged. An interesting aspect occurs when including the cosmological constant term as a material source. We find that there is a trade-off between weak and strong energy conditions. We could use the cosmological constant to fix one of them but then we would necessarily make the other worse. This relationship is very interesting and worth further exploration. Another aspect worked on is the incorporation of the zero expansion condition in the
Figure 14: Stationary case 2 for \(v_{s}=0.3\) (left column) and \(v_{s}=0.8\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-10^{-17}\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
system. This produces a set of equations with little room for experimentation. However, we believe it is worth taking into account for future work.
The equations found were solved using numerical methods. For this purpose, we considered both stationary solutions using the above-mentioned description of travelling patterns, and complete solutions without imposing any restrictions. Both types of solutions were applied to isotropic and anisotropic fluids. In general, it is observed that for certain time instants there are spatial regions where the energy conditions are satisfied. In the isotropic case it is interesting to observe the type of empirical equation that appears, reflecting a multivalued behaviour which may suggest some type of fluid that allows the occurrence of phase transitions. The characterisation of these fluids is a very interesting aspect but clearly beyond the scope of the present work. Regarding the energy conditions, we can observe that all of them show plots with bounded values. This absence of divergence is a good sign and motivates us to continue exploring mechanisms to satisfy the conditions in a general way. In
Figure 15: Stationary case 2 for \(v_{s}=1.0\) (left column) and \(v_{s}=1.2\) (right column). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-10^{-17}\). Each curve corresponds to times \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green).
this work we explore the effect of adding a cosmological-constant-type term to the energy-momentum tensor. This allows us to correct the behaviour of either the weak or the strong energy condition, but not both. This trade-off between the two conditions seems to be a general property. We can say that exploring different kinds of fluids opens up new possibilities and may provide a way to deal with the constraints imposed by the energy conditions.
With respect to the \(\beta\) form function, in general it is observed that its amplitude is decreasing. However, in some of the simulated cases the opposite behaviour appears and the amplitude increases. This seems to suggest a certain instability in the solutions. Something similar can be observed for the radial pressure, which in most cases is at least partially positive and only in a few cases completely negative.
Figure 16: Stationary case 2. From the top down we show the NEC, SEC and WEC for \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green). Left column \(v_{s}=0.3\) and right column \(v_{s}=0.8\). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-0.5\).
Finally, we calculate the value of the total warp mass using the results found in the numerical simulations. This is useful because it acts as an average indicator of the amount of total energy needed to sustain the warp structure. We were able to find parameter values that make the integrated energy density positive at all instants of the system's evolution. In most cases we noticed that these integrated values become smaller and smaller with time. This may support the argument of some instability in the solutions, although we believe that more tests are needed to reach a conclusive result. However, the fact of finding solutions that produce positive mass values is remarkable, and we believe that further study of this system will allow us to understand the fundamental characteristics of warp drives and also to understand why solutions like Alcubierre's behave as they do.
Figure 17: Stationary case 2. From the top down we show the NEC, SEC and WEC for \(t=11\) (orange), \(t=12\) (blue) and \(t=15\) (green). Left column \(v_{s}=1.0\) and right column \(v_{s}=1.2\). The polytrope parameters are \(K=100\), \(\gamma=2\) and \(\epsilon=-0.5\).
In summary, we believe that research related to the spherical warp metric should be continued, as it presents an efficient way to study analytically and numerically the problems associated with the warp drive phenomenon. Furthermore, it is crucial to continue studying the material content that supports the warp geometry: distinct and more complex matter configurations, as well as different metric realizations that could include dissipation, heat flux and electromagnetic fields, can be studied, pointing towards tailoring feasible space-times with these configurations, at least at a theoretical level.
|
2307.16224 | On Abelian Groups Having Isomorphic Proper Strongly Invariant Subgroups | We consider two variants of those Abelian groups with all proper strongly
invariant subgroups isomorphic and give an in-depth study of their basic and
specific properties in either parallel or contrast to the Abelian groups with
all proper fully invariant (respectively, characteristic) subgroups isomorphic,
which are studied in detail by the current authors in Commun. Algebra (2015)
and in J. Commut. Algebra (2023). In addition, we also explore those Abelian
groups having at least one proper strongly invariant subgroup isomorphic to the
whole group. | Andrey R. Chekhlov, Peter V. Danchev | 2023-07-30T13:20:15Z | http://arxiv.org/abs/2307.16224v1 | # On Abelian Groups Having Isomorphic Proper Strongly Invariant Subgroups
###### Abstract
We consider two variants of those Abelian groups with all proper strongly invariant subgroups isomorphic and give an in-depth study of their basic and specific properties in either parallel or contrast to the Abelian groups with all proper fully invariant (respectively, characteristic) subgroups isomorphic, which are studied in detail by the current authors in Commun. Algebra (2015) and in J. Commut. Algebra (2023). In addition, we also explore those Abelian groups having at least one proper strongly invariant subgroup isomorphic to the whole group.
0
Footnote 0: 2010 AMS Subject Classification: Primary 20K10, Secondary 20K12. Key words and phrases: Abelian groups, characteristic subgroups, fully invariant subgroups, strongly invariant subgroups.
## 1 Introduction and Definitions
Throughout the present paper, let all groups under consideration be _additively_ written and _Abelian_. Our notation and terminology from group theory are mainly standard and follow those from [9], [10] and [14], respectively. Other useful sources on the explored subject are [4, 5, 6, 7]. For instance, if \(p\) is a prime integer and \(G\) is an arbitrary group, \(p^{n}G=\{p^{n}g\ |\ g\in G\}\) denotes the \(p^{n}\)_-th power subgroup_ of \(G\) consisting of all elements of \(p\)-height greater than or equal to \(n\in\mathbb{N}\), \(G[p^{n}]=\{g\in G\ |\ p^{n}g=0\}\) denotes the \(p^{n}\)_-socle_ of \(G\), and \(G_{p}=\cup_{n<\omega}G[p^{n}]\) denotes the \(p\)_-component_ of the _torsion part_ \(tG=\oplus_{p}G_{p}\) of \(G\).
On the other hand, if \(G\) is a torsion-free group and \(a\in G\), then let \(\chi_{G}(a)\) denote the _characteristic_ and \(\tau_{G}(a)\) the _type_ of \(a\), respectively. Specifically, an equivalence class in the set of all characteristics is called a _type_
and we write \(\tau\). If \(\chi_{G}(a)\in\tau\), then we write \(\tau_{G}(a)=\tau\), and so \(\tau(G)=\{\tau_{G}(a)\mid 0\neq a\in G\}\) is the set of types of all non-zero elements of \(G\). The set \(G(\tau)=\{g\in G\mid\tau(g)\geqslant\tau\}\) forms a pure fully invariant subgroup of the torsion-free group \(G\). Recall that a torsion-free group \(G\) is called _homogeneous_ if all its non-zero elements have the same type.
Concerning ring theory, suppose that all rings which we consider are _associative_ with _identity_ element. For any ring \(R\), the letter \(R^{+}\) will denote its _additive group_. To simplify the notation and to avoid a risk of confusion, we shall write \(\mathrm{E}(G)\) for the endomorphism ring of \(G\) and \(\mathrm{End}(G)=\mathrm{E}(G)^{+}\) for the endomorphism group of \(G\).
As usual, a subgroup \(F\) of a group \(G\) is called _fully invariant_ (hereafter abbreviated as a _fi-subgroup_ for simplicity) if \(\phi(F)\subseteq F\) for any \(\phi\in\mathrm{E}(G)\), while if this holds for every invertible endomorphism (= automorphism) \(\phi\), then \(F\) is called a _characteristic_ subgroup. Likewise, imitating [1], we shall say that a subgroup \(S\) of \(G\) is _strongly invariant_, provided that \(\psi(S)\subseteq S\) for any homomorphism \(\psi:S\to G\); in what follows we shall abbreviate this as a _si-subgroup_. It is well-known that the following relations are fulfilled:
\[\text{strongly invariant}\Rightarrow\text{fully invariant}\Rightarrow\text{ characteristic}.\]
Classical examples of important fully invariant subgroups of an arbitrary group \(G\) are the subgroups \(p^{n}G\) and \(G[p^{n}]\) defined above, for any natural \(n\), as well as \(tG\) and the maximal divisible subgroup \(dG\) of \(G\); actually, \(dG\) is a fully invariant direct summand of \(G\) (see, for instance, [9]). To avoid any confusion and misunderstanding, we shall say that a group \(G\) has only _trivial fully invariant subgroups_ if \(\{0\}\) and \(G\) are the only ones. The same applies to characteristic and strongly invariant subgroups, respectively.
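To illustrate the distinction on a concrete group (an example of ours, not taken from the cited sources), consider \(G=\mathbb{Z}(p^{3})\oplus\mathbb{Z}(p)\). Then
\[pG=p\mathbb{Z}(p^{3})\oplus\{0\}\cong\mathbb{Z}(p^{2}),\qquad G[p]=p^{2}\mathbb{Z}(p^{3})\oplus\mathbb{Z}(p)\cong\mathbb{Z}(p)\oplus\mathbb{Z}(p),\]
and both are fully invariant, while a direct check shows that \(G[p]\) is strongly invariant but \(pG\) is not: a homomorphism \(pG\to G\) may send a generator of \(pG\) to an element of order \(p^{2}\) lying outside \(pG\).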
Let us notice that si-subgroups were intensively studied in [1, 3], respectively, as well as in some other relevant articles cited therein.
Note also that, for all subgroups \(F\leq G\), the subgroup
\[\mathrm{Hom}(F,G)F=\sum_{\varphi\in\mathrm{Hom}(F,G)}\varphi(F)\]
is the minimal si-subgroup of \(G\) containing \(F\). In particular, if \(F\) is a si-subgroup, then \(F=\mathrm{Hom}(F,G)F\). Likewise, we also set
\[S_{A}(B)=\sum_{f\in\mathrm{Hom}(A,B)}\mathrm{Im}\,f\]
to be the _\(A\)-socle_ of \(B\).
The following key notions, necessary for our successful presentation, were stated in [4].
**Definition 1**. A non-zero group \(G\) is said to be an _IFI-group_ if either it has only trivial fully invariant subgroups, or all its non-trivial fully invariant subgroups are isomorphic otherwise.
**Definition 2**. A non-zero group \(G\) is said to be an _IC-group_ if either it has only trivial characteristic subgroups, or all its non-trivial characteristic subgroups are isomorphic otherwise.
Note that Definition 2 implies Definition 1. In other words, any IC-group is an IFI-group; in fact every fully invariant subgroup is characteristic.
**Definition 3**. A non-zero group \(G\) is said to be a _strongly IFI-group_ if either it has only trivial fully invariant subgroups, or all its non-zero fully invariant subgroups are isomorphic otherwise.
**Definition 4**. A non-zero group \(G\) is said to be a _strongly IC-group_ if either it has only trivial characteristic subgroups, or all its non-zero characteristic subgroups are isomorphic otherwise.
Notice that Definition 4 implies Definition 3.
In another vein, Definition 4 obviously yields Definition 2, but the converse is false. In fact, in [14] a 2-group was constructed having a single non-trivial characteristic subgroup which is not isomorphic to the whole group, thus giving an example of an IC-group which is surely _not_ a strongly IC-group; in other words, Definition 4 properly implies Definition 2.
We now arrive at our basic tools as follows:
**Definition 5**. A non-zero group \(G\) is called an _ISI-group_ if either it has only trivial strongly invariant subgroups (namely, \(\{0\}\) or \(G\)), or all its non-trivial strongly invariant subgroups are isomorphic otherwise.
**Definition 6**. A non-zero group \(G\) is called a _strongly ISI-group_ if either it has only trivial strongly invariant subgroups, or all its non-zero strongly invariant subgroups are isomorphic otherwise.
It is clear that any strongly ISI-group is either a \(p\)-group for some prime \(p\), or is a homogeneous torsion-free group.
It is apparent that the next relationships are valid:
IC-groups \(\Rightarrow\) IFI-groups \(\Rightarrow\) ISI-groups.
Our objective in the current article is to explore some fundamental and exotic properties of the classes of groups defined above, especially the ISI-groups and the strongly ISI-groups. In addition, we shall investigate something more, namely the existence of a non-trivial strongly invariant subgroup of a given group which is isomorphic to the whole group, calling these groups _weakly ISI-groups_. Our motivation for doing so is to exhibit and compare certain similarities and discrepancies between these group classes.
The major results established by us are formulated and proved in the next section. Concretely, our work is organized thus: In the next section, we provide our basic statements and their proofs, breaking the results into three parts in conjunction with the definitions stated above and pertaining to the three types of the ISI property, namely to the so-termed (strongly, weakly) ISI-groups. In order to prove our results, we shall use some specific ideas and instruments to materialize them (see Theorems 2.11, 2.13 as well as the corresponding lemmas, propositions and examples). We end our presentation with a series of six questions which, hopefully, will motivate a further intensive study of the explored subject (see Problems 1-6).
Main Theorems and Examples
For completeness of the exposition and for the reader's convenience, we first and foremost will give a brief retrospection of the most principal results achieved in [4] and [7], respectively, concerning IFI-groups and strongly IFI-groups as well as IC-groups and strongly IC-groups, respectively.
As usual, the symbol \(\oplus_{m}G=G^{(m)}\) will denote the _external_ direct sum of \(m\) copies of the group \(G\), where \(m\) is some ordinal (finite or infinite).
We begin with a retrospection of some of the results obtained in [4].
**Theorem 2.1**: _Let \(G\) be a \(p\)-group and let \(m\geqslant 2\) be an ordinal. Then \(G^{(m)}\) is an IFI-group if, and only if, \(G\) is an IC-group._
**Proposition 2.2**: _Let \(G\) be a torsion-free group. Then \(G\) is an IFI-group if, and only if, \(G\) is a strongly IFI-group._
**Lemma 2.3**: _(a) A fully invariant subgroup of an IFI-group is an IFI-group._
_(b) A fully invariant subgroup of a strongly IFI-group is a strongly IFI-group._
**Proposition 2.4**: _A non-zero IFI-group is either divisible or reduced._
**Theorem 2.5**: _The following two points hold:_
_(i) A non-zero group \(G\) is an IFI-group if, and only if, one of the following holds:_
\(\bullet\) _For some prime \(p\) either \(pG=\{0\}\), or \(p^{2}G=\{0\}\) with \(\operatorname{rank}(G)=\operatorname{rank}(pG)\)._
\(\bullet\)_\(G\) is a homogeneous torsion-free IFI-group of idempotent type._
_(ii) A non-zero torsion group \(G\) is a strongly IFI-group if, and only if, it is an elementary \(p\)-group for some prime \(p\)._
**Proposition 2.6**: _Every homogeneous fully transitive torsion-free group of idempotent type is an IFI-group._
**Corollary 2.7**: _A direct summand of a fully transitive torsion-free IFI-group is again a fully transitive IFI-group._
We now continue with some of the most important corresponding assertions from [7].
**Proposition 2.8**: _Every torsion IFI-group is an IC-group._
**Theorem 2.9**: _There exists an IFI-group which is not an IC-group._
### ISI-groups
We start our work here with some elementary but useful observations:
**Proposition 2.10**: _Let \(G\) be a non si-simple group. Then \(G\) is an ISI-group if, and only if, \(G\) has a single non-trivial si-subgroup \(H\)._
Proof. "\(\Rightarrow\)". Assume that both \(H\) and \(F\) are non-trivial si-subgroups. Thus, \(H\cong F\), so \(F\leq\operatorname{Hom}(H,G)H\) and hence \(F\leq H\). Analogously, we derive that \(H\leq F\), i.e., \(H=F\), as required.
"\(\Leftarrow\)". It is evident, so we omit the details.
The next theorem is quite similar to, and somewhat reminiscent of, [4, Theorem 2.5].
**Theorem 2.11**: _A non-zero torsion group is an ISI-group if, and only if, for some prime \(p\), either \(pG=\{0\}\) or \(p^{2}G=\{0\}\)._
Proof. Suppose first that \(G\) is torsion, that is, \(G=tG\), and that \(G\) is an ISI-group. Note that \(G\) must then be a \(p\)-group for some prime \(p\): otherwise two non-zero primary components of \(G\) would be non-isomorphic non-trivial si-subgroups of \(G\). If \(G=G[p]\), the assertion follows automatically. So, assume next that \(G\neq G[p]\). We now assert that \(G=G[p^{2}]\). Indeed, if \(G\neq G[p^{2}]\), then \(G[p]\) and \(G[p^{2}]\) are non-trivial si-subgroups, whence \(G[p]\cong G[p^{2}]\), which is absurd, so the claim is sustained.
Reciprocally, if \(G\) is an elementary \(p\)-group, then it contains only trivial si-subgroups, and thus we are done. So, let \(G\) be \(p^{2}\)-bounded but not elementary. It is well known that in this case the only non-trivial si-subgroup of \(G\) is \(G[p]\), as needed.
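The situation of Theorem 2.11 can also be confirmed by a brute-force computation on the smallest non-elementary example. The following sketch is illustrative only and not part of the proof (all helper names are ours): it enumerates the subgroups of \(\mathbb{Z}/4\oplus\mathbb{Z}/2\), tests the strong invariance condition directly, and finds that the socle \(G[2]\) is the unique non-trivial si-subgroup.

```python
from itertools import product

MODS = (4, 2)                                   # G = Z/4 (+) Z/2,  p = 2
G = list(product(*(range(m) for m in MODS)))

def add(x, y):
    return tuple((a + b) % m for a, b, m in zip(x, y, MODS))

def mult(k, x):
    return tuple(k * a % m for a, m in zip(x, MODS))

def closure(gens):
    """Additive closure of gens together with 0 (a subgroup, since G is finite)."""
    sub = {(0, 0)} | set(gens)
    grew = True
    while grew:
        grew = False
        for x in list(sub):
            for y in list(sub):
                z = add(x, y)
                if z not in sub:
                    sub.add(z)
                    grew = True
    return frozenset(sub)

def homs(S, target):
    """All homomorphisms S -> target (every subgroup of G is 2-generated)."""
    for g1, g2 in product(S, repeat=2):
        if closure({g1, g2}) != S:
            continue
        maps = []
        for im1, im2 in product(target, repeat=2):
            f, ok = {}, True
            for m, n in product(range(4), repeat=2):   # exponent of G is 4
                src = add(mult(m, g1), mult(n, g2))
                img = add(mult(m, im1), mult(n, im2))
                if f.setdefault(src, img) != img:      # ill-defined map: reject
                    ok = False
                    break
            if ok:
                maps.append(f)
        return maps

subgroups = {closure({a, b}) for a, b in product(G, repeat=2)}
si = [S for S in subgroups if all(f[x] in S for f in homs(S, G) for x in S)]
nontrivial = [S for S in si if 1 < len(S) < len(G)]

socle = frozenset(x for x in G if add(x, x) == (0, 0))      # G[2]
assert nontrivial == [socle]
print("unique non-trivial si-subgroup of Z/4 (+) Z/2 is G[2], as predicted")
```

Replacing `MODS` by `(2, 2)` (and dropping the final assertion), the same enumeration finds no non-trivial si-subgroups, matching the elementary case of the theorem.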
In some cases the subgroup \(H\) from Proposition 2.10 is si-simple (which is the case not only in Theorem 2.11).
**Proposition 2.12**: _If \(G=A\oplus B\), where \(A\neq\{0\}\) is fully invariant in \(G\), then \(G\) is an ISI-group if, and only if, \(A\) and \(B\) are such si-simple groups that \(\operatorname{Hom}(B,A)\neq\{0\}\)._
Proof. "\(\Rightarrow\)". If we assume, by way of contradiction, that \(A\) is not a si-simple group, then it contains a si-subgroup, say \(\{0\}\neq F\neq A\). But then both \(A\) and \(F\) are simultaneously si-subgroups in \(G\), so that, by assumption, we can deduce that \(F\cong A\). In fact, if \(S=\operatorname{Hom}(F,B)F\), then \(A\oplus S\) is a si-subgroup in \(G\), whence \(A\cong A\oplus S\), thus contradicting \(\operatorname{Hom}(A,B)=\{0\}\). So, without loss of generality, we may assume that \(A\leq F\), i.e., \(F=A\), which is the desired contradiction.
Also, if \(\operatorname{Hom}(B,A)=\{0\}\), then both \(A\) and \(B\) are non-isomorphic si-subgroups. If, however, \(\{0\}\neq C\neq B\) is a si-subgroup, then \(A\oplus C\cong A\) and, therefore, \(\operatorname{Hom}(A,B)\neq\{0\}\), again a contradiction, whence \(B\) is si-simple, as wanted.
"\(\Leftarrow\)". Let \(\{0\}\neq H\leq G\) be a si-subgroup and let \(\pi:G\to B\) be the corresponding projection. If \(H\leq A\), then \(H\) is just a si-subgroup in \(A\), and so \(H\cong A\). Assuming now that \(H\nleq A\), then the relations \(\{0\}\nleq\pi(H)\leq B\) hold. So,
\[B\leq\operatorname{Hom}(\pi(H),B)\pi(H)\leq\operatorname{Hom}(\pi(H),G)\pi(H) \leq H\]
and, consequently, \(H=B\oplus(H\cap A)\), where \(H\cap A\) is a si-subgroup in \(A\). Finally, \(H\cap A=A\) and thus \(H=G\), as pursued.
It follows from Proposition 2.12 that the sufficiency in the following two statements are directly true.
**Theorem 2.13**: _The mixed group \(G\) is an ISI-group if, and only if, \(G=T\oplus R\), where \(T\) is an elementary \(p\)-group for some \(p\) and \(R\) is such a si-simple torsion-free group that \(pR\neq R\)._
Proof. "\(\Rightarrow\)". It is clear that the torsion part \(tG\) of such a group \(G\) is an elementary \(p\)-group, so one can decompose \(G=T\oplus R\), where \(T=tG\) and \(R\) is torsion-free. If \(F\neq\{0\}\) is a si-subgroup in \(R\) and \(F\neq R\), then \(T\) and \(T\oplus F\) are apparently non-isomorphic si-subgroups of \(G\), so that \(R\) is a si-simple group, as claimed.
**Proposition 2.14**: _The non-reduced group \(G\) is an ISI-group if, and only if, \(G=D\oplus R\), where \(D\) is a torsion-free divisible group and \(R\) is a si-simple torsion-free group._
Proof. "\(\Rightarrow\)". Clearly, the divisible part of \(G\) is torsion-free (by following the same arguments as in Theorem 2.13) and, moreover, we may apply the same Theorem 2.13 to get that the group \(R\) is si-simple, as asserted.
We shall say that a torsion-free group \(G\) is _strongly irreducible_ (or, shortly, _s-irreducible_) if \(G\) does not have proper pure si-subgroups.
It is very clear that if \(G\) is a torsion-free group and \(nG=G\) for some \(n\in\mathbb{N}\), then \(nH=H\) for each fi-subgroup \(H\) of \(G\). This follows immediately from the fact that \(n^{-1}\cdot 1_{G}\in\mathrm{E}(G)\).
The next two examples are crucial for our considerations.
**Example 2.15**: _There exists a non si-simple homogeneous torsion-free group._
Proof. Let \(A\) be a module over the ring \(\widehat{\mathbb{Z}}_{p}\) of \(p\)-adic integers such that, as a group, \(A\) is reduced and torsion-free, and let \(B\) be a separable homogeneous torsion-free group of type \(\tau=\tau(\widehat{\mathbb{Z}}_{p}^{+})\). Then \(G=A\oplus B\) is homogeneous, and it is readily checked that \(G\) is an ISI-group which is not si-simple, because \(A\) is a si-direct summand of \(G\).
**Example 2.16**: _There exists an s-irreducible group that is not an irreducible group._
Proof. Let \(G\) be the group from [10, §88, Exercise 6]. Then \(\mathrm{E}(G)=\mathbb{Z}\) and the rank of \(G\) can be chosen to be an arbitrary natural number \(n\geq 2\). So, every subgroup of \(G\) is fully invariant and, consequently, \(G\) is not irreducible. Likewise, each subgroup \(A\) of \(G\) of rank \(1\leq\mathrm{rank}(A)\leq n-1\) is a free group, so that \(\mathrm{Hom}(A,G)A=G\). But, if the subgroup \(B\) is of rank \(n\), then one checks that \(B\) is essential in \(G\), whence \(F=\mathrm{Hom}(B,G)B\) is also essential in \(G\) and thus \(F_{*}=G\). Therefore, \(G\) is s-irreducible. Note also that, since all pure subgroups of \(G\) of rank \(\leq n-1\) are free, they are not si-subgroups, thus answering in the negative question (1) posed in [1].
The following technicalities are worthwhile for further applications.
**Proposition 2.17**: _If \(G\) is a torsion-free ISI-group and \(\{0\}\neq H\lneqq G\) is a si-subgroup, then \(H\) is \(q\)-pure in \(G\) for all prime numbers \(q\) except, maybe, one prime number \(p\). Moreover, if \(H\) is not \(p\)-pure in \(G\) for some prime \(p\), then \(pG\lneqq H\)._
Proof. Assume that \(pg\in H\) for some prime \(p\), where \(g\in G\setminus H\). Letting
\[F=\operatorname{Hom}(\langle H,g\rangle,G)\langle H,g\rangle,\]
then \(F\) is a si-subgroup in \(G\), so either \(F\cong H\) or \(F=G\). Besides, the condition \(F\cong H\) yields \(H=F\), which contradicts the fact that \(g\in G\setminus H\). So, it follows that \(F=G\).
Furthermore, each element \(y\in G\) can be written in the form
\[y=f_{1}(x_{1}+l_{1}g)+\cdots+f_{n}(x_{n}+l_{n}g)\]
for some \(f_{i}\in\operatorname{Hom}(\langle H,g\rangle,G)\), \(x_{i}\in H\) and \(l_{i}\in\mathbb{Z}\); \(i=1,\ldots,n\). Therefore,
\[py=f_{1}(px_{1}+pl_{1}g)+\cdots+f_{n}(px_{n}+pl_{n}g)\in H\]
since \(px_{i}+pl_{i}g\in H\). So, \(pG\leq H\), where \(pG\neq H\) since \(pG\cong G\) and \(H\neq G\). If, however, \(H\) is not \(q\)-pure for some prime \(q\neq p\), then similarly \(qG\leq H\), which implies that \(H=G\), a contradiction; so \(H\) is indeed \(q\)-pure for all primes \(q\neq p\).
**Remark 2.18**: _Let us note that Proposition 2.14 and Example 2.15 manifestly give valuable examples of proper pure si-subgroups in torsion-free ISI-groups. Also, note that the si-subgroup \(H\) in Proposition 2.17 has the following property:_
\[\operatorname{Hom}(A,G)A=H,\]
_provided_
\[\{0\}\neq\operatorname{Hom}(A,G)A\neq G\]
_for every group \(A\)._
The next assertion is rather curious when compared with the inheritance of the direct summand property by the IFI and IS groups, respectively.
**Lemma 2.19**: _A direct summand of an ISI-group \(G\) is again an ISI-group._
Proof. Write \(G=A\oplus B\) and let \(S\leq A\) be a si-subgroup of \(A\). If we set \(F=\operatorname{Hom}(S,B)S\), then \(S\oplus F\) is obviously a si-subgroup in \(G\). However, if \(S\oplus F\neq G\), then according to Proposition 2.10 we conclude that \(S\oplus F\) is the only non-trivial si-subgroup of \(G\). Hence, it follows that \(A\) is an ISI-group, as formulated.
We also notice that it follows immediately from Theorem 2.13 and Lemma 2.19 that the following is true:
**Proposition 2.20**: _The non-zero divisible group \(D\) is an ISI-group if, and only if, \(D\) is torsion-free._
We are now ready to give our two desired examples.
**Example 2.21**: _Pure si-subgroups of an ISI-group \(G\) are not necessarily direct summands of \(G\)._
Proof. Let \(A\) and \(B\) be both torsion-free groups of rank \(1\) with \(\tau(A)<\tau(B)=\tau\), and let
\[G=\langle A\oplus B,p^{-1}(a+b)\,|\,a\in A\setminus pA,b\in B\setminus pB\rangle.\]
Then, \(B\) is pure in \(G\) and \(B=G(\tau)\), so \(B\) is a si-subgroup in \(G\). Let \(\pi:A\oplus B\to A\) be the projection. If \(H\neq B\) is a si-subgroup of \(G\), then one deduces that \(H\nleqslant B\) since \(B\) is si-simple, and hence \(\pi(pH)\neq\{0\}\). Thus, it follows that \(A\leq H\) and so \(H=G\), as expected.
Notice that an example of si-subgroups in non-homogeneous torsion-free groups is given in the previous Example 2.21, and an example of pure si-subgroups in homogeneous torsion-free groups is given in Example 2.15.
**Example 2.22**: _There is a homogeneous torsion-free group \(A\) of rank \(r\geq 2\) with a si-subgroup \(H\) as exhibited in Proposition 2.17._
Proof. Let \(A\) be the group constructed in [10, §88, Example 5], that is,
\[A=\langle a_{1},\dots,a_{r},x_{1},x_{2},\dots\rangle,\]
where \(x_{n}=p^{-n}(a_{1}+\pi_{2n}a_{2}+\dots+\pi_{rn}a_{r})\), \(\pi_{in}=s_{i0}+s_{i1}p+\dots+s_{i,n-1}p^{n-1}\) is the partial sum up to degree \(n-1\) of the invertible \(p\)-adic number \(\pi_{i}\) (\(i=2,\dots,r\)), the \(\pi_{i}\) are algebraically independent over \(\mathbb{Q}\) and \(\pi_{1}=1\). It is well known that \(\mathrm{E}(A)=\mathbb{Z}\), the subgroup \(\langle a_{2},\dots,a_{r}\rangle\) is pure in \(A\) and all elements of \(A\setminus A_{0}\) are of the form \(kx_{n}+k_{2}a_{2}+\dots+k_{r}a_{r}\), where \(k,k_{2},\dots,k_{r}\in\mathbb{Z}\), \(p\nmid k\) and \(n\) is some natural number. Letting
\[H=\langle a_{1},a_{2}\dots,a_{r},px_{1},px_{2},\dots\rangle,\]
it is clear then that \(pA\leq H\) and \(H\neq A\).
We now show that \(\mathrm{Hom}(H,A)=\mathbb{Z}\). To that end, we shall use some techniques from [10, §88, Example 5]. So, choose \(0\neq\eta\in\mathrm{Hom}(H,A)\) and write its action on the generators as:
\[\eta a_{i}=\sum_{j=1}^{r}t_{ij}a_{j},\ t_{ij}\in\mathbb{Z}.\]
Notice that, if required, we may multiply \(\eta\) by some \(m\in\mathbb{N}\) so that the new map sends \(A_{0}=\langle a_{1},\dots,a_{r}\rangle\) into itself. Therefore,
\[\eta(px_{n})=p^{1-n}\sum_{i}\pi_{in}\eta a_{i}=p^{1-n}\sum_{j}\Bigl{(}\sum_{i }\pi_{in}t_{ij}\Bigr{)}a_{j}=k_{n}px_{n}+l_{2n}a_{2}+\dots+l_{rn}a_{r}\]
for some \(k_{n},l_{2n},\dots,l_{rn}\in\mathbb{Z}\). We also have
\[\sum_{i=1}^{r}\pi_{in}t_{i1}=k_{n}\ \text{and}\ p^{1-n}\sum_{j=2}^{r}\Bigl{[} \sum_{i=1}^{r}\pi_{in}t_{ij}-k_{n}\pi_{jn}\Bigr{]}a_{j}\in\langle a_{2},\dots, a_{r}\rangle.\]
Furthermore, one can readily verify that the coefficients in the square brackets are divisible by \(p^{n-1}\). Also, taking \(n\to\infty\), for each index \(j=2,\dots,r\) we obtain that \(\sum_{i=1}^{r}\pi_{i}t_{ij}-\kappa\pi_{j}=0\), where \(\sum_{i=1}^{r}\pi_{i}t_{i1}=\kappa\).
Now, in view of the algebraic independence of the \(\pi_{i}\)'s, it follows that \(t_{jj}=t_{11}\) and \(t_{ij}=0\) for \(i\neq j\). Thereby, the homomorphism \(\eta\) acts as multiplication by the integer \(t_{11}\). Since \(\chi(a_{i})=(0,0,\dots)\), there is no problem in taking \(m>1\). So, we infer that \(\operatorname{Hom}(H,A)=\mathbb{Z}\), as asked for, and hence \(H\) is a si-subgroup in \(A\), as wanted.
The following technicality is pretty simple but useful.
**Lemma 2.23**: _(1) If \(G\) is a torsion-free group and \(H\leq G\) is a si-subgroup, then \(H_{*}\) also is a si-subgroup in \(G\)._
_(2) A direct summand of s-irreducible (si-simple) group \(G\) is again an s-irreducible (si-simple)._
Proof. (1) Letting \(f\in\operatorname{Hom}(H_{*},G)\) and \(x\in H_{*}\), we have \(nx\in H\) for some \(n\in\mathbb{N}\), so that \(f(nx)=nf(x)\in H\) whence \(f(x)\in H_{*}\).
(2) Write \(G=A\oplus B\) and assume that \(H\leq A\) is a si-subgroup in \(A\), and put \(F=\operatorname{Hom}(H,B)H\). Thus, \(H\oplus F\) is a si-subgroup in \(G\). If, however, \(H\neq A\), then one has that \(H\oplus F\neq G\), which contradicts the condition that the group \(G\) is si-simple.
Similarly, if \(H_{*}\neq A\), then we will have that \(H_{*}\oplus F_{*}\neq G\) which contradicts the condition that the group \(G\) is s-irreducible.
The next two statements are worthy of noticing.
**Proposition 2.24**: _Let \(G\) be a torsion-free s-irreducible group which is not a si-simple ISI-group. Then its non-trivial si-subgroup \(H\) is also \(s\)-irreducible._
Proof. Owing to Proposition 2.17, we derive that \(pG<H\) for some prime \(p\). Let \(S\neq\{0\}\) be a pure si-subgroup in \(H\) and \(F=\operatorname{Hom}(S,G)S\). Then, one sees that either \(F=H\) or \(F=G\). If, for a moment, \(F=H\), then
\[H=\operatorname{Hom}(S,G)S=\operatorname{Hom}(S,H)S=S.\]
Assume now that \(F=G\). Therefore,
\[pG=\sum_{f\in\operatorname{Hom}(S,G)}pf(S)=\sum_{f\in\operatorname{Hom}(S,H)} pf(S)\]
since all \(pf(S)\leq H\). Further, \(pG\leq S\) as \(\sum_{f\in\operatorname{Hom}(S,H)}f(S)=S\). In particular, \(pH\leq S\) and so \(S=H\) in view of purity of \(S\) in \(H\), as promised.
The next assertion is closely relevant to Example 2.21.
**Proposition 2.25**: _(1) The pure si-subgroup of a direct sum \(G\) of s-irreducible torsion-free groups coincides with some fully invariant direct summand of \(G\)._
_(2) The direct sum of s-irreducible torsion-free groups \(A_{i}\), where \(i\in I\), \(|I|\geq 2\), is an s-irreducible group if, and only if, \(\operatorname{Hom}(A_{i},A_{j})\neq\{0\}\) for all indices \(i,j\in I\)._
_(3) The direct sum \(G\) of si-simple groups \(A_{i}\), where \(i\in I\), \(|I|\geq 2\), is a si-simple group if, and only if, \(\operatorname{Hom}(A_{i},A_{j})\neq\{0\}\) for all indexes \(i,j\in I\)._
Proof. (1) Let us write \(G=\bigoplus_{i\in I}A_{i}\), where all \(A_{i}\) are s-irreducible groups and \(H\leq G\) is a si-subgroup. Then, \(H=\bigoplus_{i\in I}(H\cap A_{i})\). However, since \(H_{*}=\bigoplus_{i\in I}(H\cap A_{i})_{*}\), we have that \(H_{*}=\bigoplus_{j\in J}A_{j}\), where \(J=\{j\in I\,|\,H\cap A_{j}\neq\{0\}\}\subseteq I\), as required.
(2) "\(\Rightarrow\)". Assume that \(\operatorname{Hom}(A_{i},A_{j})=\{0\}\) for some \(i\neq j\). Setting
\[S=\{s\in I\,|\operatorname{Hom}(A_{i},A_{s})\neq\{0\}\},\]
we observe that \(S\neq\emptyset\), because \(i\in S\), and \(S\neq I\), since \(j\in I\setminus S\); so \(\bigoplus_{s\in S}A_{s}\) is a proper pure si-subgroup (i.e., \(\neq\{0\},G\)), which contradicts the s-irreducibility of \(G\).
"\(\Leftarrow\)". Let \(H\) be a non-zero pure si-subgroup of \(G\). We may write \(H=\bigoplus_{i\in I}(H\cap A_{i})\), and \(H\cap A_{i}=A_{i}\) whenever \(H\cap A_{i}\neq\{0\}\), which holds in view of the s-irreducibility of \(A_{i}\) for each index \(i\). So, it follows that \(H=G\), since \(\operatorname{Hom}(A_{i},A_{j})\neq\{0\}\) for all \(i,j\in I\).
(3) The proof is similar to that in (2).
Furthermore, by virtue of Proposition 2.12, in the next statement the fulfillment of the inequalities \(\operatorname{Hom}(A,B)\neq\{0\}\) and \(\operatorname{Hom}(B,A)\neq\{0\}\) is guaranteed.
**Proposition 2.26**: _Let \(A\), \(B\) be ISI-groups, where \(\operatorname{Hom}(A,B)\neq\{0\}\), \(\operatorname{Hom}(B,A)\neq\{0\}\) and \(\{0\}\neq H\leq A\), \(\{0\}\neq F\leq B\) are si-subgroups of \(A\), \(B\), respectively (in particular, \(H=A\), \(F=B\) provided \(A\), \(B\) are si-simple groups). Then the group \(G=A\oplus B\) is an ISI-group if, and only if, \(\operatorname{Hom}(H,B)H=F\) and \(\operatorname{Hom}(F,A)F=H\), as moreover \(\operatorname{Hom}(A,B)A=B\) and \(\operatorname{Hom}(B,A)B=A\) provided both \(A\), \(B\) are not si-simple._
Proof. "\(\Rightarrow\)". If, for a moment,
\[U=\operatorname{Hom}(H,B)H\neq F,\]
then \(H\oplus U\) is a si-subgroup in \(G\) with \(H\oplus U\neq H\oplus F\), which contradicts the uniqueness of the non-trivial si-subgroup of an ISI-group. Further, if both \(A\) and \(B\) are not si-simple, i.e., \(H\neq A\) and \(F\neq B\), and if, for instance, \(\operatorname{Hom}(A,B)A=F\), then \(A\oplus F\) is a si-subgroup with \(A\oplus F\neq H\oplus F\), again a contradiction.
"\(\Leftarrow\)". Assume that \(\{0\}\neq U,V\leq G\) are some si-subgroups in \(G\). It suffices to show that \(U=V\). To that goal, we have
\[U=(U\cap A)\oplus(U\cap B),V=(V\cap A)\oplus(V\cap B).\]
On this vein, since \(U\cap A\) and \(V\cap A\) are si-subgroups in \(A\), while \(U\cap B\) and \(V\cap B\) are si-subgroups in \(B\), and \(A\), \(B\) are ISI-groups, one has that \(U\cap A=V\cap A\) and \(U\cap B=V\cap B\), provided that these subgroups are non-trivial.
Note that the condition \(U\cap A=\{0\}\) is impossible. In fact, if we assume, by way of contradiction, that \(U\cap A=\{0\}\), then \(U\cap B\neq\{0\}\) since \(U\neq\{0\}\). If, however, \(U\cap B=F\), then
\[\{0\}\neq H=\operatorname{Hom}(F,A)F\leq U\cap A,\]
thus contradicting the equality \(U\cap A=\{0\}\). But, if \(U\cap B=B\), then \(\operatorname{Hom}(B,A)B\leq U\cap A=\{0\}\), thus contradicting the condition \(\operatorname{Hom}(B,A)\neq\{0\}\). Analogously, one obtains that \(U\cap B\neq\{0\}\), \(V\cap A\neq\{0\}\), \(V\cap B\neq\{0\}\).
Assume now that \(U\cap A=A\). Then, the condition \(\mbox{\rm Hom}(A,B)\neq\{0\}\) forces that \(U\cap B\neq\{0\}\). So, we have \(U=A\oplus F\), where \(F=U\cap B\neq B\), because \(U\neq G\). In particular, \(B\) is not si-simple.
Next, we consider the first case when \(A\) is si-simple. To this purpose, assume that \(V\cap B=B\). Since \(\mbox{\rm Hom}(B,A)\neq\{0\}\), it follows that \(V\cap A\neq\{0\}\) and so \(V\cap A=A\) taking into account that \(A\) is si-simple. Therefore, it must be that \(V=G\), thus contradicting the given assumption on \(V\). So, \(V\cap B=F\neq B\) and since \(V\cap A\neq\{0\}\), we obtain that \(V\cap A=A\), i.e., \(V=A\oplus F\). Consequently, \(U=V\) holds in the case when \(A\) is si-simple.
Finally, assume \(A\) is not si-simple. If, however, \(B\) is si-simple, then we deduce that \(U\cap B=B\) and so \(U=G\) follows, a contradiction. So we are left with the case where \(B\) is not si-simple either. By assumption, we know that \(\operatorname{Hom}(A,B)A=B\) and \(\operatorname{Hom}(B,A)B=A\). Thus, the equality \(U\cap A=A\) assures that \(U\cap B=B\), i.e., \(U=G\), thus contradicting \(U\neq G\), as required.
As an immediate consequence, we extract the following:
**Corollary 2.27**: _If \(G\) is either a si-simple, s-irreducible torsion-free group or an ISI-group, then, for every ordinal \(m>1\), the group \(G^{(m)}\) retains the same properties._
Note that some other interesting results for si-subgroups and irreducible groups are proved in [2, 3], respectively. We now will give some additions to that as follows. Let us recall that, for a group \(G\), the notation \(\tau(G)\) means the set consisting of all types of the non-zero elements of \(G\).
**Proposition 2.28**: _If \(G\) is a non-homogeneous torsion-free ISI-group, then \(\tau(G)=\{t_{1},t_{2}\}\), where \(t_{1}<t_{2}\) and the non-trivial si-subgroup of \(G\) is exactly \(G(t_{2})\)._
Proof. We clearly have \(\{0\}\neq G(t_{2})\neq G\) for some type \(t_{2}\). Since the si-subgroup is unique (see Proposition 2.10), the group \(G(t_{2})\) is then homogeneous, i.e., the type \(t_{2}\) is maximal. Letting \(t_{1}\in\tau(G)\setminus\{t_{2}\}\), it follows from \(G(t_{1})\not\cong G(t_{2})\) that \(G(t_{1})=G\). Consequently, we deduce that \(t_{1}<t_{2}\) and \(\tau(G)=\{t_{1},t_{2}\}\), as stated.
Note that the si-subgroup \(H=G(t_{2})\) from Proposition 2.28 is manifestly s-irreducible. Indeed, if \(\{0\}\neq F<H\) is a pure si-subgroup in \(H\), then we can plainly inspect that \(\{0\}\neq\mbox{\rm Hom}(F,G)F\leq H\), so that \(F=H\), as claimed.
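To orient the reader, here is a minimal example of our own (not taken from the cited literature) illustrating Proposition 2.28; it relies only on the standard facts \(\operatorname{Hom}(\mathbb{Q},\mathbb{Z})=\{0\}\) and \(\operatorname{End}(\mathbb{Q})\cong\mathbb{Q}\). Take \(G=\mathbb{Z}\oplus\mathbb{Q}\). Then \(\tau(G)=\{t_{1},t_{2}\}\), where \(t_{1}\) is the type of \(\mathbb{Z}\) and \(t_{2}\) is the type of \(\mathbb{Q}\), so that \(t_{1}<t_{2}\) and \(G(t_{2})=\{0\}\oplus\mathbb{Q}\). Since \(\operatorname{Hom}(\mathbb{Q},\mathbb{Z})=\{0\}\), we have
\[\operatorname{Hom}(\mathbb{Q},G)\mathbb{Q}=\operatorname{Hom}(\mathbb{Q},\mathbb{Q})\mathbb{Q}=\mathbb{Q},\]
so \(\{0\}\oplus\mathbb{Q}\) is a si-subgroup of \(G\). On the other hand, a routine check with the coordinate projections and the maps \(\mathbb{Z}\to\mathbb{Q}\) shows that a non-zero si-subgroup meeting \(\mathbb{Z}\oplus\{0\}\) non-trivially coincides with \(G\), while a non-zero si-subgroup contained in \(\{0\}\oplus\mathbb{Q}\) must equal \(\{0\}\oplus\mathbb{Q}\) (as \(\operatorname{Hom}(F,\mathbb{Q})F=\mathbb{Q}\) for every \(\{0\}\neq F\leq\mathbb{Q}\)). Hence \(\{0\}\oplus\mathbb{Q}\) is the unique non-trivial si-subgroup and \(G\) is a non-homogeneous torsion-free ISI-group, in accordance with Proposition 2.28.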
We are now recording the following consequence.
**Corollary 2.29**: _If the torsion-free ISI-group \(G\) has a non-\(p\)-pure si-subgroup \(H\), then \(G\) is homogeneous and, if \(H\) is not si-simple, then \(pG\leq F\) for every non-trivial si-subgroup \(F\) of \(H\)._
Proof. Arguing as above, we are aware that
\[\sum_{\alpha\in\mbox{\rm Hom}(F,G)}\mbox{\rm Im }\alpha=G,\]
whence
\[pG=\sum_{\alpha\in\mbox{\rm Hom}(F,G)}\mbox{\rm Im }(p\alpha)\leq F\]
as \(p\alpha\in\mathrm{Hom}(F,H)\) by exploiting Proposition 2.17, and \(F\) is strongly invariant in \(H\).
Now, we are able to show the validity of the following two constructions.
**Example 2.30**: _The next two points are true:_
_(i) For every type \(\tau\neq(\infty,\infty,\dots)\), there exists a homogeneous torsion-free group \(G\) of type \(\tau\) such that, for each natural number \(r>1\), \(G\) has a si-subgroup of rank \(\frac{(r+2)(r-1)}{2}\)._
_(ii) There exists a torsion-free group \(G\) such that, for each natural number \(r\geq 2\), \(G\) possesses a si-subgroup of rank \(2r\)._
Proof. (i) Let \(G_{r}\) be the group from [10, §88, Example 5] having rank \(r\), and set \(G=\bigoplus_{r>1}G_{r}\). Then the so-constructed group \(G\) has type \((0,0,\dots)\). By virtue of [10, §88, Exercise 6], any subgroup of \(G_{r}\) of rank \(\leq r-1\) is free, and hence the following equality holds
\[\mathrm{Hom}(\bigoplus_{2\leq i\leq r}G_{i},\bigoplus_{j>r}G_{j})=\{0\},\]
because \(G_{i}\) has no non-zero free direct summands; consequently, each direct summand \(\bigoplus_{2\leq i\leq r}G_{i}\) is a si-subgroup of \(G\).
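As a quick check of the rank claimed in point (i), recall that \(\mathrm{rank}(G_{i})=i\) and that ranks add over direct sums, so
\[\mathrm{rank}\Big(\bigoplus_{2\leq i\leq r}G_{i}\Big)=2+3+\cdots+r=\frac{r(r+1)}{2}-1=\frac{(r+2)(r-1)}{2},\]
which agrees with the statement.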
If now \(R\) is a torsion-free group of rank \(1\) and type \(\tau\neq(\infty,\infty,\dots)\), then one readily verifies that the standard tensor product \(G\otimes R\) over \(\mathbb{Z}\) is a homogeneous group of type \(\tau\), equipped with the same properties as those of \(G\), as needed.
(ii) Let \(G_{r}\) be the group of rank \(2r\) from [10, §88, Exercise 7] and put \(G=\bigoplus_{r\geq 2}G_{r}\). Then, since every subgroup of \(G_{r}\) of rank \(\leq r\) is free and all factor-groups of \(G_{r}\) of rank \(\leq r\) are divisible, one easily checks that each \(G_{r}\) is a si-subgroup of \(G\), as required.
We shall now determine when ISI-groups are IFI-groups by proving the following (as mentioned in the introductory section, the converse is always true, because si-subgroups are fi-subgroups).
**Proposition 2.31**: _(i) A torsion ISI-group \(G\) is an IFI-group if, and only if, \(G\) is a \(p\)-group for some prime \(p\) having the property \(p^{2}G=\{0\}\) and, moreover, if \(pG\neq\{0\}\), then \(\mathrm{rank}(G)=\mathrm{rank}(pG)\)._
_(ii) A mixed ISI-group need not be an IFI-group._
_(iii) A torsion-free IFI-group is a si-simple group of idempotent type._
Proof. (i) It follows automatically from Theorems 2.5 and 2.11.
(ii) It follows directly from a combination of Theorem 2.5 (since there does not exist a mixed IFI-group) and Theorem 2.13, in which mixed ISI-groups are described.
(iii) It follows immediately from the fact in Proposition 2.2 that torsion-free IFI-groups are strongly IFI-groups, as in this case every si-subgroup is isomorphic to the whole group and, consequently, all torsion-free IFI-groups are themselves si-simple.
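To make the numerical condition in point (i) of Proposition 2.31 concrete (an illustration of the arithmetic only, with no claim here about which of these groups are actually ISI-groups), observe that for \(G=\mathbb{Z}/p^{2}\oplus\mathbb{Z}/p\) we have \(p^{2}G=\{0\}\) but
\[\mathrm{rank}(pG)=1<2=\mathrm{rank}(G),\]
so the condition fails, whereas for \(G=(\mathbb{Z}/p^{2})^{n}\) we get \(p^{2}G=\{0\}\) and \(\mathrm{rank}(pG)=n=\mathrm{rank}(G)\), so the condition holds.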
Note that si-simple groups are always ISI-groups since they have only trivial si-subgroups. Also, there are si-simple groups that are _not_ IFI-groups; indeed, all separable torsion-free groups of a non-idempotent type are such groups. However, a question which immediately arises is whether all si-simple torsion-free groups of idempotent type are IFI-groups.
### Strongly ISI-groups
We continue here with more details beyond the comments listed after Definition 6.
**Lemma 2.32**: _The following conditions are equivalent:_
_(a) The group \(G\) is a strongly ISI-group;_
_(b) The equality \(\operatorname{Hom}(F,G)F=G\) holds for every subgroup \(\{0\}\neq F\leq G\);_
_(c) The group \(G\) is a si-simple group._
Proof. (a) \(\Rightarrow\) (c). Indeed, if \(H\neq\{0\}\) is a si-subgroup, then \(H\cong G\), and if \(f:H\to G\) is an isomorphism, then, by the strong invariance of \(H\), we get \(f(H)\leq H\), whence \(G=f(H)\leq H\), i.e., \(H=G\), as needed.
The relationships (b) \(\Leftrightarrow\) (c) and (c) \(\Rightarrow\) (a) are obvious, so we omit the details in proving them.
It is clear that a si-simple torsion-free group is a homogeneous group and that every fi-simple group is si-simple. Moreover, in view of [1, Proposition 24], each fi-simple group is either \(p\)-elementary for some prime \(p\) or is divisible torsion-free.
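As a concrete illustration of the divisible torsion-free case, condition (b) of Lemma 2.32 can be verified directly for \(G=\mathbb{Q}\): given \(\{0\}\neq F\leq\mathbb{Q}\) and \(q\in\mathbb{Q}\), pick \(0\neq x\in F\); then multiplication by the fixed rational \(q/x\) defines a homomorphism \(\alpha\colon F\to\mathbb{Q}\) with \(\alpha(x)=q\), whence
\[\operatorname{Hom}(F,\mathbb{Q})F=\mathbb{Q},\]
so \(\mathbb{Q}\) is si-simple and hence a strongly ISI-group.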
Some other examples of si-simple groups are the following ones:
(1) Fully transitive homogeneous torsion-free groups of idempotent type are si-simple (see, e.g., [1, Proposition 26]).
(2) For any type \(\tau\), there exists a torsion-free si-simple group of homogeneous type \(\tau\). In fact, as such a group, it is possible to take any separable homogeneous torsion-free group \(G\) (see, for instance, [3, Proposition 1(1)]).
(3) A non-zero torsion group \(G\) is a si-simple group if, and only if, it is a non-zero elementary \(p\)-group for some prime \(p\). This claim follows at once from [1, Proposition 24].
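To illustrate the sufficiency in point (3) via condition (b) of Lemma 2.32: if \(G\) is a non-zero elementary \(p\)-group and \(\{0\}\neq F\leq G\), then, viewing \(G\) as a vector space over \(\mathbb{Z}/p\mathbb{Z}\), every \(0\neq f\in F\) spans a direct summand \(\langle f\rangle\) of \(F\); so, for each \(g\in G\), the assignment \(f\mapsto g\) (extended by zero on a complement of \(\langle f\rangle\) in \(F\)) gives a homomorphism \(\alpha\colon F\to G\) with \(\alpha(f)=g\), whence
\[\operatorname{Hom}(F,G)F=G\]
and \(G\) is si-simple.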
Furthermore, we give here some elementary but helpful observations:
\(\bullet\) A torsion-free IFI-group is always a strongly ISI-group.
Indeed, we just need to combine Proposition 2.31 (iii) and Lemma 2.32 (c).
\(\bullet\) A non-zero torsion group is a strongly ISI-group if, and only if, it is a non-zero elementary \(p\)-group for some prime \(p\).
Indeed, it suffices just to combine point (3) quoted above with Lemma 2.32 (c).
### Weakly ISI-groups
Before starting our work, it is worth noticing that the papers [11], [12] and [13] examined those Abelian groups, both torsion and torsion-free, possessing a proper fully invariant subgroup isomorphic to the whole group. Likewise, [8] investigated in detail those Abelian groups in which all subgroups of infinite index are free.
In this subsection, we initiate the examination of the case when there is a proper strongly invariant subgroup isomorphic to the whole group. For simplicity of the exposition, we shall call such groups just _weakly ISI-groups_. However, unfortunately, this class of groups is definitely _not_ so interesting because, as the proof of Lemma 2.32 shows, each si-subgroup isomorphic to the whole group coincides with the whole group.
## Concluding Discussion and Open Problems
We conclude the paper with some comments on the obtained results. In fact, summarizing all of what we have established so far, we can say that the properties of ISI-groups are totally different from those of IFI-groups and IC-groups, which were completely described in [4] and [7], respectively. The primary reason for this discrepancy is that there is no such abundance of si-subgroups (compare also with [1]).
We close our work with six questions of interest and importance.
**Problem 1**.: Do there exist s-irreducible groups that are _not_ ISI-groups?
**Problem 2**.: Is it true that si-subgroups of ISI-groups are also ISI-groups?
**Problem 3**.: Does it follow that an s-irreducible torsion-free ISI-group is si-simple?
Knowing, with the aid of [4, Example 2.6], that there exists a homogeneous torsion-free group of idempotent type and rank strictly greater than 1 which is _not_ an IFI-group, it is natural to state the following.
**Problem 4**.: Decide when a homogeneous torsion-free group of idempotent type and rank 2 is an ISI-group.
We know that each strongly invariant subgroup is always fully invariant, and thus every IFI-group is an ISI-group, but the reverse implication fails. So, in regard to Proposition 2.31 and the first bullet from Subsection 2.2, we may ask the following.
**Problem 5**.: Find suitable conditions under which each torsion-free ISI-group of an idempotent type is an IFI-group.
By analogy with Theorem 2.1 and Corollary 2.27, both listed above, we may state our next query.
**Problem 6**.: For a group \(G\) of an idempotent type, does it follow that the square \(G\oplus G\) is an ISI-group if, and only if, \(G\oplus G\) is an IFI-group?
The restriction to the idempotent type is essential and cannot be ignored: in fact, if \(G\) is a torsion-free separable group of a non-idempotent type, then it is necessarily si-simple, and thus \(G\oplus G\) is si-simple too and hence an ISI-group, but _not_ an IFI-group, because \(G\oplus G\) is still of a non-idempotent type.
**Acknowledgement**.: The authors would like to thank the unknown expert referee for his/her valuable comments and suggestions which led to an improvement of the article's shape.
**Funding:** The scientific work of Andrey R. Chekhlov was supported by the Ministry of Science and Higher Education of Russia (agreement No. 075-02-2023-943). The scientific work of Peter V. Danchev was supported in part by the Bulgarian National Science Fund under Grant KP-06 No 32/1 of December 07, 2019, as well as by the Junta de Andalucia, Grant FQM 264, and by the BIDEB 2221 of TUBITAK.